WorldWideScience

Sample records for void probability distribution

  1. Void probability as a function of the void's shape and scale-invariant models [in studies of spatial galactic distribution]

    Science.gov (United States)

    Elizalde, E.; Gaztanaga, E.

    1992-01-01

    The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is higher for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
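
    For the unextended negative binomial counts-in-cells model mentioned above, the void probability has the closed form P0 = (1 + Nbar/n)^(-n), where Nbar is the mean count and 1/n measures clustering strength. A minimal numerical sketch (the parameter values are illustrative, not fitted to the CfA sample):

```python
import numpy as np
from scipy.stats import nbinom

# Negative binomial counts-in-cells: mean Nbar, clustering strength
# parametrized by n (smaller n -> stronger clustering).
Nbar, n = 4.0, 2.0
p = n / (n + Nbar)          # scipy's success-probability parametrization

# Void probability P(N=0): closed form (1 + Nbar/n)^(-n)
P0_closed = (1.0 + Nbar / n) ** (-n)
P0_scipy = nbinom.pmf(0, n, p)

# Monte Carlo check: fraction of empty cells among random draws
rng = np.random.default_rng(0)
counts = nbinom.rvs(n, p, size=200_000, random_state=rng)
P0_mc = np.mean(counts == 0)
print(P0_closed, P0_mc)
```

    With Nbar = 4 and n = 2 the closed form gives exactly 1/9, which the sampled empty-cell fraction reproduces.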

  2. Luminosity distance in Swiss cheese cosmology with randomized voids. II. Magnification probability distributions

    CERN Document Server

    Flanagan, Éanna É; Wasserman, Ira; Vanderveld, R Ali

    2011-01-01

    We study the fluctuations in luminosity distances due to gravitational lensing by large scale (> 35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed non-overlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz & Wald (1998), which includes the effect of lensing shear. The standard deviation of this distribution is ~ 0.027 magnitudes and the mean is ~ 0.003 magnitudes for voids of radius 35 Mpc, sources at redshift z_s=1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thic...

  3. New Statistical Perspective to The Cosmic Void Distribution

    CERN Document Server

    Pycke, Jean-Renaud

    2016-01-01

    In this study, we obtain the size distribution of voids as a 3-parameter, redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on the counts in randomly placed cells, the log-normal VPF that we obtain here is independent of the shape of the voids, thanks to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC, generated by Halo Occupation Distribution (HOD) mocks tuned to three mock SDSS samples, to investigate the void distribution statistically and the effect of environment on the size distribution. As a result, it is shown that the void size distributions obtained from the HOD mock samples are well described by the 3-parameter log-normal distribution. In addition, we find that there may be a relation between hierarchical formation, skewness and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the 3-paramet...
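
    For readers unfamiliar with the 3-parameter log-normal form, a minimal fitting sketch on synthetic void radii (the shape, location, and scale values below are stand-ins, not CVC measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic void effective radii: 3-parameter log-normal
# (shape s, location loc, scale) with made-up parameter values
s_true, loc_true, scale_true = 0.5, 2.0, 10.0
radii = stats.lognorm.rvs(s_true, loc=loc_true, scale=scale_true,
                          size=5000, random_state=rng)

# Fit all three parameters by maximum likelihood
s_fit, loc_fit, scale_fit = stats.lognorm.fit(radii)

# Goodness of fit via the Kolmogorov-Smirnov statistic
D, pval = stats.kstest(radii, 'lognorm', args=(s_fit, loc_fit, scale_fit))
print(s_fit, loc_fit, scale_fit, D)
```

    The free location parameter is what distinguishes the 3-parameter form from the ordinary 2-parameter log-normal.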

  4. Study of Void Probability Scaling of Pions in Ultrarelativistic Nuclear Collision in Fractal Scenario

    CERN Document Server

    Bhaduri, Susmita

    2016-01-01

    Various existing works on multiplicity fluctuations have probed the dynamics of the particle production process and, eventually, signatures of a phase transition in ultrarelativistic nuclear collisions. Fluctuations of the spatial pattern have also been analyzed in terms of the scaling behavior of voids. However, the scaling behavior of voids has not yet been explored from a fractal perspective. This work attempts to study the fractality of the void probability distribution with a radically different and rigorous method called Visibility Graph analysis, analyzing data from the 32S-AgBr (200 GeV) interaction. The analysis reveals strong scaling behavior of the void probability distribution in all pseudorapidity regions. The scaling exponent, PSVG (Power of the Scale-freeness in Visibility Graph), a quantitative parameter related to the Hurst exponent, is strongly dependent on the rapidity window.
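
    The natural visibility graph underlying the PSVG analysis maps a series onto a graph whose degree distribution is then examined for scale-freeness. A minimal sketch of the construction (the example series is arbitrary, not collision data):

```python
import numpy as np

def visibility_degrees(y):
    """Degree of each node in the natural visibility graph of series y:
    nodes a and b are linked if every sample between them lies strictly
    below the straight line joining (a, y[a]) and (b, y[b])."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            line = lambda c: y[a] + (y[b] - y[a]) * (c - a) / (b - a)
            if all(y[c] < line(c) for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

# With the strict-inequality convention, a straight-line (collinear)
# series links only consecutive points
print(visibility_degrees([1.0, 2.0, 3.0]))  # [1 2 1]
```

    A PSVG-style analysis would then fit a power law to the degree distribution of a much longer series; that fitting step is omitted here.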

  5. Size effects on void growth in single crystals with distributed voids

    DEFF Research Database (Denmark)

    Borg, Ulrik; Niordson, Christian Frithiof; Kysar, J.W.

    2008-01-01

    The effect of void size on void growth in single crystals with uniformly distributed cylindrical voids is studied numerically using a finite deformation strain gradient crystal plasticity theory with an intrinsic length parameter. A plane strain cell model is analyzed for a single crystal...

  6. Analysis of Cosmological Generalized Reduced Void Probability Functions Constrained by Observations and Numerical Simulations

    CERN Document Server

    Andrew, Keith; Taylor, Lisa

    2013-01-01

    Using survey data and numerical LCDM modeling we establish an optimized fit to the generalized Reduced Void Probability Function, RVPF, of Mekjian, used to establish a statistical foundation for any physical process associated with hierarchical clustering. We use a numerical N-body cosmological simulation code, GADGET-2, to investigate the sensitivity of the distribution of voids characterized by the RVPF to a general hierarchical reduced void parameter, a. The void parameter is related to the Levy stability index of the distribution and the Fisher critical exponent used in clustering models. We numerically simulate the evolution of the universe from a redshift of z=50 to the current epoch at z=0 in order to generate RVPFs. GADGET-2 is an N-body/smoothed particle hydrodynamics, SPH, code that we ran in MPI-parallelizable mode on an HPC Beowulf cluster. The numerical data sets are compared to observational data from the Sloan Digital Sky Survey, SDSS, the CfA survey, the Deep2 Galaxy Redshift Survey, and the 2dF Survey. We fi...

  7. Probability distributions for magnetotellurics

    Energy Technology Data Exchange (ETDEWEB)

    Stodt, John A.

    1982-11-01

    Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.

  8. Superpositions of probability distributions

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v=σ2 play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
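
    A classic concrete instance of such a superposition (a standard textbook example, not taken from this paper): smearing the Gaussian variance v with an inverse-gamma distribution produces a Student-t. A quick numerical check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu = 4.0  # degrees of freedom of the target Student-t

# Smear the variance v with an inverse-gamma distribution:
# v ~ InvGamma(nu/2, nu/2), then x | v ~ N(0, v)  =>  x ~ Student-t(nu)
v = stats.invgamma.rvs(nu / 2, scale=nu / 2, size=100_000, random_state=rng)
x = rng.normal(0.0, np.sqrt(v))

# Compare the mixture sample to the exact Student-t via the KS statistic
D, pval = stats.kstest(x, 't', args=(nu,))
print(D)
```

    The KS statistic is at the level expected for samples drawn from the exact distribution, confirming the mixture identity numerically.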

  10. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    Full Text Available In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first concerns the continuous distributions and their relations. The second presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.

  11. Scaling of voids and fractality in the galaxy distribution

    CERN Document Server

    Gaite, J; Gaite, Jose; Manrubia, Susanna C.

    2002-01-01

    We study here, from first principles, what properties of voids are to be expected in a fractal point distribution and how the void distribution is related to its morphology. We show this relation in various examples and apply our results to the distribution of galaxies. If the distribution of galaxies forms a fractal set, then this property results in a number of scaling laws to be fulfilled by voids. Consider a fractal set of dimension $D$ and its set of voids. If voids are ordered according to decreasing sizes (largest void has rank R=1, second largest R=2 and so on), then a relation between size $\Lambda$ and rank of the form $\Lambda (R) \propto R^{-z}$ must hold, with $z = d/D$, where $d$ is the Euclidean dimension of the space in which the fractal is embedded. The physical restriction $D \leq d$ implies $z \geq 1$ in a fractal set. The average size $\bar \Lambda$ of voids depends on the upper ($\Lambda_u$) and the lower ($\Lambda_l$) cut-off as ${\bar \Lambda} \propto \Lambda_u^{1-D/d} \Lambda_l^{D/d}$. Current analysis of v...
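
    The rank-size law above is easy to check numerically. A sketch with an assumed fractal dimension D = 2 embedded in d = 3 (all numbers illustrative):

```python
import numpy as np

# Rank-size law for fractal voids: Lambda(R) ∝ R^(-z), with z = d/D
d, D = 3, 2.0          # embedding and fractal dimensions (assumed)
z = d / D              # predicted Zipf-like exponent, here 1.5

# Generate void sizes that obey the law exactly, ranks R = 1..1000
R = np.arange(1, 1001)
Lambda = 50.0 * R ** (-z)      # 50.0 = size of the largest void (arbitrary)

# Recover z from a log-log fit of size against rank
slope, intercept = np.polyfit(np.log(R), np.log(Lambda), 1)
print(slope)   # ≈ -1.5
```

    In practice one would fit ranked void sizes from a catalog this way and read the fractal dimension off as D = -d/slope.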

  12. Diurnal distribution of sunshine probability

    Energy Technology Data Exchange (ETDEWEB)

    Aydinli, S.

    1982-01-01

    The diurnal distribution of the sunshine probability is essential for the predetermination of average irradiances and illuminances by solar radiation on sloping surfaces. Most meteorological stations have only monthly average values of the sunshine duration available. It is, therefore, necessary to compute the diurnal distribution of sunshine probability starting from the average monthly values. It is shown how the symmetric component of the distribution of the sunshine probability, which is a consequence of a "sidescene effect" of the clouds, can be calculated. The asymmetric components of the sunshine probability, depending on the location and the seasons, and their influence on the predetermination of the global radiation are investigated and discussed.

  13. Effects of Heterogeneous Sink Distribution on Void Swelling

    DEFF Research Database (Denmark)

    Leffers, Torben; Volobuyev, A. V.; Gann, V. V.

    1986-01-01

    Swelling rates are calculated for two types of material with heterogeneous distributions of dislocations and voids, namely copper irradiated with neutrons to low dose at 250 degree C and heavily cold-worked copper irradiated with 1 MeV electrons in a HVEM at 250 degree C. Both materials...... are considered to consist of non-interacting spherical components with a wall and an inner cell with different dislocation and/or void densities. We subdivide the sphere (wall plus cell) in a number of concentric shells and find a quasi-static solution for the interstitial and vacancy concentrations...

  14. Magnetic pattern at supergranulation scale: the Void Size Distribution

    CERN Document Server

    Berrilli, Francesco; Del Moro, Dario

    2014-01-01

    The large-scale magnetic pattern of the quiet sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits voids in magnetic organization. Such voids include internetwork fields, a sparse mixed-polarity field that populates the inner part of network cells. To single out voids and to quantify their intrinsic pattern, a fast circle-packing-based algorithm is applied to 511 SOHO/MDI high-resolution magnetograms acquired during the outstanding solar activity minimum between cycles 23 and 24. The computed Void Distribution Function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids reveal a departure from a simple exponential decay around 35 Mm.

  15. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OPENMP. It is coded in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it can potentially be used for best estimation of the core thermal-hydraulic field by incorporation into multiphysics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification. However, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement at a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar results of uncertainties but did not account for the nonlinear effects on the

  16. Exact Probability Distribution versus Entropy

    Directory of Open Access Journals (Sweden)

    Kerstin Andersson

    2014-10-01

    Full Text Available The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
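
    The exact quantity discussed above can be computed by brute force when the alphabet and word length are tiny. A sketch for a toy zero-order (i.i.d.-letter) model with made-up letter probabilities:

```python
import itertools
import numpy as np

# Toy 3-letter alphabet with assumed letter probabilities (zero-order model)
p = np.array([0.6, 0.3, 0.1])
L = 4  # word length; 3**4 = 81 possible words

# Probability of each word is the product of its letter probabilities
word_probs = np.array([np.prod(q) for q in itertools.product(p, repeat=L)])

# Guess words in decreasing order of probability; the average number of
# guesses is the probability-weighted rank in that order
order = np.argsort(word_probs)[::-1]
sorted_p = word_probs[order]
ranks = np.arange(1, len(sorted_p) + 1)
expected_guesses = float(np.sum(ranks * sorted_p))
print(expected_guesses)   # well below the uniform-case average of 41
```

    For realistic alphabet sizes this enumeration explodes, which is exactly why the paper resorts to approximations.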

  17. Probability distributions for multimeric systems.

    Science.gov (United States)

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  18. Enthalpy and void distributions in subchannels of PHWR fuel bundles

    Energy Technology Data Exchange (ETDEWEB)

    Park, J. W.; Choi, H.; Rhee, B. W. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    Two different types of CANDU fuel bundle have been modeled for ASSERT-IV code subchannel analysis. From calculated values of the mixture enthalpy and void fraction distribution in the fuel bundles, it is found that the net buoyancy effect is pronounced in the central region of the DUPIC fuel bundle when compared with the standard CANDU fuel bundle. It is also found that the central region of the DUPIC fuel bundle can be cooled more efficiently than that of the standard fuel bundle. From the calculated mixture enthalpy distribution at the exit of the fuel channel, it is found that the mixture enthalpy and void fraction can be highest in the peripheral region of the DUPIC fuel bundle. On the other hand, the enthalpy and the void fraction were found to be highest in the central region of the standard CANDU fuel bundle at the exit of the fuel channel. This study shows that subchannel analysis is very useful in assessing the thermal behavior of fuel bundles that could be used in CANDU reactors. 10 refs., 4 figs., 2 tabs. (Author)

  19. Experimental and numerical investigation of voids distribution in VPI for ITER correction coil

    Energy Technology Data Exchange (ETDEWEB)

    Li, Juping, E-mail: ljping@ipp.ac.cn; Wu, Jiefeng; Yu, Xiaowu

    2015-06-15

    Highlights: • A sample of correction coil was treated by vacuum pressure impregnation. • The voids in the sample were observed by computed tomography. • The voids distributions were simulated in 2-D and 3-D models. • The calculated voids locations had a good agreement with experiment. • The simulation was not accurate in calculating the voids content. - Abstract: Experimental and numerical investigations were conducted to study the voids distribution in the VPI (Vacuum Pressure Impregnation) process for a correction coil. A sample of correction coil was manufactured by VPI. The voids in the sample were observed with computed tomography and the average voids content was measured. The voids content is closely related to the infiltration velocity and fluid properties. In previous research, the parameters affecting voids content were combined into a single parameter, namely the capillary number. By calculating the capillary numbers in different areas of the sample, the voids distribution could be acquired. The corresponding numerical analyses, based on Darcy's law, were conducted in 2-D and 3-D models. The 2-D case was used to simulate the voids distribution on a cross-section as a simplified model, while the 3-D case demonstrated the spatial distribution of voids. The voids locations were similar in the 2-D and 3-D cases, but the voids contents were different. The numerical results were compared with the actual voids distribution in the sample. It was found that the voids locations were close in the numerical and experimental results, but the voids contents did not match. The numerical simulations are suitable for predicting the voids locations in VPI, but not accurate in calculating the voids content.

  20. Geometry and scaling of cosmic voids

    CERN Document Server

    Gaite, Jose

    2008-01-01

    CONTEXT: Cosmic voids are observed in the distribution of galaxies and, to some extent, in the dark matter distribution. If these distributions have fractal geometry, it must be reflected in the geometry of voids; in particular, we expect void sizes to scale. However, this scaling is not yet well demonstrated in galaxy surveys. AIMS: Our objective is to understand the geometry of cosmic voids in relation to a fractal structure of matter. We intend to distinguish monofractal voids from multifractal voids with regard to their scaling properties. We plan to analyse voids in the distributions of mass concentrations (halos) in a multifractal and their relation to galaxy voids. METHODS: We make a statistical analysis of point distributions based on the void probability function and correlation functions. We assume that voids are spherical and devise a simple spherical void finder. For continuous mass distributions, we employ the methods of fractal geometry. We confirm the analytical predictions with numerical simula...

  1. ASYMPTOTIC QUANTIZATION OF PROBABILITY DISTRIBUTIONS

    Institute of Scientific and Technical Information of China (English)

    Klaus Pötzelberger

    2003-01-01

    We give a brief introduction to results on the asymptotics of quantization errors. The topics discussed include the quantization dimension, asymptotic distributions of sets of prototypes, asymptotically optimal quantizations, approximations and random quantizations.

  2. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate Gaussian distribution that was previously (at least to my knowledge) not to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical...

  3. Cosmological Black Holes as Seeds of Voids in Galaxy Distribution

    CERN Document Server

    Capozziello, S; Stornaiolo, C; Capozziello, Salvatore; Funaro, Maria; Stornaiolo, Cosimo

    2004-01-01

    Deep surveys indicate a bubbly structure of the cosmological large-scale structure, which should be the result of the evolution of primordial density perturbations. Several models have been proposed to explain the origin and dynamics of such features but, till now, no exhaustive and fully consistent theory has been found. We discuss a model in which cosmological black holes, deriving from primordial perturbations, are the seeds of large-scale-structure voids. We give details of the dynamics and accretion of the system of voids and cosmological black holes from the epoch $z\simeq10^{3}$ till now, finding that a void of $40h^{-1}$ Mpc diameter and under-density of $-0.9$ fits the observations without conflicting with the homogeneity and isotropy of the cosmic microwave background radiation.

  4. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of {\\em probabilistic} OLAP queries that operate on aggregate values that are probability distributions...... and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate...... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)....

  6. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...... the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work...... is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions....

  8. Numerical Study of Void Fraction Distribution Propagation in Gas-Liquid Two-Phase Flow

    Institute of Scientific and Technical Information of China (English)

    YANG Jianhui; LI Qing; LU Wenqiang

    2005-01-01

    A dynamic propagation model was developed for waves in two-phase flows by assuming that continuity waves and dynamic waves interact nonlinearly under certain flow conditions. The drift-flux model, together with the one-dimensional continuity equation for gas-liquid two-phase flows, is solved as an initial-boundary value problem using the characteristic-curve method. The numerical results give the void fraction distribution propagation in a gas-liquid two-phase flow, which shows how the flow pattern transition occurs. The numerical simulations of different flow patterns show that the void fraction distribution propagation is determined by the characteristics of the drift-flux between the liquid and gas flows and by the void fraction range. Flow pattern transitions begin around a void fraction of 0.27 and end around 0.58. Flow pattern transitions do not occur for very high void concentrations.
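
    The simplest version of the continuity-wave picture above can be sketched in a few lines: with constant drift-flux parameters the void-fraction profile is advected at the kinematic wave speed. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Drift-flux kinematic wave with constant distribution parameter C0 and
# drift velocity Vgj: gas flux j_g(alpha) = alpha * (C0 * j + Vgj), so
# the void-fraction wave speed is c = d j_g / d alpha = C0 * j + Vgj.
C0, Vgj, j = 1.2, 0.25, 1.0          # m/s, assumed values
c = C0 * j + Vgj                     # characteristic speed, 1.45 m/s

# Initial void fraction profile: a smooth bump between 0.1 and 0.5
x = np.linspace(0.0, 10.0, 501)
alpha0 = lambda s: 0.1 + 0.4 * np.exp(-((s - 2.0) / 0.5) ** 2)

# Method of characteristics: alpha(x, t) = alpha0(x - c * t)
t = 2.0
alpha_t = alpha0(x - c * t)

# The bump's peak has moved from x = 2.0 to x = 2.0 + c * t = 4.9
print(x[np.argmax(alpha_t)])
```

    A void-fraction-dependent drift velocity would make c depend on alpha and let the profile steepen, which is the nonlinear interaction the paper models.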

  9. Exact probability distribution functions for Parrondo's games

    Science.gov (United States)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete-time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but we have now found this phenomenon in model systems and developed a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
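
    Exact capital distributions for the capital-dependent game can also be generated by direct dynamic programming (the game parameters below are the standard textbook ones, not necessarily those of the paper). Note the parity effect: after an even number of rounds, only even capitals carry probability, which is the two-limiting-distribution oscillation mentioned above.

```python
import numpy as np

eps = 0.005   # small bias; classic capital-dependent Parrondo parameters
N = 100       # number of rounds

def win_prob(capital, game):
    pA = 0.5 - eps
    pB = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
    if game == 'A':
        return pA
    if game == 'B':
        return pB
    return 0.5 * (pA + pB)   # 'M': play A or B with equal probability

def exact_distribution(game):
    # Probability vector over capitals -N..N (array index = capital + N);
    # after N rounds the capital cannot leave this range.
    P = np.zeros(2 * N + 1)
    P[N] = 1.0
    for _ in range(N):
        Q = np.zeros_like(P)
        for i, pr in enumerate(P):
            if pr > 0.0:
                w = win_prob(i - N, game)
                Q[i + 1] += pr * w
                Q[i - 1] += pr * (1.0 - w)
        P = Q
    return P

capitals = np.arange(-N, N + 1)
P_B = exact_distribution('B')
P_M = exact_distribution('M')
mean_B = float(capitals @ P_B)
mean_M = float(capitals @ P_M)
print(mean_B, mean_M)   # B alone drifts down; randomly mixing A and B wins
```

    The Fourier-transform method of the paper yields the same distributions in closed form; this sketch only reproduces them numerically.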

  10. In search of empty places: Voids in the distribution of galaxies

    Science.gov (United States)

    Bucklein, Brian K.

    2010-12-01

    We investigate several techniques to identify voids in the galaxy distribution of matter in the universe. We utilize galaxy number counts as a function of apparent magnitude and Wolf plots to search a two- or three-dimensional data set in a pencil-beam fashion to locate voids within the field of view. The technique is able to distinguish between voids that represent simply a decrease in density and those that show a build-up of galaxies on the front or back side of the void. This method turns out to be usable primarily at relatively short range (out to about 200 Mpc). Beyond this distance, the characteristics indicating a void become increasingly difficult to separate from the statistical background noise. We apply the technique to a very simplified model as well as to the Millennium Run dark matter simulation. We then compare results with those obtained on the Sloan Digital Sky Survey. We also created the Watershed Void Examiner (WaVE), which treats densities in a fashion similar to elevation on a topographical map, and then we allow the "terrain" to flood. The flooded low-lying regions are identified as voids, which are allowed to grow and merge as the level of flooding becomes higher (the overdensity threshold increases). Void statistics can be calculated for each void. We also determine that within the Millennium Run semi-analytic galaxy catalog, the walls that separate the voids are permeable at a scale of 4 Mpc. For each resolution that we tested, there existed a characteristic density at which the walls could be penetrated, allowing a single void to grow to dominate the volume. With WaVE, we are able to get results comparable to those previously published, but often with fewer choices of parameters that could bias the results. We are also able to determine the density at which the number of voids peaks for different resolutions, as well as the expected number of void galaxies. The number of void galaxies is amazingly consistent at an
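
    The flooding idea behind WaVE can be illustrated on a toy density field. This sketch uses simple sub-threshold connected-component labelling, a simplification of a true watershed transform, with a synthetic smooth random field standing in for a galaxy density map:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
# Toy 2-D density "terrain": smoothed random noise
density = ndimage.gaussian_filter(rng.random((128, 128)), sigma=6)

def count_voids(density, threshold):
    # Flood everything below the density threshold and label the
    # resulting connected low-density basins as voids
    flooded = density < threshold
    labels, n_voids = ndimage.label(flooded)
    return n_voids

# As the flooding level rises, voids appear, grow, and merge; the count
# typically rises to a peak and then falls as basins join
levels = np.linspace(density.min(), density.max(), 20)
counts = [count_voids(density, t) for t in levels]
print(counts)
```

    WaVE's actual analysis tracks the merging hierarchy and per-void statistics rather than just the count, but the density-as-elevation picture is the same.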

  11. A variational constitutive model for the distribution and interactions of multi-sized voids

    KAUST Repository

    Liu, Jinxing

    2013-07-29

    The evolution of defects or voids, generally recognized as the basic failure mechanism in most metals and alloys, has been intensively studied. Most investigations have been limited to spatially periodic cases with non-random distributions of the radii of the voids. In this study, we use a new form of the incompressibility of the matrix to propose a formula for the volumetric plastic energy of a void inside a porous medium. As a consequence, we are able to account for the weakening effect of the surrounding voids and to propose a general model for the distribution and interactions of multi-sized voids. We found that the single parameter in classical Gurson-type models, namely the void volume fraction, is not sufficient for the model. The relative growth rates of voids of different sizes, which can in principle be obtained through physical or numerical experiments, are required. To demonstrate the feasibility of the model, we analyze two cases. The first case represents exactly the same assumption hidden in the classical Gurson's model, while the second embodies the competitive mechanism due to void size differences, albeit in a much simpler manner than the general case. Coalescence is implemented by allowing accelerated void growth beyond an empirical critical porosity, in the same way as the Gurson-Tvergaard-Needleman model. The constitutive model presented here is validated through good agreements with experimental data. Its capacity for reproducing realistic failure patterns is shown by simulating a tensile test on a notched round bar. © 2013 The Author(s).

  12. Microstructural characterization of XLPE electrical insulation in power cables: determination of void size distributions using TEM

    Science.gov (United States)

    Markey, L.; Stevens, G. C.

    2003-10-01

    In an effort to progress in our understanding of the ageing mechanisms of high-voltage cables subjected to electrical and thermal stresses, we present a quantitative study of voids, the defects considered to be partly responsible for cable failure. We propose a method based on large data sets of transmission electron microscopy (TEM) observations of replicated samples, allowing for the determination of the void concentration distribution as a function of void size in the mesoscopic to microscopic range at any point in the cable insulation. A theory is also developed to calculate the effect of etching on the apparent size of the voids observed. We present the first results of this sort ever obtained on two industrial cables, one of which was aged in an AC field. Results clearly indicate that a much larger concentration of voids occurs near the inner semiconductor compared to the bulk of the insulation, independently of ageing. An effect of ageing can also be seen near the inner semiconductor, resulting in an increase in the total void internal surface area and a slight shift of the concentration curve towards larger voids, with the peak moving from about 40 nm to about 50 nm.

  13. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to "Sub-committee on St...

  14. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
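CUMBIN itself is written in C; a minimal sketch of the same cumulative binomial calculation, here in Python and applied to a hypothetical k-out-of-n reliability example, is:

```python
from math import comb

def cumulative_binomial(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), by direct summation of the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Reliability of a 2-out-of-3 system with component failure probability 0.1:
# the system works if at most 1 of the 3 components fails.
print(cumulative_binomial(1, 3, 0.1))  # 0.729 + 0.243 = 0.972
```

Direct summation is fine for small n; for large n a program like CUMBIN must take care with overflow and round-off, which is the numerical-analysis aspect the record alludes to.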

  15. Generating pseudo-random discrete probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Maziero, Jonas, E-mail: jonasmaziero@gmail.com [Universidade Federal de Santa Maria (UFSM), RS (Brazil). Departamento de Fisica

    2015-08-15

    The generation of pseudo-random discrete probability distributions is of paramount importance for a wide range of stochastic simulations spanning from Monte Carlo methods to the random sampling of quantum states for investigations in quantum information science. In spite of its significance, a thorough exposition of such a procedure is lacking in the literature. In this article, we present relevant details concerning the numerical implementation and applicability of what we call the iid, normalization, and trigonometric methods for generating an unbiased probability vector p = (p_1, ..., p_d). An immediate application of these results regarding the generation of pseudo-random pure quantum states is also described. (author)
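One standard way to draw an unbiased (uniform-on-the-simplex) probability vector is the sorted-uniform-spacings construction, equivalent to normalizing iid exponential variates; this is a sketch of the general idea, not a reproduction of the article's specific iid, normalization, or trigonometric procedures:

```python
import random

def random_prob_vector(d, rng=None):
    """Uniform sample from the probability simplex via sorted-uniform spacings:
    cut [0, 1] at d-1 uniform points and take the lengths of the d pieces."""
    rng = rng or random.Random()
    cuts = sorted(rng.random() for _ in range(d - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(d)]

p = random_prob_vector(4, random.Random(0))
print(p)  # four non-negative entries summing to 1
```

A naive alternative (drawing d uniforms and dividing by their sum) is simpler but does not sample the simplex uniformly, which is one of the biases such papers warn about.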

  16. Why a steady state void size distribution in irradiated UO2? A modeling approach

    Science.gov (United States)

    Maillard, S.; Martin, G.; Sabathier, C.

    2016-05-01

    In UO2 pellets irradiated in standard water reactors, Xe nano-bubbles nucleate, grow, coarsen and finally reach a quasi-steady-state size distribution: transmission electron microscope (TEM) observations typically report a concentration around 10^-4 nm^-3 and a radius around 0.5 nm. This phenomenon is often considered a consequence of radiation-enhanced diffusion, precipitation of gas atoms and ballistic mixing. However, in UO2 thin foils irradiated with energetic ions at room temperature, a nano-void population whose size distribution reaches a similar steady state can be observed, although almost no foreign atoms are implanted and no significant cation vacancy diffusion is expected under these conditions. Atomistic simulations performed at low temperature only address the first stage of the process, supporting the assumption of heterogeneous void nucleation: 25 keV sub-cascades directly produce defect aggregates (loops and voids) even in the absence of gas atoms and thermal diffusion. In this work a semi-empirical stochastic model is proposed to enlarge the time scale covered by simulation up to damage levels where every point in the material undergoes the superposition of a large number of sub-cascade impacts. To account for the accumulation of these impacts, simple rules inferred from the atomistic simulation results are used. The model satisfactorily reproduces the TEM observations of nano-void size and concentration, which paves the way for the introduction of a more realistic damage term in rate theory models.

  17. Distribution of void fraction for gas-liquid slug flow in an inclined pipe

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to investigate the effect of inclination angle on the spatial distribution of phases, experiments on gas-liquid two-phase slug flow in an inclined pipe were carried out using an optical probe and an EKTAPRO 1000 high-speed motion analyzer. It has been demonstrated that the inclination angle and the mixture velocity are important parameters influencing the distribution of void fraction for upward slug flow in an inclined pipe. At high mixture velocity, the gas phase profile is axially symmetric in the cross-section of the pipe, similar to that for vertical slug flow. In contrast, most of the gas phase is located near the upper pipe wall at low mixture velocity. By measuring the axial variation of void fraction along the liquid slug, it can be concluded that there is a high void fraction wake region with a length of 3-4D in the front of the liquid slug. In the fully developed zone of the liquid slug, the peak value of the void fraction is near the upper wall.

  18. Probability distributions with summary graph structure

    CERN Document Server

    Wermuth, Nanny

    2010-01-01

    A set of independence statements may define the independence structure of interest in a family of joint probability distributions. This structure is often captured by a graph that consists of nodes representing the random variables and of edges that couple node pairs. One important class are multivariate regression chain graphs. They describe the independences of stepwise processes, in which at each step single or joint responses are generated given the relevant explanatory variables in their past. For joint densities that then result after possible marginalising or conditioning, we use summary graphs. These graphs reflect the independence structure implied by the generating process for the reduced set of variables and they preserve the implied independences after additional marginalising and conditioning. They can identify generating dependences which remain unchanged and alert to possibly severe distortions due to direct and indirect confounding. Operators for matrix representations of graphs are used to de...

  19. Neutron Tomography Using Mobile Neutron Generators for Assessment of Void Distributions in Thermal Hydraulic Test Loops

    OpenAIRE

    Andersson, Peter; Bjelkenstedt, Tom; Andersson Sundén, Erik; Sjöstrand, Henrik; Jacobsson, Staffan

    2015-01-01

    Detailed knowledge of the lateral distribution of steam (void) and water in a nuclear fuel assembly is of great value for nuclear reactor operators and fuel manufacturers, with consequences for both reactor safety and economy of operation. Therefore, nuclear-relevant two-phase flows are being studied at dedicated thermal-hydraulic test loops, using two-phase flow systems ranging from simplified geometries such as heated circular pipes to full-scale mock-ups of nuclear fuel assemblies. Neutron t...

  20. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)

  2. Ultrasonic Techniques for Air Void Size Distribution and Property Evaluation in Both Early-Age and Hardened Concrete Samples

    Directory of Open Access Journals (Sweden)

    Shuaicheng Guo

    2017-03-01

    Entrained air voids can improve the freeze-thaw durability of concrete and also affect its mechanical and transport properties. It is therefore important to measure the air void structure and understand its influence on concrete performance for quality control. This paper aims to measure air void structure evolution at both the early-age and hardened stages with the ultrasonic technique, and evaluates its influence on concrete properties. Three samples with different air entrainment agent contents were specially prepared. The air void structure was determined by an optimized inverse analysis that minimizes the error between experimental and theoretical attenuation. The early-age measurements showed that the air void content over the whole size range decreases slightly with curing time. The air void size distribution of hardened samples (at Day 28) was compared with American Society for Testing and Materials (ASTM) C457 test results. The air void size distributions for different amounts of air entrainment agent also compared favorably. In addition, the transport properties, compressive strength, and dynamic modulus of the concrete samples were evaluated. Concrete transport decreased with curing age, in accordance with the air void shrinkage. The correlation of early-age strength development and hardened dynamic modulus with the ultrasonic parameters was also evaluated. The existence of clustered air voids in the interfacial transition zone (ITZ) was found to cause severe compressive strength loss. The results indicate that the developed ultrasonic technique has potential for air void size distribution measurement, and demonstrate the influence of air void structure evolution on concrete properties during both the early-age and hardened stages.

  3. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...

  4. Polarization Mode Dispersion Probability Distribution for Arbitrary Mode Coupling

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The probability distribution of the differential group delay for arbitrary mode coupling is simulated with the Monte Carlo method. By fitting the simulation results, we obtain the probability distribution function for arbitrary mode coupling.
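In the strong-coupling limit, the differential group delay (DGD) is known to follow a Maxwellian distribution; a minimal Monte Carlo sketch (the unit variance and sample count are illustrative assumptions, not the paper's simulation) models the PMD vector as a 3-D Gaussian and takes its magnitude:

```python
import math
import random

# DGD as the magnitude of a 3-D Gaussian PMD vector: for fully random
# (strong) mode coupling this magnitude is Maxwellian-distributed.
rng = random.Random(0)
dgd = [math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3)))
       for _ in range(50_000)]

mean = sum(dgd) / len(dgd)
print(round(mean, 2))  # Maxwellian mean for unit sigma is 2*sqrt(2/pi) ~ 1.596
```

For weak or intermediate coupling the distribution deviates from Maxwellian, which is why simulations over arbitrary coupling strengths, as in the record above, are needed.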

  5. Gas-liquid Phase Distribution and Void Fraction Measurements Using the MRI

    Science.gov (United States)

    Daidzic, N. E.; Schmidt, E.; Hasan, M. M.; Altobelli, S.

    2004-01-01

    We used a permanent-magnet MRI system to estimate the integral and spatially and/or temporally resolved void-fraction distributions and flow patterns in gas-liquid two-phase flows. Air was introduced at the bottom of a stagnant liquid column using an accurate and programmable syringe pump. Air flow rates were varied between 1 and 200 ml/min. The cylindrical non-conducting test tube in which two-phase flow was measured was placed in a 2.67 kGauss MRI with an MRT spectrometer/imager. A roughly linear relationship has been obtained between the integral void fraction, obtained by volume-averaging of the spatially resolved signals, and the air flow rate in the upward direction. The time-averaged spatially resolved void fraction has also been obtained for the quasi-steady flow of air in a stagnant liquid column. No great accuracy is claimed, as this was an exploratory proof-of-concept type of experiment. Preliminary results show that MRI, a non-invasive and non-intrusive experimental technique, can indeed provide a wealth of different qualitative and quantitative data and is especially well suited for averaged transport processes in adiabatic and diabatic multi-phase and/or multi-component flows.

  6. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  7. Some New Approaches to Multivariate Probability Distributions.

    Science.gov (United States)

    1986-12-01

    This report develops new approaches to multivariate probability distributions, including characterizations such as the Marshall-Olkin bivariate distribution and Fréchet's multivariate distribution with continuous marginals, and gives a uniqueness theorem in the bivariate case under certain assumptions.

  8. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  9. Eliciting Subjective Probability Distributions with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2015-01-01

    We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment.

  10. Probability distribution fitting of schedule overruns in construction projects

    OpenAIRE

    P E D Love; C-P Sing; WANG, X; Edwards, D.J.; H Odeyinka

    2013-01-01

    The probability of schedule overruns for construction and engineering projects can be ascertained using a ‘best fit’ probability distribution from an empirical distribution. The statistical characteristics of schedule overruns occurring in 276 Australian construction and engineering projects were analysed. Skewness and kurtosis values revealed that schedule overruns are non-Gaussian. Theoretical probability distributions were then fitted to the schedule overrun data, including the Kolmogorov–
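A minimal sketch of this kind of fit-and-test workflow, using hypothetical lognormal overrun data and a moment-fitted normal model (the paper's actual data and candidate distributions differ):

```python
import math
import random

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and a model CDF."""
    xs = sorted(data)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def normal_cdf(mu, sigma):
    return lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical skewed (lognormal) schedule-overrun sample.
rng = random.Random(1)
overruns = [rng.lognormvariate(0.0, 0.5) for _ in range(200)]

# Fit a normal distribution by moments and measure the misfit.
mu = sum(overruns) / len(overruns)
sigma = math.sqrt(sum((x - mu) ** 2 for x in overruns) / len(overruns))
d = ks_statistic(overruns, normal_cdf(mu, sigma))
print(round(d, 3))  # nonzero KS distance: skewed data fit a normal poorly
```

Repeating the distance calculation for several candidate families and keeping the smallest (with an appropriate significance correction) gives the ‘best fit’ distribution the abstract refers to.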

  11. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2017-01-01

    Subjective beliefs are elicited routinely in economics experiments. However, such elicitation often suffers from two possible disadvantages. First, beliefs are recovered in the form of a summary statistic, usually the mean, of the underlying latent distribution. Second, recovered beliefs are bias...

  12. Acoustic characterization of void distributions across carbon-fiber composite layers

    Science.gov (United States)

    Tayong, Rostand B.; Smith, Robert A.; Pinfield, Valerie J.

    2016-02-01

    Carbon Fiber Reinforced Polymer (CFRP) composites are often used as aircraft structural components, mostly due to their superior mechanical properties. To improve the efficiency of these structures, it is important to detect and characterize any defects occurring during the manufacturing process, avoiding the need to mitigate the risk of defects through increased structural thickness. Such defects include porosity, which is well known to reduce the mechanical performance of composite structures, particularly the inter-laminar shear strength. Previous work by the authors considered the determination of porosity distributions in a fiber-metal laminate structure [1]. This paper investigates the use of wave-propagation modeling to invert the ultrasonic response and characterize the void distribution within the plies of a CFRP structure. Finite Element (FE) simulations are used to simulate the ultrasonic response of a porous composite laminate to a typical transducer signal. This simulated response is then applied as input data to an inversion method to calculate the distribution of porosity across the layers. The inversion method is a multi-dimensional optimization utilizing an analytical model based on a normal-incidence plane-wave recursive method and appropriate mixture rules to estimate the acoustical properties of the structure, including the effects of plies and porosity. The effect of porosity is defined through an effective wavenumber obtained from a scattering-model description. Although a single-scattering approach is applied in this initial study, the limitations of the method in terms of the considered porous layer, percentage porosity and void radius are discussed in relation to single- and multiple-scattering methods. A comparison between the properties of the modeled structure and the void distribution obtained from the inversion is discussed. This work supports the general study of the use of ultrasound methods with inversion to

  13. NONPARAMETRIC ESTIMATION OF CHARACTERISTICS OF PROBABILITY DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-10-01

    The article is devoted to nonparametric point and interval estimation of the characteristics of a probability distribution (the expectation, median, variance, standard deviation, and coefficient of variation) from sample results. Sample values are regarded as realizations of independent and identically distributed random variables with an arbitrary distribution function possessing the required number of moments. Nonparametric analysis procedures are compared with parametric procedures based on the assumption that the sample values have a normal distribution. Point estimators are constructed in the obvious way, using sample analogs of the theoretical characteristics. Interval estimators are based on the asymptotic normality of sample moments and functions of them. Nonparametric asymptotic confidence intervals are obtained with a standard applied-statistics technique for deriving asymptotic relations. The first step applies the multidimensional central limit theorem to sums of vectors whose coordinates are powers of the initial random variables. The second step transforms the limiting multivariate normal vector to obtain the vector of interest to the researcher; linearization is used and infinitesimal quantities are discarded. The third step is a rigorous justification of the results at the standard level of mathematical-statistical reasoning, usually requiring necessary and sufficient conditions for the inheritance of convergence. The article contains 10 numerical examples. The initial data are the operating times of 50 cutting tools to the limit state. Using methods developed under the assumption of a normal distribution can lead to noticeably distorted conclusions when the normality hypothesis fails. The practical recommendation is that for the analysis of real data one should use nonparametric confidence limits
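The CLT-based interval for the expectation can be sketched as follows; the operating-time data here are hypothetical, not the article's 50 cutting-tool records:

```python
import math

def mean_confidence_interval(sample, z=1.96):
    """Asymptotic (CLT-based) nonparametric ~95% CI for the expectation:
    no distributional assumption beyond finite variance."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # unbiased variance
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

# Hypothetical tool operating times (hours) to the limit state.
times = [105, 120, 98, 130, 110, 95, 140, 125, 102, 118]
lo, hi = mean_confidence_interval(times)
print(round(lo, 1), round(hi, 1))  # interval around the sample mean 114.3
```

The same recipe extends to other characteristics (median, variance, coefficient of variation) by replacing the estimator and its asymptotic standard error, which is what the article's three-step technology systematizes.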

  14. Stable Probability Distributions and their Domains of Attraction

    NARCIS (Netherlands)

    J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)

    1997-01-01

    textabstractThe theory of stable probability distributions and their domains of attraction is derived in a direct way (avoiding the usual route via infinitely divisible distributions) using Fourier transforms. Regularly varying functions play an important role in the exposition.

  15. Semi-stable distributions in free probability theory

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Semi-stable distributions, in classical probability theory, are characterized as limiting distributions of subsequences of normalized partial sums of independent and identically distributed random variables. We establish the noncommutative counterpart of semi-stable distributions. We study the characterization of noncommutative semi-stability through free cumulant transform and develop the free semi-stability and domain of semi-stable attraction in free probability theory.

  16. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.

  17. CFD Analysis of a Void Distribution Benchmark of the NUPEC PSBT Tests: Model Calibration and Influence of Turbulence Modelling

    Directory of Open Access Journals (Sweden)

    E. Krepper

    2012-01-01

    The paper presents CFD calculations of the void distribution tests of the PSBT benchmark using ANSYS CFX-12.1. First, relevant aspects of the implemented wall boiling model are reviewed, highlighting the uncertainties in several model parameters. It is then shown that the measured cross-sectionally averaged values can be reproduced well with a single set of calibrated model parameters for different test cases. For the reproduction of patterns of void distribution over the cross-section, attention has to be focussed on the modelling of turbulence in the narrow channel. Only a turbulence model with the capability to resolve turbulent secondary flows is able to reproduce, at least qualitatively, the observed void distribution patterns.

  18. Probability distributions in risk management operations

    CERN Document Server

    Artikis, Constantinos

    2015-01-01

    This book is about the formulations, theoretical investigations, and practical applications of new stochastic models for fundamental concepts and operations of the discipline of risk management. It also examines how these models can be useful in the descriptions, measurements, evaluations, and treatments of risks threatening various modern organizations. Moreover, the book makes clear that such stochastic models constitute very strong analytical tools which substantially facilitate strategic thinking and strategic decision making in many significant areas of risk management. In particular the incorporation of fundamental probabilistic concepts such as the sum, minimum, and maximum of a random number of continuous, positive, independent, and identically distributed random variables in the mathematical structure of stochastic models significantly supports the suitability of these models in the developments, investigations, selections, and implementations of proactive and reactive risk management operations. The...

  19. Research on segregation evaluation methods of asphalt pavement based on air voids distribution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Visual observation has traditionally been used to evaluate the degree of segregation of asphalt pavement, but it is not very reliable. Road-surface texture measurement, a more recent method, can identify gradation segregation but cannot reflect the influence of temperature segregation; and evaluating segregation with an infrared temperature detector must be done during paving, which is inconvenient. In this paper, measuring the air void distribution with a non-nuclear density gauge to evaluate asphalt pavement segregation is introduced. Results show that this method can directly reflect the combined effect of the two types of segregation in an efficient and accurate way. Moreover, a sketch map of the segregation area can help to analyze the cause of segregation visually.

  20. Study on void fraction distribution in the moderator cell of Cold Neutron Source systems in China Advanced Research Reactor

    Science.gov (United States)

    Li, Liangxing; Li, Huixiong; Hu, Jinfeng; Bi, Qincheng; Chen, Tingkuan

    2007-04-01

    A physical model is developed for analyzing and evaluating the void fraction profiles in the moderator cell of the Cold Neutron Source (CNS) of the China Advanced Research Reactor (CARR), now under construction at the China Institute of Atomic Energy (CIAE). The results derived from the model are compared with related experimental data and its validity is verified. The model is then used to explore the influence of various factors, including the diameter of the boiling vapor bubbles, liquid density, liquid viscosity, and the total heating power applied to the moderator cell, on the void fraction profiles in the cell. The calculations indicate that the void fraction in the moderator cell increases linearly with heating power and increases with the liquid viscosity, but decreases as the size of the bubbles increases. For the case where hydrogen is used as the moderator, the calculated void fraction in the moderator cell is less than 30%, the maximum permitted from the nuclear physics point of view. The model and the calculation results help to provide insight into the mechanism that controls the void fraction distribution in the moderator cell, and provide theoretical support for the moderator cell design.

  1. Some explicit expressions for the probability distribution of force magnitude

    Indian Academy of Sciences (India)

    Saralees Nadarajah

    2008-08-01

    Recently, empirical investigations have suggested that the components of contact forces follow the exponential distribution. However, explicit expressions for the probability distribution of the corresponding force magnitude have not been known and only approximations have been used in the literature. In this note, for the first time, I provide explicit expressions for the probability distribution of the force magnitude. Both two-dimensional and three-dimensional cases are considered.
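A Monte Carlo sketch of the two-dimensional case, assuming unit-rate exponential components (the note derives closed-form expressions, which are not reproduced here):

```python
import math
import random

# Components assumed iid exponential with unit rate; the 2-D force
# magnitude is |F| = sqrt(Fx^2 + Fy^2).
rng = random.Random(0)
n = 100_000
mags = [math.hypot(rng.expovariate(1.0), rng.expovariate(1.0))
        for _ in range(n)]

mean_mag = sum(mags) / n
print(round(mean_mag, 2))  # between E[max(Fx, Fy)] = 1.5 and E[Fx + Fy] = 2
```

Such a simulation gives the empirical magnitude distribution against which the explicit expressions of the note (or earlier approximations) can be checked.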

  2. The Void Galaxy Survey

    CERN Document Server

    van de Weygaert, R; Platen, E; Beygu, B; van Gorkom, J H; van der Hulst, J M; Aragon-Calvo, M A; Peebles, P J E; Jarrett, T; Rhee, G; Kovac, K; Yip, C -W

    2011-01-01

    The Void Galaxy Survey (VGS) is a multi-wavelength program to study ~60 void galaxies. Each has been selected from the deepest interior regions of identified voids in the SDSS redshift survey on the basis of a unique geometric technique, with no a priori selection of intrinsic properties of the void galaxies. The project intends to study in detail the gas content, star formation history and stellar content, as well as kinematics and dynamics of void galaxies and their companions in a broad sample of void environments. It involves HI imaging of the gas distribution in each of the VGS galaxies. Amongst its most tantalizing findings is possible evidence for cold gas accretion in some of the most interesting objects, amongst which are a polar ring galaxy and a filamentary configuration of void galaxies. Here we shortly describe the scope of the VGS and the results of the full analysis of the pilot sample of 15 void galaxies.

  3. The estimation of tree posterior probabilities using conditional clade probability distributions.

    Science.gov (United States)

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.

  4. Void Dynamics

    Science.gov (United States)

    Padilla, Nelson D.; Paz, Dante; Lares, Marcelo; Ceccarelli, Laura; Lambas, Diego Garcí A.; Cai, Yan-Chuan; Li, Baojiu

    2016-10-01

    Cosmic voids are becoming key players in testing the physics of our Universe. Here we concentrate on the abundances and the dynamics of voids, as these are among the best candidates to provide information on cosmological parameters. Cai, Padilla & Li (2014) use the abundance of voids to tell apart Hu & Sawicki f(R) models from General Relativity. An interesting result is that even though, as expected, voids in the dark matter field are emptier in f(R) gravity due to the fifth force expelling matter away from the void centres, this result is reversed when haloes are used to find voids. The abundance of voids in this case becomes even lower in f(R) compared to GR for large voids. Still, the differences are significant and this provides a way to tell apart these models. The velocity field differences between f(R) and GR, on the other hand, are the same for halo voids and for dark matter voids. Paz et al. (2013) concentrate on the velocity profiles around voids. First they show the necessity of four parameters to describe the density profiles around voids, given two distinct void populations: voids-in-voids and voids-in-clouds. This profile is used to predict peculiar velocities around voids, and the combination of the latter with void density profiles allows the construction of model void-galaxy cross-correlation functions with redshift-space distortions. When these models are tuned to fit the measured correlation functions for voids and galaxies in the Sloan Digital Sky Survey, small voids are found to be of the void-in-cloud type, whereas larger ones are consistent with being void-in-void. This is a novel result that is obtained directly from redshift-space data around voids. These profiles can be used to remove systematics in void-galaxy Alcock-Paczynski tests coming from redshift-space distortions.

  5. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  6. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, along with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  7. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theoretic methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  8. Application-dependent Probability Distributions for Offshore Wind Speeds

    Science.gov (United States)

    Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.

    2010-12-01

    The higher wind speeds of the offshore environment make it an attractive setting for future wind farms. With sparser field measurements, the theoretical probability distribution of short-term wind speeds becomes more important in estimating values such as average power output and fatigue load. While previous studies typically compare the accuracy of probability distributions using R2, we show that validation based on this metric is not consistent with validation based on engineering parameters of interest, namely turbine power output and extreme wind speed. Thus, in order to make the most accurate estimates possible, the probability distribution that an engineer picks to characterize wind speeds should depend on the design parameter of interest. We introduce the Kappa and Wakeby probability distribution functions to wind speed modeling, and show that these two distributions, along with the Biweibull distribution, fit wind speed samples better than the more widely accepted Weibull and Rayleigh distributions based on R2. Additionally, out of the 14 probability distributions we examine, the Kappa and Wakeby give the most accurate and least biased estimates of turbine power output. The fact that the 2-parameter Lognormal distribution estimates extreme wind speeds (i.e. fits the upper tail of wind speed distributions) with least error indicates that no single distribution performs satisfactorily for all applications. Our use of a large dataset composed of 178 buoys (totaling ~72 million 10-minute wind speed observations) makes these findings highly significant, both in terms of large sample size and broad geographical distribution across various wind regimes. Figure: boxplots of R2 from the fit of each of the 14 distributions to the 178 buoy wind speed samples; distributions are ranked from left to right by ascending median R2, with the Biweibull having the median closest to 1.
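    The kind of distribution comparison described here can be sketched with scipy on synthetic data. The shape, scale, and sample size below are illustrative assumptions, not the study's buoy measurements, and log-likelihood stands in for the paper's goodness-of-fit metrics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic 10-minute mean wind speeds (m/s); shape and scale are
# made-up values, not fitted to the study's buoy data set.
speeds = stats.weibull_min.rvs(c=2.2, scale=8.0, size=20_000, random_state=rng)

# Fit candidate distributions with the location pinned at zero,
# as is conventional for wind speeds.
wb_params = stats.weibull_min.fit(speeds, floc=0)
ray_params = stats.rayleigh.fit(speeds, floc=0)

# Compare via total log-likelihood: higher means a better fit.
ll_weibull = stats.weibull_min.logpdf(speeds, *wb_params).sum()
ll_rayleigh = stats.rayleigh.logpdf(speeds, *ray_params).sum()

print(f"Weibull  log-likelihood: {ll_weibull:.1f}")
print(f"Rayleigh log-likelihood: {ll_rayleigh:.1f}")
```

    Because the Rayleigh distribution is the Weibull with shape fixed at 2, the free-shape Weibull fit can only do at least as well, mirroring the paper's point that more flexible families tend to fit better.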

  9. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Ginestra Bianconi

    2008-06-01

    The structural entropy is the entropy of the ensemble of uncorrelated networks with a given degree sequence. Here we derive the most probable degree distribution emerging when we distribute stubs (or half-edges) randomly through the nodes of the network while keeping the structural entropy fixed. This degree distribution is found to decay as a Poisson distribution when the entropy is maximized, and to have a power-law tail with an exponent γ → 2 when the entropy is minimized.
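    The random stub-placement construction can be simulated directly. A small sketch, with network size and edge count as arbitrary illustrative choices, checking that uniform placement yields the Poisson-like (maximum-entropy) behaviour, with variance close to the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_edges = 10_000, 30_000  # illustrative sizes

# Distribute the 2E stubs (half-edges) uniformly at random over the nodes.
stub_owners = rng.integers(0, n_nodes, size=2 * n_edges)
degrees = np.bincount(stub_owners, minlength=n_nodes)

mean_deg = degrees.mean()  # exactly 2E/N = 6 by construction
var_deg = degrees.var()    # Poisson-like: variance close to the mean

print(f"mean degree: {mean_deg:.2f}, variance: {var_deg:.2f}")
```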

  10. PROBABILITY DISTRIBUTION FUNCTION OF NEAR-WALL TURBULENT VELOCITY FLUCTUATIONS

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    By large eddy simulation (LES), turbulent databases of channel flows at different Reynolds numbers were established. Then, the probability distribution functions of the streamwise and wall-normal velocity fluctuations were obtained and compared with the corresponding normal distributions. By hypothesis testing, the deviation from the normal distribution was analyzed quantitatively. The skewness and flatness factors were also calculated, and the variations of these two factors in the viscous sublayer, buffer layer and log-law layer were discussed. Also illustrated were the relations between the probability distribution functions and the burst events (sweeps of high-speed fluid and ejections of low-speed fluid) in the viscous sublayer, buffer layer and log-law layer. Finally, the variations of the probability distribution functions with Reynolds number were examined.
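    The skewness and flatness factors used here are standard sample statistics; for a Gaussian signal they equal 0 and 3, respectively. A minimal sketch on a synthetic stand-in sample (real use would read velocity fluctuations from the LES database instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for a velocity-fluctuation sample; real LES data would be
# read from the turbulence database instead.
u = rng.normal(0.0, 1.0, size=100_000)

skewness = stats.skew(u)
# The "flatness factor" is the non-excess kurtosis, equal to 3 for a Gaussian.
flatness = stats.kurtosis(u, fisher=False)

print(f"skewness: {skewness:.3f}, flatness: {flatness:.3f}")
```

    Departures of these two factors from (0, 3) in the near-wall layers are what quantifies the non-Gaussianity discussed in the abstract.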

  11. DIVE in the cosmic web: voids with Delaunay triangulation from discrete matter tracer distributions

    Science.gov (United States)

    Zhao, Cheng; Tao, Charling; Liang, Yu; Kitaura, Francisco-Shu; Chuang, Chia-Hsun

    2016-07-01

    We present a novel parameter-free cosmological void finder (DIVE, Delaunay TrIangulation Void findEr) based on Delaunay Triangulation (DT), which efficiently computes the empty spheres constrained by a discrete set of tracers. We define the spheres as DT voids, and describe their properties, including a universal density profile together with an intrinsic scatter. We apply this technique to 100 halo catalogues, each in a box of 2.5 h⁻¹ Gpc per side, with a bias and number density similar to the Baryon Oscillation Spectroscopic Survey CMASS luminous red galaxies, performed with the PATCHY code. Our results show that there are two main species of DT voids, which can be characterized by the radius: they have different responses to halo redshift space distortions, to number density of tracers, and reside in different dark matter environments. Based on dynamical arguments using the tidal field tensor, we demonstrate that large DT voids are hosted in expanding regions, whereas the haloes used to construct them reside in collapsing ones. Our approach is therefore able to efficiently determine the troughs of the density field from galaxy surveys, and can be used to study their clustering. We further study the power spectra of DT voids, and find that the biases of the two populations are different, demonstrating that the small DT voids are essentially tracers of groups of haloes.
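    The geometric core of a DT void finder, computing the circumspheres of a Delaunay tessellation, which are empty of tracers by construction, can be sketched with scipy on a toy point set (a unit cube of random points, not the PATCHY halo catalogues):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(200, 3))  # toy tracer positions

tri = Delaunay(pts)

def circumsphere(tetra):
    """Centre and radius of the sphere through four 3D points."""
    a = tetra[0]
    # Solve 2 (p_i - a) . c = |p_i|^2 - |a|^2 for the centre c.
    A = 2.0 * (tetra[1:] - a)
    b = (tetra[1:] ** 2).sum(axis=1) - (a ** 2).sum()
    centre = np.linalg.solve(A, b)
    return centre, np.linalg.norm(centre - a)

# Each Delaunay tetrahedron defines a candidate "DT void": by the
# empty-sphere property, no tracer lies strictly inside its circumsphere.
centres, radii = zip(*(circumsphere(pts[s]) for s in tri.simplices))
radii = np.array(radii)
print(f"{len(radii)} DT voids, largest radius {radii.max():.3f}")
```

    A production finder would additionally handle periodic boundaries and classify spheres by radius, as the paper does for its two void species.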

  12. Void effects and the determination of ''patches'' for radiation distribution in heterogeneous multilayer shields

    Energy Technology Data Exchange (ETDEWEB)

    Sayedahmed, F.M.; Makarious, A.S.; Kansouh, W.A. (Atomic Energy Authority, Cairo (Egypt). Reactor and Neutron Physics Dept.)

    1989-01-01

    The effect on radiation distribution in heterogeneous multilayer shield configurations containing cylindrical air-filled voids of different diameters has been investigated. The heterogeneous shield assemblies were placed in front of one of the horizontal channels of the ET-RR-1 reactor. The measurements of γ-rays and slow neutrons were carried out using LiF-7 and LiF-6 Teflon disc dosimeters, respectively. It was found that the presence of air-filled voids increases the radiation along and perpendicular to the void axis. An empirical formula has been derived to calculate the radiation distribution in the multilayer shields, and good agreement between the measured and calculated values was obtained. The formulae developed by Chase have been checked experimentally to determine the minimum amount of ''patching'' required on the outside of the voided shields to maintain a uniform emergent radiation distribution on the outer surface of the shielding assembly. The applicability of this formula has been defined and a semi-empirical formula developed to describe the experimental results obtained for the required ''patching''. (author).

  13. Structure in the 3D Galaxy Distribution. II. Voids and Watersheds of Local Maxima and Minima

    Science.gov (United States)

    Way, M. J.; Gazis, P. R.; Scargle, Jeffrey D.

    2015-01-01

    The major uncertainties in studies of the multi-scale structure of the universe arise not from observational errors but from the variety of legitimate definitions and detection methods for individual structures. To facilitate the study of these methodological dependencies, we have carried out 12 different analyses defining structures in various ways. This has been done in a purely geometrical way by utilizing the HOP algorithm as a unique parameter-free method of assigning groups of galaxies to local density maxima or minima. From three density estimation techniques (smoothing kernels, Bayesian blocks, and self-organizing maps) applied to three data sets (the Sloan Digital Sky Survey Data Release 7, the Millennium simulation, and randomly distributed points) we tabulate information that can be used to construct catalogs of structures connected to local density maxima and minima. We also introduce a void finder that utilizes a method to assemble Delaunay tetrahedra into connected structures and characterizes regions empty of galaxies in the source catalog.

  14. STRUCTURE IN THE 3D GALAXY DISTRIBUTION. II. VOIDS AND WATERSHEDS OF LOCAL MAXIMA AND MINIMA

    Energy Technology Data Exchange (ETDEWEB)

    Way, M. J. [Also at NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025, USA. (United States); Gazis, P. R.; Scargle, Jeffrey D., E-mail: Michael.J.Way@nasa.gov, E-mail: PGazis@sbcglobal.net, E-mail: Jeffrey.D.Scargle@nasa.gov [NASA Ames Research Center, Space Science Division, Moffett Field, CA 94035 (United States)

    2015-01-20

    The major uncertainties in studies of the multi-scale structure of the universe arise not from observational errors but from the variety of legitimate definitions and detection methods for individual structures. To facilitate the study of these methodological dependencies, we have carried out 12 different analyses defining structures in various ways. This has been done in a purely geometrical way by utilizing the HOP algorithm as a unique parameter-free method of assigning groups of galaxies to local density maxima or minima. From three density estimation techniques (smoothing kernels, Bayesian blocks, and self-organizing maps) applied to three data sets (the Sloan Digital Sky Survey Data Release 7, the Millennium simulation, and randomly distributed points) we tabulate information that can be used to construct catalogs of structures connected to local density maxima and minima. We also introduce a void finder that utilizes a method to assemble Delaunay tetrahedra into connected structures and characterizes regions empty of galaxies in the source catalog.

  15. Generating Probability Distributions using Multivalued Stochastic Relay Circuits

    CERN Document Server

    Lee, David

    2011-01-01

    The problem of random number generation dates back to von Neumann's work in 1951. Since then, many algorithms have been developed for generating unbiased bits from complex correlated sources as well as for generating arbitrary distributions from unbiased bits. An equally interesting, but less studied aspect is the structural component of random number generation as opposed to the algorithmic aspect. That is, given a network structure imposed by nature or physical devices, how can we build networks that generate arbitrary probability distributions in an optimal way? In this paper, we study the generation of arbitrary probability distributions in multivalued relay circuits, a generalization in which relays can take on any of N states and the logical 'and' and 'or' are replaced with 'min' and 'max' respectively. Previous work was done on two-state relays. We generalize these results, describing a duality property and networks that generate arbitrary rational probability distributions. We prove that these network...
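    The min/max composition described here can be illustrated by exact enumeration for a toy three-state relay. The uniform state distribution is an illustrative assumption, not one of the paper's optimal constructions:

```python
from itertools import product
from fractions import Fraction

# A stochastic three-state relay, uniform over states {0, 1, 2}.
# (Illustrative; the paper treats general N-state relays.)
states = [0, 1, 2]
p = Fraction(1, 3)

def dist(op):
    """Exact output distribution of two independent relays combined by op."""
    out = {}
    for a, b in product(states, repeat=2):
        out[op(a, b)] = out.get(op(a, b), Fraction(0)) + p * p
    return out

# Serial composition generalises logical 'and' to min;
# parallel composition generalises 'or' to max.
serial = dist(min)
parallel = dist(max)

print("serial  (min):", serial)
print("parallel (max):", parallel)
```

    Enumerating with `Fraction` keeps the probabilities exact, which matches the paper's focus on generating rational distributions.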

  16. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the benchmark team, PSU in collaboration with US NRC has performed supporting calculations using the PSU in-house advanced thermal-hydraulic subchannel code CTF and the US NRC system code TRACE. CTF is a version of COBRA-TF whose models have been continuously improved and validated by the RDFMG group at PSU. TRACE is a reactor systems code developed by US NRC to analyze transient and steady-state thermal-hydraulic behavior in LWRs and it has been designed to perform best-estimate analyses of LOCA, operational transients, and other accident scenarios in PWRs and BWRs. The paper presents CTF and TRACE models for the PSBT void distribution exercises. Code-to-code and code-to-data comparisons are provided along with a discussion of the void generation and void distribution models available in the two codes.

  17. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
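    A truncated exponential is available directly in scipy as `truncexpon`. A minimal sketch of sampling slip under a hard physical ceiling; the scale and maximum-slip values below are made-up illustrations, not the SRCMOD fits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Illustrative parameters: an exponential scale and a hard upper
# truncation representing the maximum physically possible slip.
scale, s_max = 1.5, 6.0          # metres; made-up values
b = s_max / scale                # scipy's shape parameter for truncexpon

slip = stats.truncexpon.rvs(b, scale=scale, size=50_000, random_state=rng)

# All samples respect the physical truncation limit.
print(f"max sampled slip: {slip.max():.3f} m (limit {s_max} m)")
print(f"mean slip: {slip.mean():.3f} m")
```

    The truncation pulls the mean slightly below the untruncated exponential mean (the `scale`), reflecting the physical ceiling on slip that the abstract emphasizes.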

  18. Neutron Tomography Using Mobile Neutron Generators for Assessment of Void Distributions in Thermal Hydraulic Test Loops

    Science.gov (United States)

    Andersson, P.; Bjelkenstedt, T.; Sundén, E. Andersson; Sjöstrand, H.; Jacobsson-Svärd, S.

    Detailed knowledge of the lateral distribution of steam (void) and water in a nuclear fuel assembly is of great value for nuclear reactor operators and fuel manufacturers, with consequences for both reactor safety and economy of operation. Therefore, nuclear-relevant two-phase flows are being studied at dedicated thermal-hydraulic test loops, using two-phase flow systems ranging from simplified geometries such as heated circular pipes to full-scale mock-ups of nuclear fuel assemblies. Neutron tomography (NT) has been suggested for assessment of the lateral distribution of steam and water in such test loops, motivated by the good ability of neutrons to penetrate the metallic structures of metal pipes and nuclear fuel rod mock-ups, as compared to e.g. conventional X-rays, while the liquid water simultaneously gives comparatively good contrast. However, these stationary test loops require the measurement setup to be mobile, which is often not the case for NT setups. Here, it is acknowledged that fast neutrons of 14 MeV from mobile neutron generators constitute a viable option for a mobile NT system. We present details of the development of neutron tomography for this purpose at the division of Applied Nuclear Physics at Uppsala University. Our concept contains a portable neutron generator, exploiting the fusion reaction of deuterium and tritium, and a detector with plastic scintillator elements designed to achieve adequate spatial and energy resolution, all mounted in a light-weight frame without collimators or bulky moderation to allow for a mobile instrument that can be moved about the stationary thermal-hydraulic test sections. The detector system stores event-by-event pulse-height information to allow for discrimination based on the energy deposition in the scintillator elements.

  19. NORMALLY DISTRIBUTED PROBABILITY MEASURE ON THE METRIC SPACE OF NORMS

    Institute of Scientific and Technical Information of China (English)

    Á.G. HORVÁTH

    2013-01-01

    In this paper we propose a method to construct probability measures on the space of convex bodies. For this purpose, first, we introduce the notion of thinness of a body. Then we show the existence of a measure with the property that its pushforward by the thinness function is a probability measure of truncated normal distribution. Finally, we improve this method to find a measure satisfying some important properties in geometric measure theory.

  20. Probability distributions for Poisson processes with pile-up

    CERN Document Server

    Sevilla, Diego J R

    2013-01-01

    In this paper, two parametric probability distributions capable of describing the statistics of X-ray photon detection by a CCD are presented. They are formulated from simple models that account for the pile-up phenomenon, in which two or more photons are counted as one. These models are based on the Poisson process, but they have an extra parameter which includes all the detailed mechanisms of the pile-up process and must be fitted to the data statistics simultaneously with the rate parameter. The new probability distributions, one for the number of counts per time bin (Poisson-like) and the other for waiting times (exponential-like), are tested by fitting them to statistics of real data and against each other through numerical simulations, and their results are analyzed and compared. The probability distributions presented here can be used as background statistical models to derive likelihood functions for statistical methods in signal analysis.
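    The pile-up effect is easy to reproduce with a toy simulation. The crude model below (all photons arriving in one read-out frame counted as a single event) is the simplest illustration of the phenomenon, not the paper's parametric distributions, and the rate is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(11)

rate = 0.8          # mean photons per frame (illustrative)
n_frames = 200_000

# True photon arrivals per CCD frame are Poisson distributed.
true_counts = rng.poisson(rate, size=n_frames)

# Crude pile-up model: all photons landing in the same frame are
# read out as a single count.
detected = np.minimum(true_counts, 1)

print(f"true rate    : {true_counts.mean():.3f}")
print(f"detected rate: {detected.mean():.3f}")  # close to 1 - exp(-rate)
```

    The detected rate saturates at 1 - exp(-rate), which is why the counting statistics deviate from pure Poisson and need a pile-up-aware likelihood, as the paper argues.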

  1. Probability distribution functions in the finite density lattice QCD

    CERN Document Server

    Ejiri, S; Aoki, S; Kanaya, K; Saito, H; Hatsuda, T; Ohno, H; Umeda, T

    2012-01-01

    We study the phase structure of QCD at high temperature and density by lattice QCD simulations adopting a histogram method. We try to solve the problems which arise in the numerical study of the finite density QCD, focusing on the probability distribution function (histogram). As a first step, we investigate the quark mass dependence and the chemical potential dependence of the probability distribution function as a function of the Polyakov loop when all quark masses are sufficiently large, and study the properties of the distribution function. The effect from the complex phase of the quark determinant is estimated explicitly. The shape of the distribution function changes with the quark mass and the chemical potential. Through the shape of the distribution, the critical surface which separates the first order transition and crossover regions in the heavy quark region is determined for the 2+1-flavor case.

  2. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
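    Two of the fitting techniques named here, method of moments and maximum likelihood, can be contrasted in a few lines. A sketch on synthetic gamma-distributed data (the true parameters are arbitrary, not Yucca Mountain values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Synthetic input-parameter data; the true values are arbitrary.
data = stats.gamma.rvs(a=2.0, scale=3.0, size=10_000, random_state=rng)

# Method of moments for a gamma distribution:
#   shape = mean^2 / variance,  scale = variance / mean
m, v = data.mean(), data.var()
shape_mom, scale_mom = m * m / v, v / m

# Maximum likelihood via scipy (location fixed at zero).
shape_mle, _, scale_mle = stats.gamma.fit(data, floc=0)

print(f"moments: shape={shape_mom:.2f}, scale={scale_mom:.2f}")
print(f"MLE    : shape={shape_mle:.2f}, scale={scale_mle:.2f}")
```

    With ample data both estimators recover the generating parameters; with sparse data their divergence is one motivation for the subjective and Bayesian approaches the report also covers.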

  3. Probability distribution of arrival times in quantum mechanics

    CERN Document Server

    Delgado, V

    1998-01-01

    In a previous paper [Phys. Rev. A, in press] we introduced a self-adjoint operator $\\hat {{\\cal T}}(X)$ whose eigenstates can be used to define consistently a probability distribution of the time of arrival at a given spatial point. In the present work we show that the probability distribution previously proposed can be well understood on classical grounds in the sense that it is given by the expectation value of a certain positive definite operator $\\hat J^{(+)}(X)$ which is nothing but a straightforward quantum version of the modulus of the classical current. For quantum states highly localized in momentum space about a certain momentum $p_0 \

  4. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

    Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as location and scale of the mode and the profile of distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log normal, log logistic and Birnbaum-Saunders distributions.
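    A model comparison of the kind described can be sketched with scipy using the Akaike information criterion (AIC). The lognormal synthetic data below is a stand-in for right-skewed diameter measurements, not the study's electron-microscopy data, and the candidate set is a subset of the distributions it examined:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Stand-in for measured axon diameters (µm): non-negative and
# right-skewed. Synthetic; the study fitted electron-microscopy data.
diam = stats.lognorm.rvs(s=0.6, scale=1.0, size=5_000, random_state=rng)

def aic(dist, data):
    """Akaike information criterion for an MLE fit of `dist` (lower is better)."""
    params = dist.fit(data)
    loglike = dist.logpdf(data, *params).sum()
    return 2 * len(params) - 2 * loglike

scores = {name: aic(getattr(stats, name), diam)
          for name in ("gamma", "lognorm", "genextreme")}

for name in sorted(scores, key=scores.get):
    print(f"{name:12s} AIC = {scores[name]:.1f}")
```

    On real diameter data the ranking is an empirical question; here the lognormal wins by construction, illustrating how the study's comparisons across families can be run.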

  5. DIVE in the cosmic web: voids with Delaunay Triangulation from discrete matter tracer distributions

    CERN Document Server

    Zhao, Cheng; Liang, Yu; Kitaura, Francisco-Shu; Chuang, Chia-Hsun

    2015-01-01

    We present a novel parameter-free cosmological void finder (\textsc{dive}, Delaunay TrIangulation Void findEr) based on Delaunay Triangulation (DT), which efficiently computes the empty spheres constrained by a discrete set of tracers. We define the spheres as DT voids, and describe their properties, including a universal density profile together with an intrinsic scatter. We apply this technique on 100 halo catalogues, each in a box of 2.5\,$h^{-1}$Gpc per side, with a bias and number density similar to the BOSS CMASS Luminous Red Galaxies, performed with the \textsc{patchy} code. Our results show that there are two main species of DT voids, which can be characterised by the radius: they have different responses to halo redshift space distortions, to number density of tracers, and reside in different dark matter environments. Based on dynamical arguments using the tidal field tensor, we demonstrate that large DT voids are hosted in expanding regions, whereas the haloes used to construct them reside in collap...

  6. Probability distributions of the electroencephalogram envelope of preterm infants.

    Science.gov (United States)

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. The Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistic of the mode showed a significant linear relationship with PCA and was therefore considered a useful index for PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating the stationary nature of developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
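    The envelope-extraction step (Hilbert transform) can be sketched on a toy amplitude-modulated signal; the sampling rate, carrier, and modulation below are illustrative choices, not the preterm EEG recordings:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(21)
fs = 250.0                      # Hz, a plausible EEG sampling rate
t = np.arange(0, 20, 1 / fs)

# Toy amplitude-modulated oscillation standing in for an EEG trace;
# real data would be a band-passed recording.
amplitude = 1.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)
signal = amplitude * np.sin(2 * np.pi * 10.0 * t)

# The envelope is the magnitude of the analytic signal.
envelope = np.abs(hilbert(signal))

err = np.abs(envelope - amplitude).mean()
print(f"mean envelope error: {err:.2e}")
```

    On real recordings, a lognormal or gamma distribution would then be fitted to `envelope`, which is the step the study's PCA-dependent results are based on.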

  7. Augmenting momentum resolution with well tuned probability distributions

    CERN Document Server

    Landi, Gregorio

    2016-01-01

    The realistic probability distributions of a previous article are applied to the reconstruction of tracks in a constant magnetic field. The complete forms and their schematic approximations produce excellent momentum estimations, drastically better than standard fits. A simplified derivation of one of our probability distributions is illustrated. The momentum reconstructions are compared with standard (least-squares) fits using two different position algorithms: the eta-algorithm and the two-strip center of gravity. The quality of our results is expressed as the increase in magnetic field and signal-to-noise ratio required for the standard fit reconstructions to overlap with our best distributions. The data and the simulations are tuned on the tracker of a running experiment and its double-sided microstrip detectors, where each detector side is simulated to measure the magnetic bending. To overlap with our best distributions, the magnetic field must be increased by a factor 1.5 for the least squares based on the eta-a...

  8. Computer routines for probability distributions, random numbers, and related functions

    Science.gov (United States)

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  9. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
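    The L-moment machinery on which the parameter estimation above relies starts from sample L-moments computed from order statistics. A sketch of Hosking's unbiased estimators of the first two L-moments, checked on synthetic uniform draws (not the Malaysian returns data):

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments (unbiased estimators, Hosking 1990)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    # b1 = (1/n) * sum over j of ((j-1)/(n-1)) * x_(j), with j = 1..n
    b1 = (np.arange(n) * x).sum() / (n * (n - 1))
    l1 = b0             # L-mean
    l2 = 2 * b1 - b0    # L-scale
    return l1, l2

rng = np.random.default_rng(0)
u = rng.random(100_000)          # Uniform(0, 1) sample
l1, l2 = sample_l_moments(u)
print(l1, l2)   # theoretical values for Uniform(0, 1): 1/2 and 1/6
```

    Ratios of higher L-moments (L-skewness, L-kurtosis) are what get plotted on the L-moment diagram mentioned above.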

  10. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and mul...

  11. Probability Measure of Navigation Pattern Prediction using Poisson Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Dr.V.Valli Mayil

    2012-06-01

    Full Text Available The World Wide Web has become one of the most important media to store, share and distribute information. The rapid expansion of the web has provided a great opportunity to study user and system behavior by exploring web access logs. Web Usage Mining is the application of data mining techniques to large web data repositories in order to extract usage patterns. Every web server keeps a log of all transactions between the server and the clients. The log data collected by web servers contain information about every click a user makes on the web documents of the site. This log information needs to be analyzed and interpreted in order to obtain knowledge about actual user preferences in accessing web pages. In recent years several methods have been proposed for mining web log data. This paper addresses the statistical method of Poisson distribution analysis to find the higher-probability session sequences, which are then used to test web application performance. The analysis of large volumes of clickstream data demands the employment of data mining methods. Conducting data mining on web server logs involves the determination of frequently occurring access sequences. A Poisson distribution gives the frequency probability of specific events when the average probability of a single occurrence is known. The Poisson distribution is a discrete function which is used in this paper to find the probability that a particular page is visited by the user a given number of times.
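    The page-visit probability described above is the standard Poisson pmf. A minimal illustration (the mean visit rate of 3 per session is a made-up figure):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(K = k) for a Poisson(lam) random variable."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# If a page is visited on average lam = 3 times per session, the
# probability of observing exactly 2 visits in a session:
p = poisson_pmf(2, 3.0)
print(round(p, 4))   # 0.224
```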

  12. On Probability Distributions for Trees: Representations, Inference and Learning

    CERN Document Server

    Denis, François; Gilleron, Rémi; Tommasi, Marc; Gilbert, Édouard

    2008-01-01

    We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series, which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient to model probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows one to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can be easily extended to deal with unranked trees (like XML trees where a symbol may have an unbounded num...

  13. Probability distributions of continuous measurement results for conditioned quantum evolution

    Science.gov (United States)

    Franquet, A.; Nazarov, Yuli V.

    2017-02-01

    We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, demonstrating anomalously large average outputs and a sudden jump in the time-integrated output. We present and discuss the numerical evaluation of the probability distribution, aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.

  14. Convolutions Induced Discrete Probability Distributions and a New Fibonacci Constant

    CERN Document Server

    Rajan, Arulalan; Rao, Vittal; Rao, Ashok

    2010-01-01

    This paper proposes another constant that can be associated with the Fibonacci sequence. In this work, we look at the probability distributions generated by the linear convolution of the Fibonacci sequence with itself, and by the linear convolution of the symmetrized Fibonacci sequence with itself. We observe that for a distribution generated by the linear convolution of the standard Fibonacci sequence with itself, the variance converges to 8.4721359... . Also, for a distribution generated by the linear convolution of symmetrized Fibonacci sequences, the variance converges in an average sense to 17.1942..., which is approximately twice the value obtained with the standard Fibonacci sequence.
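    The reported limit can be checked numerically: convolve a sufficiently long Fibonacci prefix with itself, normalize the result into a discrete distribution, and compute its variance (a quick sketch, not the authors' derivation):

```python
import numpy as np

def fib(n):
    """First n Fibonacci numbers, starting 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return np.array(seq[:n], dtype=float)

f = fib(60)
c = np.convolve(f, f)        # linear self-convolution
p = c / c.sum()              # normalize into a discrete distribution
k = np.arange(p.size)
mean = (k * p).sum()
var = ((k - mean) ** 2 * p).sum()
print(var)                   # approaches ~8.4721 for long sequences
```

    Since the normalized convolution is the distribution of the sum of two independent copies of the (asymptotically geometric, ratio 1/φ) Fibonacci-weighted index, the limit 8.4721359... is twice the single-copy variance 2 + √5.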

  15. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness...... infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time....... This paper proposes to use the exponential distribution, only taking non-negative delays into account, but any probability distribution can be used. Furthermore, the paper proposes that the calculation parameters are estimated from existing delay data, at a station, to achieve a higher precision. As delay...
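    For the exponential case proposed above, the expected delay passed to the following train, given a buffer time b and a leading-train delay D ~ Exp(λ), has the closed form E[max(D − b, 0)] = e^{−λb}/λ. The sketch below checks this against numerical integration; λ and b are illustrative values, not estimates from real delay data:

```python
import math
from scipy.integrate import quad

lam = 1.0 / 3.0   # rate: mean arrival delay of 3 minutes (assumed)
b = 2.0           # buffer time between the two trains, minutes (assumed)

# Expected knock-on delay E[max(D - b, 0)] for D ~ Exp(lam):
closed_form = math.exp(-lam * b) / lam

# Same quantity as a direct integral of (x - b) * lam * exp(-lam x) over x > b:
numeric, _ = quad(lambda x: (x - b) * lam * math.exp(-lam * x), b, math.inf)
print(closed_form, numeric)
```

    The exponential decay of this expression in b is what makes the buffer time such an effective robustness knob.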

  16. Probability Distribution Function of Passive Scalars in Shell Models

    Institute of Scientific and Technical Information of China (English)

    LIU Chun-Ping; ZHANG Xiao-Qiang; LIU Yu-Rong; WANG Guang-Rui; HE Da-Ren; CHEN Shi-Gang; ZHU Lu-Jin

    2008-01-01

    A shell-model version of the passive scalar problem is introduced, inspired by the model of K. Ohkitani and M. Yakhot [K. Ohkitani and M. Yakhot, Phys. Rev. Lett. 60 (1988) 983; K. Ohkitani and M. Yakhot, Prog. Theor. Phys. 81 (1988) 329]. As in the original problem, the prescribed random velocity field is Gaussian and δ-correlated in time. The deterministic differential equations are regarded as a nonlinear Langevin equation. The Fokker-Planck equations for the PDF of the passive scalars are then obtained and solved numerically. In the energy input range (n < 5, where n is the shell number), the probability distribution function (PDF) of the passive scalars is close to the Gaussian distribution. In the inertial range (5 < n < 16) and the dissipation range (n ≥ 17), the PDF of the passive scalars shows obvious intermittency, and the scaling exponents of the passive scalar are anomalous. The results of the numerical simulations are compared with experimental measurements.

  17. Distribution probability of large-scale landslides in central Nepal

    Science.gov (United States)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have addressed large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area: there, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability values could explain >65% of the existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.

  18. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
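    The moment-matching step can be sketched as a linear solve: the polynomial coefficients satisfy a system whose matrix holds the monomial moments of the interval. With the first three raw moments of a Beta(2, 2) distribution, the quadratic density 6x(1 − x) is recovered exactly (a minimal sketch, not the paper's full algorithm):

```python
import numpy as np

def polynomial_pdf_coeffs(moments, a=0.0, b=1.0):
    """Coefficients c_j of p(x) = sum_j c_j x**j on [a, b] matching the
    given raw moments m_k = int_a^b x**k p(x) dx, k = 0..N."""
    n = len(moments)
    # A[k, j] = int_a^b x**(k + j) dx
    A = np.array([[(b ** (k + j + 1) - a ** (k + j + 1)) / (k + j + 1)
                   for j in range(n)] for k in range(n)])
    return np.linalg.solve(A, np.array(moments))

# Raw moments of Beta(2, 2): m_k = 6 / ((k + 2) * (k + 3))
moments = [6.0 / ((k + 2) * (k + 3)) for k in range(3)]
coeffs = polynomial_pdf_coeffs(moments)
print(coeffs)   # ~[0, 6, -6]  ->  recovers p(x) = 6 x (1 - x)
```

    For higher degrees the moment matrix becomes ill-conditioned (it is Hilbert-like), which is one reason the paper wraps the procedure in an algorithmic framework rather than a bare solve.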

  19. Unitary equilibrations: probability distribution of the Loschmidt echo

    CERN Document Server

    Venuti, Lorenzo Campos

    2009-01-01

    Closed quantum systems evolve unitarily and therefore cannot converge in a strong sense to an equilibrium state starting out from a generic pure state. Nevertheless for large system size one observes temporal typicality. Namely, for the overwhelming majority of the time instants, the statistics of observables is practically indistinguishable from an effective equilibrium one. In this paper we consider the Loschmidt echo (LE) to study this sort of unitary equilibration after a quench. We draw several conclusions on general grounds and on the basis of an exactly-solvable example of a quasi-free system. In particular we focus on the whole probability distribution of observing a given value of the LE after waiting a long time. Depending on the interplay between the initial state and the quench Hamiltonian, we find different regimes reflecting different equilibration dynamics. When the perturbation is small and the system is away from criticality the probability distribution is Gaussian. However close to criticali...

  20. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.

  1. Fibonacci Sequence, Recurrence Relations, Discrete Probability Distributions and Linear Convolution

    CERN Document Server

    Rajan, Arulalan; Rao, Ashok; Jamadagni, H S

    2012-01-01

    The classical Fibonacci sequence is known to exhibit many fascinating properties. In this paper, we explore the Fibonacci sequence and integer sequences generated by second order linear recurrence relations with positive integer coefficients from the point of view of the probability distributions that they induce. We obtain generalizations of some of the known limiting properties of these probability distributions and present certain optimal properties of the classical Fibonacci sequence in this context. In addition, we also look at the self linear convolution of linear recurrence relations with positive integer coefficients. The analysis of the self linear convolution is focused on locating the maximum of the resulting sequence. This analysis also highlights the influence that the largest positive real root of the "characteristic equation" of the linear recurrence relations with positive integer coefficients has on the location of the maximum. In particular, when the largest positive real root is 2, the locatio...

  2. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.

  3. Cosmological constraints from the convergence 1-point probability distribution

    OpenAIRE

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric

    2016-01-01

    We examine the cosmological information available from the 1-point probability distribution (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\\Omega_m$-$\\sigma_8$ plane from the convergence PDF with $188\\ arcmin^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that...

  4. Testing for the maximum cell probabilities in multinomial distributions

    Institute of Scientific and Technical Information of China (English)

    XIONG Shifeng; LI Guoying

    2005-01-01

    This paper investigates one-sided hypothesis testing for p_(1), the largest cell probability of a multinomial distribution. A small-sample test of Ethier (1982) is extended to the general case. Based on an estimator of p_(1), a class of large-sample tests is proposed. The asymptotic power of these tests under local alternatives is derived. An example is presented at the end of the paper.

  5. Estimating probable flaw distributions in PWR steam generator tubes

    Energy Technology Data Exchange (ETDEWEB)

    Gorman, J.A.; Turner, A.P.L. [Dominion Engineering, Inc., McLean, VA (United States)

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  6. Steady-state distributions of probability fluxes on complex networks

    Science.gov (United States)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. An additional transition, called hereafter a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.

  7. The Probability Distribution Model of Wind Speed over East Malaysia

    Directory of Open Access Journals (Sweden)

    Nurulkamal Masseran

    2013-07-01

    Full Text Available Many studies have found that wind speed is the most significant parameter of wind power. Thus, an accurate determination of the probability distribution of wind speed is an important step before estimating the wind energy potential over a particular region. Utilizing an accurate distribution will minimize the uncertainty in wind resource estimates and improve the site assessment phase of planning. In general, different regions have different wind regimes. Hence, it is reasonable that different wind distributions will be found for different regions. Because it is reasonable to consider that wind regimes vary according to the region of a particular country, nine different statistical distributions have been fitted to the mean hourly wind speed data from 20 wind stations in East Malaysia, for the period from 2000 to 2009. The values of the Kolmogorov-Smirnov statistic, Akaike's Information Criterion, the Bayesian Information Criterion and the R2 correlation coefficient were compared across the distributions to determine the best fit for describing the observed data. A good fit for most of the stations in East Malaysia was found using the Gamma and Burr distributions, though no clear pattern was observed across all regions of East Malaysia. However, the Gamma distribution was a clear fit to the data from all stations in southern Sabah.
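    The fit-and-rank step can be sketched with `scipy.stats`: fit several candidate families and compare them by AIC. The wind-speed sample below is synthetic (drawn from a gamma distribution), so both the data and the reduced candidate list are stand-ins for the study's nine families and station records:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic hourly wind speeds in m/s (illustrative, not station data).
speeds = rng.gamma(shape=2.0, scale=2.5, size=5000)

candidates = {"gamma": stats.gamma,
              "weibull": stats.weibull_min,
              "lognorm": stats.lognorm}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(speeds, floc=0)          # pin the location at 0 for speeds
    loglik = dist.logpdf(speeds, *params).sum()
    aic[name] = 2 * len(params) - 2 * loglik   # Akaike's Information Criterion

print(min(aic, key=aic.get))   # the best-fitting family by AIC
```

    The Kolmogorov-Smirnov statistic and BIC used in the study can be added to the same loop; the ranking logic is unchanged.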

  8. Research on probability distribution of port cargo throughput

    Institute of Scientific and Technical Information of China (English)

    SUN Liang; TAN De-rong

    2008-01-01

    In order to more accurately examine developing trends in gross cargo throughput, we have modeled the probability distribution of cargo throughput. Gross cargo throughput is determined by the time spent by cargo ships in the port and the operating efficiency of the handling equipment. It is the sum of the compound variables determining the throughput of every cargo ship arriving at the port. The probability distribution was determined using Wald's equation. The results show that the variability of gross cargo throughput primarily depends on the different times required by different cargo ships arriving at the port. This model overcomes a shortcoming of previous models: their inability to accurately determine the probability of a specific value of future gross cargo throughput. Our proposed model of cargo throughput depends on the relationship between the time required by a cargo ship arriving at the port and the operational capacity of the handling equipment at the port. At the same time, key factors affecting gross cargo throughput are analyzed. In order to test the efficiency of the model, the cargo volume of a port in Shandong Province was used as an example. In the case study the actual results matched our theoretical analysis.
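    Wald's equation states that for a random sum S = X_1 + … + X_N with N independent of the X_i, E[S] = E[N]·E[X]. A quick simulation of ship arrivals illustrates it; the arrival rate and tonnage figures are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# N ships arrive per period (Poisson); each unloads an Exp-distributed tonnage.
mean_ships = 12.0       # E[N], ships per period (assumed)
mean_tonnage = 800.0    # E[X], tonnes per ship (assumed)

periods = 50_000
totals = np.array([rng.exponential(mean_tonnage, rng.poisson(mean_ships)).sum()
                   for _ in range(periods)])

# Wald's equation: E[S] = E[N] * E[X]
print(totals.mean(), mean_ships * mean_tonnage)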

  9. Methods for fitting a parametric probability distribution to most probable number data.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2012-07-01

    Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
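    The maximum-likelihood core of the MPN technique can be sketched for a single dilution series: assuming each tube of volume v tests positive with probability 1 − e^{−λv}, the MLE of the concentration λ is the root of the score equation, found here by bracketing. The 5-tube, three-dilution design below is the classic textbook layout, not the study's data:

```python
import math
from scipy.optimize import brentq

def mpn_mle(volumes, n_tubes, n_positive):
    """Maximum-likelihood concentration (organisms/mL) from a dilution series:
    at dilution i, n_tubes[i] tubes of volumes[i] mL, n_positive[i] positive."""
    def score(lam):
        # Derivative of the log-likelihood with respect to lam.
        s = 0.0
        for v, n, k in zip(volumes, n_tubes, n_positive):
            p = 1.0 - math.exp(-lam * v)
            s += k * v * math.exp(-lam * v) / p - (n - k) * v
        return s
    return brentq(score, 1e-9, 1e6)

# Classic design: 5 tubes each of 10, 1 and 0.1 mL; 5, 3 and 0 positives.
est = mpn_mle([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0])
print(est)   # MPN concentration estimate, organisms/mL
```

    This per-sample estimate is the "real-valued estimate of the average concentration" that the paper's two fitting methods then treat either as data (method 1) or as a latent quantity behind the tube counts (method 2).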

  10. Modeling cosmic void statistics

    Science.gov (United States)

    Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.

    2016-10-01

    Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density- and velocity-profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM n-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.
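    The universal profile referred to above is the HSW form ρ(r)/ρ̄ = 1 + δ_c (1 − (r/r_s)^α) / (1 + (r/r_v)^β) from Hamaus, Sutter & Wandelt (2014). A minimal evaluation; the parameter values below are illustrative, not fits from the paper:

```python
import numpy as np

def hsw_profile(r, r_v, r_s, delta_c, alpha, beta):
    """Universal void density profile of Hamaus, Sutter & Wandelt (2014):
    rho(r)/rho_mean = 1 + delta_c * (1 - (r/r_s)**alpha) / (1 + (r/r_v)**beta)."""
    return 1.0 + delta_c * (1.0 - (r / r_s) ** alpha) / (1.0 + (r / r_v) ** beta)

# Illustrative parameters: central underdensity delta_c, scale radius r_s,
# void radius r_v, and the two shape exponents.
r = np.linspace(0.0, 3.0, 301)     # radius in units of the void radius
rho = hsw_profile(r, r_v=1.0, r_s=0.9, delta_c=-0.9, alpha=2.0, beta=8.0)
print(rho[0], rho[-1])   # deep underdensity at the centre; -> 1 far away
```

    The overdense compensation ridge near r ≈ r_v and the asymptotic return to the mean density are the features that make the profile "universal" across void sizes.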

  11. Phase diagram of epidemic spreading - unimodal vs. bimodal probability distributions

    CERN Document Server

    Lancic, Alen; Sikic, Mile; Stefancic, Hrvoje

    2009-01-01

    Disease spreading on complex networks is studied within the SIR model. Simulations on empirical complex networks reveal two specific regimes of disease spreading: local containment and epidemic outbreak. The variables measuring the extent of disease spreading are in general characterized by a bimodal probability distribution. Phase diagrams of disease spreading for empirical complex networks are introduced. A theoretical model of disease spreading on an m-ary tree is investigated both analytically and in simulations. It is shown that the model reproduces the qualitative features of the phase diagrams of disease spreading observed in empirical complex networks. The role of the tree-like structure of complex networks in disease spreading is discussed.

  12. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard...... rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...

  13. Net-proton probability distribution in heavy ion collisions

    CERN Document Server

    Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V

    2011-01-01

    We compute net-proton probability distributions in heavy ion collisions within the hadron resonance gas model. The model results are compared with data taken by the STAR Collaboration in Au-Au collisions at sqrt(s_{NN}) = 200 GeV for different centralities. We show that in peripheral Au-Au collisions the measured distributions, and the resulting first four moments of net-proton fluctuations, are consistent with results obtained from the hadron resonance gas model. However, data taken in central Au-Au collisions differ from the predictions of the model. The observed deviations cannot be attributed to uncertainties in model parameters. We discuss possible interpretations of the observed deviations.
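    In the hadron resonance gas baseline, the net-proton number is the difference of two independent Poisson variables (protons and antiprotons) and is therefore Skellam-distributed, so its low-order moments follow immediately. A sketch with `scipy.stats.skellam`; the mean multiplicities are illustrative, not fitted values:

```python
from scipy.stats import skellam

# Illustrative mean proton / antiproton multiplicities per event.
mu_p, mu_pbar = 8.0, 3.0

mean = skellam.mean(mu_p, mu_pbar)   # = mu_p - mu_pbar
var = skellam.var(mu_p, mu_pbar)     # = mu_p + mu_pbar
print(mean, var)
```

    Deviations of the measured higher moments from these Skellam expectations are exactly the kind of signal the comparison above is designed to expose.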

  14. Maximum-entropy probability distributions under Lp-norm constraints

    Science.gov (United States)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
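
For the unconstrained continuous case, the maximizing density for a fixed pth absolute moment is the generalized Gaussian; the following standard derivation (in our own notation, not quoted from the paper) exhibits the straight-line relationship mentioned above.

```latex
p^{*}(x) = \frac{1}{2a\,\Gamma(1+1/p)}\,\exp\!\left(-\left|\frac{x}{a}\right|^{p}\right),
\qquad \mathbb{E}\,|X|^{p} = \frac{a^{p}}{p},
% hence, with \lVert X\rVert_p = (\mathbb{E}|X|^p)^{1/p} = p^{-1/p} a:
h_{\max} = \ln \lVert X \rVert_{p} + \frac{1 + \ln p}{p} + \ln\!\bigl(2\,\Gamma(1+1/p)\bigr).
```

The maximum differential entropy is linear in the logarithm of the L_p norm with unit slope, consistent with the abstract.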

  15. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in the suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
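
A Monte Carlo stand-in can make the derived-distributions idea concrete: push the stochastic rainfall model through a deterministic stability model and read off the FOS exceedance probability. All numbers and the linearized FOS response below are hypothetical, and the toy model replaces the paper's RPPP-plus-Philip-infiltration coupling.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
intensity = rng.exponential(scale=20.0, size=n)  # mm/h, exponential pdf (assumed scale)
duration = rng.exponential(scale=2.0, size=n)    # h, exponential pdf (assumed scale)
depth = intensity * duration                     # storm depth, mm

# Toy slope response: wetting raises pore pressure and lowers the FOS.
fos_dry = 1.6                                    # FOS before the storm (assumed)
fos = fos_dry - 0.004 * depth                    # linearized response (assumed)

p_failure = np.mean(fos < 1.0)                   # exceedance probability P(FOS < 1)
print(f"P(FOS < 1) = {p_failure:.3f}")
```

With the pdf of FOS in hand, the return period follows directly from the storm arrival rate of the Poisson process.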

  16. Probability Distribution and Projected Trends of Daily Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    CAO; Li-Ge; ZHONG; Jun; SU; Bu-Da; ZHAI; Jian-Qing; Marco; GEMMER

    2013-01-01

    Based on observed daily precipitation data of 540 stations and 3,839 gridded data from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the simulation ability of CCLM on daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that, except for the western Qinghai-Tibetan Plateau and South China, distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM can well capture the distribution characteristics of daily precipitation over China. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increases of maximum daily rainfall and of the longest non-precipitation period during the flood season in the aforementioned regions also indicate increasing trends of droughts and floods in the next 40 years.
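
The two shape statistics mapped in the study, skewness and kurtosis, are simple sample moments; the gamma-distributed "wet-day amounts" below are a hypothetical stand-in for station data, not values from the paper.

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and (Pearson) kurtosis, the two shape statistics the
    study maps for observed vs. CCLM-simulated daily precipitation."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3)), float(np.mean(z**4))

rng = np.random.default_rng(0)
wet_days = rng.gamma(shape=0.7, scale=8.0, size=20_000)  # hypothetical wet-day amounts, mm

skew, kurt = skew_kurt(wet_days)
print(f"skewness = {skew:.2f}, kurtosis = {kurt:.2f}")
```

For a gamma distribution with shape k the theoretical skewness is 2/sqrt(k) and the Pearson kurtosis is 3 + 6/k, so the estimates above should sit near 2.4 and 11.6.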

  17. Dark matter in voids

    Science.gov (United States)

    Fong, Richard; Doroshkevich, Andrei G.; Turchaninov, Victor I.

    1995-07-01

    The theory of the formation of large-scale structure in the universe through the action of gravitational instability implies the existence of substantial amounts of baryonic dark matter, of the order of 50% of the total baryon content in the universe, in the ``voids'' or under-dense regions seen in the large-scale distribution of galaxies. We also discuss the large-scale structure of dark matter expected in voids and the present and future possibilities for the observation of this baryonic dark matter in ``voids.''

  18. Dark matter in voids

    Energy Technology Data Exchange (ETDEWEB)

    Fong, R. [Department of Physics, University of Durham, Durham, DH1 3LE (United Kingdom)]; Doroshkevich, A.G. [Keldysh Institute of Applied Mathematics, 125047 Moscow (Russian Federation); Theoretical Astrophysics Center, Blegdamsvej 17, Copenhagen DK 2100 (Denmark)]; Turchaninov, V.I. [Keldysh Institute of Applied Mathematics, 125047 Moscow (Russian Federation)]

    1995-07-01

    The theory of the formation of large-scale structure in the universe through the action of gravitational instability implies the existence of substantial amounts of baryonic dark matter, of the order of 50% of the total baryon content in the universe, in the ``voids'' or under-dense regions seen in the large-scale distribution of galaxies. We also discuss the large-scale structure of dark matter expected in voids and the present and future possibilities for the observation of this baryonic dark matter in ``voids.'' © 1995 American Institute of Physics.

  19. Uncertainty squared: Choosing among multiple input probability distributions and interpreting multiple output probability distributions in Monte Carlo climate risk models

    Science.gov (United States)

    Baer, P.; Mastrandrea, M.

    2006-12-01

    Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly

  20. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    Full Text Available The probability distributions of field differences ∆x(τ=x(t+τ-x(t, where the variable x(t may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lag τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
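
The lag dependence of the increment statistics can be sketched with a toy intermittent signal; the burst process below is a hypothetical stand-in for Helios data, which we do not have.

```python
import numpy as np

def increment_kurtosis(x, lag):
    """Kurtosis of field differences dx(tau) = x(t + tau) - x(t).
    Gaussian statistics give 3; larger values signal spiky, heavy tails."""
    dx = x[lag:] - x[:-lag]
    z = (dx - dx.mean()) / dx.std()
    return float(np.mean(z**4))

rng = np.random.default_rng(7)
n = 200_000
# Sparse large-amplitude events make small-lag increments spiky and
# heavy-tailed, while long-lag increments average out towards a Gaussian,
# mimicking the intermittency reported for the solar wind.
burst = np.where(rng.random(n) < 0.1, 5.0, 1.0)
x = np.cumsum(burst * rng.standard_normal(n))

print(increment_kurtosis(x, lag=1))    # well above 3: intermittent
print(increment_kurtosis(x, lag=200))  # close to 3: near-Gaussian
```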

  1. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

    Full Text Available The computational processing of hyperspectral images (HSI) is extremely complex, not only due to the high dimensional information, but also due to the highly correlated data structure. Effective processing and analysis of HSI therefore face many difficulties. Dimensionality reduction has proven to be a powerful tool for high dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix, and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  2. Some Useful Distributions and Probabilities for Cellular Networks

    CERN Document Server

    Yu, Seung Min

    2011-01-01

    The cellular network is one of the most useful networks for wireless communications and now universally used. There have been a lot of analytic results about the performance of the mobile user at a specific location such as the cell center or edge. On the other hand, there have been few analytic results about the performance of the mobile user at an arbitrary location. Moreover, to the best of our knowledge, there is no analytic result on the performance of the mobile user at an arbitrary location considering the mobile user density. In this paper, we use the stochastic geometry approach and derive useful distributions and probabilities for cellular networks. Using those, we analyze the performance of the mobile user, e.g., outage probability at an arbitrary location considering the mobile user density. Under some assumptions, those can be expressed by closed form formulas. Our analytic results will provide a fundamental framework for the performance analysis of cellular networks, which will significantly red...

  3. Characterizing the \\lyaf\\ flux probability distribution function using Legendre polynomials

    CERN Document Server

    Cieplak, Agnieszka M

    2016-01-01

    The Lyman-$\\alpha$ forest is a highly non-linear field with a lot of information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polyonomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, $n$-th coefficient can be expressed as a linear combination of the first $n$ moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities.

  4. Cosmological constraints from the convergence 1-point probability distribution

    CERN Document Server

    Patton, Kenneth; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J; Suchyta, Eric

    2016-01-01

    We examine the cosmological information available from the 1-point probability distribution (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\\Omega_m$-$\\sigma_8$ plane from the convergence PDF with $188\\ arcmin^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of $2-3$, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  5. A probability distribution approach to synthetic turbulence time series

    Science.gov (United States)

    Sinhuber, Michael; Bodenschatz, Eberhard; Wilczek, Michael

    2016-11-01

    The statistical features of turbulence can be described in terms of multi-point probability density functions (PDFs). The complexity of these statistical objects increases rapidly with the number of points. This raises the question of how much information has to be incorporated into statistical models of turbulence to capture essential features such as inertial-range scaling and intermittency. Using high Reynolds number hot-wire data obtained at the Variable Density Turbulence Tunnel at the Max Planck Institute for Dynamics and Self-Organization, we establish a PDF-based approach to generating synthetic time series that reproduce those features. To do this, we measure three-point conditional PDFs from the experimental data and use an acceptance-rejection method to draw random velocities from this distribution to produce synthetic time series. Analyzing these synthetic time series, we find that time series based on even low-dimensional conditional PDFs already capture some essential features of real turbulent flows.
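
The generation step can be sketched with a one-point (rather than three-point) empirical conditional PDF; the wrapped random walk below stands in for hot-wire data, which we do not have, and direct sampling from the binned conditional replaces the paper's rejection scheme.

```python
import numpy as np

def fit_conditional_hist(series, bins):
    """Empirical one-step conditional distribution p(x_{t+1} | x_t) on a grid."""
    idx = np.clip(np.digitize(series, bins) - 1, 0, len(bins) - 2)
    counts = np.zeros((len(bins) - 1, len(bins) - 1))
    for a, b in zip(idx[:-1], idx[1:]):
        counts[a, b] += 1
    counts += 1e-12                      # avoid empty rows
    return counts / counts.sum(axis=1, keepdims=True)

def synthesize(p_cond, bins, n, rng):
    """Draw a synthetic series, one value at a time, from the conditional pdf."""
    centers = 0.5 * (bins[:-1] + bins[1:])
    state = len(centers) // 2
    out = np.empty(n)
    for t in range(n):
        state = rng.choice(len(centers), p=p_cond[state])
        out[t] = centers[state]
    return out

rng = np.random.default_rng(3)
data = np.cumsum(rng.standard_normal(50_000)) % 10 - 5   # toy stand-in for hot-wire data
bins = np.linspace(-5, 5, 41)
p_cond = fit_conditional_hist(data, bins)
synthetic = synthesize(p_cond, bins, 5_000, rng)
print(synthetic[:5])
```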

  6. Probability distributions for one component equations with multiplicative noise

    CERN Document Server

    Deutsch, J M

    1993-01-01

    Systems described by equations involving both multiplicative and additive noise are common in nature. Examples include convection of a passive scalar field, polymers in turbulent flow, and noise in dye lasers. In this paper the one component version of this problem is studied. The steady state probability distribution is classified into two different types of behavior. One class has power law tails and the other is of the form of an exponential to a power law. The value of the power law exponent is determined analytically for models having colored Gaussian noise. It is found to depend only on the power spectrum of the noise at zero frequency. When non-Gaussian noise is considered it is shown that stretched exponential tails are possible. An intuitive understanding of the results is developed, making use of the Lyapunov exponents for these systems.
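
A discrete-time analogue makes the power-law tails easy to see; this Kesten-type recursion is a stand-in for the paper's continuous-time equations, and the lognormal multiplier is an arbitrary choice, not taken from the paper.

```python
import numpy as np

# Kesten recursion x_{t+1} = a_t * x_t + b_t: multiplicative plus additive noise.
# When E[log a] < 0 but a_t occasionally exceeds 1, the stationary distribution
# develops a power-law tail P(|x| > u) ~ u^{-alpha}, with alpha solving E[a^alpha] = 1.
rng = np.random.default_rng(0)
n = 400_000
a = np.exp(0.5 * rng.standard_normal(n) - 0.25)   # lognormal multiplier, E[log a] = -0.25
b = rng.standard_normal(n)

x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a[t] * x[t - 1] + b[t]

# For log a ~ N(mu, s^2), E[a^alpha] = 1 gives alpha = -2*mu/s^2.
alpha_theory = 2 * 0.25 / 0.5**2                  # = 2.0
tail = np.sort(np.abs(x[1000:]))[-20_000:]        # drop the transient, keep the far tail
alpha_hill = 1.0 / np.mean(np.log(tail / tail[0]))  # rough Hill estimate
print(alpha_theory, round(alpha_hill, 2))
```

The Hill estimate is rough (finite-sample bias, correlated samples), but it should land in the neighborhood of the theoretical exponent.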

  7. Gesture Recognition Based on the Probability Distribution of Arm Trajectories

    Science.gov (United States)

    Wan, Khairunizam; Sawada, Hideyuki

    The use of human motions for the interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures are used as a non-verbal medium for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of the features extracted from arm trajectories works effectively for the recognition of dynamic gestures of a human, and gives good performance in classifying various gesture patterns.

  8. Seismic pulse propagation with constant Q and stable probability distributions

    Directory of Open Access Journals (Sweden)

    M. Tomirotti

    1997-06-01

    Full Text Available The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type in the similarity variable and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by the seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with an index of stability determined by the order of the fractional time derivative in the evolution equation.
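
In a standard notation consistent with the abstract (our notation, not quoted from the paper), the evolution equation interpolating the heat and wave equations is the time-fractional diffusion-wave equation:

```latex
\frac{\partial^{2\beta} u}{\partial t^{2\beta}} \;=\; D\,\frac{\partial^{2} u}{\partial x^{2}},
\qquad 0 < \beta \le 1 .
```

Here $\beta = 1/2$ recovers the heat equation and $\beta = 1$ the wave equation; the Cauchy and Signalling fundamental solutions are expressible through Wright-type entire functions of the similarity variable, as the abstract states.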

  9. Seismic pulse propagation with constant Q and stable probability distributions

    CERN Document Server

    Mainardi, Francesco

    2010-01-01

    The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type) in the similarity variable and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by the seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with index of stability determined by the order of the fractional time derivative in the evolution equation.

  10. probably

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    【说词】1. He can probably tell us the truth.2. Will it rain this afternoong ? Probably【解语】作副词,意为“大概、或许”,表示可能性很大,通常指根据目前情况作出积极推测或判断;

  11. Insights from probability distribution functions of intensity maps

    CERN Document Server

    Breysse, Patrick C; Behroozi, Peter S; Dai, Liang; Kamionkowski, Marc

    2016-01-01

    In the next few years, intensity-mapping surveys that target lines such as CO, Ly$\\alpha$, and CII stand to provide powerful probes of high-redshift astrophysics. However, these line emissions are highly non-Gaussian, and so the typical power-spectrum methods used to study these maps will leave out a significant amount of information. We propose a new statistic, the probability distribution of voxel intensities, which can access this extra information. Using a model of a CO intensity map at $z\\sim3$ as an example, we demonstrate that this voxel intensity distribution (VID) provides substantial constraining power beyond what is obtainable from the power spectrum alone. We find that a future survey similar to the planned COMAP Full experiment could constrain the CO luminosity function to order $\\sim10\\%$. We also explore the effects of contamination from continuum emission, interloper lines, and gravitational lensing on our constraints and find that the VID statistic retains significant constraining power even ...

  12. Alignment of voids in the cosmic web

    NARCIS (Netherlands)

    Platen, Erwin; van de Weygaert, Rien; Jones, Bernard J. T.

    2008-01-01

    We investigate the shapes and mutual alignment of voids in the large-scale matter distribution of a Lambda cold dark matter (Lambda CDM) cosmology simulation. The voids are identified using the novel watershed void finder (WVF) technique. The identified voids are quite non-spherical and slightly pro

  14. Experimental Study of Three-Dimensional Void Fraction Distribution in Heated Tight-Lattice Rod Bundles Using Three-Dimensional Neutron Tomography

    Science.gov (United States)

    Kureta, Masatoshi

    Three-dimensional (3D) void fraction distributions in a tight-lattice of heated 7- or 14-rod bundles were measured using 3D neutron tomography. The distribution was also studied parametrically from the thermal-hydraulic point of view in order to elucidate boiling phenomena in a fuel assembly of the FLWR which is being developed as an advanced BWR-type reactor. 7-rod tests were carried out to obtain high void fraction data. 14-rod tests were conducted for visualization and discussion of the 3D distribution extending from the vapor generation region to the high void fraction region at one time. Experimental data were obtained under atmospheric pressure with mass velocity, heater power and inlet quality as the test parameters. It was found from the visualization of data that the void fraction at the channel center became higher than that at the periphery, high void fraction spots appeared in narrow regions at the inlet, and a so-called 'vapor chimney' was generated at the center of a subchannel.

  15. Spectro-ellipsometric studies of sputtered amorphous Titanium dioxide thin films: simultaneous determination of refractive index, extinction coefficient, and void distribution

    CERN Document Server

    Lee, S I; Oh, S G

    1999-01-01

    Amorphous titanium dioxide thin films were deposited onto silicon substrates by RF magnetron sputtering, and the index of refraction, the extinction coefficient, and the void distribution of these films were simultaneously determined from analyses of their ellipsometric spectra. In particular, our novel strategy, which combines the merits of multi-sample fitting, the dual dispersion function, and grid search, proved successful in determining optical constants over a wide energy range, including the energy region where the extinction coefficient was large. Moreover, we found that the void distribution depended on the deposition conditions, such as the sputtering power, the substrate temperature, and the substrate surface.

  16. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of the two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.

  17. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    Science.gov (United States)

    Ashby, Neil; Patla, Bijunath

    2016-04-01

    Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces, the sums of their eigenvalues, are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
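
The drift insensitivity of the Hadamard variance is easy to demonstrate with simple non-overlapping estimators; this is a sketch of the standard definitions, not the paper's matrix-trace machinery, and the white-frequency-noise data and drift rate are arbitrary.

```python
import numpy as np

def allan_var(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y at tau = m*tau0."""
    yb = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)  # tau-averages
    d = np.diff(yb)                                        # first differences
    return 0.5 * np.mean(d**2)

def hadamard_var(y, m=1):
    """Non-overlapping Hadamard variance: second differences cancel linear drift."""
    yb = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    d = np.diff(yb, n=2)                                   # second differences
    return np.mean(d**2) / 6.0

rng = np.random.default_rng(0)
n = 100_000
y = rng.standard_normal(n)        # white frequency noise (hypothetical clock data)
drift = 1e-3 * np.arange(n)       # linear frequency drift

m = 100
print(allan_var(y + drift, m), hadamard_var(y + drift, m))
```

The drift inflates the Allan variance at this averaging factor, while the Hadamard variance is unchanged because a second difference of a linear ramp is exactly zero.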

  18. Responsibility voids

    NARCIS (Netherlands)

    van Hees, M.V.B.P.M; Braham, Matthew

    We present evidence for the existence of 'responsibility voids' in committee decision-making, that is, the existence of situations where no member of a committee can individually be held morally responsible for the outcome. We analyse three types of reasons (causal, normative and epistemic) for the

  19. Development and validation of advanced CFD models for detailed predictions of void distribution in a BWR bundle

    Science.gov (United States)

    Neykov, Boyan

    , it is important to extend the validity of the current correlations for the lift coefficients to higher void (gas) phase fractions. After investigating the underlying physics and analyzing a large amount of experimental data, an improved model for the lift force at different void fraction levels, including large bubbles and the slug flow regime, is proposed. The model is implemented in STAR-CD and validated. The validation of the models is performed against five different experiments, characterized by different geometries and boundary conditions. Comparison with the already existing models in the STAR-CD code is performed, and it is found that the newly integrated drag and lift force models lead to more accurate void distribution predictions.

  20. Energy probability distribution zeros: A route to study phase transitions

    Science.gov (United States)

    Costa, B. V.; Mól, L. A. S.; Rocha, J. C. S.

    2017-07-01

    In the study of phase transitions very few models are accessible to exact solution. In most cases analytical simplifications have to be made, or numerical techniques have to be used, to gain insight into their critical properties. Numerically, the most common approaches are those based on Monte Carlo simulations together with finite size scaling analysis. The use of Monte Carlo techniques requires the estimation of quantities like the specific heat or susceptibilities over a wide range of temperatures, or the construction of the density of states over large intervals of energy. Although many of these techniques are well developed, they may be very time consuming when the system size becomes large. It would be desirable to have a method that could overcome those difficulties. In this work we present an iterative method to study the critical behavior of a system based on partial knowledge of the set of complex Fisher zeros of the partition function. The method is general, with advantages over most conventional techniques, since it does not need to identify any order parameter a priori. The critical temperature and exponents can be obtained with great precision even in the most difficult cases, like the two-dimensional XY model. To test the method and to show how it works, we applied it to some selected models where the transitions are well known: the 2D Ising, Potts and XY models, and a homopolymer system. Our choices cover systems with first order, continuous and Berezinskii-Kosterlitz-Thouless transitions, as well as the homopolymer, which has two pseudo-transitions. The strategy can easily be adapted to any model, classical or quantum, once we are able to build the corresponding energy probability distribution.
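
For integer energy levels, the connection between the energy distribution and Fisher zeros reduces to polynomial root-finding, since Z(beta) = sum_E g_E x^E with x = exp(-beta). The degeneracies below are hypothetical, chosen so that Z factorizes neatly; this toy has no transition (its zeros stay off the positive real x axis).

```python
import numpy as np

def fisher_zeros(degeneracies):
    """Complex zeros of Z(x) = sum_E g_E x^E in the variable x = exp(-beta);
    beta = -log(x) then gives the zeros in complex inverse temperature."""
    return np.roots(degeneracies[::-1])   # np.roots expects highest power first

# Hypothetical degeneracies g_E for E = 0..3, chosen so that
# Z(x) = 1 + 6x + 11x^2 + 6x^3 = (1 + x)(1 + 2x)(1 + 3x).
g = np.array([1.0, 6.0, 11.0, 6.0])
zeros = fisher_zeros(g)
print(zeros)   # roots at x = -1, -1/2, -1/3 (in some order)
```

In a genuine finite-size scaling study, one tracks how the zeros closest to the positive real axis approach it as the system grows; their limit pins down the critical temperature.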

  1. Performance Probability Distributions for Sediment Control Best Management Practices

    Science.gov (United States)

    Ferrell, L.; Beighley, R.; Walsh, K.

    2007-12-01

    Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and conditions of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, performance expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performances ranging from greater than 85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 meters and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, less-than-optimal installations, and no-treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies. The results are then combined with
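
A removal efficiency is bounded on [0, 1], for which a beta distribution is a natural candidate. A minimal sketch of a method-of-moments beta fit; the efficiency values below are hypothetical placeholders, not the SERL data:

```python
# Sketch: method-of-moments fit of Beta(alpha, beta) to removal efficiencies.
# The data here are hypothetical, not the SERL measurements.
effs = [0.0, 0.12, 0.35, 0.47, 0.52, 0.61, 0.70, 0.78, 0.85, 0.91]

n = len(effs)
mean = sum(effs) / n
var = sum((e - mean) ** 2 for e in effs) / (n - 1)

# Moment estimates for a beta distribution on [0, 1]
common = mean * (1 - mean) / var - 1
alpha = mean * common
beta = (1 - mean) * common
print(alpha, beta)
```

The fitted parameters reproduce the sample mean as alpha / (alpha + beta), which is a quick sanity check on the fit.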

  2. Structure in the 3D Galaxy Distribution: II. Voids and Watersheds of Local Maxima and Minima

    CERN Document Server

    Way, M J; Scargle, Jeffrey D

    2014-01-01

    The major uncertainties in studies of the multi-scale structure of the Universe arise not from observational errors but from the variety of legitimate definitions and detection methods for individual structures. To facilitate the study of these methodological dependencies we have carried out 12 different analyses defining structures in various ways. This has been done in a purely geometrical way by utilizing the HOP algorithm as a unique parameter-free method of assigning groups of galaxies to local density maxima or minima. From three density estimation techniques (smoothing kernels, Bayesian Blocks and self-organizing maps) applied to three data sets (the Sloan Digital Sky Survey Data Release 7, the Millennium Simulation and randomly distributed points) we tabulate information that can be used to construct catalogs of structures connected to local density maxima and minima. The resulting sizes follow continuous multi-scale distributions with no indication of the presence of a discrete hierarchy. We also int...

  3. The Probability Distribution of Inter-car Spacings

    Science.gov (United States)

    Xian, Jin Guo; Han, Dong

    In this paper, the cellular automaton model with a Fukui-Ishibashi-type acceleration rule is used to study the inter-car spacing distribution for traffic flow. The method used in complex network analysis is applied to study the spacing distribution. By theoretical analysis, we obtain the result that the distribution of inter-car spacings follows a power law when vehicle density is low and spacing is not large, while, when the vehicle density is high or the spacing is large, the distribution can be described by an exponential distribution. Moreover, the numerical simulations support the theoretical result.
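
A Fukui-Ishibashi-type rule can be sketched as a minimal cellular automaton on a ring: each car advances by min(gap, vmax) per step. This toy version (an illustrative assumption, not the authors' exact model) produces the inter-car gaps whose distribution the paper analyses:

```python
import random

# Toy Fukui-Ishibashi-style cellular automaton on a ring (illustrative only).
random.seed(0)
L, N, vmax, steps = 1000, 100, 5, 200          # low density: N / L = 0.1
pos = sorted(random.sample(range(L), N))        # distinct car positions

for _ in range(steps):
    # gap = number of empty sites ahead of car i on the ring
    gaps = [(pos[(i + 1) % N] - pos[i]) % L - 1 for i in range(N)]
    # every car jumps min(gap, vmax); simultaneous update, no collisions
    pos = sorted((pos[i] + min(gaps[i], vmax)) % L for i in range(N))

gaps = [(pos[(i + 1) % N] - pos[i]) % L - 1 for i in range(N)]
print(sum(gaps))    # occupied sites plus gaps tile the ring: sum == L - N
```

A histogram of `gaps` at varying densities is what one would compare against the power-law and exponential regimes described in the abstract.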

  4. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  6. Measuring Robustness of Timetables in Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    ...... delays caused by interdependencies, and result in a more robust operation. Currently three methods to calculate the complexity of a station exist: 1. Complexity of a station based on the track layout; 2. Complexity of a station based on the probability of a conflict using a plan of operation; 3. Complexity of a station based on the plan of operation and the minimum headway times. However, none of the above methods take a given timetable into account when the complexity of the station is calculated. E.g. two timetable candidates are given following the same plan of operation in a station; one will be more vulnerable to delays (less robust) while the other will be less vulnerable (more robust), but this cannot be measured by the above methods. In the light of this, the article will describe a new method where the complexity of a given station with a given timetable can be calculated based on a probability......

  7. Interacting discrete Markov processes with power-law probability distributions

    Science.gov (United States)

    Ridley, Kevin D.; Jakeman, Eric

    2017-09-01

    During recent years there has been growing interest in the occurrence of long-tailed distributions, also known as heavy-tailed or fat-tailed distributions, which can exhibit power-law behaviour and often characterise physical systems that undergo very large fluctuations. In this paper we show that the interaction between two discrete Markov processes naturally generates a time-series characterised by such a distribution. This possibility is first demonstrated by numerical simulation and then confirmed by a mathematical analysis that enables the parameter range over which the power-law occurs to be quantified. The results are supported by comparison of numerical results with theoretical predictions and general conclusions are drawn regarding mechanisms that can cause this behaviour.

  8. Martingale Couplings and Bounds on the Tails of Probability Distributions

    CERN Document Server

    Luh, Kyle J

    2011-01-01

    Hoeffding has shown that tail bounds on the distribution for sampling from a finite population with replacement also apply to the corresponding cases of sampling without replacement. (A special case of this result is that binomial tail bounds apply to the corresponding hypergeometric tails.) We give a new proof of Hoeffding's result by constructing a martingale coupling between the sampling distributions. This construction is given by an explicit combinatorial procedure involving balls and urns. We then apply this construction to create martingale couplings between other pairs of sampling distributions, both without replacement and with "surreplacement" (that is, sampling in which not only is the sampled individual replaced, but some number of "copies" of that individual are added to the population).
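
Hoeffding's result can be checked numerically: the exponential Chernoff-Hoeffding bound on the binomial upper tail also dominates the corresponding hypergeometric tail (sampling without replacement). A small sketch with illustrative population numbers:

```python
from math import comb, exp

# Illustrative check that the Hoeffding binomial tail bound dominates the
# hypergeometric tail, per Hoeffding's without-replacement result.
Npop, K, n = 50, 20, 10          # population size, successes in it, sample size
p = K / Npop                     # per-draw success probability

def hyper_tail(k):               # P(X >= k) for X ~ Hypergeom(Npop, K, n)
    return sum(comb(K, j) * comb(Npop - K, n - j)
               for j in range(k, n + 1)) / comb(Npop, n)

def hoeffding_bound(k):          # Hoeffding: P(X >= k) <= exp(-2 n (k/n - p)^2)
    t = k / n - p
    return exp(-2 * n * t * t)

for k in range(int(n * p) + 1, n + 1):
    assert hyper_tail(k) <= hoeffding_bound(k)
print("bound holds")
```

The same comparison run over a grid of (Npop, K, n) values illustrates the "binomial bounds apply to hypergeometric tails" special case mentioned in the abstract.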

  9. Subjective Probability Distribution Elicitation in Cost Risk Analysis: A Review

    Science.gov (United States)

    2007-01-01

    DeGroot, Morris H., Optimal Statistical Decisions, New York: McGraw-Hill, 1970. Dewar, James A., Assumption-Based Planning: A Tool for Reducing...formal decision-analysis point of view. See DeGroot (1970) for a clear exposition of utility in decision analysis. For the triangle distribution, the

  10. Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions

    DEFF Research Database (Denmark)

    Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette

    2016-01-01

    We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found...... by assembling DNA from fragments (reads), locating a gene in this sequence and translating the gene to a protein. Sampling using this program generates random instances of the puzzle, but it is possible to constrain the difficulty and to customize the secret protein word. Because of these constraints...... and the randomness of the generation process, sampling may fail to generate a satisfactory puzzle. To avoid failure we employ a strategy using adaptive probabilities which change in response to previous steps of the generative process, thus minimizing the risk of failure....

  11. CT measurements of SAP voids in concrete

    DEFF Research Database (Denmark)

    Laustsen, Sara; Bentz, Dale P.; Hasholt, Marianne Tange

    2010-01-01

    X-ray computed tomography (CT) scanning is used to determine the SAP void distribution in hardened concrete. Three different approaches are used to analyse a binary data set created from CT measurement. One approach classifies a cluster of connected, empty voxels (volumetric pixel of a 3D image......) as one void, whereas the other two approaches are able to classify a cluster of connected, empty voxels as a number of individual voids. Superabsorbent polymers (SAP) have been used to incorporate air into concrete. An advantage of using SAP is that it enables control of the amount and size...... of the created air voids. The results indicate the presence of void clusters. To identify the individual voids, special computational approaches are needed. The addition of SAP results in a dominant peak in two of the three air void distributions. Based on the position (void diameter) of the peak, it is possible...

  13. Extreme Points of the Convex Set of Joint Probability Distributions with Fixed Marginals

    Indian Academy of Sciences (India)

    K R Parthasarathy

    2007-11-01

    By using a quantum probabilistic approach we obtain a description of the extreme points of the convex set of all joint probability distributions on the product of two standard Borel spaces with fixed marginal distributions.

  14. Probability distribution analysis of observational extreme events and model evaluation

    Science.gov (United States)

    Yu, Q.; Lau, A. K. H.; Fung, J. C. H.; Tsang, K. T.

    2016-12-01

    Earth's surface temperatures were the warmest in 2015 since modern record-keeping began in 1880, according to the latest study. In contrast, cold weather occurred in many regions of China in January 2016, and brought Guangzhou, the capital city of Guangdong province, its first snowfall in 67 years. To understand the changes of extreme weather events as well as to project their future scenarios, this study uses statistical models to analyze multiple climate datasets. We first use a Granger-causality test to identify the attribution of global mean temperature rise and extreme temperature events to CO2 concentration. The four statistical moments (mean, variance, skewness, kurtosis) of the daily maximum temperature distribution are investigated on global climate observational, reanalysis (1961-2010) and model data (1961-2100). Furthermore, we introduce a new tail index based on the four moments, which is a more robust index to measure extreme temperatures. Our results show that the CO2 concentration can provide information on the time series of mean and extreme temperature, but not vice versa. Based on our new tail index, we find that, other than mean and variance, skewness is an important indicator that should be considered to estimate extreme temperature changes and for model evaluation. Among the 12 climate model datasets we investigate, the fourth version of the Community Climate System Model (CCSM4) from the National Center for Atmospheric Research performs well on the new index we introduce, which indicates that the model has a substantial capability to project the future changes of extreme temperature in the 21st century. The method also shows its ability to measure extreme precipitation/drought events. In the future we will introduce a new diagram to systematically evaluate the performance of the four statistical moments in climate model output; moreover, the human and economic impacts of extreme weather events will also be assessed.
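
The four moments used in this analysis are straightforward to compute from a sample. A sketch on synthetic (not observational) daily-maximum temperatures, where kurtosis near 3 marks a Gaussian-like distribution:

```python
import math
import random

# Sketch: the four moments (mean, variance, skewness, kurtosis) of a
# daily-maximum-temperature sample. The data are synthetic, not observations.
random.seed(1)
temps = [25 + random.gauss(0, 3) for _ in range(10000)]

n = len(temps)
mean = sum(temps) / n
var = sum((t - mean) ** 2 for t in temps) / n
sd = math.sqrt(var)
skew = sum(((t - mean) / sd) ** 3 for t in temps) / n
kurt = sum(((t - mean) / sd) ** 4 for t in temps) / n   # ~3 for a Gaussian
print(round(skew, 2), round(kurt, 2))
```

A tail index built from these moments would then flag distributions whose skewness or excess kurtosis departs from the Gaussian baseline.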

  15. Batch Mode Active Sampling based on Marginal Probability Distribution Matching.

    Science.gov (United States)

    Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping

    2012-01-01

    Active Learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there is a large amount of unlabeled data and labeling them is expensive. Recently, batch-mode active learning, where a set of samples are selected concurrently for labeling based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most of the existing batch-mode active learning methodologies try to achieve this by selecting samples based on varied criteria. In this paper we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data, after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem. In the first one, the problem is transformed into a convex quadratic programming problem and in the second method the problem is transformed into a linear programming problem. Our empirical studies using publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with the state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. Our empirical studies on UCI datasets show that incorporation of uncertainty information improves performance at later iterations while our studies on 20

  16. Exact probability distribution for the Bernoulli-Malthus-Verhulst model driven by a multiplicative colored noise

    Science.gov (United States)

    Calisto, H.; Bologna, M.

    2007-05-01

    We report an exact result for the calculation of the probability distribution of the Bernoulli-Malthus-Verhulst model driven by a multiplicative colored noise. We study the conditions under which the probability distribution of the Malthus-Verhulst model can exhibit a transition from a unimodal to a bimodal distribution depending on the value of a critical parameter. Also we show that the mean value of x(t) in the latter model always approaches asymptotically the value 1.

  17. The equilibrium probability distribution of a conductive sphere's floating charge in a collisionless, drifting Maxwellian plasma

    CERN Document Server

    Thomas, Drew M

    2013-01-01

    A dust grain in a plasma has a fluctuating electric charge, and past work concludes that spherical grains in a stationary, collisionless plasma have an essentially Gaussian charge probability distribution. This paper extends that work to flowing plasmas and arbitrarily large spheres, deriving analytic charge probability distributions up to normalizing constants. We find that these distributions also have good Gaussian approximations, with analytic expressions for their mean and variance.

  18. The probability distribution of the predicted CFM-induced ozone depletion. [Chlorofluoromethane

    Science.gov (United States)

    Ehhalt, D. H.; Chang, J. S.; Bulter, D. M.

    1979-01-01

    It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
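
The central-limit argument can be illustrated with a small Monte Carlo: if the predicted depletion depends multiplicatively on many uncertain rate coefficients, its logarithm is a sum of independent terms and the result is approximately (log)normal. All numbers below are illustrative assumptions, not the paper's coefficient uncertainties:

```python
import math
import random

# Sketch of the Monte Carlo argument: multiplicative propagation of many
# small rate-coefficient uncertainties. Numbers are illustrative only.
random.seed(2)
n_coeffs, sigma = 12, 0.08      # hypothetical per-coefficient log-uncertainty

samples = []
for _ in range(20000):
    log_total = sum(random.gauss(0.0, sigma) for _ in range(n_coeffs))
    samples.append(math.exp(log_total))   # product of uncertain factors

m = sum(samples) / len(samples)
rsd = math.sqrt(sum((s - m) ** 2 for s in samples) / len(samples)) / m
print(round(rsd, 3))   # relative standard deviation of the prediction
```

The histogram of `samples` is close to normal for small per-coefficient spread, which is the comparison the abstract describes.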

  19. Constructing the probability distribution function for the total capacity of a power system

    Energy Technology Data Exchange (ETDEWEB)

    Vasin, V.P.; Prokhorenko, V.I.

    1980-01-01

    The difficulties involved in constructing the probability distribution function for the total capacity of a power system consisting of numerous power plants are discussed. A method is considered for the approximate determination of such a function by a Monte Carlo method, and for exact calculation based on special recursion formulas on a particular grid of argument values. It is shown that there may be significant deviations between the true probability distribution and a normal distribution.
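
An exact recursion of the kind mentioned corresponds, for independent two-state plants, to the classical capacity-outage convolution: fold plants in one at a time, splitting each probability mass between the "up" and "down" states. A sketch with illustrative plant data (not the paper's system):

```python
# Sketch: exact recursion for the total-capacity distribution of independent
# two-state plants. Plant capacities and outage rates are illustrative.
plants = [(100, 0.05), (100, 0.05), (50, 0.02), (200, 0.08)]  # (MW, outage rate)

dist = {0: 1.0}                   # P(total available capacity = c)
for cap, q in plants:
    new = {}
    for c, p in dist.items():
        new[c + cap] = new.get(c + cap, 0.0) + p * (1 - q)    # plant up
        new[c] = new.get(c, 0.0) + p * q                      # plant down
    dist = new

full = (1 - 0.05) ** 2 * (1 - 0.02) * (1 - 0.08)
print(round(dist[450], 6), round(full, 6))   # both equal P(all plants up)
```

The resulting `dist` is generally skewed, which is why the abstract warns against assuming a normal distribution for the total capacity.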

  20. Cross-sectional void fraction distribution measurements in a vertical annulus two-phase flow by high speed X-ray computed tomography and real-time neutron radiography techniques

    Energy Technology Data Exchange (ETDEWEB)

    Harvel, G.D. (McMaster Univ., Ontario, Canada; Combustion and Heat Transfer Lab., Takasago, Japan); Hori, K.; Kawanishi, K. (Combustion and Heat Transfer Lab., Takasago, Japan); and others

    1995-09-01

    A Real-Time Neutron Radiography (RTNR) system and a high speed X-ray Computed Tomography (X-CT) system are compared for measurement of two-phase flow. Each system is used to determine the flow regime and the void fraction distribution in a vertical annulus flow channel. A standard optical video system is also used to observe the flow regime. The annulus flow channel is operated as a bubble column and measurements are obtained for gas flow rates from 0.0 to 30.0 L/min. The flow regimes observed by all three measurement systems through image analysis show that the two-dimensional void fraction distribution can be obtained. The X-CT system is shown to have a superior temporal resolution capable of resolving the void fraction distribution in an (r,θ) plane in 33.0 ms. Void fraction distributions for bubbly flow and slug flow are determined.

  1. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    OpenAIRE

    2013-01-01

    The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the be...

  2. A New Probability of Detection Model for Updating Crack Distribution of Offshore Structures

    Institute of Scientific and Technical Information of China (English)

    李典庆; 张圣坤; 唐文勇

    2003-01-01

    Model uncertainty exists in the probability of detection when inspecting ship structures with nondestructive inspection techniques. Based on a comparison of several existing probability of detection (POD) models, a new probability of detection model is proposed for the updating of crack size distribution. Furthermore, the theoretical derivation shows that most existing probability of detection models are special cases of the new probability of detection model. The least square method is adopted for determining the values of parameters in the new POD model. This new model is also compared with other existing probability of detection models. The results indicate that the new probability of detection model can fit the inspection data better. This new probability of detection model is then applied to the analysis of crack size updating for offshore structures. The Bayesian updating method is used to analyze the effect of probability of detection models on the posterior distribution of crack size. The results show that different probability of detection models generate different posterior distributions of crack size for offshore structures.
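
The Bayesian updating step described here can be sketched with an illustrative POD curve: after an inspection that finds no crack, the posterior crack-size distribution shifts toward small sizes, because large cracks would likely have been detected. The exponential POD form and all numbers are assumptions for illustration, not the paper's model:

```python
import math

# Sketch: Bayesian update of a discrete crack-size distribution after a
# non-detection. The POD curve and all numbers are illustrative assumptions.
sizes = [0.5, 1.0, 2.0, 4.0, 8.0]            # crack depths (mm)
prior = [0.30, 0.30, 0.20, 0.15, 0.05]

lam = 2.0                                     # hypothetical POD scale
pod = [1 - math.exp(-a / lam) for a in sizes] # detection probability vs size

# No detection observed: posterior ∝ prior × (1 − POD)
post = [p * (1 - d) for p, d in zip(prior, pod)]
z = sum(post)
post = [p / z for p in post]
print([round(p, 3) for p in post])   # mass moves toward small cracks
```

Different POD models change the likelihood factor (1 − POD) and hence the posterior, which is exactly the sensitivity the abstract reports.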

  3. Some possible q-exponential type probability distribution in the non-extensive statistical physics

    Science.gov (United States)

    Chung, Won Sang

    2016-08-01

    In this paper, we present two exponential type probability distributions which are different from Tsallis's case, which we call Type I: one given by p_i = (1/Z_q)[e_q(E_i)]^(-β) (Type IIA) and another given by p_i = (1/Z_q)[e_q(-β)]^(E_i) (Type IIIA). Starting with the Boltzmann-Gibbs entropy, we obtain the different probability distributions by using the Kolmogorov-Nagumo average for the microstate energies. We present the first-order differential equations related to Types I, II and III. For the three types of probability distributions, we discuss the quantum harmonic oscillator, the two-level problem and the spin-1/2 paramagnet.

  4. Probability distributions for directed polymers in random media with correlated noise

    Science.gov (United States)

    Chu, Sherry; Kardar, Mehran

    2016-07-01

    The probability distribution for the free energy of directed polymers in random media (DPRM) with uncorrelated noise in d =1+1 dimensions satisfies the Tracy-Widom distribution. We inquire if and how this universal distribution is modified in the presence of spatially correlated noise. The width of the distribution scales as the DPRM length to an exponent β, in good (but not full) agreement with previous renormalization group and numerical results. The scaled probability is well described by the Tracy-Widom form for uncorrelated noise, but becomes symmetric with increasing correlation exponent. We thus find a class of distributions that continuously interpolates between Tracy-Widom and Gaussian forms.

  5. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

    In 2010 Kato and Jones described a new family of probability distributions on the circle, obtained as a Möbius transformation of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.

  6. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

    major inequalities due to Shannon, Renyi and Holder. The inequalities are then used to obtain some useful results in information theory. In particular measures are obtained to measure the mutual divergence among two or more probability distributions.
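
One standard way to quantify mutual divergence among several distributions is the generalized Jensen-Shannon divergence, D = H(mean of p_i) − mean of H(p_i), which is nonnegative and vanishes only when all distributions coincide. Kapur's measures differ in detail, so this is only an illustration of the idea:

```python
import math

def entropy(p):
    # Shannon entropy in nats; terms with zero probability contribute nothing.
    return -sum(x * math.log(x) for x in p if x > 0)

def mutual_divergence(dists):
    # Generalized Jensen-Shannon divergence among k distributions:
    # H(mixture) - average of H(p_i). Nonnegative by concavity of H.
    k = len(dists)
    mix = [sum(col) / k for col in zip(*dists)]
    return entropy(mix) - sum(entropy(p) for p in dists) / k

p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.3, 0.5]
p3 = [1 / 3, 1 / 3, 1 / 3]
print(mutual_divergence([p1, p2, p3]))   # strictly positive here
```

For identical inputs the divergence is (numerically) zero, and it grows as the distributions separate, which is the qualitative behaviour a mutual-divergence measure should have.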

  7. The dark matter of galaxy voids

    CERN Document Server

    Sutter, P M; Wandelt, Benjamin D; Weinberg, David H; Warren, Michael S

    2013-01-01

    How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter, and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ~20%, they have somewhat shallower density profiles, and they have centers offset by ~0.4Rv rms. However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though ...

  8. The dark matter of galaxy voids

    Science.gov (United States)

    Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.

    2014-03-01

    How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ˜20 per cent, they have somewhat shallower density profiles and they have centres offset by ˜ 0.4Rv rms. However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.

  9. The probability distribution of fatigue damage and the statistical moment of fatigue life

    Institute of Scientific and Technical Information of China (English)

    熊峻江; 高镇同

    1997-01-01

    The randomization of the deterministic fatigue damage equation leads to a stochastic differential equation and a Fokker-Planck equation affected by random fluctuation. By means of the solution of the equation, the probability distribution of fatigue damage as a function of time is obtained. Then the statistical moment of fatigue life in consideration of the stationary random fluctuation is derived. Finally, the damage probability distributions during fatigue crack initiation and fatigue crack growth are given.

  10. The star formation activity in cosmic voids

    CERN Document Server

    Ricciardelli, Elena; Varela, Jesus; Quilis, Vicent

    2014-01-01

    Using a sample of cosmic voids identified in the Sloan Digital Sky Survey Data Release 7, we study the star formation activity of void galaxies. The properties of galaxies living in voids are compared with those of galaxies living in the void shells and with a control sample, representing the general galaxy population. Void galaxies appear to form stars more efficiently than shell galaxies and the control sample. This result cannot be interpreted as a consequence of the bias towards low masses in underdense regions, as void galaxy subsamples with the same mass distribution as the control sample also show statistically different specific star formation rates. This highlights the fact that galaxy evolution in voids is slower with respect to the evolution of the general population. Nevertheless, when only the star forming galaxies are considered, we find that the star formation rate is insensitive to the environment, as the main sequence is remarkably constant in the three samples under consideration. This fact...

  11. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting...... uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... by probability distributions is readily done by means of Monte Carlo simulation. Calculation of non-monotonic functions of possibility distributions is done within the theoretical framework of fuzzy intervals, but straightforward application of fuzzy arithmetic in general results in overestimation of interval...

  12. A simple derivation and classification of common probability distributions based on information symmetry and measurement scale

    CERN Document Server

    Frank, Steven A

    2010-01-01

    Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation that generated common distributions without any underlying conceptual basis for why common distributions have particular forms and what explains the familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude to information in different ways, an association we describe in terms of the relation between information invariance and measurement scale....

  13. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
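
The single-integral form the abstract describes, P(slip) = ∫ f_req(u) F_avail(u) du, together with the trapezoidal method, can be sketched directly; for normal required and available friction the result can be checked against the closed form Φ((μ_req − μ_avail)/√(σ_req² + σ_avail²)). The friction parameters below are illustrative assumptions:

```python
import math

# Sketch: P(slip) = ∫ f_req(u) · F_avail(u) du via the trapezoidal rule,
# checked against the closed form for two normal distributions.
def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mu_r, sd_r = 0.25, 0.05      # required friction (illustrative)
mu_a, sd_a = 0.40, 0.08      # available friction (illustrative)

a, b, n = 0.0, 1.0, 2000     # integration range covers both distributions
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
ys = [phi((u - mu_r) / sd_r) / sd_r * Phi((u - mu_a) / sd_a) for u in xs]
p_trap = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

p_exact = Phi((mu_r - mu_a) / math.sqrt(sd_r ** 2 + sd_a ** 2))
print(round(p_trap, 5), round(p_exact, 5))
```

The same integral accepts any pair of densities in place of the normals, which is the point of the paper: the input distributions need not be assumed Gaussian.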

  14. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener-Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with random variables concentrated on the abscissa axis.

  15. A new geometrical approach to void statistics

    CERN Document Server

    Werner, M C

    2014-01-01

    Modelling cosmic voids as spheres in Euclidean space, the notion of a de Sitter configuration space is introduced. It is shown that a uniform distribution over this configuration space yields a power law approximating the void size distribution in an intermediate range of volumes, as well as an estimate for the fractal dimension of the large-scale structure.

  16. Probability Models for the Distribution of Copepods in Different Coastal Ecosystems Along the Straits of Malacca

    Science.gov (United States)

    Matias-Peralta, Hazel Monica; Ghodsi, Alireza; Shitan, Mahendran; Yusoff, Fatimah Md.

    Copepods are the most abundant microcrustaceans in marine waters and are the major food resource for many commercial fish species. In addition, changes in the distribution and population composition of copepods may also serve as an indicator of global climate change. Therefore, it is important to model the copepod distribution in different ecosystems. Copepod samples were collected from three different ecosystems (seagrass area, cage aquaculture area and coastal waters off a shrimp aquaculture farm) along the coastal waters of the Malacca Straits over a one-year period. In this study the major statistical analysis consisted of fitting different probability models. This paper highlights the fitting of probability distributions and discusses the adequacy of the fitted models. The usefulness of these fitted models would enable one to make probability statements about the distribution of copepods in the three different ecosystems.

  17. Computerized voiding diary.

    Science.gov (United States)

    Rabin, J M; McNett, J; Badlani, G H

    1993-01-01

    An electronic, computerized voiding diary, "Compu-Void" (patent pending), was developed to simplify, augment, and automate patients' recording of bladder symptomatology. A voiding diary as a tool has the potential to provide essential information for a more complete diagnostic, and therefore therapeutic, picture for each patient. Two major problems with the standard written voiding diary have been a lack of patient compliance and the limited amount of information it garners. Twenty-five women with various types of voiding dysfunctions were compared to twenty-five age- and parity-matched control women in order to determine patient preferences for the Compu-Void compared to the standard written voiding diary, compliance with each method, and the amount and quality of information obtained with each method. Over 90% of subjects and over 70% of control group patients preferred the Compu-Void over the written diary (P < 0.05), and the information obtained with the Compu-Void exceeded that obtained with the written method.

  18. A Class of Chaotic Sequences with Gauss Probability Distribution for Radar Mask Jamming

    Institute of Scientific and Technical Information of China (English)

    Ni-Ni Rao; Yu-Chuan Huang; Bin Liu

    2007-01-01

    A simple generation approach for chaotic sequences with Gauss probability distribution is proposed. Theoretical analysis and simulation based on the Logistic chaotic model show that the approach is feasible and effective. The distribution characteristics of the novel chaotic sequence are comparable to those of the standard normal distribution. Its mean and variance can be changed to the desired values. The novel sequences also have good randomness. The applications for radar mask jamming are analyzed.
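
    One simple way to realize such a generator (a sketch of the general idea, not necessarily the authors' exact construction) is a probability integral transform of the logistic map: the fully chaotic map's invariant density is the arcsine law, so applying that law's CDF yields uniform variates, and the inverse Gaussian CDF then gives a sequence with any desired mean and variance.

```python
import numpy as np
from scipy.stats import norm

def logistic_gaussian(n, x0=0.123456, mu=0.0, sigma=1.0):
    # Iterate the fully chaotic logistic map x_{k+1} = 4 x_k (1 - x_k)
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = 4.0 * x * (1.0 - x)
        # Guard against finite-precision absorption at the fixed point 0
        x = min(max(x, 1e-12), 1.0 - 1e-12)
        xs[k] = x
    # Invariant density of the map is the arcsine law with
    # CDF F(x) = (2/pi) * arcsin(sqrt(x)), so F(x_k) is uniform on (0, 1)
    u = (2.0 / np.pi) * np.arcsin(np.sqrt(xs))
    # The inverse Gaussian CDF maps the uniforms to N(mu, sigma^2)
    return mu + sigma * norm.ppf(u)

z = logistic_gaussian(200_000)
```

    The `mu` and `sigma` arguments let the output match any target mean and variance, echoing the abstract's claim that these can be changed to desired values.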

  19. Cosmology with void-galaxy correlations.

    Science.gov (United States)

    Hamaus, Nico; Wandelt, Benjamin D; Sutter, P M; Lavaux, Guilhem; Warren, Michael S

    2014-01-31

    Galaxy bias, the unknown relationship between the clustering of galaxies and the underlying dark matter density field is a major hurdle for cosmological inference from large-scale structure. While traditional analyses focus on the absolute clustering amplitude of high-density regions mapped out by galaxy surveys, we propose a relative measurement that compares those to the underdense regions, cosmic voids. On the basis of realistic mock catalogs we demonstrate that cross correlating galaxies and voids opens up the possibility to calibrate galaxy bias and to define a static ruler thanks to the observable geometric nature of voids. We illustrate how the clustering of voids is related to mass compensation and show that volume-exclusion significantly reduces the degree of stochasticity in their spatial distribution. Extracting the spherically averaged distribution of galaxies inside voids from their cross correlations reveals a remarkable concordance with the mass-density profile of voids.

  20. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    Science.gov (United States)

    Marshman, Emily; Singh, Chandralekha

    2017-03-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations.

  1. Score distributions of gapped multiple sequence alignments down to the low-probability tail

    Science.gov (United States)

    Fieth, Pascal; Hartmann, Alexander K.

    2016-08-01

    Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.

  2. Probability collectives a distributed multi-agent system approach for optimization

    CERN Document Server

    Kulkarni, Anand Jayant; Abraham, Ajith

    2015-01-01

    This book provides an emerging computational intelligence tool in the framework of collective intelligence for modeling and controlling distributed multi-agent systems, referred to as Probability Collectives. In the modified Probability Collectives methodology a number of constraint handling techniques are incorporated, which also reduce the computational complexity and improve the convergence and efficiency. Numerous examples and real world problems are used for illustration, which may also allow the reader to gain further insight into the associated concepts.

  3. The Exit Distribution for Smart Kinetic Walk with Symmetric and Asymmetric Transition Probability

    Science.gov (United States)

    Dai, Yan

    2017-03-01

    It has been proved that the distribution of the point where the smart kinetic walk (SKW) exits a domain converges in distribution to harmonic measure on the hexagonal lattice. For other lattices, it is believed that this result still holds, and there is good numerical evidence to support this conjecture. Here we examine the effect of the symmetry and asymmetry of the transition probability on each step of the SKW on the square lattice and test if the exit distribution converges in distribution to harmonic measure as well. From our simulations, the limiting exit distribution of the SKW with a non-uniform but symmetric transition probability as the lattice spacing goes to zero is the harmonic measure. This result does not hold for asymmetric transition probability. We are also interested in the difference between the exit distribution of the SKW with symmetric transition probability and harmonic measure. Our simulations provide strong support for an explicit conjecture about this first-order difference. The explicit formula for the conjecture will be given below.

  4. Evaluation of probability distributions for concentration fluctuations in a building array

    Science.gov (United States)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2017-10-01

    The wide range of values observed in a measured concentration time series after the release of a dispersing airborne pollutant from a point source in the atmospheric boundary layer, and the hazard level associated with the peak values, demonstrate the necessity of predicting the concentration probability distribution. For this, statistical models describing the probability of occurrence are preferably employed. In this paper a concentration database pertaining to a field experiment of dispersion in an urban-like area (MUST experiment) from a continuously emitting source is used for the selection of the best-performing statistical model between the Gamma and the Beta distributions. The skewness and kurtosis, as well as the inverses of the cumulative distribution function, were compared between the two statistical models and the experiment. The evaluation is performed in the form of validation metrics such as the Fractional Bias (FB), the Normalized Mean Square Error (NMSE) and the factor-of-2 percentage. The Beta probability distribution agreed with the experimental results better than the Gamma probability distribution, except for the 25th percentile. Also, according to significance tests using the BOOT software, the Beta model presented FB and NMSE values that are statistically different from those of the Gamma model, except for the 75th percentiles and the FB of the 99th percentiles. The effect of the stability conditions and source heights on the performance of the statistical models is also examined. For both cases the performance of the Beta distribution was slightly better than that of the Gamma.
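
    The validation metrics named in this abstract have standard definitions in the dispersion-modeling literature; the following sketch uses those conventional formulas (assumed, not extracted from the paper).

```python
import numpy as np

def fractional_bias(obs, pred):
    # FB = (<obs> - <pred>) / (0.5 * (<obs> + <pred>))
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def nmse(obs, pred):
    # NMSE = <(obs - pred)^2> / (<obs> * <pred>)
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def fac2(obs, pred):
    # Fraction of predictions within a factor of two of the observations
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))
```

    For a perfect model FB = 0, NMSE = 0 and the factor-of-2 fraction is 1; under this sign convention, a factor-of-two over-prediction of the mean gives FB = -2/3.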

  5. Ruin Probability and Joint Distributions of Some Actuarial Random Vectors in the Compound Pascal Model

    Institute of Scientific and Technical Information of China (English)

    Xian-min Geng; Shu-chen Wan

    2011-01-01

    The compound negative binomial model introduced in this paper is a discrete-time version of the compound Pascal model. We discuss the Markov properties of the surplus process, and study the ruin probability and the joint distributions of actuarial random vectors in this model. By the strong Markov property and the mass function of a defective renewal sequence, we obtain explicit expressions for the ruin probability, the finite-horizon ruin probability, the joint distributions of T, U(T - 1), |U(T)| and inf_{0 ≤ n < T_1} U(n) (i.e., the time of ruin, the surplus immediately before ruin, the deficit at ruin, and the maximal deficit from ruin to recovery), and the distributions of some actuarial random vectors.

  6. Net-charge probability distributions in heavy ion collisions at chemical freeze-out

    CERN Document Server

    Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V

    2011-01-01

    We explore net charge probability distributions in heavy ion collisions within the hadron resonance gas model. The distributions for strangeness, electric charge and baryon number are derived. We show that, within this model, net charge probability distributions and the resulting fluctuations can be computed directly from the measured yields of charged and multi-charged hadrons. The influence of multi-charged particles and quantum statistics on the shape of the distribution is examined. We discuss the properties of the net proton distribution along the chemical freeze-out line. The model results presented here can be compared with data at RHIC energies and at the LHC to possibly search for the relation between chemical freeze-out and QCD cross-over lines in heavy ion collisions.

  7. Analysis of Void Fraction Distribution and Departure from Nucleate Boiling in Single Subchannel and Bundle Geometries Using Subchannel, System, and Computational Fluid Dynamics Codes

    Directory of Open Access Journals (Sweden)

    Taewan Kim

    2012-01-01

    Full Text Available In order to assess the accuracy and validity of subchannel, system, and computational fluid dynamics codes, the Paul Scherrer Institut has participated in the OECD/NRC PSBT benchmark with the thermal-hydraulic system code TRACE5.0 developed by US NRC, the subchannel code FLICA4 developed by CEA, and the computational fluid dynamics code STAR-CD developed by CD-adapco. The PSBT benchmark consists of a series of void distribution exercises and departure from nucleate boiling exercises. The results reveal that the predictions of the subchannel code FLICA4 agree with the experimental data reasonably well in both steady-state and transient conditions. The analyses of single-subchannel experiments by means of the computational fluid dynamics code STAR-CD with the CD-adapco boiling model indicate that the prediction of the void fraction has no significant discrepancy from the experiments. The analyses with TRACE point out the necessity of additional assessment of the subcooled boiling model and bulk condensation model of TRACE.

  8. The Development of Voiding

    DEFF Research Database (Denmark)

    Olsen, Lars Henning

    2011-01-01

    The thesis addresses some new aspects in the development of voiding function from midgestation into early childhood.

  9. LAGRANGE MULTIPLIERS IN THE PROBABILITY DISTRIBUTIONS ELICITATION PROBLEM: AN APPLICATION TO THE 2013 FIFA CONFEDERATIONS CUP

    Directory of Open Access Journals (Sweden)

    Diogo de Carvalho Bezerra

    2015-12-01

    Full Text Available Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (the Brazil vs. Spain soccer game in the 2013 FIFA Confederations Cup final) was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.

  10. THE LEBESGUE-STIELJES INTEGRAL AS APPLIED IN PROBABILITY DISTRIBUTION THEORY

    Science.gov (United States)

    Functions of bounded variation and Borel measurable functions are set forth in the introduction. Chapter 2 is concerned with establishing a one-to-one correspondence between Lebesgue-Stieltjes measures and certain equivalence classes of functions which are monotone non-decreasing and continuous on the right. In Chapter 3 the Lebesgue-Stieltjes integral is defined and some of its properties are demonstrated. In Chapter 4 the probability distribution function is defined, and the notions of Chapters 2 and 3 are used to show that the Lebesgue-Stieltjes integral of any probability distribution function is well defined.

  11. Estimating the Upper Limit of Lifetime Probability Distribution, Based on Data of Japanese Centenarians.

    Science.gov (United States)

    Hanayama, Nobutane; Sibuya, Masaaki

    2016-08-01

    In modern biology, theories of aging fall mainly into two groups: damage theories and programed theories. If programed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be broken. In this article, to examine the real state of affairs, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years.
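
    The endpoint logic can be illustrated with a generalized Pareto fit: when the fitted shape parameter is negative, the distribution's support is bounded, and the implied upper limit of lifetime is the threshold minus scale/shape. The sketch below uses simulated excesses over an arbitrary 100-year threshold (all parameter values illustrative, not from the paper; scipy's `genpareto` parameterization is assumed).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Simulated lifetime excesses over a hypothetical 100-year threshold,
# drawn from a GPD with negative shape (i.e., a finite upper endpoint)
xi_true, sigma_true, threshold = -0.2, 4.0, 100.0
excesses = genpareto.rvs(xi_true, scale=sigma_true, size=5000, random_state=rng)

# Fit the GPD to the excesses, with the location fixed at zero
xi, loc, sigma = genpareto.fit(excesses, floc=0.0)

# For xi < 0 the support is bounded: upper limit = threshold - sigma / xi
upper_limit = threshold - sigma / xi
```

    Here the true endpoint is 100 + 4/0.2 = 120 years, and the fitted endpoint should land nearby; with real centenarian data the same formula would be applied to the fitted shape and scale.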

  12. Effect of void cluster on ductile failure evolution

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2016-01-01

    The behavior of a non-uniform void distribution in a ductile material is investigated by using a cell model analysis to study a material with a periodic pattern of void clusters. The special clusters considered consist of a number of uniformly spaced voids located along a plane perpendicular to the maximum principal tensile stress. A plane strain approximation is used, where the voids are parallel cylindrical holes. Clusters with different numbers of voids are compared with the growth of a single void, such that the total initial volume of the voids, and thus also the void volume fractions, are the same. To extend the understanding, different transverse stresses on the unit cell are considered to see the influence of different levels of stress triaxiality. Also considered are different initial ratios of the void spacing to the void radius inside the clusters, and results are shown for different levels of strain hardening.

  13. Supplemental topics on voids

    Energy Technology Data Exchange (ETDEWEB)

    Rood, H.J.

    1988-09-01

    Several topics concerning voids are presented, supplementing the report of Rood (1988). The discovery of the Coma supercluster and void and the recognition of the cosmological significance of superclusters and voids are reviewed. Galaxy redshift surveys and redshift surveys for the Abell clusters and very distant objects are discussed. Solar system and extragalactic dynamics are examined. Also, topics for future observational research on voids are recommended. 50 references.

  14. Comparative assessment of surface fluxes from different sources using probability density distributions

    Science.gov (United States)

    Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin

    2015-04-01

    Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For approximation of probability distributions and estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.

  15. Ventilation/perfusion lung scan probability category distributions in university and community hospitals.

    Science.gov (United States)

    Lowe, V J; Bullard, A G; Coleman, R E

    1995-12-01

    The criteria used in the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) study for the interpretation of ventilation/perfusion scans are widely used, and the probability of pulmonary embolism is determined from these criteria. The prevalence of pulmonary embolism in the PIOPED study was 33%. To investigate the similarity of patient populations who have ventilation/perfusion scans at one of the medical centers that participated in the PIOPED study and at a small community hospital, the authors evaluated the probability category distributions of lung scans at the two institutions. They retrospectively interpreted 54 and 49 ventilation/perfusion lung scans selected from January 1991 to June 1992 at Duke University Medical Center and at Central Carolina Hospital, respectively. Studies were interpreted according to the PIOPED criteria. The percentages of studies assigned to each category at Duke University Medical Center and Central Carolina Hospital were 17% and 27% normal or very low probability, 31% and 59% low probability, 39% and 10% intermediate probability, and 13% and 4% high probability, respectively. The different distribution of probability categories between the university and community hospitals suggests that the prevalence of disease may also be different. The post-test probability of pulmonary embolism is related to the prevalence of disease and the sensitivity and specificity of the ventilation/perfusion scan. Because these variables may differ in community hospital settings, the post-test probability of pulmonary embolism as determined by data from the PIOPED study should only be used in institutions with similar populations. Clinical management based upon the results of the PIOPED study may not be applicable to patients who have ventilation/perfusion scans performed in a community hospital.

  16. A study of process induced voids in resistance welding of thermoplastic composites

    OpenAIRE

    Shi, H.; Fernandez Villegas, I.; Bersee, H.E.N.

    2015-01-01

    Void formation in resistance welding of woven fabric reinforced thermoplastic composites was investigated. Void contents were measured using optical microscopy and digital image processing. Uneven void distributions were observed in the joints, with more voids found in the middle of the joints than at the edges. A higher welding pressure was shown to help reduce void generation. The mechanisms of void formation, in particular fibre de-compaction induced voids and residual moisture induced voids, are discussed.

  17. Importance measures for imprecise probability distributions and their sparse grid solutions

    Institute of Scientific and Technical Information of China (English)

    WANG; Pan; LU; ZhenZhou; CHENG; Lei

    2013-01-01

    For the imprecise probability distribution of a structural system, the variance-based importance measures (IMs) of the inputs are investigated, and three IMs are defined for the cases of random distribution parameters, interval distribution parameters, and a mixture of those two types of distribution parameters. The defined IMs can reflect the influence of the inputs on the output of a structural system with imprecise distribution parameters. Due to the large computational cost of the variance-based IMs, the sparse grid method is employed in this work to compute the variance-based IMs at each reference point of the distribution parameters. For the three imprecise distribution parameter cases, the sparse grid method and the combination of the sparse grid method with a genetic algorithm are used to compute the defined IMs. Numerical and engineering examples are employed to demonstrate the rationality of the defined IMs and the efficiency of the applied methods.

  18. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.
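
    The selection step can be sketched with a single goodness-of-fit criterion, the Kolmogorov-Smirnov statistic (the paper combines five criteria via the weight-of-ranks method, which is not reproduced here). The sample is synthetic, drawn from a Gamma distribution purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic "pollutant" sample drawn from a Gamma distribution (illustrative)
data = stats.gamma.rvs(a=2.5, scale=10.0, size=2000, random_state=rng)

candidates = {
    "lognorm": stats.lognorm,
    "expon": stats.expon,
    "gamma": stats.gamma,
    "weibull_min": stats.weibull_min,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0.0)            # location pinned at zero
    ks = stats.kstest(data, name, args=params)   # Kolmogorov-Smirnov test
    scores[name] = ks.statistic                  # smaller statistic = closer fit

best = min(scores, key=scores.get)
```

    With several criteria (KS, Anderson-Darling, AIC, ...), each candidate gets one rank per criterion, and the weight-of-ranks idea is to aggregate those ranks rather than rely on any single criterion.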

  19. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work out ...

  20. Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise

    Science.gov (United States)

    2011-04-01

    There is no closed-form solution for the probability of detection in K-distributed clutter, so numerical methods are required. The K distribution is a compound model ... the nodes and weights for the integration are calculated using matrix methods, so that a general-purpose numerical integration routine is not required.
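
    The compound character of the K distribution is what makes numerical evaluation tractable: conditioned on a Gamma-distributed local power ("texture"), the amplitude is Rayleigh. The sketch below (parameter values illustrative, not from the report) simulates that representation and checks the exceedance probability against the standard closed-form CCDF of the K distribution.

```python
import numpy as np
from scipy.special import kv, gamma as gamma_fn

rng = np.random.default_rng(1)

nu, c = 2.0, 2.0    # shape and scale of the K distribution (illustrative)
n = 400_000

# Compound ("texture times speckle") representation of K-distributed clutter:
# local power tau ~ Gamma(nu, scale = 4/c^2); amplitude^2 = tau * Exp(1)
tau = rng.gamma(shape=nu, scale=4.0 / c**2, size=n)
amp = np.sqrt(tau * rng.exponential(size=n))

t = 2.0                    # amplitude threshold
p_mc = np.mean(amp > t)    # Monte Carlo exceedance probability

# Standard closed-form CCDF of the K distribution:
# P(A > t) = (2 / Gamma(nu)) * (c t / 2)^nu * K_nu(c t)
p_exact = (2.0 / gamma_fn(nu)) * (c * t / 2.0) ** nu * kv(nu, c * t)
```

    A detection-probability calculation adds a target model inside the conditional Rayleigh/Rice step, but the same Gamma-mixture structure is what the quadrature in the report exploits.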

  1. Analysis of low probability of intercept (LPI) radar signals using the Wigner Distribution

    OpenAIRE

    Gau, Jen-Yu

    2002-01-01

    Approved for public release; distribution is unlimited. The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify using traditional periodogram signal processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals. Signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulation Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code, ...

  2. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  3. Providing probability distributions for the causal pathogen of clinical mastitis using naive Bayesian networks

    NARCIS (Netherlands)

    Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.

    2009-01-01

    Clinical mastitis (CM) can be caused by a wide variety of pathogens, and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which treatment to start.

  4. Using Preferred Outcome Distributions to estimate Value and Probability Weighting Functions in Decisions under Risk

    NARCIS (Netherlands)

    A.C.D. Donkers (Bas); T. Lourenco (Tania); B.G.C. Dellaert (Benedict); D.G. Goldstein (Daniel G.)

    2013-01-01

    In this paper we propose the use of preferred outcome distributions as a new method to elicit individuals' value and probability weighting functions in decisions under risk. Extant approaches for the elicitation of these two key ingredients of individuals' risk attitude typically rely ...

  5. Establishment and optimization of project investment risk income models on the basis of probability χ distribution

    Institute of Scientific and Technical Information of China (English)

    LU Wei-ji (吕渭济); CUI Wei (崔巍)

    2001-01-01

    In this paper, two kinds of models are presented and optimized for project investment risk income on the basis of the probability χ distribution. One kind of model is proved to have only a maximal value, and the other kind is proved to have no extreme values.

  7. Voids in cosmological simulations over cosmic time

    Science.gov (United States)

    Wojtak, Radosław; Powell, Devon; Abel, Tom

    2016-06-01

    We study the evolution of voids in cosmological simulations using a new method for tracing voids over cosmic time. The method is based on tracking watershed basins (contiguous regions around density minima) of well-developed voids at low redshift, on a regular grid of the density field. It enables us to construct a robust and continuous mapping between voids at different redshifts, from the initial conditions to the present time. We discuss how the new approach eliminates strong spurious effects of numerical origin that arise when void evolution is traced by matching voids between successive snapshots (by analogy to halo merger trees). We apply the new method to a cosmological simulation of a standard Λ-cold-dark-matter cosmological model and study the evolution of basic properties of typical voids (with effective radii 6 h^-1 Mpc < R_v < 20 h^-1 Mpc at redshift z = 0) such as volumes, shapes, matter density distributions and relative alignments. The final voids at low redshifts appear to retain a significant part of the configuration acquired in the initial conditions. Shapes of voids evolve in a collective way which barely modifies the overall distribution of the axial ratios. The evolution appears to have a weak impact on mutual alignments of voids, implying that the present state is in large part set up by the primordial density field. We present the evolution of dark matter density profiles computed on isodensity surfaces which comply with the actual shapes of voids. Unlike spherical density profiles, this approach enables us to demonstrate the development of the theoretically predicted bucket-like shape of the final density profiles, indicating a wide flat core and a sharp transition to high-density void walls.

  8. Probing cosmology and gravity with redshift-space distortions around voids

    CERN Document Server

    Hamaus, Nico; Lavaux, Guilhem; Wandelt, Benjamin D

    2015-01-01

    Cosmic voids in the large-scale structure of the Universe affect the peculiar motions of objects in their vicinity. Although these motions are difficult to observe directly, the clustering pattern of their surrounding tracers in redshift space is influenced in a unique way. This allows us to investigate the interplay between densities and velocities around voids, which is solely dictated by the laws of gravity. With the help of N-body simulations and derived mock-galaxy catalogs we calculate the average density fluctuations inside and outside voids identified with a watershed algorithm in redshift space and compare the results with the expectation from general relativity and the LCDM model of cosmology. We find that simple linear-theory predictions work remarkably well in describing the dynamics of voids even on relatively small scales. Adopting a Bayesian inference framework, we determine the full posterior probability distribution of our model parameters and forecast the achievable accuracy on measurements of ...

  9. Zipf's law for fractal voids and a new void-finder

    CERN Document Server

    Gaite, J

    2005-01-01

    Voids are a prominent feature of fractal point distributions, but there is no precise definition of what a void is (except in one dimension). Here we propose a definition of voids that uses methods of discrete stochastic geometry, in particular Delaunay and Voronoi tessellations, and we construct a new algorithm to search for voids in a point set. We find and rank-order the voids of suitable examples of fractal point sets in one and two dimensions to test whether Zipf's power law holds. We conclude affirmatively and, furthermore, find that the rank-ordering of voids conveys similar information to the number-radius function as regards the scaling regime and the transition to homogeneity. So it is an alternative tool in the analysis of fractal point distributions with crossover to homogeneity and, in particular, of the distribution of galaxies.

  10. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects. Accordingly, error detection codes allow detecting such errors. There are two classes of error detecting codes: classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error caused by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with low computational complexity and a low probability of masking offer the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking

  11. The Probability Density Functions to Diameter Distributions for Scots Pine Oriental Beech and Mixed Stands

    Directory of Open Access Journals (Sweden)

    Aydın Kahriman

    2011-11-01

    Determining the diameter distribution of a stand and its relations with stand age, site index, density and mixture percentage is very important both biologically and economically. The two-parameter Weibull, three-parameter Weibull, two-parameter Gamma, three-parameter Gamma, Beta, two-parameter Lognormal, three-parameter Lognormal, Normal, and Johnson SB probability density functions were used to determine the diameter distributions. This study aimed to compare the performance of these functions in describing diameter distributions and to identify the most successful function. The data were obtained from 162 temporary sample plots measured in Scots pine and Oriental beech mixed stands in the Black Sea Region. The results show that the four-parameter Johnson SB function is the most successful at describing the diameter distributions of both Scots pine and Oriental beech, based on error index values calculated from the difference between observed and predicted diameter distributions.

  12. Generalized quantum Fokker-Planck, diffusion and Smoluchowski equations with true probability distribution functions

    CERN Document Server

    Banik, S K; Ray, D S; Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar

    2002-01-01

    Traditionally, the quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasi-probability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using {\\it true probability distribution functions} is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their co-ordinates and momenta we derive a generalized quantum Langevin equation in $c$-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion and the Smoluchowski equations are the {\\it exact} quantum analogues of their classical counterparts. The present work is {\\it independent} of path integral techniques. The theory as developed here is a natural ext...

  13. Small-Scale Spatio-Temporal Distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) Using Probability Kriging.

    Science.gov (United States)

    Wang, S Q; Zhang, H Y; Li, Z L

    2016-10-01

    Understanding the spatio-temporal distribution of a pest in orchards can provide important information that could be used to design monitoring schemes and establish better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated by using probability kriging. Adults of B. minax were captured in two successive occurrences in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that the adults were estimated to occur in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement and distance to the forest edge of traps or spraying spots should be considered to enhance control of B. minax in small-scale orchards.

  14. About the probability distribution of a quantity with given mean and variance

    CERN Document Server

    Olivares, Stefano

    2012-01-01

    Supplement 1 to GUM (GUM-S1) recommends the use of maximum entropy principle (MaxEnt) in determining the probability distribution of a quantity having specified properties, e.g., specified central moments. When we only know the mean value and the variance of a variable, GUM-S1 prescribes a Gaussian probability distribution for that variable. When further information is available, in the form of a finite interval in which the variable is known to lie, we indicate how the distribution for the variable in this case can be obtained. A Gaussian distribution should only be used in this case when the standard deviation is small compared to the range of variation (the length of the interval). In general, when the interval is finite, the parameters of the distribution should be evaluated numerically, as suggested by I. Lira, Metrologia, 46 L27 (2009). Here we note that the knowledge of the range of variation is equivalent to a bias of the distribution toward a flat distribution in that range, and the principle of mini...
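
    The MaxEnt prescription summarized above can be checked numerically: among all densities with a given mean and variance on unbounded support, the Gaussian has the largest differential entropy. The sketch below compares closed-form entropies of three densities matched to the same variance; the value of sigma is arbitrary and purely illustrative.

    ```python
    import math

    def entropy_gaussian(sigma):
        # Differential entropy of N(mu, sigma^2): 0.5 * ln(2*pi*e*sigma^2)
        return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

    def entropy_uniform(sigma):
        # A uniform density on an interval of width w has variance w^2/12
        # and differential entropy ln(w); match the variance to sigma^2.
        return math.log(sigma * math.sqrt(12))

    def entropy_laplace(sigma):
        # A Laplace density with scale b has variance 2*b^2 and entropy 1 + ln(2*b)
        b = sigma / math.sqrt(2)
        return 1.0 + math.log(2 * b)

    sigma = 1.7  # arbitrary standard deviation, shared by all three densities
    h_gauss = entropy_gaussian(sigma)
    h_unif = entropy_uniform(sigma)
    h_lap = entropy_laplace(sigma)
    # The Gaussian entropy exceeds both alternatives, as MaxEnt predicts.
    ```

    On a finite interval the comparison changes, which is exactly why GUM-S1's Gaussian prescription should only be used when the standard deviation is small compared to the range of variation.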

  15. Earthquake probabilities and magnitude distribution (M≥6.7) along the Haiyuan fault, northwestern China

    Institute of Scientific and Technical Information of China (English)

    冉洪流

    2004-01-01

    In recent years, some researchers have studied the paleoearthquakes along the Haiyuan fault and revealed a lot of paleoearthquake events. All available information allows a more reliable analysis of earthquake recurrence intervals and earthquake rupture patterns along the Haiyuan fault. Based on this paleoseismological information, the recurrence probability and magnitude distribution for M≥6.7 earthquakes in the next 100 years along the Haiyuan fault can be obtained through weighted computation by using Poisson and Brownian passage time models and considering different rupture patterns. The result shows that the recurrence probability of MS≥6.7 earthquakes is about 0.035 in the next 100 years along the Haiyuan fault.
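
    For the Poisson branch of such a weighted computation, the probability of at least one event within a time horizon follows directly from the event rate. The snippet below is a generic illustration of that step; the recurrence interval used is hypothetical and not a value from the paper.

    ```python
    import math

    def poisson_prob_at_least_one(rate_per_year, horizon_years):
        # For a Poisson process, P(N >= 1 in T years) = 1 - exp(-rate * T)
        return 1.0 - math.exp(-rate_per_year * horizon_years)

    # Hypothetical mean recurrence interval in years (NOT a value from the paper)
    recurrence_interval = 1000.0
    p_100yr = poisson_prob_at_least_one(1.0 / recurrence_interval, 100.0)
    ```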

  16. The darkness that shaped the void: dark energy and cosmic voids

    CERN Document Server

    Bos, E G Patrick; Dolag, Klaus; Pettorino, Valeria

    2012-01-01

    Aims: We assess the sensitivity of void shapes to the nature of dark energy that was pointed out in recent studies. We investigate whether or not void shapes are usable as an observational probe in galaxy redshift surveys. We focus on the evolution of the mean void ellipticity and its underlying physical cause. Methods: We analyse the morphological properties of voids in five sets of cosmological N-body simulations, each with a different nature of dark energy. Comparing voids in the dark matter distribution to those in the halo population, we address the question of whether galaxy redshift surveys yield sufficiently accurate void morphologies. Voids are identified using the parameter-free Watershed Void Finder. The effect of redshift distortions is investigated as well. Results: We confirm the statistically significant sensitivity of voids in the dark matter distribution. We identify the level of clustering as measured by \sigma_8(z) as the main cause of differences in mean void shape. We find that in the h...

  17. Comparative assessment of surface fluxes from different sources: a framework based on probability distributions

    Science.gov (United States)

    Gulev, S.

    2015-12-01

    Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For approximation of probability distributions and estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions. Our assessment also discriminated different reanalyses and satellite products with respect to their ability to quantify the role of extreme surface turbulent fluxes in forming ocean heat release in different regions.

  18. Probability distribution of surface wind speed induced by convective adjustment on Venus

    Science.gov (United States)

    Yamamoto, Masaru

    2017-03-01

    The influence of convective adjustment on the spatial structure of Venusian surface wind and the probability distribution of its wind speed is investigated using an idealized weather research and forecasting model. When the initially uniform wind is much weaker than the convective wind, patches of both prograde and retrograde winds with scales of a few kilometers are formed during active convective adjustment. After the active convective adjustment, because the small-scale convective cells and their related vertical momentum fluxes dissipate quickly, the large-scale (>4 km) prograde and retrograde wind patches remain on the surface and in the longitude-height cross-section. This suggests the coexistence of local prograde and retrograde flows, which may correspond to those observed by Pioneer Venus below 10 km altitude. The probability distributions of surface wind speed V during the convective adjustment have a similar form in different simulations, with a sharp peak around ∼0.1 m s^-1 and a bulge developing on the flank of the probability distribution. This flank bulge is associated with the most active convection, which has a probability distribution peaking at a wind speed 1.5 times greater than the Weibull fitting parameter c during the convective adjustment. The Weibull distribution P(>V) = exp[-(V/c)^k] with best-estimate coefficients of Lorenz (2016) is reproduced during convective adjustments induced by a potential energy of ∼7 × 10^7 J m^-2, which is calculated from the difference in total potential energy between the initially unstable and neutral states. The maximum vertical convective heat flux magnitude is proportional to the potential energy of the convective adjustment in the experiments with the initial unstable-layer thickness altered. The present work suggests that convective adjustment is a promising process for producing the wind structure, occasionally generating surface winds of ∼1 m s^-1 and retrograde wind patches.
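
    The Weibull exceedance form quoted in the abstract, P(>V) = exp[-(V/c)^k], can be sanity-checked by sampling. The parameters below are illustrative only, not the best-estimate coefficients of Lorenz (2016).

    ```python
    import numpy as np

    # Illustrative Weibull parameters; NOT the best-estimate coefficients of Lorenz (2016)
    c, k = 0.5, 2.0
    rng = np.random.default_rng(0)
    speeds = c * rng.weibull(k, size=200_000)  # synthetic surface wind speeds (m/s)

    def exceedance(V):
        # Model exceedance probability: P(> V) = exp[-(V/c)^k]
        return np.exp(-(V / c) ** k)

    V0 = 0.7
    p_empirical = float(np.mean(speeds > V0))
    p_model = float(exceedance(V0))
    # p_empirical and p_model agree to within sampling noise
    ```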

  19. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-A F^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
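
    In one dimension the quoted Gaussian form normalizes to P(F) = sqrt(A/π) exp(-A F^2), with second moment ⟨F²⟩ = 1/(2A). A quick numerical check of both properties (the value of A is chosen arbitrarily; in the paper it depends on density, temperature, and the pair potential):

    ```python
    import numpy as np

    A = 2.0  # illustrative constant; in the paper A depends on density,
             # temperature and the Fourier transform of the pair potential
    F = np.linspace(-10.0, 10.0, 200_001)
    dF = F[1] - F[0]
    P = np.sqrt(A / np.pi) * np.exp(-A * F ** 2)  # normalized 1-D Gaussian form

    norm = float(np.sum(P) * dF)              # integrates to 1
    mean_sq = float(np.sum(F ** 2 * P) * dF)  # <F^2> = 1/(2A) for this form
    ```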

  20. Explicit Expressions for the Ruin Probabilities of Erlang Risk Processes with Pareto Individual Claim Distributions

    Institute of Scientific and Technical Information of China (English)

    Li Wei; Hai-liang Yang

    2004-01-01

    In this paper we first consider a risk process in which claim inter-arrival times and the time until the first claim have an Erlang(2) distribution. An explicit solution is derived for the probability of ultimate ruin, given an initial reserve of u, when the claim size follows a Pareto distribution. Following Ramsay [8], Laplace transforms and exponential integrals are used to derive the solution, which involves a single integral of real-valued functions along the positive real line, and the integrand is not of an oscillating kind. Then we show that the ultimate ruin probability can be expressed as the sum of expected values of functions of two different Gamma random variables. Finally, the results are extended to the Erlang(n) case. Numerical examples are given to illustrate the main results.
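
    A finite-horizon Monte Carlo estimate makes a useful sanity check against such analytic ruin formulae. The sketch below simulates a risk process with Erlang(2) inter-arrival times and Pareto claims; all parameter values (premium rate, Pareto index, reserve) are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ruin_probability(u, premium_rate, n_paths=2_000, n_claims=200):
        # Monte Carlo estimate of the finite-horizon ruin probability for a risk
        # process with Erlang(2) claim inter-arrival times and Pareto claim sizes.
        # All parameter values here are illustrative, not taken from the paper.
        ruined = 0
        for _ in range(n_paths):
            reserve = u
            for _ in range(n_claims):
                wait = rng.gamma(2.0, 0.5)        # Erlang(2) waiting time, mean 1
                reserve += premium_rate * wait    # premiums accrue continuously
                reserve -= rng.pareto(2.5) + 1.0  # Pareto claim with x_m = 1, mean 5/3
                if reserve < 0:
                    ruined += 1
                    break
        return ruined / n_paths

    p_ruin_10 = ruin_probability(u=10.0, premium_rate=2.0)
    p_ruin_0 = ruin_probability(u=0.0, premium_rate=2.0)
    # A larger initial reserve lowers the estimated ruin probability.
    ```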

  1. Estimating the probability distribution of von Mises stress for structures undergoing random excitation. Part 1: Derivation

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, D.; Reese, G.

    1998-09-01

    The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good precision numerically. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.
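
    The Monte Carlo approach the authors improve upon can be sketched directly: sample random stress components, form the von Mises stress, and read off an exceedance probability. All distributions and limits below are illustrative placeholders, not the paper's structural response statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    # Hypothetical zero-mean Gaussian stress components in MPa; these stand in
    # for the random-process response statistics derived in the paper.
    sx, sy, sz = (rng.normal(0.0, 40.0, n) for _ in range(3))
    txy, tyz, tzx = (rng.normal(0.0, 20.0, n) for _ in range(3))

    # von Mises stress from the six independent stress components
    svm = np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                  + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

    yield_stress = 250.0  # illustrative design limit (MPa)
    p_exceed = float(np.mean(svm > yield_stress))
    ```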

  2. Tracing the gravitational potential using cosmic voids

    CERN Document Server

    Nadathur, Seshadri; Crittenden, Robert

    2016-01-01

    The properties of large underdensities in the distribution of galaxies in the Universe, known as cosmic voids, are potentially sensitive probes of fundamental physics. We use data from the MultiDark suite of N-body simulations and multiple halo occupation distribution mocks to study the relationship between galaxy voids and the gravitational potential $\\Phi$. We find that the majority of galaxy voids correspond to local density minima in larger-scale overdensities, and thus lie in potential wells. However, a subset of voids can be identified that closely trace maxima of the gravitational potential and thus stationary points of the velocity field. We identify a new void observable, $\\lambda_v$, which depends on a combination of the void size and the average galaxy density contrast within the void, and show that it provides a good proxy indicator of the potential at the void location. A simple linear scaling of $\\Phi$ as a function of $\\lambda_v$ is found to hold, independent of the redshift and properties of t...

  3. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of input error. Moreover, we observed the paradox that, in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimate still overlap the true death time interval.

  4. An Undersea Mining Microseism Source Location Algorithm Considering Wave Velocity Probability Distribution

    OpenAIRE

    2014-01-01

    The traditional mine microseism locating methods are mainly based on the assumption that the wave velocity is uniform throughout the space, which leads to errors because the assumption does not hold in nature. In this paper, the wave velocity is regarded as a random variable, and the probability distribution information of the wave velocity is fused into the traditional locating method. This paper puts forward the microseism source location method for undersea mining on condition o...

  5. Utilizing Probability Distribution Functions and Ensembles to Forecast lonospheric and Thermosphere Space Weather

    Science.gov (United States)

    2016-04-26

    created using probability distribution functions. This new model performs as well or better than other modern models of the solar wind velocity. In... Physics , 120: 7987-8001, doi: 10.1002/2014JA020962. Abstract: The temporal and spatial variations of the thermospheric mass density during a series of...2015), Theoretical study of zonal differences of electron density at midlatitudes with GITM simulation, J. Geophys. Res. Space Physics , 120, 2951

  6. Pauling resonant structures in real space through electron number probability distributions.

    Science.gov (United States)

    Pendas, A Martín; Francisco, E; Blanco, M A

    2007-02-15

    A general hierarchy of the coarse-grained electron probability distributions induced by exhaustive partitions of the physical space is presented. It is argued that when the space is partitioned into atomic regions the consideration of these distributions may provide a first step toward an orbital invariant treatment of resonant structures. We also show that, in this case, the total molecular energy and its components may be partitioned into structure contributions, providing a fruitful extension of the recently developed interacting quantum atoms approach (J. Chem. Theory Comput. 2005, 1, 1096). The above ideas are explored in the hydrogen molecule, where a complete statistical and energetic decomposition into covalent and ionic terms is presented.

  7. Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function

    CERN Document Server

    Ward, Rachel L; Sills, Alison

    2014-01-01

    The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...

  8. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    Science.gov (United States)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

    We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^{-ψ(k)}, where k ≡ S/ln[L/L_0], the large deviation function ψ(k) is found explicitly, and L_0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.

  9. Pore size distribution, survival probability, and relaxation time in random and ordered arrays of fibers

    Science.gov (United States)

    Tomadakis, Manolis M.; Robertson, Teri J.

    2003-07-01

    We present a random walk based investigation of the pore size probability distribution and its moments, the survival probability and mean survival time, and the principal relaxation time, for random and ordered arrays of cylindrical fibers of various orientation distributions. The dimensionless mean survival time, principal relaxation time, mean pore size, and mean square pore size are found to increase with porosity, remain practically independent of the directionality of random fiber beds, and attain lower values for ordered arrays. Wide pore size distributions are obtained for random fiber structures and relatively narrow for ordered square arrays, all in very good agreement with theoretically predicted limiting values. Analytical results derived for the pore size probability and its lower moments for square arrays of fibers practically coincide with the corresponding simulation results. Earlier variational bounds on the mean survival time and principal relaxation time are obeyed by our numerical results in all cases, and are found to be quite sharp up to very high porosities. Dimensionless groups representing the deviation of such bounds from our simulation results vary in practically the same range as the corresponding values reported earlier for beds of spherical particles. A universal scaling expression of the literature relating the mean survival time to the mean pore size [S. Torquato and C. L. Y. Yeong, J. Chem. Phys. 106, 8814 (1997)] agrees very well with our results for all types of fiber structures, thus validated for the first time for anisotropic porous media.

  10. Optimal design of unit hydrographs using probability distribution and genetic algorithms

    Indian Academy of Sciences (India)

    Rajib Kumar Bhattacharjya

    2004-10-01

    A nonlinear optimization model is developed to transmute a unit hydrograph into a probability distribution function (PDF). The objective function is to minimize the sum of the square of the deviation between predicted and actual direct runoff hydrograph of a watershed. The predicted runoff hydrograph is estimated by using a PDF. In a unit hydrograph, the depth of rainfall excess must be unity and the ordinates must be positive. Incorporation of a PDF ensures that the depth of rainfall excess for the unit hydrograph is unity, and the ordinates are also positive. Unit hydrograph ordinates are in terms of intensity of rainfall excess on a discharge per unit catchment area basis, the unit area thus representing the unit rainfall excess. The proposed method does not have any constraint. The nonlinear optimization formulation is solved using binary-coded genetic algorithms. The number of variables to be estimated by optimization is the same as the number of probability distribution parameters; gamma and log-normal probability distributions are used. The existing nonlinear programming model for obtaining optimal unit hydrograph has also been solved using genetic algorithms, where the constrained nonlinear optimization problem is converted to an unconstrained problem using penalty parameter approach. The results obtained are compared with those obtained by the earlier LP model and are fairly similar.
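
    The key property the PDF parameterization buys is automatic satisfaction of the two unit-hydrograph constraints: unit rainfall-excess depth and non-negative ordinates. A minimal sketch with a gamma PDF (shape and scale values hypothetical):

    ```python
    import math

    def gamma_pdf(t, shape, scale):
        # Gamma PDF: t^(k-1) * exp(-t/theta) / (Gamma(k) * theta^k)
        if t <= 0.0:
            return 0.0
        return (t ** (shape - 1) * math.exp(-t / scale)
                / (math.gamma(shape) * scale ** shape))

    # Hypothetical unit-hydrograph parameters (shape, and scale in hours)
    shape, scale = 3.0, 2.0
    dt = 0.01
    times = [i * dt for i in range(6000)]  # 0 to 60 hours
    ordinates = [gamma_pdf(t, shape, scale) for t in times]

    # Rainfall-excess depth: the area under a PDF is 1 by construction,
    # and PDF ordinates are non-negative, so both UH constraints hold.
    depth = sum(o * dt for o in ordinates)
    ```

    In the paper's formulation, only the two distribution parameters need to be estimated by the genetic algorithm, with no explicit constraints required.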

  11. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    Science.gov (United States)

    Silva, Antonio

    2005-03-01

    It is well known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market, before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with a stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with actual stock-market data, ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
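
    The crossover described in this record, from tent-shaped short-lag returns to Gaussian long-lag returns, can be reproduced with a minimal Euler-Maruyama simulation of Heston-type dynamics. The parameters below are illustrative, not the calibrated values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heston-type dynamics, Euler-Maruyama discretization (illustrative parameters,
# not the calibrated values from the papers cited above):
#   dv = -gamma*(v - theta)*dt + kappa*sqrt(v)*dW1   (variance, CIR process)
#   dx = sqrt(v)*dW2                                 (x = log price, drift dropped)
gamma_, theta, kappa, dt = 2.0, 0.04, 0.3, 1.0 / 252
n_steps, n_paths = 2000, 500
v = np.full(n_paths, theta)
x = np.zeros((n_steps, n_paths))
for t in range(1, n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    v = np.abs(v - gamma_ * (v - theta) * dt + kappa * np.sqrt(v) * dW1)  # reflect at 0
    x[t] = x[t - 1] + np.sqrt(v) * dW2

def excess_kurtosis(r):
    r = r - r.mean()
    return float((r**4).mean() / (r**2).mean() ** 2 - 3.0)

# Short-lag returns mix over the fluctuating variance (heavy, tent-shaped tails);
# long-lag returns aggregate toward a Gaussian, so excess kurtosis decays.
k_short = excess_kurtosis((x[1:] - x[:-1]).ravel())
k_long = excess_kurtosis((x[250:] - x[:-250]).ravel())
print(f"excess kurtosis: lag 1 = {k_short:.2f}, lag 250 = {k_long:.2f}")
```

    The positive excess kurtosis at short lags and its decay toward zero at long lags is the signature of the exponential-to-Gaussian interpolation described in the abstract.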

  12. Effect of Rain on Probability Distributions Fitted to Vehicle Time Headways

    Directory of Open Access Journals (Sweden)

    Hashim Mohammed Alhassan

    2012-01-01

    Time headway data generated under different rain conditions were fitted to probability distributions to determine which ones best describe headway behaviour in wet weather. Data were collected for two months on the J5, a principal road in Johor Bahru, and headways in the no-rain condition were analysed and compared with the rain-generated headway data. The results showed a decrease in headways between the no-rain and rain conditions, with further decreases as rainfall intensity increased. Between the no-rain and light rain conditions there was a 15.66% reduction in mean headways; the reduction between the no-rain and medium rain conditions was 19.97%, and between the no-rain and heavy rain conditions 25.65%. This trend is already acknowledged in the literature. The Burr probability distribution ranked first among five others in describing headway behaviour during rainfall, passing the K-S, A2 and C-S goodness-of-fit tests at the 95% and 99% levels, respectively. The scale parameter of the Burr model and the P-value increased as the rain intensity increased, suggesting more vehicular clustering during rainfall, with the probability of clustering increasing with rain intensity. The coefficient of variation and skewness also pointed towards increased vehicle clustering. The Burr probability distribution can therefore be applied to model headways in both rain and no-rain weather conditions.
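
    The fit-and-test workflow in this record can be sketched with SciPy. Note the abstract does not say which Burr type was used; `burr12` (Burr Type XII, a common choice for headways) is an assumption here, and the sample is synthetic rather than the J5 data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "headway" sample in seconds; real data would come from traffic counts.
# burr12 is the Burr Type XII family, a common choice for headway modelling
# (whether the study used Type III or XII is not stated in the abstract).
headways = stats.burr12.rvs(c=3.0, d=2.0, scale=2.5, size=2000, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (headways are positive).
c, d, loc, scale = stats.burr12.fit(headways, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted CDF.
ks_stat, p_value = stats.kstest(headways, stats.burr12(c, d, loc=loc, scale=scale).cdf)
print(f"c={c:.2f}, d={d:.2f}, scale={scale:.2f}, KS stat={ks_stat:.3f}, p={p_value:.3f}")
```

    Comparing such fits (and K-S statistics) across rainfall-intensity bins is how the scale parameter's growth with rain intensity would show up.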

  13. VIDE: The Void IDentification and Examination toolkit

    CERN Document Server

    Sutter, P M; Hamaus, Nico; Pisani, Alice; Wandelt, Benjamin D; Warren, Michael S; Villaescusa-Navarro, Francisco; Zivick, Paul; Mao, Qingqing; Thompson, Benjamin B

    2014-01-01

    We present VIDE, the Void IDentification and Examination toolkit, an open-source Python/C++ code for finding cosmic voids in galaxy redshift surveys and N-body simulations, characterizing their properties, and providing a platform for more detailed analysis. At its core, VIDE uses a greatly enhanced version of ZOBOV (Neyrinck 2008) to calculate a Voronoi tessellation for estimating the density field and a watershed transform to construct voids. The watershed levels are used to place voids in a hierarchical tree. VIDE provides significant additional functionality for both pre- and post-processing: for example, VIDE can work with volume- or magnitude-limited galaxy samples with arbitrary survey geometries, or dark matter particles or halo catalogs in a variety of common formats. It can also randomly subsample inputs and includes a Halo Occupation Distribution model for constructing mock galaxy populations. VIDE outputs a summary of void properties in plain ASCII, and provides a Python API to perform many analysi...
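
    The first step of the ZOBOV pipeline, estimating density as the inverse Voronoi cell volume of each tracer, can be sketched in two dimensions with SciPy. This is a toy: VIDE/ZOBOV work in 3-D and handle survey edges with buffer particles, which are simply skipped here.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 1.0, size=(300, 2))  # mock 2-D "galaxy" positions

vor = Voronoi(points)

# ZOBOV-style density estimate: density of a tracer ~ 1 / (its Voronoi cell volume).
# Cells touching the boundary are open (their region contains vertex index -1)
# and are skipped; VIDE itself closes them with buffer particles.
densities = {}
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:
        continue
    area = ConvexHull(vor.vertices[region]).volume  # for 2-D hulls .volume is the area
    densities[i] = 1.0 / area

# Density minima are the seeds around which the watershed transform grows voids.
print(f"{len(densities)} closed cells, min density {min(densities.values()):.1f}")
```

    The watershed transform and the hierarchical void tree are then built on top of this density field; only the tessellation step is shown here.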

  14. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    CERN Document Server

    Burkhart, Blakesley; Murray, Claire; Stanimirovic, Snezana

    2015-01-01

    The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e. A_V < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density probability distribution function, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution, and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow and at column densities larger than...

  15. Criticality of the net-baryon number probability distribution at finite density

    Directory of Open Access Journals (Sweden)

    Kenji Morita

    2015-02-01

    We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition, which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net-baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.

  16. Voiding dysfunction - A review

    Directory of Open Access Journals (Sweden)

    Sripathi V

    2005-01-01

    In a child who is toilet trained, the sudden onset of daytime wetting with frequency or urgency is alarming to the parents. Initially this subject was subdivided into a number of descriptive clinical conditions, which led to much confusion in recognition and management. Subsequently, the term elimination dysfunction was coined by Stephen Koff to emphasise the association between recurrent urinary infection, wetting, constipation and bladder overactivity. From a urodynamic point of view, in voiding dysfunction there is either detrusor overactivity during bladder filling or dyssynergic action between the detrusor and the external sphincter during voiding. Identifying a given condition as a 'filling phase dysfunction' or 'voiding phase dysfunction' helps to provide appropriate therapy. Objective clinical criteria should be used to define voiding dysfunction. These include bladder wall thickening, a large capacity bladder with infrequent voiding, bladder trabeculation, spinning-top deformity of the urethra, and a clinically demonstrated Vincent's curtsy. The recognition and treatment of constipation is central to the adequate treatment of voiding dysfunction. Transcutaneous electrical nerve stimulation for the treatment of detrusor overactivity, biofeedback with uroflow EMG to correct dyssynergic voiding, and behavioral therapy all serve to correct voiding dysfunction in its early stages. In established neurogenic bladder disease, Botulinum Toxin A injections into the detrusor or the external sphincter may help restore continence, especially in those refractory to drug therapy. However, in children in whom the upper tracts are threatened, augmentation of the bladder may still be needed.

  17. Voids in cosmological simulations over cosmic time

    CERN Document Server

    Wojtak, Radosław; Abel, Tom

    2016-01-01

    We study the evolution of voids in cosmological simulations using a new method for tracing voids over cosmic time. The method is based on tracking watershed basins (contiguous regions around density minima) of well developed voids at low redshift, on a regular grid of the density field. It enables us to construct a robust and continuous mapping between voids at different redshifts, from initial conditions to the present time. We discuss how the new approach eliminates strong spurious effects of numerical origin when void evolution is traced by matching voids between successive snapshots (by analogy to halo merger trees). We apply the new method to a cosmological simulation of a standard LambdaCDM cosmological model and study the evolution of basic properties of typical voids (with effective radii between 6 Mpc/h and 20 Mpc/h at redshift z=0) such as volumes, shapes, matter density distributions and relative alignments. The final voids at low redshifts appear to retain a significant part of the configuration acquired in in...

  18. Void Statistics and Void Galaxies in the 2dFGRS

    CERN Document Server

    von Benda-Beckmann, Alexander M

    2007-01-01

    For the 2dFGRS we study the properties of voids and of fainter galaxies within voids that are defined by brighter galaxies. Our results are compared with simulated galaxy catalogues from the Millennium simulation coupled with a semianalytical galaxy formation recipe. We derive the void size distribution and discuss its dependence on the faint magnitude limit of the galaxies defining the voids. While voids among faint galaxies are typically smaller than those among bright galaxies, the ratio of the void sizes to the mean galaxy separation reaches larger values. This is well reproduced in the mock galaxy samples studied. We provide analytic fitting functions for the void size distribution. Furthermore, we study the galaxy population inside voids defined by objects with $B_J - 5\log h < -20$ and diameter larger than $10\,h^{-1}$ Mpc. We find a clear bimodality of the void galaxies similar to the average comparison sample. We confirm the enhanced abundance of galaxies in the blue cloud and a depression of the number of ...

  19. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Science.gov (United States)

    Sharma, Anurag; Kumar, Bimlesh

    2017-02-01

    The present experimental study is carried out on the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed and away-from-bed surfaces for both no-seepage and seepage flow. Laboratory experiments were conducted on a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimental calculation of the PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events is compared with the theoretical expression obtained from a Gram-Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) is used to measure the similarity between theoretical and experimental PDFs. The value of JSD for PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD value for PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDFs of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The value of JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels, typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description of the turbulent flow at either a single point or a finite number of points.
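
    The JSD comparison between an empirical PDF and a theoretical reference can be sketched with SciPy on synthetic data. One caveat worth noting: SciPy's `jensenshannon` returns the Jensen-Shannon *distance*, so it must be squared to obtain the divergence values of the kind quoted in the abstract.

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(3)

# Synthetic "velocity fluctuation" sample, made mildly non-Gaussian on purpose.
u = rng.standard_normal(50000) + 0.05 * rng.standard_normal(50000) ** 3

# Empirical PDF on a common grid vs a Gaussian reference with matched moments.
edges = np.linspace(-5.0, 5.0, 101)
p_emp, _ = np.histogram(u, bins=edges, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
p_ref = stats.norm.pdf(centres, loc=u.mean(), scale=u.std())

# SciPy's jensenshannon returns the JS *distance* (square root of the divergence)
# and normalises its inputs internally; square it to get the divergence itself.
jsd = jensenshannon(p_emp, p_ref, base=2) ** 2
print(f"JSD = {jsd:.5f}")
```

    In the study, the reference PDF would be the Gram-Charlier-based exponential expression rather than a plain Gaussian; the comparison machinery is identical.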

  20. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    Directory of Open Access Journals (Sweden)

    Fonseca Rasmus

    2009-10-01

    Background: Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments make up nearly 40% of proteins and do not have any apparent recurrent patterns, which complicates the overall prediction accuracy of protein structure prediction methods. Luckily, previous work has indicated that coil segments are in fact not completely random in structure and that flanking residues do seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been made previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30° × 30° area of the dihedral angle space) for all amino acids in the data set compared to baseline statistics. An accuracy comparable to that of secondary structure prediction (≈ 80%) is achieved by observing the 20 bins with highest output values. Conclusion: Many different protein structure prediction methods exist and each uses different tools and auxiliary predictions to help determine the native structure. In this work the sequence is used to predict local context-dependent dihedral angle propensities in coil regions. This predicted distribution can potentially improve tertiary structure prediction.
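
    The 30° × 30° binning of the (phi, psi) dihedral space and the "most probable bin" lookup can be sketched with NumPy. The angle sample below is mock data clustered near a helix-like region; in the paper these probabilities come from the trained network's output layer.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mock (phi, psi) dihedral angles in degrees, clustered near a helix-like region.
# A real predictor would emit these probabilities from the trained network.
phi = rng.normal(-75.0, 25.0, 5000)
psi = rng.normal(-45.0, 25.0, 5000)

# 30 deg x 30 deg bins over the Ramachandran plane give a 12 x 12 probability grid.
edges = np.arange(-180, 181, 30)
counts, _, _ = np.histogram2d(phi, psi, bins=(edges, edges))
prob = counts / counts.sum()

# "Most probable bin" lookup, the quantity scored in the abstract's 4-68% figure.
i, j = np.unravel_index(prob.argmax(), prob.shape)
print(f"most probable bin: phi in [{edges[i]}, {edges[i+1]}), "
      f"psi in [{edges[j]}, {edges[j+1]}), p = {prob[i, j]:.3f}")
```

    Ranking all 144 bins by probability and keeping the top 20 gives the coverage-style accuracy (≈ 80%) mentioned in the Results.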

  1. Evolution Equation for a Joint Tomographic Probability Distribution of Spin-1 Particles

    Science.gov (United States)

    Korennoy, Ya. A.; Man'ko, V. I.

    2016-11-01

    The nine-component positive vector optical tomographic probability portrait of the quantum state of spin-1 particles, containing full spatial and spin information about the state without redundancy, is constructed. The suggested approach is also extended to the symplectic tomography representation and to representations with quasidistributions such as the Wigner function, Husimi Q-function, and Glauber-Sudarshan P-function. The evolution equations for the constructed vector optical and symplectic tomograms and vector quasidistributions for an arbitrary Hamiltonian are found. The evolution equations are also obtained in the special case of a charged spin-1 particle in an arbitrary electromagnetic field; these are analogs of the non-relativistic Proca equation in the appropriate representations. The generalization of the proposed approach to arbitrary spin is discussed. The possibility of formulating quantum mechanics of systems with spin in terms of joint probability distributions, without the use of wave functions or density matrices, is explicitly demonstrated.

  2. Label Ranking with Abstention: Predicting Partial Orders by Thresholding Probability Distributions (Extended Abstract)

    CERN Document Server

    Cheng, Weiwei

    2011-01-01

    We consider an extension of the setting of label ranking, in which the learner is allowed to make predictions in the form of partial instead of total orders. Predictions of that kind are interpreted as a partial abstention: If the learner is not sufficiently certain regarding the relative order of two alternatives, it may abstain from this decision and instead declare these alternatives as being incomparable. We propose a new method for learning to predict partial orders that improves on an existing approach, both theoretically and empirically. Our method is based on the idea of thresholding the probabilities of pairwise preferences between labels as induced by a predicted (parameterized) probability distribution on the set of all rankings.

  3. Probability distributions of whisker-surface contact: quantifying elements of the rat vibrissotactile natural scene.

    Science.gov (United States)

    Hobbs, Jennifer A; Towal, R Blythe; Hartmann, Mitra J Z

    2015-08-01

    Analysis of natural scene statistics has been a powerful approach for understanding neural coding in the auditory and visual systems. In the field of somatosensation, it has been more challenging to quantify the natural tactile scene, in part because somatosensory signals are so tightly linked to the animal's movements. The present work takes a step towards quantifying the natural tactile scene for the rat vibrissal system by simulating rat whisking motions to systematically investigate the probabilities of whisker-object contact in naturalistic environments. The simulations permit an exhaustive search through the complete space of possible contact patterns, thereby allowing for the characterization of the patterns that would most likely occur during long sequences of natural exploratory behavior. We specifically quantified the probabilities of 'concomitant contact', that is, given that a particular whisker makes contact with a surface during a whisk, what is the probability that each of the other whiskers will also make contact with the surface during that whisk? Probabilities of concomitant contact were quantified in simulations that assumed increasingly naturalistic conditions: first, the space of all possible head poses; second, the space of behaviorally preferred head poses as measured experimentally; and third, common head poses in environments such as cages and burrows. As environments became more naturalistic, the probability distributions shifted from exhibiting a 'row-wise' structure to a more diagonal structure. Results also reveal that the rat appears to use motor strategies (e.g. head pitches) that generate contact patterns that are particularly well suited to extract information in the presence of uncertainty. © 2015. Published by The Company of Biologists Ltd.

  4. The probability distribution functions of emission line flux measurements and their ratios

    CERN Document Server

    Wesson, R; Scicluna, P

    2016-01-01

    Many physical parameters in astrophysics are derived using the ratios of two observed quantities. If the relative uncertainties on measurements are small enough, uncertainties can be propagated analytically using simplifying assumptions, but for large normally distributed uncertainties, the probability distribution of the ratio becomes skewed, with a modal value offset from that expected in Gaussian uncertainty propagation. Furthermore, the most likely value of a ratio A/B is not equal to the reciprocal of the most likely value of B/A. The effect is most pronounced when the uncertainty on the denominator is larger than that on the numerator. We show that this effect is seen in an analysis of 12,126 spectra from the Sloan Digital Sky Survey. The intrinsically fixed ratio of the [O III] lines at 4959 and 5007 Å is conventionally expressed as the ratio of the stronger line to the weaker line. Thus, the uncertainty on the denominator is larger, and non-Gaussian probability distributions result. By taking thi...
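
    The asymmetry described here, that mode(A/B) is not 1/mode(B/A), is easy to demonstrate by Monte Carlo. The flux values and uncertainties below are illustrative stand-ins, not the SDSS measurements.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two "line fluxes" with uncorrelated Gaussian errors; the 3:1 ratio loosely
# mimics a strong/weak line pair, with the weaker line in the denominator.
A = rng.normal(3.0, 0.3, 1_000_000)   # stronger line, 10% uncertainty
B = rng.normal(1.0, 0.25, 1_000_000)  # weaker line, 25% uncertainty

def mode(x, bins=400):
    """Crude histogram-based mode estimate, clipped to the 1st-99th percentiles."""
    lo, hi = np.percentile(x, [1, 99])
    counts, edges = np.histogram(x, bins=bins, range=(lo, hi))
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

# The mode of A/B sits below the true ratio of 3, and is not the reciprocal of
# the mode of B/A: the noisy denominator skews the ratio distribution.
m_ab, m_ba = mode(A / B), mode(B / A)
print(f"mode(A/B) = {m_ab:.3f}, 1/mode(B/A) = {1/m_ba:.3f}")
```

    This is the reason the paper recommends care when the conventional strong-to-weak ratio puts the larger uncertainty in the denominator.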

  5. Comparison of Lauritzen-Spiegelhalter and successive restrictions algorithms for computing probability distributions in Bayesian networks

    Science.gov (United States)

    Smail, Linda

    2016-06-01

    The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, Lauritzen-Spiegelhalter and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.

  6. Finite de Finetti theorem for conditional probability distributions describing physical theories

    Science.gov (United States)

    Christandl, Matthias; Toner, Ben

    2009-04-01

    We work in a general framework where the state of a physical system is defined by its behavior under measurement and the global state is constrained by no-signaling conditions. We show that the marginals of symmetric states in such theories can be approximated by convex combinations of independent and identical conditional probability distributions, generalizing the classical finite de Finetti theorem of Diaconis and Freedman. Our results apply to correlations obtained from quantum states even when there is no bound on the local dimension, so that known quantum de Finetti theorems cannot be used.

  7. Discrete coherent states and probability distributions in finite-dimensional spaces

    Energy Technology Data Exchange (ETDEWEB)

    Galetti, D.; Marchiolli, M.A.

    1995-06-01

    Operator bases are discussed in connection with the construction of phase space representatives of operators in finite-dimensional spaces, and their properties are presented. It is also shown how these operator bases allow for the construction of a finite harmonic oscillator-like coherent state. Creation and annihilation operators for the finite-dimensional Fock space are discussed and their expressions in terms of the operator bases are explicitly written. The relevant finite-dimensional probability distributions are obtained, and their limiting behavior for an infinite-dimensional space is calculated, in agreement with well-known results. (author). 20 refs, 2 figs.

  8. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.

  9. Spectra and probability distributions of thermal flux in turbulent Rayleigh-Bénard convection

    CERN Document Server

    Pharasi, Hirdesh K; Kumar, Krishna; Bhattacharjee, Jayanta K

    2016-01-01

    The spectra of turbulent heat flux $\mathrm{H}(k)$ in Rayleigh-Bénard convection with and without uniform rotation are presented. The spectrum $\mathrm{H}(k)$ scales with wave number $k$ as $\sim k^{-2}$. The scaling exponent is almost independent of the Taylor number $\mathrm{Ta}$ and Prandtl number $\mathrm{Pr}$ for higher values of the reduced Rayleigh number $r$ ($r > 10^3$). The exponent, however, depends on $\mathrm{Ta}$ and $\mathrm{Pr}$ for smaller values of $r$ ($r < 10^3$). The probability distribution functions of the local heat fluxes are non-Gaussian and have exponential tails.

  10. Wind speed analysis in La Ventosa, Mexico: a bimodal probability distribution case

    Energy Technology Data Exchange (ETDEWEB)

    Jaramillo, O.A.; Borja, M.A. [Energias No Convencionales, Morelos (Mexico). Instituto de Investigaciones Electricas

    2004-08-01

    The statistical characteristics of the wind speed in La Ventosa, Oaxaca, Mexico, have been analyzed using wind speed data recorded by the Instituto de Investigaciones Electricas (IIE). By grouping the observations by year, season and wind direction, we show that the wind speed distribution, with calms included, is not represented by the typical two-parameter Weibull function. A mathematical formulation using a bimodal Weibull-and-Weibull probability distribution function (PDF) has been developed to analyse the wind speed frequency distribution in that region. The model developed here can be applied to similar regions where the wind speed distribution presents a bimodal PDF. The two-parameter Weibull wind speed distribution must not be generalised, since it does not accurately represent some wind regimes, as in the case of La Ventosa, Mexico. The analysis of wind data shows that computing the capacity factor for wind power plants to be installed in La Ventosa must be carried out by means of a bimodal PDF instead of the typical Weibull PDF; otherwise, the capacity factor will be underestimated. (author)
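
    A bimodal Weibull-and-Weibull PDF of the kind described here is a weighted sum of two Weibull densities. The weight, shape, and scale values below are illustrative only, not the fitted La Ventosa parameters.

```python
import numpy as np
from scipy import stats

# Two-component Weibull mixture for a bimodal wind-speed regime. The weight and
# shape/scale values below are illustrative, not the fitted La Ventosa values.
w, (k1, c1), (k2, c2) = 0.45, (2.2, 4.0), (3.5, 12.0)

def pdf(v):
    """Bimodal Weibull-and-Weibull PDF: a convex combination of two Weibulls."""
    return (w * stats.weibull_min.pdf(v, k1, scale=c1)
            + (1 - w) * stats.weibull_min.pdf(v, k2, scale=c2))

v = np.linspace(0.0, 30.0, 3001)
p = pdf(v)

# The mixture still integrates to one but shows two distinct modes, which a
# single two-parameter Weibull cannot reproduce (hence the biased capacity factor).
area = float(((p[:-1] + p[1:]) * np.diff(v)).sum() / 2)
n_modes = int(((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])).sum())
print(f"normalisation = {area:.4f}, number of modes = {n_modes}")
```

    Integrating a turbine power curve against this mixture PDF, rather than a single Weibull, is what corrects the underestimated capacity factor mentioned in the abstract.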

  11. Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.

    2013-04-01

    Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions are then used to fit the long-term statistical distribution at different locations along the raceways. The long-term distribution of the bearing raceway loads is estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.

  12. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Science.gov (United States)

    Cherniz, Analía S.; Bonell, Claudia E.; Tabernig, Carolina B.

    2007-11-01

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a Gaussian process with zero mean. It has been shown experimentally that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects muscular tone, force, and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.
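
    The Gaussian-versus-Laplacian comparison described here can be sketched as a log-likelihood contest between the two fitted models. The sample below is synthetic (drawn from a Laplacian so the comparison illustrates the abstract's finding), not a gait recording.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Mock surface-EMG amplitude samples; real ones would come from gait recordings.
# Drawn from a Laplacian here so the comparison illustrates the abstract's point.
semg = stats.laplace.rvs(loc=0.0, scale=0.1, size=20000, random_state=rng)

# Fit both candidate models by maximum likelihood and compare log-likelihoods.
mu_n, sd_n = stats.norm.fit(semg)
mu_l, b_l = stats.laplace.fit(semg)
ll_norm = float(stats.norm.logpdf(semg, mu_n, sd_n).sum())
ll_lap = float(stats.laplace.logpdf(semg, mu_l, b_l).sum())
print(f"log-likelihood: Gaussian = {ll_norm:.1f}, Laplacian = {ll_lap:.1f}")
```

    Running the same comparison on individual gait-cycle segments would expose the subject- and segment-dependence noted at the end of the abstract.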

  13. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingenieria de Rehabilitacion e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingenieria, UNER, Oro Verde (Argentina)

    2007-11-15

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a Gaussian process with zero mean. It has been shown experimentally that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects muscular tone, force, and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.

  14. Draining the Local Void

    CERN Document Server

    Rizzi, Luca; Shaya, Edward J; Kourkchi, Ehsan; Karachentsev, Igor D

    2016-01-01

    Two galaxies that lie deep within the Local Void provide a test of the expectation that voids expand. The modest (M_B~-14) HI bearing dwarf galaxies ALFAZOAJ1952+1428 and KK246 have been imaged with Hubble Space Telescope in order to study the stellar populations and determine distances from the luminosities of stars at the tip of the red giant branch. The mixed age systems have respective distances of 8.39 Mpc and 6.95 Mpc and inferred line-of-sight peculiar velocities of -114 km/s and -66 km/s toward us and away from the void center. These motions compound on the Milky Way motion of ~230 km/s away from the void. The orbits of the two galaxies are reasonably constrained by a numerical action model encompassing an extensive region that embraces the Local Void. It is unambiguously confirmed that these two void galaxies are moving away from the void center at several hundred km/s.

  15. Practical Statistics for the Voids Between Galaxies

    Directory of Open Access Journals (Sweden)

    Zaninetti, L.

    2010-12-01

The voids between galaxies are identified with the volumes of the Poisson Voronoi tessellation. Two new survival functions for the apparent radii of voids are derived. The sectional normalized area of the Poisson Voronoi tessellation is modelled by the Kiang function and by the exponential function. Two new survival functions with equivalent sectional radius are therefore derived; they represent an alternative to the survival function of voids between galaxies as given by the self-similar distribution. The spatial appearance of slices of the 2dF Galaxy Redshift Survey is simulated.
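As a rough numerical illustration of the modelling idea (not the paper's actual derivation), the Kiang function is a gamma density with shape parameter c and unit mean, commonly used for normalized Poisson Voronoi cell sizes. The sketch below samples such cell sizes and checks the normalization; the value c = 6 is an illustrative assumption, not a fitted value:

```python
import numpy as np
from math import gamma as gamma_fn

def kiang_pdf(x, c):
    """Kiang function: a gamma density with shape c and unit mean,
    often used to model normalized Poisson Voronoi cell sizes."""
    return c**c * x**(c - 1) * np.exp(-c * x) / gamma_fn(c)

rng = np.random.default_rng(0)
c = 6.0  # illustrative shape parameter, not a value from the paper
samples = rng.gamma(shape=c, scale=1.0 / c, size=200_000)

# Normalized cell sizes have unit mean by construction.
mean_size = samples.mean()
# The survival function S(x) = P(size > x) can then be estimated empirically:
frac_above_mean = (samples > 1.0).mean()
```

The empirical survival function of `samples` is what would be compared against the derived survival functions for void radii.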

  16. EVALUATION OF THE PROBABILITY DISTRIBUTION OF PITTING CORROSION FATIGUE LIFE IN AIRCRAFT MATERIALS

    Institute of Scientific and Technical Information of China (English)

    WANG Qingyuan (王清远); N.KAWAGOISHI; Q.CHEN; R.M.PIDAPARTI

    2003-01-01

Corrosion and fatigue properties of aircraft materials are known to have a considerable scatter due to the random nature of materials, loading, and environmental conditions. A probabilistic approach for predicting the pitting corrosion fatigue life has been investigated which captures the effect of the interaction of the cyclic load and corrosive environment and all stages of the corrosion fatigue process (i.e. pit nucleation and growth, pit-crack transition, and short- and long-crack propagation). The probabilistic model considers the uncertainties in the initial pit size, corrosion pitting current, and material properties due to the scatter found in the experimental data. Monte Carlo simulations were performed to define the failure probability distribution. Predicted cumulative distribution functions of fatigue life agreed reasonably well with the existing experimental data.
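The Monte Carlo step can be sketched as follows. Every distribution, constant, and life equation here is an illustrative placeholder, deliberately simplified stand-ins for the pit-growth and crack-propagation stages, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical scatter in the random inputs (all values illustrative):
a0 = rng.lognormal(mean=np.log(2e-6), sigma=0.4, size=n)   # initial pit size [m]
Ip = rng.lognormal(mean=np.log(2e-2), sigma=0.3, size=n)   # corrosion pitting current [arb.]
C  = rng.lognormal(mean=np.log(1e-11), sigma=0.2, size=n)  # crack-growth coefficient [arb.]

a_crit = 1e-4  # assumed pit size at the pit-to-crack transition [m]

# Simplified stand-in life models for the two stages:
N_pit = (a_crit**3 - a0**3) / (3.0e-9 * Ip)  # volumetric (Faraday-type) pit growth
N_crack = 1.0 / (C * 1.0e6)                  # placeholder crack-propagation life
N_total = N_pit + N_crack

# Empirical cumulative distribution of fatigue life (the failure probability curve):
life = np.sort(N_total)
cdf = np.arange(1, n + 1) / n
```

Plotting `cdf` against `life` gives the predicted cumulative distribution function that would be compared with experimental fatigue-life data.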

  17. Properties of Galaxies in and around Voids

    CERN Document Server

    Hopp, U

    1997-01-01

    Two surveys for intrinsically faint galaxies towards nearby voids have been conducted at the MPI für Astronomie, Heidelberg. One selected targets from a new diameter limited ($\\Phi \\ge 5''$) catalog with morphological criteria while the other used digitized objective prism Schmidt plates to select mainly HII dwarf galaxies. For some 450 galaxies, redshifts and other optical data were obtained. We studied the spatial distribution of the sample objects, their luminosity function, and their intrinsic properties. Most of the galaxies belong to already well known sheets and filaments. But we found about a dozen highly isolated galaxies in each sample (nearest neighborhood distance $\\ge 3 h_{75}^{-1} Mpc$). These tend to populate additional structures and are not distributed homogeneously throughout the voids. As our results on 'void galaxies' still suffer from small sample statistics, I also tried to combine similar existing surveys of nearby voids to get further hints on the larger structure and on the luminosit...

  18. Wave Packet Dynamics in the Infinite Square Well with the Wigner Quasi-probability Distribution

    Science.gov (United States)

    Belloni, Mario; Doncheski, Michael; Robinett, Richard

    2004-05-01

    Over the past few years a number of authors have been interested in the time evolution and revivals of Gaussian wave packets in one-dimensional infinite wells and in two-dimensional infinite wells of various geometries. In all of these circumstances, the wave function is guaranteed to revive at a time related to the inverse of the system's ground state energy, if not sooner. To better visualize these revivals we have calculated the time-dependent Wigner quasi-probability distribution for position and momentum, P_W(x; p), for Gaussian wave packet solutions of this system. The Wigner quasi-probability distribution clearly demonstrates the short-term semi-classical time dependence, as well as longer-term revival behavior and the structure during the collapsed state. This tool also provides an excellent way of demonstrating the patterns of highly-correlated Schrödinger-cat-like `mini-packets' which appear at fractional multiples of the exact revival time. This research is supported in part by a Research Corporation Cottrell College Science Award (CC5470) and the National Science Foundation under contracts DUE-0126439 and DUE-9950702.

  19. A Voting Based Approach to Detect Recursive Order Number of Photocopy Documents Using Probability Distributions

    Directory of Open Access Journals (Sweden)

    Rani K

    2014-08-01

Photocopied documents are very common in everyday life. People are permitted to carry and present photocopies to avoid damaging original documents, but this provision can be misused for temporary benefit by fabricating fake photocopied documents. Fabrication of a fake photocopied document is possible only at the 2nd and higher recursive orders of photocopying. Whenever a photocopied document is submitted, its originality may need to be checked: for a 1st-order photocopy the chance of fabrication can be ignored, whereas for a 2nd- or higher-order photocopy fabrication may be suspected. Hence, when a photocopied document is presented, its recursive order number must be estimated to ascertain originality, which demands methods for estimating that order number. In this work, a voting-based approach is proposed to detect the recursive order number of a photocopied document using the exponential, extreme value, and lognormal probability distributions. Detailed experimentation on a generated data set shows that the method achieves an efficiency close to 89%.

  20. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
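In SciPy terms, the classical test discussed above is a one-line call; the sketch below (an illustration of the standard method, not of the paper's proposed tests) also recomputes the KS statistic as the largest gap between the empirical and model CDFs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
draws = rng.normal(size=2000)  # i.i.d. draws that really do come from N(0, 1)

# Kolmogorov-Smirnov test of the draws against the specified density:
ks_stat, p_value = stats.kstest(draws, "norm")

# The KS statistic is (up to an O(1/n) correction) the largest vertical gap
# between the empirical CDF and the model CDF:
x = np.sort(draws)
ecdf = np.arange(1, x.size + 1) / x.size
gap = np.max(np.abs(ecdf - stats.norm.cdf(x)))
```

Because the comparison happens on cumulative distribution functions, a narrow bump in a low-density region moves `ecdf` only slightly, which is exactly the smoothing-over deficiency the paper addresses.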

  1. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data

    Directory of Open Access Journals (Sweden)

Jayajit Das

    2015-07-01

A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
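The forward example mentioned above (m = 2, n = 1) is easy to check numerically. The sketch below compares a Monte Carlo histogram of X = Y1 + Y2 with the exact triangular density; it illustrates only the well-defined forward direction, not the MaxEnt inverse method itself:

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.uniform(size=1_000_000)
y2 = rng.uniform(size=1_000_000)
x = y1 + y2  # the known functional relationship Y -> X

# Exact density of the sum of two independent U[0, 1] variables:
# triangular on [0, 2], p(x) = 1 - |x - 1|.
hist, edges = np.histogram(x, bins=40, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 - np.abs(centers - 1.0)
max_err = np.max(np.abs(hist - exact))
```

The histogram matches the triangular density to within Monte Carlo noise; the inverse problem (recovering the joint law of (Y1, Y2) from the triangular distribution of X) has no unique answer, which is where the MaxEnt estimate comes in.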

  2. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.

  3. An informative prior probability distribution of the Gompertz parameters for Bayesian approaches in paleodemography.

    Science.gov (United States)

    Sasaki, Tomohiko; Kondo, Osamu

    2016-03-01

    In paleodemography, the Bayesian approach has been suggested to provide an effective means by which mortality profiles of past populations can be adequately estimated, and thus avoid problems of "age-mimicry" inherent in conventional approaches. In this study, we propose an application of the Gompertz model using an "informative" prior probability distribution by revising a recent example of the Bayesian approach based on an "uninformative" distribution. Life-table data of 134 human populations including those of contemporary hunter-gatherers were used to determine the Gompertz parameters of each population. In each population, we used both raw life-table data and the Gompertz parameters to calculate some demographic values such as the mean life-span, to confirm representativeness of the model. Then, the correlation between the two Gompertz parameters (the Strehler-Mildvan correlation) was re-established. We incorporated the correlation into the Bayesian approach as an "informative" prior probability distribution, and tested its effectiveness using simulated data. Our analyses showed that the mean life-span (≥ age 15) and the proportion of living persons aging over 45 were well-reproduced by the Gompertz model. The simulation showed that using the correlation as an informative prior provides a narrower estimation range in the Bayesian approach than does the uninformative prior. The Gompertz model can be assumed to accurately estimate the mean life-span and/or the proportion of old people in a population. We suggest that the Strehler-Mildvan correlation can be used as a useful constraint in demographic reconstructions of past human populations. © 2015 Wiley Periodicals, Inc.

  4. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...
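A minimal sketch of the two-step recipe described above, under assumed choices (a Gaussian-shaped power spectrum and an exponential target marginal; neither is prescribed by the cited work): color white Gaussian noise in the Fourier domain, then push the colored Gaussian field through a memoryless CDF/inverse-CDF transform. Note that the pointwise transform slightly distorts the spectrum, which the full method must compensate for:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# 1) White Gaussian field.
white = rng.normal(size=(n, n))

# 2) Color it: impose an (assumed, illustrative) Gaussian power spectral density.
kx = np.fft.fftfreq(n)
KX, KY = np.meshgrid(kx, kx)
psd = np.exp(-(KX**2 + KY**2) / (2 * 0.05**2))
field = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)))
field /= field.std()  # unit-variance colored Gaussian field

# 3) Memoryless transform to the desired marginal (here: exponential).
u = stats.norm.cdf(field)   # colored Gaussian -> approximately uniform marginal
target = stats.expon.ppf(u) # uniform -> exponential marginal, correlations retained
```

`target` now has (approximately) the exponential amplitude distribution while inheriting the spatial correlation structure imposed in step 2.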

  5. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

Background: The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results: Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion: Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  6. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-01-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
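An empirical flow duration curve is simple to construct. The sketch below uses a lognormal surrogate for daily flows (an assumption for illustration, not the paper's gage data) and Weibull plotting positions; SciPy ships the four-parameter kappa distribution as `scipy.stats.kappa4` for comparison fits:

```python
import numpy as np
from scipy import stats

# Synthetic "daily streamflow" record (illustrative lognormal surrogate):
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=3.0, sigma=1.2, size=3650)  # ~10 years of daily flows

# Flow duration curve: exceedance probability versus sorted flow,
# using Weibull plotting positions p_i = i / (n + 1).
q = np.sort(flows)[::-1]                       # flows in descending order
p_exceed = np.arange(1, q.size + 1) / (q.size + 1)

# The four-parameter kappa distribution can then be compared against the
# empirical curve; here we just evaluate its pdf at one point
# (shape parameters h=0.5, k=0.2 are arbitrary illustrative values):
pdf_val = stats.kappa4.pdf(1.0, 0.5, 0.2)
```

Plotting `q` against `p_exceed` (flow on a log axis) reproduces the familiar FDC shape; a fitted `kappa4` quantile function would be overlaid for comparison.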

  7. Detection of two power-law tails in the probability distribution functions of massive GMCs

    CERN Document Server

    Schneider, N; Girichidis, P; Rayner, T; Motte, F; Andre, P; Russeil, D; Abergel, A; Anderson, L; Arzoumanian, D; Benedettini, M; Csengeri, T; Didelon, P; Francesco, J D; Griffin, M; Hill, T; Klessen, R S; Ossenkopf, V; Pezzuto, S; Rivera-Ingraham, A; Spinoglio, L; Tremblin, P; Zavagno, A

    2015-01-01

We report the novel detection of complex high-column density tails in the probability distribution functions (PDFs) for three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fit with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha=1.3-2 for the rho~r^-alpha profile for an equivalent density distribution (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av~40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha>2. It correlates with the central regions of the cloud (ridges/hubs) of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible causes are: (1) rotation, which introduces an angular momentum barrier, and (2) increasing optical depth and weaker...

  8. Probability distribution of turbulence in curvilinear cross section mobile bed channel.

    Science.gov (United States)

    Sharma, Anurag; Kumar, Bimlesh

    2016-01-01

The present study investigates the probability density functions (PDFs) of two-dimensional turbulent velocity fluctuations, Reynolds shear stress (RSS) and conditional RSSs in a threshold channel, obtained by using the Gram-Charlier (GC) series. The GC series expansion is used up to moments of order four to include skewness and kurtosis. Experiments were carried out in a curvilinear cross section sand bed channel at threshold condition with a uniform sand size of d50 = 0.418 mm. The results show that the PDFs of turbulent velocity fluctuations and RSS calculated theoretically from the GC series expansion agree with the PDFs obtained from the experimental data. The PDFs of the conditional RSSs related to ejections and sweeps are well represented by the GC series exponential distribution, except for a slight departure of the inward and outward interactions, which may be due to weaker events. This paper offers some new insights into the probabilistic mechanism of sediment transport, which can be helpful in sediment management and in the design of curvilinear cross section mobile bed channels.
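A fourth-order Gram-Charlier (Type A) density can be written down directly. The sketch below is the generic textbook form for a standardized variable, with illustrative skewness and excess-kurtosis values rather than the paper's measured moments:

```python
import numpy as np
from scipy import stats

def gram_charlier_pdf(x, skew, exkurt):
    """Gram-Charlier Type-A density truncated at fourth order: the standard
    normal corrected by skewness and excess kurtosis via the probabilists'
    Hermite polynomials He3 and He4."""
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x**2 + 3
    phi = stats.norm.pdf(x)
    return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

x = np.linspace(-5.0, 5.0, 2001)
f = gram_charlier_pdf(x, skew=0.3, exkurt=0.5)
area = np.sum(f) * (x[1] - x[0])  # ~1: the correction terms integrate to zero
```

One design caveat: a truncated GC series is not guaranteed to stay non-negative for large skewness or kurtosis, which limits the moment range over which it is a valid PDF.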

  9. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    Science.gov (United States)

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance and the shortage of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of severe dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
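For intuition, joint ("OR" and "AND") return periods follow directly from the copula. The sketch below uses a Gumbel-Hougaard copula with an illustrative dependence parameter; it is not the copula family or parameter fitted in the study:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls upper-tail dependence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_periods(u, v, theta, mu=1.0):
    """Return periods (in units of mu, the mean interarrival time) for the
    'OR' event {U > u or V > v} and the 'AND' event {U > u and V > v}."""
    C = gumbel_copula(u, v, theta)
    t_or = mu / (1.0 - C)                # either variable exceeds its quantile
    t_and = mu / (1.0 - u - v + C)       # both variables exceed their quantiles
    return t_or, t_and

# Example: 0.98-quantile wind speed and 0.98-quantile duration, with an
# assumed moderate dependence theta = 2.
t_or, t_and = joint_return_periods(0.98, 0.98, theta=2.0)
```

As expected, the "OR" return period is shorter and the "AND" return period longer than the univariate value of 1 / (1 - 0.98) = 50 events.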

  10. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to the limitations of the measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model, so the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertain transition between the qualitative concept and the quantitative description. An improved algorithm for cloud probability distribution density based on a backward cloud generator was then proposed and used to effectively convert parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description is expressed by the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm is feasible and can reveal the changing regularity of the piezometric tube's water level and detect seepage damage in the dam body.

  11. Spatial probability distribution of future volcanic eruptions at El Hierro Island (Canary Islands, Spain)

    Science.gov (United States)

    Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro

    2013-05-01

    The 2011 submarine eruption that took place in the proximity of El Hierro Island (Canary Islands, Spain) has raised the need to identify the most likely future emission zones even on volcanoes characterized by low frequency activity. Here, we propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the probabilistic analysis of volcano-structural data of the Island collected through new fieldwork measurements, bathymetric information, as well as analysis of geological maps, orthophotos and aerial photographs. These data have been divided into different datasets and converted into separate and weighted probability density functions, which were included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. The most likely area to host new eruptions in El Hierro is in the south-western part of the West rift. High probability locations are also found in the Northeast and South rifts, and along the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency measures and civil defense actions.

  12. A Cosmic Void Catalog of SDSS DR12 BOSS Galaxies

    Science.gov (United States)

    Mao, Qingqing; Berlind, Andreas A.; Scherrer, Robert J.; Neyrinck, Mark C.; Scoccimarro, Román; Tinker, Jeremy L.; McBride, Cameron K.; Schneider, Donald P.; Pan, Kaike; Bizyaev, Dmitry; Malanushenko, Elena; Malanushenko, Viktor

    2017-02-01

We present a cosmic void catalog using the large-scale structure galaxy catalog from the Baryon Oscillation Spectroscopic Survey (BOSS). This galaxy catalog is part of the Sloan Digital Sky Survey (SDSS) Data Release 12 and is the final catalog of SDSS-III. We take into account the survey boundaries, masks, and angular and radial selection functions, and apply the ZOBOV void-finding algorithm to the galaxy catalog. We identify a total of 10,643 voids. After making quality cuts to ensure that the voids represent real underdense regions, we obtain 1,228 voids with effective radii spanning the range 20–100 h^-1 Mpc and with central densities that are, on average, 30% of the mean sample density. We release versions of the catalogs both with and without quality cuts. We discuss the basic statistics of voids, such as their size and redshift distributions, and measure the radial density profile of the voids via a stacking technique. In addition, we construct mock void catalogs from 1000 mock galaxy catalogs, and find that the properties of BOSS voids are in good agreement with those in the mock catalogs. We compare the stellar mass distribution of galaxies living inside and outside of the voids, and find no large difference. These BOSS and mock void catalogs are useful for a number of cosmological and galaxy environment studies.

  13. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments make up nearly 40% of proteins, and they do not have any apparent recurrent patterns, which complicates the overall prediction accuracy of protein structure prediction methods. Luckily, previous work has indicated that coil segments are in fact not completely random in structure, and flanking residues do seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been...

  14. Light Scattering of Rough Orthogonal Anisotropic Surfaces with Secondary Most Probable Slope Distributions

    Institute of Scientific and Technical Information of China (English)

    LI Hai-Xia; CHENG Chuan-Fu

    2011-01-01

We study the light scattering of an orthogonal anisotropic rough surface with a secondary most-probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence the secondary maxima are oriented along a curve on the observation plane, called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct a system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory.

  16. Lower Bound Bayesian Networks - An Efficient Inference of Lower Bounds on Probability Distributions in Bayesian Networks

    CERN Document Server

    Andrade, Daniel

    2012-01-01

We present a new method to propagate lower bounds on conditional probability distributions in conventional Bayesian networks. Our method is guaranteed to provide outer approximations of the exact lower bounds. A key advantage is that we can use any available algorithms and tools for Bayesian networks in order to represent and infer lower bounds. This new method yields results that are provably exact for trees with binary variables, and results that are competitive with existing approximations in credal networks for all other network structures. Our method is not limited to a specific kind of network structure, nor, in principle, to a specific kind of inference, but we restrict our analysis to prognostic inference in this article. The computational complexity is superior to that of other existing approaches.

  17. Sampling the probability distribution of Type Ia Supernova lightcurve parameters in cosmological analysis

    Science.gov (United States)

    Dai, Mi; Wang, Yun

    2016-06-01

In order to obtain robust cosmological constraints from Type Ia supernova (SN Ia) data, we have applied Markov Chain Monte Carlo (MCMC) to SN Ia lightcurve fitting. We develop a method for sampling the resultant probability density distributions (pdf) of the SN Ia lightcurve parameters in the MCMC likelihood analysis to constrain cosmological parameters, and validate it using simulated data sets. Applying this method to the 'joint lightcurve analysis (JLA)' data set of SNe Ia, we find that sampling the SN Ia lightcurve parameter pdf's leads to cosmological parameters closer to those of a flat Universe with a cosmological constant, compared to the usual practice of using only the best-fitting values of the SN Ia lightcurve parameters. Our method will be useful in the use of SN Ia data for precision cosmology.

  18. Probability Distributions of Random Electromagnetic Fields in the Presence of a Semi-Infinite Isotropic Medium

    CERN Document Server

    Arnaut, L R

    2006-01-01

    Using a TE/TM decomposition for an angular plane-wave spectrum of free random electromagnetic waves and matched boundary conditions, we derive the probability density function for the energy density of the vector electric field in the presence of a semi-infinite isotropic medium. The theoretical analysis is illustrated with calculations and results for good electric conductors and for a lossless dielectric half-space. The influence of the permittivity and conductivity on the intensity, random polarization, statistical distribution and standard deviation of the field is investigated, both for incident plus reflected fields and for refracted fields. External refraction is found to result in compression of the fluctuations of the random field.

  19. Probability distribution function for inclinations of merging compact binaries detected by gravitational wave interferometers

    CERN Document Server

    Seto, Naoki

    2014-01-01

We analytically discuss the probability distribution function (PDF) for inclinations of merging compact binaries whose gravitational waves are coherently detected by a network of ground-based interferometers. The PDF would be useful for studying prospects of (1) simultaneously detecting electromagnetic signals (such as gamma-ray bursts) associated with binary mergers and (2) statistically constraining the related theoretical models from the actual observational data of multi-messenger astronomy. Our approach is similar to Schutz (2011), but we explicitly include the dependence on the polarization angles of the binaries, based on the concise formulation given in Cutler and Flanagan (1994). We find that the overall profiles of the PDFs are similar for any network composed of the second-generation detectors (Advanced LIGO, Advanced Virgo, KAGRA, LIGO-India). For example, 5.1% of detected binaries would have inclination angles less than 10 degrees, with at most 0.1% differences between the potential networks. A perturb...

  20. On the reliability of observational measurements of column density probability distribution functions

    CERN Document Server

    Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S

    2016-01-01

    Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\\sigma_{noise}$ values below 40\\,\\% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...

  1. GENERALIZED FATIGUE CONSTANT LIFE CURVE AND TWO-DIMENSIONAL PROBABILITY DISTRIBUTION OF FATIGUE LIMIT

    Institute of Scientific and Technical Information of China (English)

    熊峻江; 武哲; 高镇同

    2002-01-01

Starting from the traditional fatigue constant life curve, the concept and the universal expression of the generalized fatigue constant life curve are proposed. Then, on the basis of the optimization method of the correlation coefficient, the parameter estimation formulas are derived and the generalized fatigue constant life curve with reliability level p is given. From the P-Sa-Sm curve, the two-dimensional probability distribution of the fatigue limit is derived. Three sets of tests of LY11 CZ corresponding to different mean stresses were then carried out using the two-dimensional up-down method. Finally, the methods are used to analyze the test results, and it is found that results of high precision can be obtained.

  2. The HI Probability Distribution Function and the Atomic-to-Molecular Transition in Molecular Clouds

    CERN Document Server

    Imara, Nia

    2016-01-01

We characterize the column density probability distribution functions (PDFs) of the atomic hydrogen gas, HI, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic HI Survey to derive column density maps and PDFs. We find that the peaks of the HI PDFs occur at column densities ranging from ~1-2$\times 10^{21}$ cm$^{-2}$ (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of $\sigma_{HI}\approx 10^{20}$ cm$^{-2}$ (~0.1 mag). We also investigate the HI-to-H$_2$ transition towards the cloud complexes and estimate HI surface densities ranging from 7-16 $M_\odot$ pc$^{-2}$ at the transition. We propose that the HI PDF is a fitting tool for identifying the HI-to-H$_2$ transition column in Galactic MCs.

  3. Probability distribution function and multiscaling properties in the Korean stock market

    Science.gov (United States)

    Lee, Kyoung Eun; Lee, Jae Woo

    2007-09-01

We consider the probability distribution function (pdf) and the multiscaling properties of the index and the traded volume in the Korean stock market. We observe power-law behavior of the pdf in the fat-tail region for the return, the volatility, the traded volume, and changes of the traded volume. We also investigate multifractality in the Korean stock market by means of multifractal detrended fluctuation analysis (MF-DFA). We observe multiscaling behavior for the index, the return, the traded volume, and changes of the traded volume. We apply the MF-DFA method to randomly shuffled time series to assess the effect of autocorrelations. The multifractality originates mainly from the long-time correlations of the time series.
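
The shuffling step mentioned above, comparing a series with a randomly reordered copy to isolate the contribution of temporal correlations, can be illustrated with a toy example. The AR(1) series below is a hypothetical stand-in for a correlated financial time series, not Korean market data:

```python
import random

def lag1_autocorr(x):
    """Lag-1 autocorrelation estimate of a sequence."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rng = random.Random(0)
# Hypothetical AR(1) series with strong temporal correlations (phi = 0.9)
series = [0.0]
for _ in range(5000):
    series.append(0.9 * series[-1] + rng.gauss(0.0, 1.0))

shuffled = series[:]
rng.shuffle(shuffled)   # destroys temporal order, keeps the value distribution

r_orig = lag1_autocorr(series)   # close to 0.9
r_shuf = lag1_autocorr(shuffled) # close to 0
```

Any multiscaling that survives shuffling must come from the value distribution alone; the part that disappears is attributable to temporal correlations, which is the diagnostic the abstract describes.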

  4. Analysis of Low Probability of Intercept (LPI) Radar Signals Using the Wigner Distribution

    Science.gov (United States)

    Gau, Jen-Yu

    2002-09-01

The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify using traditional periodogram signal-processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals; signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulation Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code, P4 code, COSTAS frequency hopping, and Phase Shift Keying/Frequency Shift Keying (PSK/FSK) signals. Binary Phase Shift Keying (BPSK) signals, although not used in modern LPI radars, are also examined to further illustrate the principal characteristics of the WD.

  5. Binomial moments of the distance distribution and the probability of undetected error

    Energy Technology Data Exchange (ETDEWEB)

    Barg, A. [Lucent Technologies, Murray Hill, NJ (United States). Bell Labs.; Ashikhmin, A. [Los Alamos National Lab., NM (United States)

    1998-09-01

    In [1] K.A.S. Abdel-Ghaffar derives a lower bound on the probability of undetected error for unrestricted codes. The proof relies implicitly on the binomial moments of the distance distribution of the code. The authors use the fact that these moments count the size of subcodes of the code to give a very simple proof of the bound in [1] by showing that it is essentially equivalent to the Singleton bound. They discuss some combinatorial connections revealed by this proof. They also discuss some improvements of this bound. Finally, they analyze asymptotics. They show that an upper bound on the undetected error exponent that corresponds to the bound of [1] improves known bounds on this function.
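
The quantity at stake can be computed directly for a small example: with codewords equiprobable over a binary symmetric channel with crossover probability p, the undetected-error probability is the sum of A_w p^w (1-p)^(n-w) over positive distances w, where A_w is the (averaged) distance distribution. A sketch using the 3-bit repetition code as a hypothetical toy code (not an example from the paper):

```python
def distance_distribution(code):
    """Averaged distance distribution A_w: ordered codeword pairs at
    Hamming distance w, divided by the number of codewords."""
    n = len(code[0])
    A = [0.0] * (n + 1)
    for c1 in code:
        for c2 in code:
            w = sum(a != b for a, b in zip(c1, c2))
            A[w] += 1.0
    return [a / len(code) for a in A]

def p_undetected(code, p):
    """Probability of undetected error on a BSC with crossover probability p."""
    n = len(code[0])
    A = distance_distribution(code)
    return sum(A[w] * p ** w * (1.0 - p) ** (n - w) for w in range(1, n + 1))

rep3 = ["000", "111"]           # 3-bit repetition code (toy example)
p_ue = p_undetected(rep3, 0.1)  # only weight-3 error patterns go undetected
```

For the repetition code only the all-ones error pattern maps one codeword onto the other, so p_ue = p^3 = 0.001 at p = 0.1.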

  6. Mean square delay dependent-probability-distribution stability analysis of neutral type stochastic neural networks.

    Science.gov (United States)

    Muralisankar, S; Manivannan, A; Balasubramaniam, P

    2015-09-01

    The aim of this manuscript is to investigate the mean square delay dependent-probability-distribution stability analysis of neutral type stochastic neural networks with time-delays. The time-delays are assumed to be interval time-varying and randomly occurring. Based on the new Lyapunov-Krasovskii functional and stochastic analysis approach, a novel sufficient condition is obtained in the form of linear matrix inequality such that the delayed stochastic neural networks are globally robustly asymptotically stable in the mean-square sense for all admissible uncertainties. Finally, the derived theoretical results are validated through numerical examples in which maximum allowable upper bounds are calculated for different lower bounds of time-delay.

  7. The H I Probability Distribution Function and the Atomic-to-molecular Transition in Molecular Clouds

    Science.gov (United States)

    Imara, Nia; Burkhart, Blakesley

    2016-10-01

We characterize the column-density probability distribution functions (PDFs) of the atomic hydrogen gas, H i, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic H i Survey to derive column-density maps and PDFs. We find that the peaks of the H i PDFs occur at column densities in the range $\sim$1-2 $\times 10^{21}$ cm$^{-2}$ (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of $\sigma_{HI} \approx 10^{20}$ cm$^{-2}$ (~0.1 mag). We also investigate the H i-to-H$_2$ transition toward the cloud complexes and estimate H i surface densities ranging from 7 to 16 $M_\odot$ pc$^{-2}$ at the transition. We propose that the H i PDF is a fitting tool for identifying the H i-to-H$_2$ transition column in Galactic MCs.

  8. Random numbers from the tails of probability distributions using the transformation method

    CERN Document Server

    Fulger, Daniel; Germano, Guido

    2009-01-01

The speed of many one-line transformation methods for the production of, for example, Levy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities, fast rejection implementations like the Ziggurat of Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater or smaller than a certain value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method by showing the properties of the transformation maps of the above-mentioned distributions as examples of stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation.
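
For a distribution whose CDF inverts in closed form, restricting inverse-transform sampling to a tail or interval is straightforward: draw the uniform variate from [F(a), F(b)) instead of [0, 1). The sketch below does this for the exponential distribution, a simple illustrative case rather than the Lévy stable or Mittag-Leffler transforms treated in the paper:

```python
import math
import random

def exp_tail_sample(rng, lam, a, b=float("inf")):
    """Inverse-transform sample of Exp(lam) conditioned on a <= X < b."""
    Fa = 1.0 - math.exp(-lam * a)
    Fb = 1.0 if b == float("inf") else 1.0 - math.exp(-lam * b)
    u = Fa + rng.random() * (Fb - Fa)   # uniform on [F(a), F(b))
    return -math.log(1.0 - u) / lam

rng = random.Random(1)
xs = [exp_tail_sample(rng, 1.0, 3.0) for _ in range(20000)]
# Memorylessness of the exponential: the tail mean should be a + 1/lam = 4.
tail_mean = sum(xs) / len(xs)
```

Every sample lands in the requested region by construction, with no rejection loop, which is exactly the property needed to complement a Ziggurat-style sampler on the body of the density.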

  9. Communication in a Poisson Field of Interferers -- Part I: Interference Distribution and Error Probability

    CERN Document Server

    Pinto, Pedro C

    2010-01-01

    We present a mathematical model for communication subject to both network interference and noise. We introduce a framework where the interferers are scattered according to a spatial Poisson process, and are operating asynchronously in a wireless environment subject to path loss, shadowing, and multipath fading. We consider both cases of slow and fast-varying interferer positions. The paper is comprised of two separate parts. In Part I, we determine the distribution of the aggregate network interference at the output of a linear receiver. We characterize the error performance of the link, in terms of average and outage probabilities. The proposed model is valid for any linear modulation scheme (e.g., M-ary phase shift keying or M-ary quadrature amplitude modulation), and captures all the essential physical parameters that affect network interference. Our work generalizes the conventional analysis of communication in the presence of additive white Gaussian noise and fast fading, allowing the traditional results...

  10. A study of process induced voids in resistance welding of thermoplastic composites

    NARCIS (Netherlands)

    Shi, H.; Fernandez Villegas, I.; Bersee, H.E.N.

    2015-01-01

Void formation in resistance welding of woven-fabric-reinforced thermoplastic composites was investigated. Void contents were measured using optical microscopy and digital image processing. Uneven void distributions were observed in the joints, and more voids were found in the middle of the joints tha...

  11. Likelihood analysis of species occurrence probability from presence-only data for modelling species distributions

    Science.gov (United States)

    Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.

    2012-01-01

    1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities

  12. Development of a Medical-text Parsing Algorithm Based on Character Adjacent Probability Distribution for Japanese Radiology Reports

    National Research Council Canada - National Science Library

    N. Nishimoto; S. Terae; M. Uesugi; K. Ogasawara; T. Sakurai

    2008-01-01

    Objectives: The objectives of this study were to investigate the transitional probability distribution of medical term boundaries between characters and to develop a parsing algorithm specifically for medical texts. Methods...

  13. Size effect on strength and lifetime probability distributions of quasibrittle structures

    Indian Academy of Sciences (India)

    Zdeněk P Bažant; Jia-Liang Le

    2012-02-01

Engineering structures such as aircraft, bridges, dams, nuclear containments and ships, as well as computer circuits, chips and MEMS, should be designed for failure probability < $10^{-6}-10^{-7}$ per lifetime. The safety factors required to ensure it are still determined empirically, even though they represent much larger and much more uncertain corrections to deterministic calculations than do the typical errors of modern computer analysis of structures. The empirical approach is sufficient for perfectly brittle and perfectly ductile structures since the cumulative distribution function (cdf) of random strength is known, making it possible to extrapolate to the tail from the mean and variance. However, the empirical approach does not apply to structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared to structure size. This paper presents a refined theory on the strength distribution of quasibrittle structures, which is based on the fracture mechanics of nanocracks propagating by activation energy controlled small jumps through the atomic lattice and an analytical model for the multi-scale transition of strength statistics. Based on the power law for creep crack growth rate and the cdf of material strength, the lifetime distribution of quasibrittle structures under constant load is derived. Both the strength and lifetime cdfs are shown to be size- and geometry-dependent. The theory predicts intricate size effects on both the mean structural strength and lifetime, the latter being much stronger. The theory is shown to match the experimentally observed systematic deviations of strength and lifetime histograms of industrial ceramics from the Weibull distribution.
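
The Weibull baseline against which those deviations are measured embodies weakest-link statistics: for a chain of n representative volume elements, P_f(sigma) = 1 - exp[-n (sigma/sigma0)^m], so the mean strength scales as n^(-1/m). A sketch under assumed, purely illustrative parameter values (the modulus m = 24 and reference strength 100 are not from the paper):

```python
import math

def weibull_failure_prob(sigma, sigma0, m, n_rve):
    """Weakest-link failure probability of n_rve elements, each Weibull(sigma0, m)."""
    return 1.0 - math.exp(-n_rve * (sigma / sigma0) ** m)

def mean_strength(sigma0, m, n_rve):
    """Mean strength of the weakest link: sigma0 * n^(-1/m) * Gamma(1 + 1/m)."""
    return sigma0 * n_rve ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)

# Size effect: a structure with 100x more elements is measurably weaker.
small, large = mean_strength(100.0, 24, 10), mean_strength(100.0, 24, 1000)
```

At the characteristic stress sigma0 * n^(-1/m) the failure probability is 1 - 1/e, independent of size, which is the scaling the quasibrittle theory in the abstract generalizes.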

  14. Probability distribution of biofilm thickness and effect of biofilm on the permeability of porous media

    Science.gov (United States)

    Ye, S.; Sleep, B. E.; Chien, C.

    2010-12-01

The probability distribution of biofilm thickness and the effect of biofilm growth on the permeability of saturated porous media were investigated in a two-dimensional sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) under nutrient-rich conditions. Inoculation of the lower portion of the cell with a methanogenic culture and addition of methanol to the bottom of the cell led to biomass growth. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy (CLSM). Permeability was measured by laboratory hydraulic tests. The biomass levels measured in water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The biomass distribution on the sand grains was not uniform. Statistical analysis of CLSM images showed that biofilm thickness was a random variable with a normal distribution. The hydraulic tests demonstrated that the permeability after biofilm growth was on average 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model (Taylor et al., 1990) predicted reductions in permeability of 2-5 orders of magnitude. Clement's model (Clement et al., 1996) predicted 3%-98% of the initial value. Seki's model (Seki and Miyazaki, 2001) could not be applied in this study. In conclusion, biofilm growth clearly decreased the permeability of two-dimensional saturated porous media, although the reduction was much smaller than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media under nutrient-rich conditions, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of ...

  15. The probability distribution for non-Gaussianity estimators constructed from the CMB trispectrum

    CERN Document Server

    Smith, Tristan L

    2012-01-01

Considerable recent attention has focussed on the prospects to use the cosmic microwave background (CMB) trispectrum to probe the physics of the early universe. Here we evaluate the probability distribution function (PDF) for the standard estimator of the amplitude tau_nl of the CMB trispectrum, both for the null hypothesis (i.e., for Gaussian maps with tau_nl = 0) and for maps with a non-vanishing trispectrum (|tau_nl| > 0). We find these PDFs to be highly non-Gaussian in both cases. We also evaluate the variance with which the trispectrum amplitude can be measured as a function of its underlying value, tau_nl. We find a strong dependence of this variance on tau_nl. We also find that the variance does not, given the highly non-Gaussian nature of the PDF, effectively characterize the distribution. Detailed knowledge of these PDFs will therefore be imperative in order to properly interpret the implications of any given trispectrum measurement. For example, if a CMB experiment with a maximum multipole ...

  16. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak-turbulence regime, but the theoretical description in the strong and whole turbulence regimes is still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (the Rice-Nakagami, exponential-Bessel, and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence, and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. It is therefore considered that the new model exactly reflects the Born perturbation theory. Simulated results confirm the accuracy of this new model.

  17. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    Energy Technology Data Exchange (ETDEWEB)

    Greenhough, J [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Chapman, S C [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Dendy, R O [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Ward, D J [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom)

    2003-05-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour.
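
Quantifying the fit between an empirical waiting-time PDF and a Poisson-process model, as done above for the ELM time separations, can be sketched with a simple moment check: for exponential (Poisson-process) waiting times the coefficient of variation is 1. The event rate and sample size below are arbitrary illustrative choices, not JET data:

```python
import random
import statistics

rng = random.Random(3)
# Synthetic inter-event waiting times from a Poisson process with rate 100 s^-1
# (hypothetical rate; real ELM time separations would replace this list).
waits = [rng.expovariate(100.0) for _ in range(50000)]

mean_wait = statistics.fmean(waits)           # expected 1/rate = 0.01 s
cv = statistics.stdev(waits) / mean_wait      # expected 1 for an exponential PDF
```

A measured cv well away from 1 would argue against a simple Poisson ELMing process, which is the kind of discriminant between classes of ELMing behaviour the abstract proposes.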

  18. Measurement of air distribution and void fraction of an upwards air-water flow using electrical resistance tomography and a wire-mesh sensor

    Science.gov (United States)

    Olerni, Claudio; Jia, Jiabin; Wang, Mi

    2013-03-01

Measurements on an upwards air-water flow are reported that were obtained simultaneously with a dual-plane electrical resistance tomograph (ERT) and a wire-mesh sensor (WMS). The ultimate measurement target of both the ERT and the WMS is the same: the electrical conductivity of the medium. The ERT is a non-intrusive device, whereas the WMS requires a net of wires that physically crosses the flow. This paper presents comparisons between the results obtained simultaneously from the ERT and the WMS for evaluation and calibration of the ERT. The length of the vertical testing pipeline section is 3 m, with an internal diameter of 50 mm. Two distinct sets of air-water flow rate scenarios, bubble and slug regimes, were produced in the experiments. The fast impedance camera ERT recorded data at an approximate time resolution of 896 frames per second (fps) per plane, in contrast with the 1024 fps of the wire-mesh sensor WMS200. The set-up of the experiment was based on well-established knowledge of air-water upwards flow, particularly the specific flow regimes and wall-peak effects. The local air void fraction profiles and the overall air void fraction were produced from the two systems to establish consistency for comparison of the data accuracy. Conventional bulk flow measurements (air mass and electromagnetic flow metering, as well as pressure and temperature) were employed, providing the necessary calibration for the flow measurements. The results show that the profiles generated from the two systems have a certain level of inconsistency, particularly a wall peak from the ERT and a core peak from the WMS, whereas the two tomography instruments achieve good agreement on the overall air void fraction for bubble flow. For slug flow, when the void fraction is over 30%, the ERT underestimates the void fraction, but a linear relation between the ERT and the WMS is still observed.

  19. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    Science.gov (United States)

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
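
The adolescent height/weight question above reduces, under a bivariate Normal model, to a univariate Normal probability once one conditions on height: the conditional mean shifts by rho*(sd_W/sd_H)*(h - mu_H) and the conditional sd shrinks to sd_W*sqrt(1 - rho^2). A sketch with made-up parameters (the means, standard deviations, and correlation below are assumptions for illustration, not values from the dataset used in the paper):

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical adolescent height/weight parameters (illustrative only):
mu_h, sd_h = 65.0, 3.0      # height, inches
mu_w, sd_w = 130.0, 20.0    # weight, pounds
rho = 0.6                   # assumed height-weight correlation

h = mu_h                                         # condition on average height
mu_c = mu_w + rho * (sd_w / sd_h) * (h - mu_h)   # conditional mean of weight
sd_c = sd_w * sqrt(1.0 - rho ** 2)               # conditional sd of weight

# P(120 < W < 140 | H = average height)
p = norm_cdf(140.0, mu_c, sd_c) - norm_cdf(120.0, mu_c, sd_c)
```

Conditioning on average height leaves the conditional mean at 130 lb but narrows the sd from 20 to 16 lb, so the interval probability is larger than the unconditional one, which is the intuition the web application visualizes.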

  20. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    Science.gov (United States)

    Burkhart, Blakesley; Lee, Min-Young; Murray, Claire E.; Stanimirović, Snezana

    2015-10-01

The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., $A_V < 1$) PDF using dust tracers. In order to constrain the shape and properties of the low column density PDF, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow, and at column densities larger than the HI-H2 transition, the HI rapidly depletes, suggesting that the HI PDF may be used to find the HI-H2 transition column density. We also calculate the sonic Mach number of the atomic gas by using HI absorption line data, which yield a median value of Ms = 4.0 for the CNM, while the HI emission PDF, which traces both the WNM and CNM, has a width more consistent with transonic turbulence.
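
Whether a column-density sample is "well described by a lognormal" can be checked by working in log space, where lognormal data become Gaussian. The sketch below generates a synthetic lognormal column-density sample and recovers its parameters; the peak near 10^21 cm^-2 and the width 0.4 are illustrative assumptions, not GALFA-HI values:

```python
import math
import random
import statistics

rng = random.Random(7)
# Synthetic lognormal column-density sample: ln N ~ Normal(mu, sigma).
mu, sigma = math.log(1e21), 0.4
col = [math.exp(rng.gauss(mu, sigma)) for _ in range(10000)]

# In log space a lognormal sample is Gaussian; recover its parameters.
logs = [math.log(n) for n in col]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.stdev(logs)
```

With real data one would additionally test the Gaussianity of the log-sample (e.g. via higher moments or a goodness-of-fit statistic) against power-law or bimodal alternatives, as the abstract does.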

  1. Effects of Voids on Concrete Tensile Fracturing: A Mesoscale Study

    Directory of Open Access Journals (Sweden)

    Lei Xu

    2017-01-01

A two-dimensional mesoscale modeling framework, which considers concrete as a four-phase material including voids, is developed for studying the effects of voids on concrete tensile fracturing under the plane stress condition. Aggregate is assumed to behave elastically, while a continuum damaged plasticity model is employed to describe the mechanical behaviors of mortar and ITZ. The effects of voids on the fracture mechanism of concrete under uniaxial tension are first detailed, followed by an extensive investigation of the effects of void volume fraction on concrete tensile fracturing. It is found that both the prepeak and postpeak mesoscale cracking in concrete are highly affected by voids, and there is not a straightforward relation between void volume fraction and the postpeak behavior due to the randomness of void distribution. The fracture pattern of concrete specimen with voids is controlled by both the aggregate arrangement and the distribution of voids, and two types of failure modes are identified for concrete specimens under uniaxial tension. It is suggested that voids should be explicitly modeled for the accurate fracturing simulation of concrete on the mesoscale.

  2. Universal Probability Distribution for the Wave Function of a Quantum System Entangled with its Environment

    Science.gov (United States)

    Goldstein, Sheldon; Lebowitz, Joel L.; Mastrodonato, Christian; Tumulka, Roderich; Zanghì, Nino

    2016-03-01

A quantum system (with Hilbert space $H_1$) entangled with its environment (with Hilbert space $H_2$) is usually not attributed a wave function but only a reduced density matrix $\rho_1$. Nevertheless, there is a precise way of attributing to it a random wave function $\psi_1$, called its conditional wave function, whose probability distribution $\mu_1$ depends on the entangled wave function $\psi \in H_1 \otimes H_2$ in the Hilbert space of system and environment together. It also depends on a choice of orthonormal basis of $H_2$ but in relevant cases, as we show, not very much. We prove several universality (or typicality) results about $\mu_1$, e.g., that if the environment is sufficiently large then for every orthonormal basis of $H_2$, most entangled states $\psi$ with given reduced density matrix $\rho_1$ are such that $\mu_1$ is close to one of the so-called GAP (Gaussian adjusted projected) measures, $GAP(\rho_1)$. We also show that, for most entangled states $\psi$ from a microcanonical subspace (spanned by the eigenvectors of the Hamiltonian with energies in a narrow interval $[E, E+\delta E]$) and most orthonormal bases of $H_2$, $\mu_1$ is close to $GAP(\mathrm{tr}_2\,\rho_{mc})$ with $\rho_{mc}$ the normalized projection to the microcanonical subspace. In particular, if the coupling between the system and the environment is weak, then $\mu_1$ is close to $GAP(\rho_\beta)$ with $\rho_\beta$ the canonical density matrix on $H_1$ at inverse temperature $\beta=\beta(E)$. This provides the mathematical justification of our claim in Goldstein et al. (J Stat Phys 125: 1193-1221, 2006) that GAP measures describe the thermal equilibrium distribution of the wave function.

  3. Probability Distribution Function of a Forced Passive Tracer in the Lower Stratosphere

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The probability distribution function (PDF) of a passive tracer, forced by a "mean gradient", is studied. First, we take two theoretical approaches, the Lagrangian and the conditional closure formalisms, to study the PDFs of such an externally forced passive tracer. Then, we carry out numerical simulations for an idealized random flow on a sphere and for European Center for Medium-Range Weather Forecasts (ECMWF) stratospheric winds to test whether the mean-gradient model can be applied to studying stratospheric tracer mixing in midlatitude surf zones, in which a weak and poleward zonal-mean gradient is maintained by tracer leakage through polar and tropical mixing barriers, and whether the PDFs of tracer fluctuations in midlatitudes are consistent with the theoretical predictions. The numerical simulations show that when diffusive dissipation is balanced by the mean-gradient forcing, the PDF in the random flow and the Southern-Hemisphere PDFs in ECMWF winds show time-invariant exponential tails, consistent with theoretical predictions. In the Northern Hemisphere, the PDFs exhibit non-Gaussian tails. However, the PDF tails are not consistent with theoretical expectations. The long-term behavior of the PDF tails of the forced tracer is compared to that of a decaying tracer. It is found that the PDF tails of the decaying tracer are time-dependent, and evolve toward flatter than exponential.

  4. Understanding star formation in molecular clouds I. A universal probability distribution of column densities ?

    CERN Document Server

    Schneider, N; Csengeri, T; Klessen, R; Federrath, C; Tremblin, P; Girichidis, P; Bontemps, S; Andre, Ph

    2014-01-01

    Column density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star formation (SF) studies. With Herschel it is now possible to determine rather precisely the column density of dust, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to an overestimation of the dust emission of molecular clouds, in particular for distant clouds. This implies overly high values for column density and mass, and a misleading interpretation of probability distribution functions (PDFs) of the column density. In this paper, we demonstrate by using observations and simulations how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming) and of Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, ...

  5. Turbulence-Induced Relative Velocity of Dust Particles III: The Probability Distribution

    CERN Document Server

    Pan, Liubin; Scalo, John

    2014-01-01

    Motivated by its important role in the collisional growth of dust particles in protoplanetary disks, we investigate the probability distribution function (PDF) of the relative velocity of inertial particles suspended in turbulent flows. Using the simulation from our previous work, we compute the relative velocity PDF as a function of the friction timescales, tau_p1 and tau_p2, of two particles of arbitrary sizes. The friction time of particles included in the simulation ranges from 0.1 tau_eta to 54 T_L, with tau_eta and T_L the Kolmogorov time and the Lagrangian correlation time of the flow, respectively. The relative velocity PDF is generically non-Gaussian, exhibiting fat tails. For a fixed value of tau_p1, the PDF is the fattest for equal-size particles (tau_p2 ~ tau_p1), and becomes thinner at both tau_p2 < tau_p1 and tau_p2 > tau_p1. Defining f as the friction time ratio of the smaller particle to the larger one, we find that, at a given f, the PDF fatness first increases with the friction time of the larger particle, peaks around the Kolmogorov time, and then decreases again for the largest particles (tau_p >> T_L). These features are successfully explained by the Pan & Padoan model. Usin...

  6. Exact probability distributions of selected species in stochastic chemical reaction networks.

    Science.gov (United States)

    López-Caamal, Fernando; Marquez-Lago, Tatiana T

    2014-09-01

    Chemical reactions are discrete, stochastic events. As such, the species' molecular numbers can be described by an associated master equation. However, handling such an equation may become difficult due to the large size of reaction networks. A commonly used approach to forecast the behaviour of reaction networks is to perform computational simulations of such systems and analyse their outcome statistically. This approach, however, might require high computational costs to provide accurate results. In this paper we opt for an analytical approach to obtain the time-dependent solution of the Chemical Master Equation for selected species in a general reaction network. When the reaction networks are composed exclusively of zeroth and first-order reactions, this analytical approach significantly alleviates the computational burden required by simulation-based methods. By building upon these analytical solutions, we analyse a general monomolecular reaction network with an arbitrary number of species to obtain the exact marginal probability distribution for selected species. Additionally, we study two particular topologies of monomolecular reaction networks, namely (i) an unbranched chain of monomolecular reactions with and without synthesis and degradation reactions and (ii) a circular chain of monomolecular reactions. We illustrate our methodology and alternative ways to use it for non-linear systems by analysing a protein autoactivation mechanism. Later, we compare the computational load required for the implementation of our results and a pure computational approach to analyse an unbranched chain of monomolecular reactions. Finally, we study calcium ion gates in the sarco/endoplasmic reticulum mediated by ryanodine receptors.
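For the simplest zeroth- and first-order network, a birth-death process 0 -> X -> 0 with synthesis rate k and degradation rate g, the time-dependent marginal of the Chemical Master Equation is known in closed form: a binomial thinning of the initial molecules plus a Poisson number of newly synthesized ones. A minimal sketch of that textbook special case (the function name and parametrization are ours, not the paper's):

```python
import math

def birth_death_pmf(n, t, k, g, n0=0):
    """Exact P(X(t) = n) for the network 0 -> X (rate k), X -> 0 (rate g).

    X(t) is the sum of a Binomial(n0, e^{-g t}) count (survivors of the
    initial molecules) and a Poisson(lam) count (newly produced molecules),
    with lam = (k/g) * (1 - e^{-g t})."""
    s = math.exp(-g * t)                 # survival probability of one molecule
    lam = (k / g) * (1.0 - s)            # mean of the Poisson part
    total = 0.0
    for m in range(0, min(n, n0) + 1):   # m survivors, n - m new molecules
        binom = math.comb(n0, m) * s**m * (1 - s)**(n0 - m)
        pois = math.exp(-lam) * lam**(n - m) / math.factorial(n - m)
        total += binom * pois
    return total
```

The mean of this distribution is n0*e^{-g t} + (k/g)*(1 - e^{-g t}), which can serve as a quick consistency check against stochastic simulation.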

  7. ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning

    Science.gov (United States)

    Sadeh, I.; Abdalla, F. B.; Lahav, O.

    2016-10-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.

  8. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    Science.gov (United States)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5?, where ? is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
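The right-skewed, heavy-tailed behaviour reported here can be made concrete with the lognormal model the authors test: its skewness and excess kurtosis are closed-form functions of the log-standard deviation sigma alone. A small sketch using the standard lognormal moment formulas (the specific sigma values below are our illustrations, not fitted to the data):

```python
import math

def lognormal_shape(sigma):
    """Skewness and excess kurtosis of a lognormal with log-std sigma."""
    w = math.exp(sigma**2)
    skew = (w + 2.0) * math.sqrt(w - 1.0)
    excess_kurtosis = w**4 + 2 * w**3 + 3 * w**2 - 6.0
    return skew, excess_kurtosis
```

For sigma -> 0 both statistics vanish, consistent with the near-Gaussian distributions observed at about 10 m depth, while sigma around 0.8 already yields skewness > 3 and excess kurtosis > 20, the order of magnitude reported near the surface.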

  9. Differential Evolution with Adaptive Mutation and Parameter Control Using Lévy Probability Distribution

    Institute of Scientific and Technical Information of China (English)

    Ren-Jie He; Zhen-Yu Yang

    2012-01-01

    Differential evolution (DE) has become a very popular and effective global optimization algorithm in the area of evolutionary computation. In spite of many advantages such as conceptual simplicity, high efficiency and ease of use, DE has two main components, i.e., mutation scheme and parameter control, which significantly influence its performance. In this paper we intend to improve the performance of DE by using carefully considered strategies for both of the two components. We first design an adaptive mutation scheme, which adaptively makes use of the bias of superior individuals when generating new solutions. Although introducing such a bias is not a new idea, existing methods often use heuristic rules to control the bias. They can hardly maintain the appropriate balance between exploration and exploitation during the search process, because the preferred bias is often problem- and evolution-stage-dependent. Instead of using any fixed rule, a novel strategy is adopted in the new adaptive mutation scheme to adjust the bias dynamically based on the identified local fitness landscape captured by the current population. As for the other component, i.e., parameter control, we propose a mechanism using the Lévy probability distribution to adaptively control the scale factor F of DE. For every mutation in each generation, an F_i is produced from one of four different Lévy distributions according to their historical performance. With the adaptive mutation scheme and parameter control using the Lévy distribution as the main components, we present a new DE variant called Lévy DE (LDE). Experimental studies were carried out on a broad range of benchmark functions in global numerical optimization. The results show that LDE is very competitive, and both of the two main components have contributed to its overall performance. The scalability of LDE is also discussed by conducting experiments on some selected benchmark functions with dimensions from 30 to 200.
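Drawing a scale factor from a Lévy distribution can be sketched with Mantegna's algorithm, one standard way to generate approximately Lévy-stable variates. This is an illustrative sketch only: the clipping range and the mapping onto F are our choices, not the LDE paper's exact scheme:

```python
import math
import random

def levy_step(alpha, rng=random):
    """One sample from a symmetric Levy-stable-like distribution via
    Mantegna's algorithm (commonly used for roughly 0.3 < alpha <= 1.99)."""
    sigma_u = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))
               ) ** (1 / alpha)
    u = rng.gauss(0.0, sigma_u)   # heavy-tail-shaping numerator
    v = rng.gauss(0.0, 1.0)       # standard normal denominator
    return u / abs(v) ** (1 / alpha)

def sample_scale_factor(alpha, rng=random, lo=0.1, hi=1.9):
    """Hypothetical mapping of a Levy step onto DE's scale factor F,
    clipped to a plausible working range."""
    return min(hi, max(lo, abs(levy_step(alpha, rng))))
```

Smaller alpha produces heavier tails, so occasional large F values encourage exploration while most draws stay moderate.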

  10. TRANSVERSELY ISOTROPIC HYPER-ELASTIC MATERIAL RECTANGULAR PLATE WITH VOIDS UNDER A UNIAXIAL EXTENSION

    Institute of Scientific and Technical Information of China (English)

    程昌钧; 任九生

    2003-01-01

    The finite deformation and stress analyses for a transversely isotropic rectangular plate with voids and made of hyper-elastic material with the generalized neo-Hookean strain energy function under a uniaxial extension are studied. The deformation functions of plates with voids that are symmetrically distributed in a certain manner are given, and the functions are expressed by two parameters by solving the differential equations. The solution may be approximately obtained from the minimum potential energy principle. Thus, the analytic solutions of the deformation and stress of the plate are obtained. The growth of the voids and the distribution of stresses along the voids are analyzed, and the influences of the degree of anisotropy, the size of the voids and the distance between the voids are discussed. The characteristics of the growth of the voids and the distribution of stresses of the plates with one void, three or five voids are obtained and compared.

  11. Estimation on survival discrete probability distributions with grouped censored data

    Institute of Scientific and Technical Information of China (English)

    侯超钧; 吴东庆; 王前; 杨志伟

    2012-01-01

    At present, grouped data for unknown discrete survival probability distributions have received little study. To avoid solving complex nonlinear maximum likelihood equations, a probability distribution formula with a recursive structure is derived from the likelihood equations by introducing a Lagrange multiplier. The maximum likelihood estimate of p_1 is then obtained through the degenerate single-interval model, and the probabilities p_i are calculated recursively from it. An experiment shows that this method is effective.

  12. Void shape effects and voids starting from cracked inclusion

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2011-01-01

    Numerical, axisymmetric cell model analyses are used to study the growth of voids in ductile metals, until the mechanism of coalescence with neighbouring voids sets in. A special feature of the present analyses is that extremely small values of the initial void volume fraction are considered, dow...

  13. Analysis on Multi-dimensional Beta Probability Distribution Function

    Institute of Scientific and Technical Information of China (English)

    潘高田; 梁帆; 郭齐胜; 黄一斌

    2011-01-01

    Based on quantitative truncated sequential test theory, multi-dimensional Beta probability distribution functions arise in hit-accuracy tests of weapon systems against aerial targets. This paper analyses the properties of the multi-dimensional Beta probability distribution function and tabulates part of the two-dimensional Beta probability distribution function values. This research plays an important role in the field of weapon system hit-accuracy testing.

  14. Emergence of visual saliency from natural scenes via context-mediated probability distributions coding.

    Directory of Open Access Journals (Sweden)

    Jinhua Xu

    Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.

  15. Characterisation of seasonal flood types according to timescales in mixed probability distributions

    Science.gov (United States)

    Fischer, Svenja; Schumann, Andreas; Schulte, Markus

    2016-08-01

    When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks, which differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that differ between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events. A statistical method for such a distinction of events is presented. To demonstrate their applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region, the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model considers that information about parallel events that uses their maximum values only is incomplete because some of the realisations are overlaid. A statistical method resulting in an amendment of statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.
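The mixing idea above, that the annual maximum is the largest of independent seasonal or type-wise maxima, so its CDF is the product of the component CDFs, can be sketched as follows. Gumbel margins are our illustrative choice; the paper's model additionally amends the parameters to account for overlaid parallel events:

```python
import math

def gumbel_cdf(x, mu, beta):
    """CDF of a Gumbel (EV1) distribution with location mu and scale beta."""
    return math.exp(-math.exp(-(x - mu) / beta))

def annual_max_cdf(x, seasonal_params):
    """CDF of the annual maximum when type-wise maxima are independent:
    P(max <= x) = product of the component CDFs."""
    p = 1.0
    for mu, beta in seasonal_params:
        p *= gumbel_cdf(x, mu, beta)
    return p
```

Because the product is never larger than any single factor, the mixed model always predicts rarer exceedances than the dominant flood type alone, which is exactly why extrapolating a single-type pdf can mislead.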

  16. Understanding star formation in molecular clouds. III. Probability distribution functions of molecular lines in Cygnus X

    Science.gov (United States)

    Schneider, N.; Bontemps, S.; Motte, F.; Ossenkopf, V.; Klessen, R. S.; Simon, R.; Fechtenbaum, S.; Herpin, F.; Tremblin, P.; Csengeri, T.; Myers, P. C.; Hill, T.; Cunningham, M.; Federrath, C.

    2016-03-01

    The probability distribution function of column density (N-PDF) serves as a powerful tool to characterise the various physical processes that influence the structure of molecular clouds. Studies that use extinction maps or H2 column-density maps (N) that are derived from dust show that star-forming clouds can best be characterised by lognormal PDFs for the lower N range and a power-law tail for higher N, which is commonly attributed to turbulence and self-gravity and/or pressure, respectively. While PDFs from dust cover a large dynamic range (typically N ~ 10^20-10^24 cm^-2 or Av ~ 0.1-1000), PDFs obtained from molecular lines - converted into H2 column density - potentially trace more selectively different regimes of (column) densities and temperatures. They also enable us to distinguish different clouds along the line of sight through using the velocity information. We report here on PDFs that were obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region, and make a comparison to a PDF that was derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av ~ 1-30, but is cut for higher Av because of optical depth effects. The PDFs of C18O and 13CO are mostly lognormal up to Av ~ 1-15, followed by excess up to Av ~ 40. Above that value, all CO PDFs drop, which is most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power law distribution between Av ~ 15 and 400, respectively. The PDF from dust is lognormal for Av ~ 3-15 and has a power-law tail up to Av ~ 500. Absolute values for the molecular line column densities are, however, rather uncertain because of abundance and excitation temperature variations. If we take the dust PDF at face value, we "calibrate" the molecular line PDF of CS to that of the dust and determine an abundance [CS]/[H2] of 10^-9. The slopes of the power-law tails of the CS, N2H+, and dust PDFs are -1.6, -1.4, and -2.3, respectively, and are thus consistent ...

  17. Simulating Tail Probabilities in GI/GI/1 Queues and Insurance Risk Processes with Subexponential Distributions

    NARCIS (Netherlands)

    Boots, Nam Kyoo; Shahabuddin, Perwez

    2001-01-01

    This paper deals with estimating small tail probabilities of the steady-state waiting time in a GI/GI/1 queue with heavy-tailed (subexponential) service times. The problem of estimating infinite horizon ruin probabilities in insurance risk processes with heavy-tailed claims can be transformed into th

  19. The sparkling Universe: the coherent motions of cosmic voids

    CERN Document Server

    Lambas, Diego G; Ceccarelli, Laura; Ruiz, Andrés N; Paz, Dante J; Maldonado, Victoria E; Luparello, Heliana E

    2015-01-01

    We compute the bulk motions of cosmic voids, using a $\\Lambda$CDM numerical simulation considering the mean velocities of the dark matter inside the void itself and that of the haloes in the surrounding shell. We find coincident values of these two measures in the range $\\sim$ 300-400 km/s, not far from the expected mean peculiar velocities of groups and galaxy clusters. When analysing the distribution of the pairwise relative velocities of voids, we find a remarkable bimodal behaviour consistent with an excess of both systematically approaching and receding voids. We determine that the origin of this bimodality resides in the void large scale environment, since once voids are classified into void-in-void (R-type) or void-in-cloud (S-type), R-types are found mutually receding away, while S-types approach each other. The magnitude of these systematic relative velocities account for more than 100 km/s, reaching large coherence lengths of up to 200 h$^{-1}$ Mpc . We have used samples of voids from the Sloan Digi...

  20. Neurogenic voiding dysfunction.

    Science.gov (United States)

    Georgopoulos, Petros; Apostolidis, Apostolos

    2017-05-01

    This review aims to analyze and discuss all recently published articles associated with neurogenic voiding dysfunction, providing readers with the most updated knowledge and a trigger for further research. They include the proposal of a novel classification system for the pathophysiology of neurogenic lower urinary tract dysfunction (NLUTD), which combines the neurological defect with a distinct anatomic location, and data on bowel dysfunction, autonomic dysreflexia and urine biomarkers; a review of patient-reported outcome measures in NLUTD; a review of the criteria for the diagnosis of clinically significant urinary infections; novel research findings on the pathophysiology of NLUTD; and a review of data on minimally and more invasive treatments. Despite the extended evidence base on NLUTD, there is a paucity of high-quality new research concerning voiding dysfunction as opposed to storage problems. The update aims to inform clinicians about new developments in clinical practice, as well as to ignite discussion for further clinical and basic research in the aforementioned areas of NLUTD.

  1. On de-Sitter Geometry in Cosmic Void Statistics

    CERN Document Server

    Gibbons, Gary W; Yoshida, Naoki; Chon, Sunmyon

    2013-01-01

    Starting from the geometrical concept of a 4-dimensional de-Sitter configuration of spheres in Euclidean 3-space and modelling voids in the Universe as spheres, we show that a uniform distribution over this configuration space implies a power-law for the void number density which is consistent with results from the excursion set formalism and from data, for an intermediate range of void volumes. We also discuss the effect of restricting the survey geometry on the void statistics. This work is a new application of de-Sitter geometry to cosmology and also provides a new geometrical perspective on self-similarity in cosmology.

  2. Effects of Cure Pressure Induced Voids on the Mechanical Strength of Carbon/Epoxy Laminates

    Institute of Scientific and Technical Information of China (English)

    Ling LIU; Boming ZHANG; Zhanjun WU; Dianfu WANG

    2005-01-01

    This work aims at designing a set of curing pressure routes to produce laminates with various void contents. The effects of various consolidation pressures resulting in different void contents on mechanical strength of carbon/epoxy laminates have been examined. Characterization of the voids, in terms of void volume fraction, void distribution,size, and shape, was performed by standard test, ultrasonic inspection and metallographic analysis. The interlaminar shear strength was measured by the short-beam method. An empirical model was used to predict the strength vs porosity. The predicted strengths conform well with the experimental data and voids were found to be uniformly distributed throughout the laminate.

  3. Voids and the Cosmic Web: cosmic depressions & spatial complexity

    CERN Document Server

    van de Weygaert, Rien

    2016-01-01

    Voids form a prominent aspect of the Megaparsec distribution of galaxies and matter. Not only do they represent a key constituent of the Cosmic Web, they also are one of the cleanest probes and measures of global cosmological parameters. The shape and evolution of voids are highly sensitive to the nature of dark energy, while their substructure and galaxy population provides a direct key to the nature of dark matter. Also, the pristine environment of void interiors is an important testing ground for our understanding of environmental influences on galaxy formation and evolution. In this paper, we review the key aspects of the structure and dynamics of voids, with a particular focus on the hierarchical evolution of the void population. We demonstrate how the rich structural pattern of the Cosmic Web is related to the complex evolution and buildup of voids.

  4. The VIMOS Public Extragalactic Redshift Survey (VIPERS). On the recovery of the count-in-cell probability distribution function

    Science.gov (United States)

    Bel, J.; Branchini, E.; Di Porto, C.; Cucciati, O.; Granett, B. R.; Iovino, A.; de la Torre, S.; Marinoni, C.; Guzzo, L.; Moscardini, L.; Cappi, A.; Abbas, U.; Adami, C.; Arnouts, S.; Bolzonella, M.; Bottini, D.; Coupon, J.; Davidzon, I.; De Lucia, G.; Fritz, A.; Franzetti, P.; Fumana, M.; Garilli, B.; Ilbert, O.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; McCracken, H. J.; Paioro, L.; Polletta, M.; Pollo, A.; Schlagenhaufer, H.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Burden, A.; Marchetti, A.; Mellier, Y.; Nichol, R. C.; Peacock, J. A.; Percival, W. J.; Phleps, S.; Wolk, M.

    2016-04-01

    We compare three methods to measure the count-in-cell probability density function of galaxies in a spectroscopic redshift survey. From this comparison we found that, when the sampling is low (the average number of objects per cell is around unity), it is necessary to use a parametric method to model the galaxy distribution. We used a set of mock catalogues of VIPERS to verify whether we were able to reconstruct the cell-count probability distribution once the observational strategy is applied. We find that, in the simulated catalogues, the probability distribution of galaxies is better represented by a Gamma expansion than a skewed log-normal distribution. Finally, we correct the cell-count probability distribution function for the angular selection effect of the VIMOS instrument and study the redshift and absolute magnitude dependence of the underlying galaxy density function in VIPERS from redshift 0.5 to 1.1. We found a very weak evolution of the probability density distribution function and that it is well approximated by a Gamma distribution, independently of the chosen tracers. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programmes 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/
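One concrete example of a Gamma-based count model: if the underlying density in a cell is Gamma-distributed and galaxies Poisson-sample it, the counts follow a negative binomial (Gamma-Poisson) law. A sketch parametrized by the count mean and variance (illustrative only; the paper's "Gamma expansion" is more general than this single term):

```python
import math

def gamma_poisson_pmf(n, mean, var):
    """P(N = n) for counts-in-cells when the underlying density is
    Gamma-distributed and galaxies Poisson-sample it (negative binomial).

    Parametrized by the count mean and variance; requires var > mean."""
    r = mean**2 / (var - mean)   # Gamma shape parameter
    p = mean / var               # negative-binomial success probability
    log_pmf = (math.lgamma(n + r) - math.lgamma(r) - math.lgamma(n + 1)
               + r * math.log(p) + n * math.log(1 - p))
    return math.exp(log_pmf)
```

With mean 1 (the low-sampling regime discussed above) and variance 2, this reduces to a geometric distribution, P(N = n) = 0.5^(n+1).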

  5. A computerized voiding diary.

    Science.gov (United States)

    Rabin, J M; McNett, J; Badlani, G H

    1996-11-01

    To examine subjects' and control patients' preferences and compliance with regard to the Compu-Void (CV) electronic voiding diary as compared to the written diary (WD), and to compare the two methods with respect to the type of information obtained and whether the order of use of each method influenced results in the subject group. Thirty-six women between the ages of 20 and 84 with bladder symptoms were compared to a group of 36 age-matched women. In 100% of subjects and 95% of control patients, CV entries exceeded the number made with the WD in voiding events and, in subjects, in incontinence episodes recorded (P < .005 and P < .005, respectively). Over 98% of subjects and over 80% of controls preferred the CV (P < .0005). The order of use of each method in subjects made no significant difference with regard to the volume of information obtained (P < .407), number of leakage events recorded (P < .494) or fluid intake patterns (P < .410). Patients' compliance with each method was not affected by the order of use. Our results suggest an increased volume of data and greater patient compliance in reporting bladder symptoms and events using the CV, and that the order of use is not important.

  6. Region-based approximation of probability distributions (for visibility between imprecise points among obstacles)

    OpenAIRE

    Buchin, K Kevin; Kostitsyna, I Irina; Löffler, M; Silveira, RI

    2014-01-01

    Let $p$ and $q$ be two imprecise points, given as probability density functions on $\\mathbb R^2$, and let $\\cal R$ be a set of $n$ line segments (obstacles) in $\\mathbb R^2$. We study the problem of approximating the probability that $p$ and $q$ can see each other; that is, that the segment connecting $p$ and $q$ does not cross any segment of $\\cal R$. To solve this problem, we approximate each density function by a weighted set of polygons; a novel approach to dealing with probability densit...

  7. Projectile Two-dimensional Coordinate Measurement Method Based on Optical Fiber Coding Fire and its Coordinate Distribution Probability

    Science.gov (United States)

    Li, Hanshan; Lei, Zhiyong

    2013-01-01

    To improve projectile coordinate measurement precision in a fire measurement system, this paper introduces the optical fiber coding fire measurement method and principle, sets up its measurement model, and analyzes coordinate errors using the differential method. To study the projectile coordinate position distribution, its distribution law was analyzed with statistical hypothesis testing; firing dispersion and the probability of a projectile hitting the object center were also studied. The results show that, at the given significance level, the exponential distribution is a reasonable fit to the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is scientific and feasible and can yield accurate projectile coordinate positions.
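
    The goodness-of-fit step described above can be sketched with standard tools. A minimal, hypothetical example: synthetic exponential data stand in for measured projectile positions, and the scale parameter and sample size are assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical radial miss distances; a synthetic stand-in for measured data.
r = rng.exponential(scale=2.0, size=500)

# Fit the exponential scale by maximum likelihood (the sample mean), then
# check the fit with a one-sample Kolmogorov-Smirnov test.
scale_hat = r.mean()
stat, p_value = stats.kstest(r, "expon", args=(0.0, scale_hat))
```

    Because the scale is estimated from the same data being tested, the plain KS p-value is conservative; a Lilliefors-type correction would be needed for a strict significance level.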

  8. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximat...

  9. Effects of Turbulent Aberrations on Probability Distribution of Orbital Angular Momentum for Optical Communication

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yi-Xin; CANG Ji

    2009-01-01

    Effects of atmospheric turbulence tilt, defocus, astigmatism and coma aberrations on the orbital angular momentum measurement probability of photons propagating in the weak turbulence regime are modeled with the Rytov approximation. By considering the resulting wave as a superposition of angular momentum eigenstates, the orbital angular momentum measurement probabilities of the transmitted digit are presented. Our results show that the effect of turbulent tilt aberration on the orbital angular momentum measurement probabilities of photons is the largest among these four kinds of aberrations. As the aberration order increases, the effects of turbulence aberrations on the measurement probabilities of orbital angular momentum generally decrease, whereas the effect of turbulence defocus can be ignored. For tilt aberration, as the difference between the measured orbital angular momentum and the original orbital angular momentum increases, the orbital angular momentum measurement probability decreases.

  10. Voids and the Cosmic Web: cosmic depression & spatial complexity

    NARCIS (Netherlands)

    van de Weygaert, Rien; Shandarin, S.; Saar, E.; Einasto, J.

    2016-01-01

    Voids form a prominent aspect of the Megaparsec distribution of galaxies and matter. Not only do they represent a key constituent of the Cosmic Web, they also are one of the cleanest probes and measures of global cosmological parameters. The shape and evolution of voids are highly sensitive to the natu

  11. Mechanical Stress Effects on Electromigration Voiding in a Meandering Test Stripe

    Science.gov (United States)

    Lowry, L. E.; Tai, B. H.; Mattila, J.; Walsh, L. H.

    1993-01-01

    Earlier experimental findings concluded that electromigration voids in these meandering stripe test structures were not randomly distributed and that void nucleation frequently occurred sub-surface at the metal/thermal oxide interface.

  12. Examining the Tails of Probability Distributions Created Using Uncertainty Methods: A Case Study

    Science.gov (United States)

    Kang, M.; Thomson, N. R.; Sykes, J. F.

    2006-12-01

    Environmental management decisions require an understanding of all possible outcomes, especially those with a low likelihood of occurrence; however, despite this need, emphasis has been placed on the mean rather than on extreme outcomes. Typically in groundwater contaminant transport problems, parameter estimates are obtained using automated parameter estimation packages (e.g., PEST) for a given conceptual model. The resulting parameter estimates and covariance information are used to generate Monte Carlo or Latin Hypercube realizations. Our observations indicate that simulations using parameters from the tails of the corresponding probability distributions often fail to sufficiently replicate field-based observations. This stems from the fact that the input parameters governing the Monte Carlo type uncertainty analysis method are based on the mean. In order to improve the quality of the realizations at the tails, the Dynamically-Dimensioned Search-Uncertainty Analysis (DDS-UA) method is adopted. This approach uses the Dynamically-Dimensioned Search (DDS) algorithm, which is designed to find multiple local minima, and a pseudo-likelihood function. To test the robustness of this methodology, we applied it to a contaminant transport problem involving TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois. Contamination has been observed in the Silurian dolomite aquifer underlying the facility, which served as a supply of drinking water. Dissolved TCE is assumed to migrate in a predominantly vertically downward direction through the overburden that underlies the Lockformer site and then migrate horizontally in the underlying aquifer. The model is solved using a semi-analytical solution of the mass conservation equation. The parameter estimation process is complicated by the fact that a concentration level equal to or greater than the maximum contaminant level must be observed at specified locations. Penalty functions
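
    The Latin Hypercube sampling step mentioned above can be sketched as follows; the two parameter ranges are hypothetical placeholders, not values from the Lockformer model.

```python
import numpy as np
from scipy.stats import qmc

# Draw a 100-point Latin Hypercube sample in 2 dimensions: each of the
# 100 equal-probability strata along every axis receives exactly one point.
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=100)  # uniform sample in [0, 1)^2

# Map the uniform sample to hypothetical parameter ranges, e.g. a
# log-conductivity in [-6, -3] and a porosity in [0.05, 0.35].
params = qmc.scale(u, l_bounds=[-6.0, 0.05], u_bounds=[-3.0, 0.35])
```

    Stratifying every axis is what distinguishes Latin Hypercube realizations from plain Monte Carlo draws: the tails of each marginal are guaranteed to be visited.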

  13. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
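
    A minimal sketch of the weighted-bootstrap idea, assuming inverse-selection-probability weights; the data and selection probabilities below are hypothetical, and a simple normal fit on log-scale data stands in for the paper's Most Probable Number machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical contamination data (log10 CFU/g) and selection probabilities:
# risk-based sampling picked the higher-contamination lots more often.
x = np.array([0.3, 0.5, 0.8, 1.1, 1.4, 1.9, 2.3, 2.8])
p_select = np.array([0.02, 0.02, 0.04, 0.05, 0.08, 0.12, 0.20, 0.30])
w = 1.0 / p_select
w = w / w.sum()  # normalized inverse-probability sampling weights

# Weighted bootstrap: resample with probability proportional to the weights,
# fit a normal to the log-scale data in each replicate, collect the means.
boot_means = []
for _ in range(2000):
    xb = rng.choice(x, size=x.size, replace=True, p=w)
    mu, sigma = stats.norm.fit(xb)
    boot_means.append(mu)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

    Resampling with unequal probabilities down-weights the preferentially sampled high values, so the bootstrap distribution centers on the population-representative mean rather than the raw sample mean.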

  14. Cosmic Voids: structure, dynamics and galaxies

    CERN Document Server

    van de Weygaert, Rien

    2009-01-01

    In this review we discuss several aspects of Cosmic Voids. Voids are a major component of the large scale distribution of matter and galaxies in the Universe. They are of instrumental importance for understanding the emergence of the Cosmic Web. Their relatively simple shape and structure makes them useful tools for extracting the value of a variety of cosmic parameters, possibly even including that of the influence of dark energy. Perhaps most promising and challenging is the issue of the galaxies found within their realm. Not only does the pristine environment of voids provide a promising testing ground for assessing the role of environment on the formation and evolution of galaxies, the dearth of dwarf galaxies may even represent a serious challenge to the standard view of cosmic structure formation.

  15. Void coalescence within periodic clusters of particles

    Science.gov (United States)

    Thomson, C. I. A.; Worswick, M. J.; Pilkey, A. K.; Lloyd, D. J.

    2003-01-01

    The effect of particle clustering on void damage rates in a ductile material under triaxial loading conditions is examined using three-dimensional finite element analysis. An infinite material containing a regular distribution of clustered particles is modelled using a unit cell approach. Three discrete particles are introduced into each unit cell while a secondary population of small particles within the surrounding matrix is represented using the Gurson-Tvergaard-Needleman (GTN) constitutive equations. Deformation strain states characteristic of sheet metal forming are considered; that is, deep drawing, plane strain and biaxial stretching. Uniaxial tensile stress states with varying levels of superimposed hydrostatic tension are also examined. The orientation of a particle cluster with respect to the direction of major principal loading is shown to significantly influence failure strains. Coalescence of voids within a first-order particle cluster (consisting of three particles) is a stable event while collapse of inter-cluster ligaments leads to imminent material collapse through void-sheeting.

  16. A Hot Spots Ignition Probability Model for Low-Velocity Impacted Explosive Particles Based on the Particle Size and Distribution

    Directory of Open Access Journals (Sweden)

    Hong-fu Guo

    2017-01-01

    Particle size and distribution play an important role in ignition. The size and distribution of the cyclotetramethylene tetranitramine (HMX) particles were investigated by a Laser Particle Size Analyzer (Malvern MS2000) before experiment and calculation. The mean size of the particles is 161 μm; minimum and maximum sizes are 80 μm and 263 μm, respectively. The distribution function is approximately quadratic. Based on the distribution of micron scale explosive particles, a microscopic model is established to describe the process of ignition of HMX particles under drop weight. Both the temperature of contact zones and the ignition probability of powder explosive can be predicted. The calculated results show that the temperature of the contact zones between the particles and the drop weight surface increases faster and higher than that of the contact zones between two neighboring particles. For HMX particles, with all other conditions kept constant, if the drop height is less than 0.1 m, ignition probability will be close to 0. When the drop heights are 0.2 m and 0.3 m, the ignition probability is 0.27 and 0.64, respectively, whereas when the drop height is more than 0.4 m, ignition probability will be close to 0.82. In comparison with experimental results, the two curves are reasonably close to each other, which indicates our model has a certain degree of rationality.

  17. Remark about Transition Probabilities Calculation for Single Server Queues with Lognormal Inter-Arrival or Service Time Distributions

    Science.gov (United States)

    Lee, Moon Ho; Dudin, Alexander; Shaban, Alexy; Pokhrel, Subash Shree; Ma, Wen Ping

    Formulae required for accurate approximate calculation of transition probabilities of embedded Markov chain for single-server queues of the GI/M/1, GI/M/1/K, M/G/1, M/G/1/K type with heavy-tail lognormal distribution of inter-arrival or service time are given.
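
    For the GI/M/1 queue, the embedded-chain transition probabilities are built from a_k, the probability of k potential service completions during one inter-arrival time T, i.e. a_k = E[exp(-mu*T) (mu*T)^k / k!]. A sketch of the numerical integration with a lognormal inter-arrival law; the service rate and lognormal parameters are hypothetical, not taken from the paper.

```python
import math
from scipy import stats
from scipy.integrate import quad

def completions_pmf(k, mu, interarrival):
    """a_k = E[exp(-mu*T) * (mu*T)^k / k!]: probability of k potential
    service completions during one inter-arrival time T ~ interarrival."""
    def integrand(t):
        return math.exp(-mu * t) * (mu * t) ** k / math.factorial(k) * interarrival.pdf(t)
    val, _ = quad(integrand, 0.0, math.inf)
    return val

# Exponential service at rate mu = 1.5; heavy-tailed lognormal inter-arrival
# times with hypothetical parameters (sigma = 0.9, median 1.0).
mu = 1.5
A = stats.lognorm(s=0.9, scale=1.0)
a = [completions_pmf(k, mu, A) for k in range(40)]
```

    Truncating at k = 40 captures almost all of the mass here; accurate tail handling for genuinely heavy-tailed laws is exactly the difficulty the entry addresses.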

  18. Benchmarking PARTISN with Analog Monte Carlo: Moments of the Neutron Number and the Cumulative Fission Number Probability Distributions

    Energy Technology Data Exchange (ETDEWEB)

    O'Rourke, Patrick Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-10-27

    The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.

  19. RUNS TEST FOR A CIRCULAR DISTRIBUTION AND A TABLE OF PROBABILITIES

    Science.gov (United States)

    of the well-known Wald-Wolfowitz runs test for a distribution on a straight line. The primary advantage of the proposed test is that it minimizes the number of assumptions on the theoretical distribution.
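
    The linear Wald-Wolfowitz runs test that this entry builds on can be sketched as follows (normal approximation for the run count; the circular extension itself is not reproduced here):

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test for a binary sequence (normal approximation).
    Returns the z statistic; |z| > 1.96 rejects randomness at the 5% level."""
    n1 = sum(1 for s in seq if s)
    n2 = len(seq) - n1
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    n = n1 + n2
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mean) / math.sqrt(var)
```

    A strongly alternating sequence gives a large positive z (too many runs); a clustered one gives a large negative z (too few runs).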

  20. Deduction of compound nucleus formation probability from the fragment angular distributions in heavy-ion reactions

    Science.gov (United States)

    Yadav, C.; Thomas, R. G.; Mohanty, A. K.; Kapoor, S. S.

    2015-07-01

    The presence of various fissionlike reactions in heavy-ion induced reactions is a major hurdle on the path to laboratory synthesis of heavy and super-heavy nuclei. It is known that the cross section for forming a heavy evaporation residue in fusion reactions depends on three factors: the capture cross section, the probability of compound nucleus formation PCN, and the survival probability of the compound nucleus against fission. Because the probability of compound nucleus formation PCN is difficult to estimate theoretically, owing to its complex dependence on several parameters, attempts have been made in the past to deduce it from fission fragment anisotropy data. In the present work, the fragment anisotropy data for a number of heavy-ion reactions are analyzed, and it is found that deduction of PCN from the anisotropy data also requires knowledge of the ratio of the relaxation time of the K degree of freedom to the pre-equilibrium fission time.

  1. Into the Void

    Science.gov (United States)

    2006-01-01

    17 May 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a portion of a chain of pits on a lava- and dust-covered plain northwest of Tharsis Tholus -- one of the many volcanic constructs in the Tharsis region of Mars. Pit chains, such as this one, are associated with the collapse of surface materials into subsurface voids formed by faulting and expansion -- or extension -- of the bedrock. Location near: 16.4oN, 92.6oW Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Winter

  2. Multi-scale Characterization and Modeling of Surface Slope Probability Distribution for ~20-km Diameter Lunar Craters

    Science.gov (United States)

    Mahanti, P.; Robinson, M. S.; Boyd, A. K.

    2013-12-01

    Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominate the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of slope probability distribution was not discernible at baselines smaller than a kilometer[2,3,5]. Accordingly, historical modeling of lunar surface slopes probability distributions for applications such as in scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution[1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon[7,8], slopes in lunar craters can now be obtained at baselines as low as 6-meters allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines[9]. In this work, we extend the analysis from a probability distribution modeling point-of-view with NAC DEMs to characterize the slope statistics for the floors and walls for the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was

  3. Quantifying Effects of Voids in Woven Ceramic Matrix Composites

    Science.gov (United States)

    Goldsmith, Marlana B.; Sankar, Bhavani V.; Haftka, Raphael T.; Goldberg, Robert K.

    2013-01-01

    Randomness in woven ceramic matrix composite architecture has been found to cause large variability in stiffness and strength. The inherent voids are an aspect of the architecture that may cause a significant portion of the variability. A study is undertaken to investigate the effects of many voids of random sizes and distributions. Response surface approximations were formulated based on void parameters such as area and length fractions to provide an estimate of the effective stiffness. Obtaining quantitative relationships between the properties of the voids and their effects on the stiffness of ceramic matrix composites is of ultimate interest, but the exploratory study presented here starts by first modeling the effects of voids on an isotropic material. Several cases with varying void parameters were modeled, which resulted in a large amount of variability of the transverse stiffness and out-of-plane shear stiffness. An investigation into a physical explanation for the stiffness degradation led to the observation that the voids need to be treated as an entity that reduces load bearing capabilities in a space larger than what the void directly occupies, through a corrected length fraction or area fraction. This explains why void volume fraction is not the only important factor to consider when computing loss of stiffness.

  4. Dislocation and void segregation in copper during neutron irradiation

    DEFF Research Database (Denmark)

    Singh, Bachu Narain; Leffers, Torben; Horsewell, Andy

    1986-01-01

    High-purity (99.999%) and fully annealed copper specimens have been irradiated in the DR-3 reactor at Riso to doses of 1×10^22 and 5×10^22 neutrons (fast) m^-2 (2×10^-3 dpa and 1×10^-2 dpa, respectively...... were distributed between these walls. The dislocation walls were practically free of voids and generally had a void-denuded zone along them. The density of dislocations (loops and segments) was very low in the region containing voids (i.e. between the dislocation walls). Even with this low dislocation...... density, the void swelling rate was very high (approximately 2.5% per dpa). The implications of the segregated distribution of sinks for void formation and growth are briefly discussed....

  5. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize the chaotic map like Logistic map to generate the pseudo-random numbers mapped as the design variables for global optimization. Many existing researches indicated that COA can more easily escape from the local minima than classical stochastic optimization algorithms. This paper reveals the inherent mechanism of high efficiency and superior performance of COA, from a new perspective of both the probability distribution property and search speed of chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDF and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that, the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve the high efficiency of COA, it is recommended to adopt the appropriate chaotic map generating the desired chaotic sequences with uniform or nearly uniform probability distribution and large Lyapunov exponent.
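
    The two quantities the paper compares across maps can be estimated directly from an orbit. A sketch for the Logistic map at r = 4, whose Lyapunov exponent is known to equal ln 2:

```python
import math

def logistic_lyapunov(r=4.0, x0=0.1, burn_in=1000, n=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# For r = 4 the exact value is ln 2 ~= 0.693; a positive exponent is the
# chaos criterion the paper uses to characterize search speed.
```

    The invariant density of the same orbit (concentrated near 0 and 1 for r = 4) is the other property the paper compares; a histogram of the iterates exhibits it.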

  6. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrate the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
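
    The kernel-density-plus-maximal-posterior scheme can be sketched on synthetic data; the two Gaussian clusters below are placeholders, not real VAG features.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Synthetic stand-in for two bivariate feature groups (e.g. normal vs.
# abnormal signals); the real VAG features are not reproduced here.
normal = rng.normal([0.0, 0.0], 0.6, size=(60, 2))
abnormal = rng.normal([2.0, 2.0], 0.8, size=(60, 2))

kde_n = gaussian_kde(normal.T)    # class-conditional density estimates
kde_a = gaussian_kde(abnormal.T)

def classify(points, prior_n=0.5):
    """Maximal posterior probability rule with equal misclassification costs."""
    post_n = prior_n * kde_n(points.T)
    post_a = (1.0 - prior_n) * kde_a(points.T)
    return np.where(post_n >= post_a, "normal", "abnormal")

labels = classify(np.vstack([normal, abnormal]))
```

    With equal priors the rule reduces to comparing the two estimated class-conditional densities at each feature point.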

  7. Probability Distributions of Cost and Sequential Bidding Procedures for Defense Procurement Contracts

    Science.gov (United States)

    1998-02-01

    html [March 17, 1998]. DeGroot, Morris H., Optimal Statistical Decisions, New York: McGraw-Hill, 1970. Fudenberg, Drew, and Jean Tirole, Game Theory...probabilities of occurrence. The expected utility approach was originally developed by Von Neumann and Morgenstern, and is described, for example, in DeGroot

  8. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities......, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...

  9. In favor of general probability distributions: lateral prefrontal and insular cortices respond to stimulus inherent, but irrelevant differences.

    Science.gov (United States)

    Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A

    2016-04-01

    A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence--even if very low--are represented and maintained.

  10. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    This study investigates the sensitivity and uncertainty of hydrological drought frequencies and severity in the Weihe Basin, China during 1960–2012, using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate model, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, as the return period increases, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
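
    As one concrete instance of the copula-based return periods discussed above, a sketch with a Clayton copula; Kendall's tau, the marginal probabilities, and the mean drought inter-arrival time are hypothetical numbers, not results from the Weihe data.

```python
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# Hypothetical numbers: Kendall's tau between drought duration and severity,
# marginal non-exceedance probabilities of a given drought event, and the
# mean inter-arrival time of drought events (years).
tau = 0.7
theta = 2.0 * tau / (1.0 - tau)   # Clayton parameter from Kendall's tau
u, v = 0.9, 0.95                  # F_D(d), F_S(s)
mu = 1.5

T_or = mu / (1.0 - clayton_cdf(u, v, theta))           # D > d or S > s
T_and = mu / (1.0 - u - v + clayton_cdf(u, v, theta))  # D > d and S > s
```

    The "and" return period always exceeds the "or" one; how far they differ depends on the chosen copula, which is the sensitivity the paper quantifies.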

  11. Shapes and Sizes of Voids in the LCDM Universe: Excursion Set Approach

    CERN Document Server

    Shandarin, S; Heitmann, K; Habib, S; Shandarin, Sergei; Feldman, Hume A.; Heitmann, Katrin

    2006-01-01

    We study the global distribution and morphology of dark matter voids in a LCDM universe using density fields generated by N-body simulations. Voids are defined as isolated regions of the low-density excursion set specified via density thresholds, the density thresholds being quantified by the corresponding filling factors, i.e., the fraction of the total volume in the excursion set. Our work encompasses a systematic investigation of the void volume function, the volume fraction in voids, and the fitting of voids to corresponding ellipsoids and spheres. We emphasize the relevance of the percolation threshold to the void volume statistics of the density field both in the high redshift, Gaussian random field regime, as well as in the present epoch. By using measures such as the Inverse Porosity, we characterize the quality of ellipsoidal fits to voids, finding that such fits are a poor representation of the larger voids that dominate the volume of the void excursion set.
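
    The excursion-set void definition used above can be sketched with connected-component labeling; a smoothed Gaussian random field stands in for the N-body density field, and the 10% filling factor is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Smoothed random field as a stand-in for an N-body density field.
field = ndimage.gaussian_filter(rng.standard_normal((64, 64, 64)), sigma=3)

# Low-density excursion set at a threshold chosen to give a 10% filling
# factor (fraction of the total volume in the excursion set).
threshold = np.percentile(field, 10.0)
excursion = field < threshold

# Voids = isolated connected regions of the excursion set; their sizes
# give the void volume function at this filling factor.
labels, n_voids = ndimage.label(excursion)
volumes = np.bincount(labels.ravel())[1:]  # skip background label 0
```

    Raising the filling factor toward the percolation threshold merges voids into a single spanning region, which is why the paper emphasizes percolation in the volume statistics.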

  12. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
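
    For a single-interval yes/no task with discrete observation distributions, the optimal-observer maximum Pc reduces to summing the larger of the two prior-weighted probabilities at each observation value. A minimal sketch of that special case (not the authors' MATLAB code; the two distributions are hypothetical):

```python
import numpy as np

def max_pc_yes_no(p_a, p_b, prior_a=0.5):
    """Maximum proportion correct of the optimal (maximum-posterior)
    observer in a single-interval yes/no task, for discrete stimulus
    distributions p_a and p_b defined over the same support."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    return float(np.sum(np.maximum(prior_a * p_a, (1.0 - prior_a) * p_b)))

# Two hypothetical discretized observation distributions.
pa = np.array([0.5, 0.3, 0.15, 0.05])
pb = np.array([0.05, 0.15, 0.3, 0.5])
```

    A continuous density can be fed through the same rule after discretization, which is the conversion step whose sampling resolution the paper evaluates.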

  13. ASSESSMENT OF ACCURACY OF PRECIPITATION INDEX (SPI) DETERMINED BY DIFFERENT PROBABILITY DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Edward Gąsiorek

    2014-11-01

    The use of different calculation methods to compute the standardized precipitation index (SPI) results in various approximations. Methods based on the normal distribution and its transformations, as well as on the gamma distribution, give similar results and may be used interchangeably, whereas the lognormal distribution fitting method is significantly discrepant, especially for extreme values of SPI. It is therefore unclear which method gives the distribution optimally fitted to empirical data. The aim of this study is to rank the above mentioned methods according to their degree of approximation to empirical data from the Observatory of Agro- and Hydrometeorology in Wrocław-Swojec for the years 1964–2009.
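
    The gamma-based SPI computation compared in the paper can be sketched as follows; the synthetic precipitation record (and its gamma parameters) is a placeholder for the Wrocław-Swojec data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical monthly precipitation totals (mm); a real SPI uses a long
# observed record, here 46 years of synthetic gamma-distributed data.
precip = rng.gamma(shape=2.0, scale=30.0, size=46 * 12)

# Gamma-based SPI: fit a gamma distribution to the record, then map each
# cumulative probability through the inverse standard normal CDF.
a, loc, scale = stats.gamma.fit(precip, floc=0.0)
spi = stats.norm.ppf(stats.gamma.cdf(precip, a, loc=loc, scale=scale))
```

    Swapping the fitted gamma for a normal or lognormal CDF in the last line reproduces the alternative methods whose discrepancies the study ranks.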

  14. Understanding statistical power using noncentral probability distributions: Chi-squared, G-squared, and ANOVA

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2007-09-01

    Full Text Available This paper presents a graphical way of interpreting effect sizes when more than two groups are involved in a statistical analysis. This method uses noncentral distributions to specify the alternative hypothesis, and the statistical power can thus be directly computed. This principle is illustrated using the chi-squared distribution and the F distribution. Examples of chi-squared and ANOVA statistical tests are provided to further illustrate the point. It is concluded that power analyses are an essential part of statistical analysis, and that using noncentral distributions provides an argument in favour of using a factorial ANOVA over multiple t tests.
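The power computation described above, specifying the alternative hypothesis through a noncentral distribution and integrating its tail beyond the null's critical value, is direct to express in code. A small SciPy sketch (the convention λ = n·w² is the usual one for chi-squared effect sizes; the sample numbers are illustrative):

```python
from scipy.stats import chi2, ncx2

def chi2_power(w, n, df, alpha=0.05):
    """Power of a chi-squared test: the null is the central chi2, the
    alternative the noncentral chi2 with noncentrality lam = n * w**2."""
    crit = chi2.ppf(1 - alpha, df)               # rejection threshold under H0
    return float(ncx2.sf(crit, df, n * w ** 2))  # alternative mass beyond it

# Medium effect (w = 0.3) with df = 1: n = 88 gives roughly 80% power
power = chi2_power(0.3, 88, 1)
```

The same pattern, swapping `ncx2` for the noncentral F, covers the ANOVA case discussed in the paper.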

  15. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
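The Metropolis side of the machinery mentioned in the abstract is compact enough to sketch; the sequential-simulation-assisted Gibbs step itself is not. A minimal random-walk Metropolis sampler in Python (our illustration, not the authors' algorithm):

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D density known only
    up to a normalizing constant through its log."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with min(1, ratio)
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Target: unnormalized Gaussian with mean 3 and unit variance
samples = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, 0.0, 20000)
mean = sum(samples[2000:]) / len(samples[2000:])  # discard burn-in
```

In the paper's setting `log_target` would be the log-posterior of a nonlinear inverse problem, with the prior contribution supplied by sequential simulation rather than a closed-form expression.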

  16. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  17. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    Science.gov (United States)

    Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
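The central claim, that a mean constraint plus maximum entropy selects the exponential distribution on [0, ∞), can be checked numerically: any other distribution with the same support and mean must have lower differential entropy. A quick SciPy comparison (our illustration):

```python
from scipy.stats import expon, gamma

# Among continuous distributions on [0, inf) with a fixed mean, the exponential
# maximizes differential entropy.  Compare it with a gamma of the same mean:
h_exp = float(expon.entropy())                  # mean 1 -> entropy 1 nat
h_gam = float(gamma(2.0, scale=0.5).entropy())  # same mean 1, lower entropy
```

Any other shape/scale pair with mean 1 gives the same ordering, which is the sense in which the exponential choice for velocities and travel times is "unbiased".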

  18. Twenty-four hour predictions of the solar wind speed peaks by the probability distribution function model

    Science.gov (United States)

    Bussy-Virat, C. D.; Ridley, A. J.

    2016-10-01

    Abrupt transitions from slow to fast solar wind represent a concern for the space weather forecasting community. They may cause geomagnetic storms that can eventually affect systems in orbit and on the ground. Therefore, the probability distribution function (PDF) model was improved to predict enhancements in the solar wind speed. New probability distribution functions allow for the prediction of the peak amplitude and the time to the peak while providing an interval of uncertainty on the prediction. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. This represents a considerable improvement upon the first version of the PDF model. A direct comparison with the Wang-Sheeley-Arge model shows that the PDF model is quite similar, except that it leads to fewer false positive predictions and misses fewer events, especially when the peak reaches very high speeds.

  19. Properties of galaxy halos in Clusters and Voids

    CERN Document Server

    Antonuccio-Delogu, V; Pagliaro, A; Van Kampen, E; Colafrancesco, Sergio; Germaná, A; Gambera, M

    2000-01-01

    We use the results of a high resolution N-body simulation to investigate the role of the environment on the formation and evolution of galaxy-sized halos. Starting from a set of constrained initial conditions, we have produced a final configuration hosting a double cluster in one octant and a large void extending over two octants of the simulation box. In this paper we concentrate on {\em gravitationally bound} galaxy-sized halos extracted from the two regions. Exploiting the high mass resolution of our simulation ($m_{body} = 2.1\times 10^{9} h^{-1} M_{\odot}$), we focus on halos with a relatively small mass: $5\times 10^{10} \leq M \leq 2\times 10^{12} M_{\odot}$. We present results for two statistics: the relationship between 1-D velocity dispersion and mass and the probability distribution of the spin parameter $P(\lambda)$. We do find a clear difference between halos lying in overdense regions and in voids. The velocity dispersion-mass relationship is well described by the Truncated Isothermal Sphere (TIS) model introduced ...

  20. Solitary waves for the nonlinear Schrödinger problem with the probability distribution function in the stochastic input case

    Science.gov (United States)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2017-08-01

    This work deals with the construction of the exact traveling wave solutions for the nonlinear Schrödinger equation by the new Riccati-Bernoulli Sub-ODE method. Additionally, we apply this method in order to study the random solutions by finding the probability distribution function when the coefficient in our problem is a random variable. The travelling wave solutions of many physically and mathematically important equations are expressed in terms of hyperbolic, trigonometric and rational functions. We discuss our method in both the deterministic case and the random case, taking a beta distribution for the random input.

  1. Classical probability density distributions with uncertainty relations for ground states of simple non-relativistic quantum-mechanical systems

    Science.gov (United States)

    Radożycki, Tomasz

    2016-11-01

    The probability density distributions for the ground states of certain model systems in quantum mechanics and for their classical counterparts are considered. It is shown that the classical distributions are remarkably improved by incorporating into them the Heisenberg uncertainty relation between position and momentum. Even a crude form of this incorporation makes the agreement between classical and quantum distributions unexpectedly good, except in the small region where classical momenta are large. It is demonstrated that a slight refinement of this form makes the classical distribution very similar to the quantum one over the whole space. The results obtained are much better than those from the WKB method. The paper is devoted to ground states, but the method applies to excited states too.
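For the harmonic oscillator both densities are available in closed form, which makes the idea easy to reproduce. The sketch below (natural units ħ = m = ω = 1) convolves the classical ground-state density with a Gaussian of the ground-state width as a crude stand-in for the uncertainty-relation correction; the paper's actual scheme differs in detail.

```python
import numpy as np

# Natural units (hbar = m = omega = 1): ground-state energy 1/2, amplitude A = 1
x = np.arange(-4.0, 4.0, 1e-3)
dx = x[1] - x[0]

# Classical cell probabilities, integrated exactly over each grid cell:
# P(a < x < b) = (arcsin(b) - arcsin(a)) / pi for |x| < A = 1
edges = np.clip(np.concatenate(([x[0] - dx / 2], x + dx / 2)), -1.0, 1.0)
p_cl = np.diff(np.arcsin(edges)) / np.pi        # sums to 1, diverges at +-1

p_qm = np.exp(-x ** 2) / np.sqrt(np.pi) * dx    # quantum ground state |psi|^2

# Crude uncertainty smearing: convolve with a Gaussian of the ground-state width
kernel = np.exp(-x ** 2) / np.sqrt(np.pi) * dx
p_smeared = np.convolve(p_cl, kernel, mode="same")
```

The raw classical density diverges at the turning points x = ±1; the smeared version is finite everywhere, which is the qualitative improvement the paper quantifies.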

  2. A Probability Distribution of Surface Elevation for Wind Waves in Terms of the Gram-Charlier Series

    Institute of Scientific and Technical Information of China (English)

    黄传江; 戴德君; 王伟; 钱成春

    2003-01-01

    Laboratory experiments are conducted to study the probability distribution of surface elevation for wind waves and the convergence is discussed of the Gram-Charlier series in describing the surface elevation distribution. Results show that the agreement between the Gram-Charlier series and the observed distribution becomes better and better as the truncated order of the series increases in a certain range, which is contrary to the phenomenon observed by Huang and Long (1980). It is also shown that the Gram-Charlier series is sensitive to the anomalies in the data set which will make the agreement worse if they are not preprocessed appropriately. Negative values of the probability distribution expressed by the Gram-Charlier series in some ranges of surface elevations are discussed, but the absolute values of the negative values as well as the ranges of their occurrence become smaller gradually as more and more terms are included. Therefore the negative values will have no evident effect on the form of the whole surface elevation distribution when the series is truncated at higher orders. Furthermore, a simple recurrence formula is obtained to calculate the coefficients of the Gram-Charlier series in order to extend the Gram-Charlier series to high orders conveniently.
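The truncated Gram-Charlier Type A series, and the negative lobes discussed above, are easy to reproduce. A short Python sketch for a standardized variable, truncated after the fourth-order term (our illustration):

```python
import numpy as np

def gram_charlier_pdf(x, skew=0.0, exkurt=0.0):
    """Gram-Charlier Type A density for a standardized variable, truncated
    after the fourth-order term."""
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    he3 = x ** 3 - 3 * x                # probabilists' Hermite polynomials
    he4 = x ** 4 - 6 * x ** 2 + 3
    return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

x = np.arange(-8.0, 8.0, 1e-3)
f = gram_charlier_pdf(x, skew=1.5)      # deliberately large skewness
area = f.sum() * 1e-3                   # the correction terms integrate to 0
```

The correction terms integrate to zero, so the area stays at unity, but for a large skewness coefficient the density dips below zero in the tails, exactly the behavior the abstract describes as shrinking when higher-order terms are included.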

  3. Two new methods to detect cosmic voids without density measurements

    CERN Document Server

    Elyiv, Andrii; Pollina, Giorgia; Baldi, Marco; Branchini, Enzo; Cimatti, Andrea; Moscardini, Lauro

    2014-01-01

    Cosmic voids are effective cosmological probes to discriminate among competing world models. Their precise and unbiased identification is a prerequisite to perform accurate observational tests. The identification is generally based on density or geometry criteria that, because of their very nature, are prone to shot noise errors. In this work we propose two new void finders that are based on dynamical and clustering criteria to select voids in the Lagrangian coordinates and minimise the impact of sparse sampling. The first approach exploits the Zeldovich approximation to trace back in time the orbits of galaxies located in the voids and their surroundings, whereas the second uses the observed galaxy-galaxy correlation function to relax the objects' spatial distribution to homogeneity and isotropy. In both cases voids are defined as regions of the negative velocity divergence in Lagrangian coordinates, that can be regarded as sinks of the back-in-time streamlines of the mass tracers. To assess the performance ...

  4. Voids and the Cosmic Web: cosmic depression & spatial complexity

    Science.gov (United States)

    van de Weygaert, Rien

    2016-10-01

    Voids form a prominent aspect of the Megaparsec distribution of galaxies and matter. Not only do they represent a key constituent of the Cosmic Web, they are also one of the cleanest probes and measures of global cosmological parameters. The shape and evolution of voids are highly sensitive to the nature of dark energy, while their substructure and galaxy population provide a direct key to the nature of dark matter. Also, the pristine environment of void interiors is an important testing ground for our understanding of environmental influences on galaxy formation and evolution. In this paper, we review the key aspects of the structure and dynamics of voids, with a particular focus on the hierarchical evolution of the void population. We demonstrate how the rich structural pattern of the Cosmic Web is related to the complex evolution and buildup of voids.

  5. Probability Distributions for Cyclone Key Parameters and Cyclonic Wind Speed for the East Coast of Indian Region

    Directory of Open Access Journals (Sweden)

    Pradeep K. Goyal

    2011-09-01

    Full Text Available This paper presents a study of the probabilistic distribution of key cyclone parameters and of cyclonic wind speed, based on the cyclone track records obtained from the India Meteorological Department for the east coast region of India. The dataset of historical landfalling storm tracks in India from 1975–2007, with latitude/longitude and landfall locations, is used to map the cyclone tracks in the region of study. Statistical tests were performed to find a best-fit distribution to the track data for each cyclone parameter. These parameters include the central pressure difference, the radius of maximum wind speed, the translation velocity, and the track angle with the site; they are used to generate digitally simulated cyclones using wind field simulation techniques. For this, different sets of values of all the cyclone key parameters are generated randomly from their probability distributions. Using these simulated values of the cyclone key parameters, the distribution of wind velocity at a particular site is obtained. The same distribution of wind velocity at the site is also obtained from actual track records and from the distributions of the cyclone key parameters published in the literature. The simulated distribution is compared with the wind speed distributions obtained from actual track records. The findings are useful in cyclone disaster mitigation.
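The simulation loop described here, drawing each key parameter from its fitted distribution and pushing the draws through a wind-field relation, can be sketched generically. Everything below is a placeholder: neither the distributions nor the wind relation are the paper's fitted models.

```python
import random

rng = random.Random(42)

def simulate_site_speeds(n=10000):
    """Toy Monte Carlo in the spirit of the study: draw each cyclone key
    parameter from an assumed fitted distribution and push the draws through
    a purely illustrative wind relation."""
    speeds = []
    for _ in range(n):
        dp = rng.lognormvariate(3.2, 0.5)     # central pressure deficit, hPa
        rmax = rng.weibullvariate(40.0, 2.0)  # radius of max wind, km (would
                                              # enter a full wind-field model)
        vt = rng.gauss(5.0, 1.5)              # translation speed, m/s
        speeds.append(4.4 * dp ** 0.5 + 0.5 * max(vt, 0.0))
    return sorted(speeds)

speeds = simulate_site_speeds()
median, p95 = speeds[len(speeds) // 2], speeds[int(0.95 * len(speeds))]
```

The empirical quantiles of `speeds` are the site wind-speed distribution that the paper compares against the distribution derived from actual track records.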

  6. Determination of the equivalent intergranular void ratio - Application to the instability and the critical state of silty sand

    Science.gov (United States)

    Nguyen, Trung-Kien; Benahmed, Nadia; Hicher, Pierre-Yves

    2017-06-01

    This paper presents an experimental study of the mechanical response of a natural Camargue silty sand. The analysis of the test results uses the equivalent intergranular void ratio instead of the global void ratio. Calculating the equivalent intergranular void ratio requires determining the parameter b, which represents, physically, the fraction of active fines participating in the force-chain network, and hence in the strength of the soil. A new formula for determining the parameter b, based on the coordination number distribution and a probability calculation, is proposed. The relationship developed was validated through back-analysis of datasets published in the literature on the effect of fines content on silty sand behavior. It is shown that the equivalent intergranular void ratio calculated with the b value obtained from the new formula correlates strongly not only with the critical state but also with the onset of instability of various silty sands, in terms of peak deviator stress, peak stress ratio or cyclic resistance. It is therefore suggested that the use of the equivalent void ratio concept together with the new formula for b is highly desirable in predicting silty sand behavior.
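The abstract does not reproduce the new formula for b, but the equivalent intergranular void ratio itself is standard (Thevanayagam-style). A minimal sketch with b supplied as an input:

```python
def equivalent_void_ratio(e, fc, b):
    """Equivalent intergranular void ratio (Thevanayagam-style):
        e* = (e + (1 - b) * fc) / (1 - (1 - b) * fc)
    e  -- global void ratio
    fc -- fines content as a mass fraction (0..1)
    b  -- fraction of fines active in the force-chain network (0..1);
          the paper derives b from coordination-number statistics, here
          it is simply an input."""
    inactive = (1.0 - b) * fc
    return (e + inactive) / (1.0 - inactive)

# Limiting cases: b = 1 (all fines active) recovers the global void ratio,
# b = 0 treats the fines entirely as void space.
e_star_all = equivalent_void_ratio(0.7, 0.2, 1.0)   # -> 0.7
e_star_none = equivalent_void_ratio(0.7, 0.2, 0.0)  # ~ 1.125
```

The two limits make the physical meaning of b concrete: the fraction of fines that carry load rather than merely fill voids.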

  8. Research on the behavior of fiber orientation probability distribution function in the planar flows

    Institute of Scientific and Technical Information of China (English)

    ZHOU Kun; LIN Jian-zhong

    2005-01-01

    The equation of the two-dimensional fiber direction vector was solved theoretically to give the fiber orientation distribution in simple shear flow, flow with two-direction shear, extensional flow and arbitrary planar incompressible flow. The Fokker-Planck equation was solved numerically to validate the theoretical solutions. The stable orientation and the orientation period of the fibers were obtained. The results showed that the fiber orientation distribution depends on the relative, not the absolute, magnitude of the rate of strain of the flow. The effect of the fiber aspect ratio on the orientation distribution is insignificant in most conditions except the simple shear case. It was proved that the results for a planar flow can be generalized to the case of a 3-D fiber direction vector.
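The orientation period in simple shear follows the classical Jeffery result, which the planar direction-vector equation reproduces. A numerical check (our own sketch; the angle convention and parameters are illustrative):

```python
import math

def jeffery_period(r, gdot, dt=1e-4):
    """Integrate the planar Jeffery equation
        dphi/dt = gdot / (r**2 + 1) * (r**2 * cos(phi)**2 + sin(phi)**2)
    with forward Euler and return the time for one full rotation."""
    r2, phi, t = r * r, 0.0, 0.0
    while phi < 2.0 * math.pi:
        dphi = gdot / (r2 + 1.0) * (r2 * math.cos(phi) ** 2 + math.sin(phi) ** 2)
        phi += dphi * dt
        t += dt
    return t

T_num = jeffery_period(5.0, 1.0)             # aspect ratio 5, unit shear rate
T_exact = 2.0 * math.pi * (5.0 + 1.0 / 5.0)  # closed form 2*pi*(r + 1/r)/gdot
```

The integrated period matches the closed form, and the fiber lingers where dφ/dt is smallest, which is the origin of the strongly peaked orientation distribution in simple shear.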

  9. Multiparameter probability distributions for heavy rainfall modeling in extreme southern Brazil

    Directory of Open Access Journals (Sweden)

    Samuel Beskow

    2015-09-01

    New hydrological insights for the region: The Anderson–Darling and Filliben tests were the most restrictive in this study. Based on the Anderson–Darling test, it was found that the Kappa distribution presented the best performance, followed by the GEV. This finding provides evidence that these multiparameter distributions result, for the region of study, in greater accuracy for the generation of intensity–duration–frequency curves and the prediction of peak streamflows and design hydrographs. As a result, this finding can support the design of hydraulic structures and flood management in river basins.
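The model-ranking exercise described above can be imitated with SciPy; here the KS statistic stands in for the Anderson-Darling ranking used in the paper, and synthetic "annual maxima" replace the observed series (all numbers illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic "annual maxima" drawn from a GEV (scipy's genextreme)
maxima = stats.genextreme.rvs(c=-0.2, loc=60.0, scale=15.0, size=200,
                              random_state=rng)

# Fit two candidate distributions and compare goodness of fit; the KS
# statistic stands in for the Anderson-Darling ranking used in the paper
gev_params = stats.genextreme.fit(maxima)
gum_params = stats.gumbel_r.fit(maxima)
ks_gev = stats.kstest(maxima, "genextreme", args=gev_params).statistic
ks_gum = stats.kstest(maxima, "gumbel_r", args=gum_params).statistic
```

The same loop extended over Kappa, GEV, Gumbel and the other candidates, with a proper Anderson-Darling statistic, reproduces the kind of ranking the study reports.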

  10. "Dark energy" in the Local Void

    Science.gov (United States)

    Villata, M.

    2012-05-01

    The unexpected discovery of the accelerated cosmic expansion in 1998 has filled the Universe with the embarrassing presence of an unidentified "dark energy", or cosmological constant, devoid of any physical meaning. While this standard cosmology seems to work well at the global level, improved knowledge of the kinematics and other properties of our extragalactic neighborhood indicates the need for a better theory. We investigate whether the recently suggested repulsive-gravity scenario can account for some of the features that are unexplained by the standard model. Through simple dynamical considerations, we find that the Local Void could host an amount of antimatter (~5×10^15 M_⊙) roughly equivalent to the mass of a typical supercluster, thus restoring the matter-antimatter symmetry. The antigravity field produced by this "dark repulsor" can explain the anomalous motion of the Local Sheet away from the Local Void, as well as several other properties of nearby galaxies that seem to require void evacuation and structure formation much faster than expected from the standard model. At the global cosmological level, gravitational repulsion from antimatter hidden in voids can provide more than enough potential energy to drive both the cosmic expansion and its acceleration, with no need for an initial "explosion" and dark energy. Moreover, the discrete distribution of these dark repulsors, in contrast to the uniformly permeating dark energy, can also explain dark flows and other recently observed excessive inhomogeneities and anisotropies of the Universe.

  11. Is extrapair mating random? On the probability distribution of extrapair young in avian broods

    NARCIS (Netherlands)

    Brommer, Jon E.; Korsten, Peter; Bouwman, Karen A.; Berg, Mathew L.; Komdeur, Jan

    2007-01-01

    A dichotomy in female extrapair copulation (EPC) behavior, with some females seeking EPC and others not, is inferred if the observed distribution of extrapair young (EPY) over broods differs from a random process on the level of individual offspring (binomial, hypergeometrical, or Poisson). A review

  12. Ruin Probabilities and Aggregate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate...

  13. Ground impact probability distribution for small unmanned aircraft in ballistic descent

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2017-01-01

    Safety is a key factor in all aviation, and while years of development has made manned aviation relatively safe, the same has yet to happen for unmanned aircraft. However, the rapid development of unmanned aircraft technology means that the range of commercial and scientific applications is growing...... equally rapid. At the same time the trend in national and international regulations for unmanned aircraft is to take a risk-based approach, effectively requiring risk assessment for every flight operation. This work addresses the growing need for methods for quantitatively evaluating individual flights...... by modelling the consequences of a ballistic descent of an unmanned aircraft as a result of a major inflight incident. The presented model is a probability density function for the ground impact area based on a second order drag model with probabilistic assumptions on the least well-known parameters...
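A second-order (quadratic) drag descent model with Monte Carlo sampling of the least well-known parameter can be sketched briefly; the model below is a two-dimensional toy in the spirit of the paper, not its actual probability density function.

```python
import math
import random

def impact_distance(v0, h0, cda_over_m, dt=0.005, g=9.81, rho=1.225):
    """Planar ballistic descent with second-order (quadratic) drag,
    a = -g e_z - k |v| v with k = 0.5 * rho * Cd * A / m, integrated with
    forward Euler; returns the horizontal distance at ground impact."""
    k = 0.5 * rho * cda_over_m
    x, z, vx, vz = 0.0, h0, v0, 0.0
    while z > 0.0:
        speed = math.hypot(vx, vz)
        vx -= k * speed * vx * dt
        vz -= (g + k * speed * vz) * dt
        x += vx * dt
        z += vz * dt
    return x

rng = random.Random(3)
# Monte Carlo over the least well-known parameter (here the drag term Cd*A/m)
ranges = sorted(impact_distance(20.0, 100.0, rng.uniform(0.005, 0.05))
                for _ in range(500))
```

Every sampled trajectory falls short of the drag-free range of about 90 m; the spread of `ranges` is a crude one-dimensional analogue of the ground impact distribution the paper derives.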

  14. A Cosmic Void Catalog of SDSS DR12 BOSS Galaxies

    CERN Document Server

    Mao, Qingqing; Scherrer, Robert J; Scoccimarro, Roman; Tinker, Jeremy L; McBride, Cameron K; Neyrinck, Mark C; Schneider, Donald P; Pan, Kaike; Bizyaev, Dmitry; Malanushenko, Elena; Malanushenko, Viktor

    2016-01-01

    We present a cosmic void catalog using the large-scale structure galaxy catalog from the Baryon Oscillation Spectroscopic Survey (BOSS). This galaxy catalog is part of the Sloan Digital Sky Survey (SDSS) Data Release 12 and is the final catalog of SDSS-III. We take into account the survey boundaries, masks, and angular and radial selection functions, and apply the ZOBOV void finding algorithm to the galaxy catalog. After making quality cuts to ensure that the voids represent real underdense regions, we identify 1228 voids with effective radii spanning the range 20-100 Mpc/h and with central densities that are, on average, 30% of the mean sample density. We discuss the basic statistics of voids, such as their size and redshift distributions, and measure the radial density profile of the voids via a stacking technique. In addition, we construct mock void catalogs from 1000 mock galaxy catalogs, and find that the properties of BOSS voids are in good agreement with those in the mock catalogs. We compare the stella...

  15. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    Science.gov (United States)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  16. A near-infrared SETI experiment: probability distribution of false coincidences

    Science.gov (United States)

    Maire, Jérôme; Wright, Shelley A.; Werthimer, Dan; Treffers, Richard R.; Marcy, Geoffrey W.; Stone, Remington P. S.; Drake, Frank; Siemion, Andrew

    2014-07-01

    A Search for Extraterrestrial Intelligence (SETI), based on the possibility of interstellar communication via laser signals, is being designed to extend the search into the near-infrared spectral region (Wright et al., this conference). The dedicated near-infrared (900 to 1700 nm) instrument takes advantage of a new generation of avalanche photodiodes (APDs), based on internal discrete amplification. These discrete APD (DAPD) detectors have a high-speed response suited to laser light pulse detection in our experiment. Detection criteria are defined to optimize the trade-off between high detection efficiency and a low rate of false positive coincident signals, which can be produced by detector dark noise, background light, cosmic rays, and astronomical sources. We investigate experimentally how false coincidence rates depend on the number of detectors in parallel, and on the signal pulse height and width. We also look into the corresponding threshold for each of the signals to optimize the sensitivity while also reducing the false coincidence rates. Lastly, we discuss the analytical solution used to predict the probability of laser pulse detection with multiple detectors.
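For independent detectors with Poisson-distributed dark counts, the standard accidental-coincidence estimate shows why adding detectors in parallel suppresses false positives so strongly. A sketch using the textbook formula (the paper's analytic treatment may be more detailed):

```python
def accidental_rate(n_det, rate_hz, window_s):
    """Expected rate of accidental n-fold coincidences among independent
    detectors with identical Poisson background rate r and coincidence
    window tau:  R_n = n * r**n * tau**(n - 1), valid for r * tau << 1."""
    return n_det * rate_hz ** n_det * window_s ** (n_det - 1)

# Example: 1 kHz singles rate and a 5 ns coincidence window
r2 = accidental_rate(2, 1e3, 5e-9)  # two-fold accidentals, 0.01 Hz
r3 = accidental_rate(3, 1e3, 5e-9)  # three-fold accidentals, far rarer
```

Each extra detector multiplies the accidental rate by roughly r·τ, here 5×10⁻⁶, which is the scaling behind requiring coincidences across multiple DAPDs.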

  17. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    Science.gov (United States)

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
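The motivation for the Student's t in the sifting step, robustness of a fitted location to values far from the mean, can be seen in one dimension. A SciPy illustration (unrelated to the BEMD code itself; the degrees of freedom are fixed at 4 for simplicity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 200), [500.0]])  # one gross outlier

naive_mean = data.mean()                  # dragged far from 0 by the outlier
# Location fit under a Student's t likelihood (df fixed at 4): the heavy
# tails let the fit discount the outlier
df_fixed, loc_t, scale_t = stats.t.fit(data, f0=4.0)
```

The t-based location stays near the bulk of the data while the sample mean does not, which is the property exploited when computing the mean envelope in the tBEMD.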

  18. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  19. Confidence limits with multiple channels and arbitrary probability distributions for sensitivity and expected background

    CERN Document Server

    Perrotta, A

    2002-01-01

    A MC method is proposed to compute upper limits, in a pure Bayesian approach, when the errors associated with the experimental sensitivity and expected background content are not Gaussian distributed or not small enough to apply the usual approximations. It is relatively easy to extend the procedure to the multichannel case (for instance when different decay branching ratios, luminosities or experiments have to be combined). Some of the searches for supersymmetric particles performed in the DELPHI experiment at the LEP electron-positron collider use such a procedure to propagate systematics into the calculation of cross-section upper limits. One of these searches is described as an example. (6 refs).
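The essence of the procedure, marginalizing an arbitrary (not necessarily Gaussian) background distribution inside a Bayesian Poisson upper limit, fits in a few lines. A sketch with a flat signal prior (our own illustration; the DELPHI implementation also handles sensitivity errors and multiple channels):

```python
import numpy as np

def bayes_upper_limit(n_obs, bkg_samples, cl=0.95, s_max=40.0, ds=1e-3):
    """Bayesian upper limit on a Poisson signal s with a flat prior, with an
    arbitrary background distribution marginalized by Monte Carlo: the
    posterior is the background-averaged Poisson likelihood of n_obs."""
    s = np.arange(0.0, s_max, ds)
    mu = s[:, None] + np.asarray(bkg_samples, dtype=float)[None, :]
    like = np.mean(np.exp(-mu) * mu ** n_obs, axis=1)  # up to constants in s
    post = np.cumsum(like)
    post /= post[-1]
    return float(s[np.searchsorted(post, cl)])

# Cross-check: with zero background, the flat-prior 95% limit for n = 3
ul = bayes_upper_limit(3, [0.0])
```

With a delta-function background at zero the limit reproduces the textbook flat-prior value for n = 3, about 7.75 events; replacing `bkg_samples` with draws from any distribution propagates that uncertainty into the limit.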

  20. Development of probability distributions for regional climate change from uncertain global mean warming and an uncertain scaling relationship

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available To produce probability distributions for regional climate change in surface temperature and precipitation, a probability distribution for global mean temperature increase has been combined with the probability distributions for the appropriate scaling variables, i.e. the changes in regional temperature/precipitation per degree global mean warming. Each scaling variable is assumed to be normally distributed. The uncertainty of the scaling relationship arises from systematic differences between the regional changes from global and regional climate model simulations and from natural variability. The contributions of these sources of uncertainty to the total variance of the scaling variable are estimated from simulated temperature and precipitation data in a suite of regional climate model experiments conducted within the framework of the EU-funded project PRUDENCE, using an Analysis Of Variance (ANOVA). For the area covered in the 2001–2004 EU-funded project SWURVE, five case study regions (CSRs) are considered: NW England, the Rhine basin, Iberia, the Jura lakes (Switzerland) and the Mauvoisin dam (Switzerland). The resulting regional climate changes for 2070–2099 vary quite significantly between CSRs, between seasons and between meteorological variables. For all CSRs, the expected warming in summer is higher than that expected for the other seasons. This summer warming is accompanied by a large decrease in precipitation. The uncertainty of the scaling ratios for temperature and precipitation is relatively large in summer because of the differences between regional climate models. Differences between the spatial climate-change patterns of global climate model simulations make significant contributions to the uncertainty of the scaling ratio for temperature. However, no meaningful contribution could be found for the scaling ratio for precipitation due to the small number of global climate models in the PRUDENCE project and natural variability, which is
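The combination step, an uncertain global warming multiplied by an uncertain scaling ratio, is a one-line Monte Carlo. A sketch with made-up numbers (the paper's values are region- and season-specific):

```python
import random

rng = random.Random(11)
# Regional change = (global mean warming) x (regional scaling ratio), both
# treated as independent normals as in the paper; the numbers are made up.
draws = sorted(rng.gauss(3.0, 0.5) * rng.gauss(1.2, 0.3)
               for _ in range(100000))
mean = sum(draws) / len(draws)            # analytically 3.0 * 1.2 = 3.6
p05, p95 = draws[5000], draws[95000]      # a 90% probability interval
```

The product of two normals is not itself normal, which is why the paper works with the full sampled distribution rather than just propagating variances.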

  1. How to use MATLAB to fit the ex-Gaussian and other probability functions to a distribution of response times

    Directory of Open Access Journals (Sweden)

    Denis Cousineau

    2008-03-01

    This article discusses how to characterize response time (RT) frequency distributions in terms of probability functions and how to implement the necessary analysis tools using MATLAB. The first part of the paper discusses the general principles of maximum likelihood estimation. A detailed implementation that allows fitting the popular ex-Gaussian function is then presented, followed by the results of a Monte Carlo study that shows the validity of the proposed approach. Although the main focus is the ex-Gaussian function, the general procedure described here can be used to estimate the best-fitting parameters of various probability functions. The proposed computational tools, written in MATLAB source code, are available through the Internet.
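The fitting recipe the article describes can be sketched in Python with SciPy instead of MATLAB (the article's own tools are MATLAB code; the parameter values and simulated RTs below are illustrative). SciPy's `exponnorm` is the ex-Gaussian under the reparameterization K = τ/σ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated response times (seconds): a Gaussian stage plus an exponential stage.
mu, sigma, tau = 0.40, 0.05, 0.15
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale
print(loc, scale, tau_hat)   # maximum likelihood estimates of mu, sigma, tau
```

The same maximum likelihood machinery (`.fit`) applies to any of SciPy's continuous distributions, mirroring the article's point that the procedure generalizes beyond the ex-Gaussian.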

  2. Codon information value and codon transition-probability distributions in short-term evolution

    Science.gov (United States)

    Jiménez-Montaño, M. A.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Ramos-Fernández, A.

    2016-07-01

    To understand the way the Genetic Code and the physical-chemical properties of coded amino acids affect accepted amino acid substitutions in short-term protein evolution, taking into account only overall amino acid conservation, we consider an underlying codon-level model. This model employs codon pair-substitution frequencies from an empirical matrix in the literature, modified for single-base mutations only. Ordering the degenerated codons according to their codon information value (Volkenstein, 1979), we found that three-fold and most of four-fold degenerated codons, which have low codon values, were best fitted to rank-frequency distributions with constant failure rate (exponentials). In contrast, almost all two-fold degenerated codons, which have high codon values, were best fitted to rank-frequency distributions with variable failure rate (inverse power-laws). Six-fold degenerated codons are considered to be doubly assigned. The exceptional behavior of some codons, including non-degenerate codons, is discussed.

  3. Probability density functions for description of diameter distribution in thinned stands of Tectona grandis

    Directory of Open Access Journals (Sweden)

    Julianne de Castro Oliveira

    2012-06-01

    The objective of this study was to evaluate the effectiveness of the Fatigue Life, Frechet, Gamma, Generalized Gamma, Generalized Logistic, Log-logistic, Nakagami, Beta, Burr, Dagum, Weibull and Hyperbolic distributions in describing diameter distribution in teak stands subjected to thinning at different ages. Data used in this study originated from 238 rectangular permanent plots 490 m2 in size, installed in stands of Tectona grandis L. f. in Mato Grosso state, Brazil. The plots were measured at ages 34, 43, 55, 68, 81, 82, 92, 104, 105, 120, 134 and 145 months on average. Thinning was done on two occasions: the first was systematic at age 81 months, with a basal area intensity of 36%, while the second was selective at age 104 months on average and removed poorer trees, reducing basal area by 30%. Fittings were assessed by the Kolmogorov-Smirnov goodness-of-fit test. The Log-logistic (3P), Burr (3P), Hyperbolic (3P), Burr (4P), Weibull (3P), Hyperbolic (2P), Fatigue Life (3P) and Nakagami functions provided more satisfactory values for the K-S test than the more commonly used Weibull function.
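A minimal sketch of this fit-and-test workflow, using SciPy rather than the authors' software and entirely synthetic diameter data (all parameter values below are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical DBH sample (cm) drawn from a 3-parameter Weibull.
dbh = stats.weibull_min.rvs(c=2.5, loc=5.0, scale=12.0, size=300, random_state=rng)

# Fit the 3-parameter Weibull by maximum likelihood ...
c, loc, scale = stats.weibull_min.fit(dbh)
# ... and assess the fit with the Kolmogorov-Smirnov statistic.
D, p = stats.kstest(dbh, 'weibull_min', args=(c, loc, scale))
print(D, p)
```

One caveat worth noting: when the parameters are estimated from the same sample being tested, the classical K-S p-value is optimistic, so in practice a bootstrap or Lilliefors-type correction is advisable.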

  4. Testing Gravity using Void Profiles

    Science.gov (United States)

    Cai, Yan-Chuan; Padilla, Nelson; Li, Baojiu

    2016-10-01

    We investigate void properties in f(R) models using N-body simulations, focusing on their differences from General Relativity (GR) and their detectability. In the Hu-Sawicki f(R) modified gravity (MG) models, the halo number density profiles of voids are not distinguishable from GR. In contrast, the same f(R) voids are more empty of dark matter, and their profiles are steeper. This can in principle be observed by weak gravitational lensing of voids, for which the combination of a spectroscopic redshift and a lensing photometric redshift survey over the same sky is required. Neglecting the lensing shape noise, the f(R) model parameter amplitudes fR0=10-5 and 10-4 may be distinguished from GR using the lensing tangential shear signal around voids by 4 and 8 σ for a volume of 1 (Gpc/h)3. The line-of-sight projection of large-scale structure is the main systematics that limits the significance of this signal for the near future wide angle and deep lensing surveys. For this reason, it is challenging to distinguish fR0=10-6 from GR. We expect that this can be overcome with larger volume. The halo void abundance being smaller and the steepening of dark matter void profiles in f(R) models are unique features that can be combined to break the degeneracy between fR0 and σ8.

  5. Probability distribution of the free energy of a directed polymer in a random medium

    Science.gov (United States)

    Brunet, Éric; Derrida, Bernard

    2000-06-01

    We calculate exactly the first cumulants of the free energy of a directed polymer in a random medium for the geometry of a cylinder. By using the fact that the nth moment of the partition function is given by the ground-state energy of a quantum problem of n interacting particles on a ring of length L, we write an integral equation that allows us to expand these moments in powers of the strength of the disorder γ or in powers of n. For small n with n ~ (Lγ)^(-1/2), the moments take a scaling form which allows us to describe all the fluctuations of order 1/L of the free energy per unit length of the directed polymer. The distribution of these fluctuations is the same as the one found recently in the asymmetric exclusion process, indicating that it is characteristic of all systems described by the Kardar-Parisi-Zhang equation in 1+1 dimensions.

  6. Confidence Limits with Multiple Channels and Arbitrary Probability Distributions for Sensitivity and Expected Background

    Science.gov (United States)

    Perrotta, Andrea

    A MC method is proposed to compute upper limits, in a pure Bayesian approach, when the errors associated to the experimental sensitivity and to the expected background content are not Gaussian distributed or not small enough to apply the usual approximations. It is relatively easy to extend the procedure to the multichannel case (for instance when different decay branchings, or luminosities or experiments have to be combined). Some of the searches for supersymmetric particles performed in the DELPHI experiment at the LEP electron-positron collider use such a procedure to propagate the systematics into the calculation of the cross-section upper limits. One of these searches will be described as an example.

  7. Maxwell and the normal distribution: A colored story of probability, independence, and tendency toward equilibrium

    Science.gov (United States)

    Gyenis, Balázs

    2017-02-01

    We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.

  8. PRECISION COSMOGRAPHY WITH STACKED VOIDS

    Energy Technology Data Exchange (ETDEWEB)

    Lavaux, Guilhem [Department of Physics, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Wandelt, Benjamin D. [UPMC Univ Paris 06, UMR 7095, Institut d' Astrophysique de Paris, 98 bis, boulevard Arago, 75014 Paris (France)

    2012-08-01

    We present a purely geometrical method for probing the expansion history of the universe from the observation of the shape of stacked voids in spectroscopic redshift surveys. Our method is an Alcock-Paczynski (AP) test based on the average sphericity of voids, premised on the local isotropy of the universe. It works by comparing the temporal extent of cosmic voids along the line of sight with their angular, spatial extent. We describe the algorithm that we use to detect and stack voids in redshift shells on the light cone and test it on mock light cones produced from N-body simulations. We establish a robust statistical model for estimating the average stretching of voids in redshift space and quantify the contamination by peculiar velocities. Finally, assuming that the void statistics that we derive from N-body simulations are preserved when considering galaxy surveys, we assess the capability of this approach to constrain dark energy parameters. We report this assessment in terms of the figure of merit (FoM) of the dark energy task force, and in particular of the proposed Euclid mission, which is particularly suited to this technique since it is a spectroscopic survey. The FoM due to stacked voids from the Euclid wide survey may double that of all other dark energy probes derived from Euclid data alone (combined with Planck priors). In particular, voids seem to outperform baryon acoustic oscillations by an order of magnitude. This result is consistent with simple estimates based on mode counting. The AP test based on stacked voids may be a significant addition to the portfolio of major dark energy probes and its potentialities must be studied in detail.

  9. The Homotopic Probability Distribution and the Partition Function for the Entangled System Around a Ribbon Segment Chain

    Institute of Scientific and Technical Information of China (English)

    QIAN Shang-Wu; GU Zhi-Yu

    2001-01-01

    Using Feynman's path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_nL for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width 2a of the ribbon segment chain increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.

  10. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the Potts model.

    Science.gov (United States)

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy, S_VB = ⟨N_c⟩ ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.

  11. A Method for Justification of the View of Observables in Quantum Mechanics and Probability Distributions in Phase Space

    CERN Document Server

    Beniaminov, E M

    2001-01-01

    We consider some corollaries of certain hypotheses on the observation process of microphenomena. We show that an enlargement of the phase space and of its motion group, together with an account of the diffusion motions of microsystems in the enlarged space (motions which act by small random translations along the enlarged group), leads to observable quantum effects. This approach enables one to recover probability distributions in the phase space for wave functions. The parameters of the model considered here are estimated on the basis of the Lamb shift in the spectrum of the hydrogen atom.

  12. Inertial particles distribute in turbulence as Poissonian points with random intensity inducing clustering and supervoiding

    CERN Document Server

    Schmidt, Lukas; Holzner, Markus

    2016-01-01

    This work considers the distribution of inertial particles in turbulence using the point-particle approximation. We demonstrate that the random point process formed by the positions of particles in space is a Poisson point process with log-normal random intensity (a "log Gaussian Cox process", or LGCP). The probability of having a finite number of particles in a small volume is given in terms of the characteristic function of a log-normal distribution. Corrections to the previously derived continuum-limit statistics of particle concentration, due to the discreteness of the number of particles, are provided; these are relevant for dealing with experimental or numerical data. The probability of having regions without particles, i.e. voids, is larger for inertial particles than for tracer particles, whose voids are distributed according to Poisson processes. Further, the probability of having large voids decays only log-normally with size. This shows that particles cluster, leaving voids behind. At scales where the...
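The contrast between tracer (pure Poisson) and inertial (log-normal random intensity) void probabilities can be illustrated numerically; the mean count and log-variance below are arbitrary choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
nbar, sigma2, M = 10.0, 1.0, 200_000   # mean count per cell, log-variance, samples
# Log-normal random intensity with mean nbar: ln Lambda ~ N(ln nbar - sigma2/2, sigma2).
lam = rng.lognormal(np.log(nbar) - sigma2 / 2, np.sqrt(sigma2), M)

p_void_lgcp = np.exp(-lam).mean()      # E[P(N=0 | Lambda)] = E[exp(-Lambda)]
p_void_poisson = np.exp(-nbar)         # tracer benchmark: fixed-intensity Poisson
print(p_void_lgcp, p_void_poisson)
```

By Jensen's inequality E[exp(-Λ)] ≥ exp(-E[Λ]), so the fluctuating intensity always enhances the void probability, which is the clustering-plus-voids picture the abstract describes.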

  13. Effects of Infusion Media on Fiber Volume Fraction Distribution and Void Content in Vacuum Assisted Resin Transfer Molding

    Institute of Scientific and Technical Information of China (English)

    赖家美; 陈显明; 王德盼; 鄢冬冬; 王科

    2014-01-01

    Effects of the size of infusion media on resin flow behavior, fiber volume fraction distribution and void content in vacuum assisted resin transfer molding (VARTM) were studied. The results showed that, with the increase of infusion media size, the resin flow rate increased exponentially; the fiber volume fraction showed a tendency to increase after a first decrease, with the infusion media boundary marking the line between high and low fiber volume fraction; and the void content first increased, then decreased, and finally increased sharply, varying from 3.86% to 19.92%.

  14. Nonuniversal power law scaling in the probability distribution of scientific citations

    CERN Document Server

    Peterson, G J; Dill, K A; 10.1073/pnas.1010757107

    2010-01-01

    We develop a model for the distribution of scientific citations. The model involves a dual mechanism: in the direct mechanism, the author of a new paper finds an old paper A and cites it. In the indirect mechanism, the author of a new paper finds an old paper A only via the reference list of a newer intermediary paper B, which has previously cited A. By comparison to citation databases, we find that papers having few citations are cited mainly by the direct mechanism. Papers already having many citations ("classics") are cited mainly by the indirect mechanism. The indirect mechanism gives a power-law tail. The "tipping point" at which a paper becomes a classic is about 25 citations for papers published in the Institute for Scientific Information (ISI) Web of Science database in 1981, 31 for Physical Review D papers published from 1975-1994, and 37 for all publications from a list of high h-index chemists assembled in 2007. The power-law exponent is not universal. Individuals who are highly cited have a systematically smaller exponent than individuals who are less cited.

  15. Structured Coupling of Probability Loss Distributions: Assessing Joint Flood Risk in Multiple River Basins.

    Science.gov (United States)

    Timonina, Anna; Hochrainer-Stigler, Stefan; Pflug, Georg; Jongman, Brenden; Rojas, Rodrigo

    2015-11-01

    Losses due to natural hazard events can be extraordinarily high and difficult to cope with. Therefore, there is considerable interest in estimating the potential impact of current and future extreme events at all scales in as much detail as possible. As hazards typically spread over wider areas, risk assessment must take into account interrelations between regions. Neglecting such interdependencies can lead to a severe underestimation of potential losses, especially for extreme events. This underestimation of extreme risk can lead to the failure of risk-management strategies when they are most needed, namely, in times of unprecedented events. In this article, we suggest a methodology to incorporate such interdependencies in risk via the use of copulas. We demonstrate that by coupling losses, dependencies can be incorporated in risk analysis, avoiding the underestimation of risk. Based on maximum discharge data of river basins and stream networks, we present and discuss different ways to couple loss distributions of basins while explicitly incorporating tail dependencies. We distinguish between coupling methods that require river structure data for the analysis and those that do not. For the latter approach we propose a minimax algorithm to choose coupled basin pairs so that the underestimation of risk is avoided and the use of river structure data is not needed. The proposed methodology is especially useful for large-scale analysis, and we motivate and apply our method using the case of Romania. The approach can be easily extended to other countries and natural hazards.
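A minimal sketch of coupling two basin loss marginals with a Gaussian copula (the article also treats tail-dependent copulas and river-structure-aware pairings, which are not reproduced here; all marginals and the correlation value below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho, n = 0.7, 100_000   # hypothetical inter-basin dependence, sample size

# Gaussian copula: correlated normals -> uniforms -> basin loss marginals.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)
loss_a = stats.lognorm.ppf(u[:, 0], s=1.0, scale=50.0)   # basin A losses
loss_b = stats.lognorm.ppf(u[:, 1], s=1.2, scale=30.0)   # basin B losses

# High quantile of the joint loss, vs. the same marginals coupled independently.
q99_coupled = np.quantile(loss_a + loss_b, 0.99)
q99_indep = np.quantile(loss_a + rng.permutation(loss_b), 0.99)
print(q99_coupled, q99_indep)
```

The gap between the two 99% quantiles is exactly the underestimation the abstract warns about: treating basins as independent understates the extreme joint losses.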

  16. Understanding star formation in molecular clouds III. Probability distribution functions of molecular lines in Cygnus X

    CERN Document Server

    Schneider, N; Motte, F; Ossenkopf, V; Klessen, R S; Simon, R; Fechtenbaum, S; Herpin, F; Tremblin, P; Csengeri, T; Myers, P C; Hill, T; Cunningham, M; Federrath, C

    2015-01-01

    Column density (N) PDFs serve as a powerful tool to characterize the physical processes that influence the structure of molecular clouds. Star-forming clouds can best be characterized by lognormal PDFs for the lower N range and a power-law tail for higher N, commonly attributed to turbulence and to self-gravity and/or pressure, respectively. We report here on PDFs obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region and compare to a PDF derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av~1-30, but is cut off at higher Av due to optical depth effects. The PDFs of C18O and 13CO are mostly lognormal for Av~1-15, followed by an excess up to Av~40. Above that value, all CO PDFs drop, most likely due to depletion. The high-density tracers CS and N2H+ exhibit only a power-law distribution between Av~15 and 400. The PDF from dust is lognormal for Av~2-15 and has a power-law tail up to Av~500. Absolute values for the molecular lin...

  17. Nonuniversal power law scaling in the probability distribution of scientific citations.

    Science.gov (United States)

    Peterson, George J; Pressé, Steve; Dill, Ken A

    2010-09-14

    We develop a model for the distribution of scientific citations. The model involves a dual mechanism: in the direct mechanism, the author of a new paper finds an old paper A and cites it. In the indirect mechanism, the author of a new paper finds an old paper A only via the reference list of a newer intermediary paper B, which has previously cited A. By comparison to citation databases, we find that papers having few citations are cited mainly by the direct mechanism. Papers already having many citations ("classics") are cited mainly by the indirect mechanism. The indirect mechanism gives a power-law tail. The "tipping point" at which a paper becomes a classic is about 25 citations for papers published in the Institute for Scientific Information (ISI) Web of Science database in 1981, 31 for Physical Review D papers published from 1975-1994, and 37 for all publications from a list of high h-index chemists assembled in 2007. The power-law exponent is not universal. Individuals who are highly cited have a systematically smaller exponent than individuals who are less cited.

  18. Computation of steady-state probability distributions in stochastic models of cellular networks.

    Directory of Open Access Journals (Sweden)

    Mark Hallen

    2011-10-01

    Cellular processes are "noisy". In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry.
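The intrinsic-noise step, a steady-state solution of the chemical master equation (CME), can be sketched for the simplest birth-death gene expression model, whose stationary law is known to be Poisson; the rates and truncation below are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

k, g, N = 20.0, 1.0, 80          # production rate, degradation rate, truncation
A = np.zeros((N + 1, N + 1))     # CME generator on the states n = 0..N
for n in range(N + 1):
    if n < N:                    # birth  n -> n+1  at rate k
        A[n + 1, n] += k
        A[n, n] -= k
    if n > 0:                    # death  n -> n-1  at rate g*n
        A[n - 1, n] += g * n
        A[n, n] -= g * n

# Steady state = null vector of the generator, normalized to a distribution.
w, v = np.linalg.eig(A)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# For this birth-death model the exact stationary law is Poisson(k/g).
err = np.abs(p - stats.poisson.pmf(np.arange(N + 1), k / g)).max()
print(err)
```

Truncating the state space is what makes the generator finite; here the Poisson mean is 20, so states beyond n = 80 carry negligible mass and the numerical steady state matches the exact one to high accuracy.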

  19. Dropping Probability Reduction in OBS Networks: A Simple Approach

    KAUST Repository

    Elrasad, Amr

    2016-08-01

    In this paper, we propose and derive a slotted-time model for analyzing the burst blocking probability in Optical Burst Switched (OBS) networks. We evaluated the immediate and delayed signaling reservation schemes. The proposed model compares the performance of both just-in-time (JIT) and just-enough-time (JET) signaling protocols associated with void/non-void filling link scheduling schemes. It also considers scenarios with no wavelength conversion and with limited-range wavelength conversion. Our model is distinguished by being adaptable to different offset-time and burst length distributions. We observed that applying limited-range wavelength conversion reduces the burst blocking probability by several orders of magnitude and yields a better burst delivery ratio compared with full wavelength conversion.
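The slotted-time model itself is not reproduced here. As a hedged baseline for burst blocking under Poisson arrivals with full wavelength conversion (so that a link's wavelengths form one server pool), the classical Erlang-B recursion applies; the load and wavelength counts below are illustrative:

```python
def erlang_b(load: float, servers: int) -> float:
    """Blocking probability for an offered load (in Erlangs) on `servers`
    channels, via the standard numerically stable recursion."""
    b = 1.0
    for m in range(1, servers + 1):
        b = load * b / (m + load * b)
    return b

# An OBS link offered 8 Erlangs of burst traffic over 16 wavelengths:
print(erlang_b(8.0, 16))
```

Adding wavelengths at fixed load drives the blocking probability down sharply, the same qualitative effect the abstract reports for increasing conversion capability.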

  20. Families of Fokker-Planck equations and the associated entropic form for a distinct steady-state probability distribution with a known external force field.

    Science.gov (United States)

    Asgarani, Somayeh

    2015-02-01

    A method of finding the entropic form for a given stationary probability distribution and specified potential field is discussed, using the steady-state Fokker-Planck equation. As examples, starting with the Boltzmann and Tsallis distributions and knowing the force field, we obtain the Boltzmann-Gibbs and Tsallis entropies. Also, the associated entropy for the gamma probability distribution is found, which seems to be expressed in terms of the gamma function. Moreover, the related Fokker-Planck equations are given for the Boltzmann, Tsallis, and gamma probability distributions.
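As a worked instance of the Boltzmann case (a sketch assuming a constant diffusion coefficient D and a drift derived from a potential, F(x) = -V'(x); the notation here is illustrative, not necessarily the paper's):

```latex
\partial_t P(x,t) = \partial_x\!\left[V'(x)\,P\right] + D\,\partial_x^2 P,
\qquad
0 = \partial_x\!\left[V'(x)\,P_{\mathrm{st}} + D\,\partial_x P_{\mathrm{st}}\right]
\;\Longrightarrow\;
P_{\mathrm{st}}(x) \propto e^{-V(x)/D}.
```

Reading the stationary relation backwards, the logarithm appearing in P_st is what identifies the Boltzmann-Gibbs entropic form S = -∫ P ln P dx for this drift-diffusion pair.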

  1. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    Science.gov (United States)

    Lu, Dan; Zhang, Guannan; Webster, Clayton; Barbier, Charlotte

    2016-12-01

    In this work, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost through the use of multifidelity approximations. The improved performance of MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as the fine-grid oil reservoir model considered in this effort. The numerical results reveal that, with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
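The smoothed-indicator idea can be sketched on a toy level hierarchy (this is not the paper's reservoir model or its calibrated smoother; the logistic width, the level structure, and all parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
y0, delta = 0.0, 0.05            # CDF evaluation point, smoothing width
levels, N = 4, 50_000            # number of correction levels, samples per level

def model(x, w, lvl):
    # Toy hierarchy: finer levels add a smaller perturbation to the same draw.
    return x + 2.0 ** (-lvl) * w

def g(y):
    # Smoothed indicator of {y <= y0}; delta -> 0 recovers the sharp CDF.
    return 1.0 / (1.0 + np.exp((y - y0) / delta))

# MLMC telescoping sum: coarse estimate plus level-difference corrections.
est = 0.0
for lvl in range(levels + 1):
    x, w = rng.standard_normal((2, N))
    fine = g(model(x, w, lvl))
    est += fine.mean() if lvl == 0 else (fine - g(model(x, w, lvl - 1))).mean()

print(est)   # should be near 0.5 by symmetry of the toy model about y0 = 0
```

The key mechanism is that each level difference evaluates the smoothed indicator on coupled fine and coarse samples, so its variance shrinks with the perturbation size instead of staying order-one as it would for the sharp indicator.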

  2. Systematic Study of Rogue Wave Probability Distributions in a Fourth-Order Nonlinear Schrödinger Equation

    CERN Document Server

    Ying, L H

    2012-01-01

    Nonlinear instability and refraction by ocean currents are both important mechanisms that go beyond the Rayleigh approximation and may be responsible for the formation of freak waves. In this paper, we quantitatively study nonlinear effects on the evolution of surface gravity waves on the ocean, to explore systematically the effects of various input parameters on the probability of freak wave formation. The fourth-order current-modified nonlinear Schrödinger equation (CNLS4) is employed to describe the wave evolution. By solving CNLS4 numerically, we are able to obtain quantitative predictions for the wave height distribution as a function of key environmental conditions such as average steepness, angular spread, and frequency spread of the local sea state. Additionally, we explore the spatial dependence of the wave height distribution, associated with the buildup of nonlinear development.

  3. Image analysis of aggregate,mastic and air void phases for asphalt mixture%Image analysis of aggregate, mastic and air void phases for asphalt mixture

    Institute of Scientific and Technical Information of China (English)

    ADHIKARI Sanjeev; YOU Zhan-ping; HAO Pei-wen; WANG Hai-nian

    2013-01-01

    The shape characterization and spatial distribution of aggregate, mastic and air void phases for asphalt mixture were analyzed. Asphalt mixtures with three air void percentages, 4%, 7% and 8%, were cut into cross sections and polished. An X-ray scanning microscope was used to capture the aggregate, mastic and air void phases in images. The average polygon diameter was chosen as a threshold to determine which aggregates would be retained on a given sieve. The aggregate morphological image from the scanned image was processed by digital image processing methods to calculate the aggregate gradation and simulate the real gradation. Analysis results show that the air void content of the asphalt mixture influences the correlation between the calculated gradation and the actual gradation. Comparing the 4.75 mm sieve size of the 4%, 7% and 8% air void mixtures, the calculated gradation of the 7% air void mixture is 55% higher than the actual gradation, that of the 8% mixture is 8% higher, and that of the 4% mixture is 3.71% lower. The 4% air void mixture has the best correlation between calculated and actual gradation among the specimens. The air void percentage has no obvious influence on air void orientation, and the three mixtures show similar air void orientation along the same direction. 4 tabs, 7 figs, 17 refs.

  4. Extinction probabilities and stationary distributions of mobile genetic elements in prokaryotes: The birth-death-diversification model.

    Science.gov (United States)

    Drakos, Nicole E; Wahl, Lindi M

    2015-12-01

    Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome.
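For the plain linear birth-death part of such a model (no diversification or HGT; the rates below are hypothetical), the extinction probability of a lineage started from a single element has a closed form, which a direct simulation of the embedded jump chain reproduces:

```python
import numpy as np

def extinction_prob(b: float, d: float) -> float:
    # q solves q = d/(b+d) + b/(b+d) * q**2, giving q = min(1, d/b).
    return 1.0 if d >= b else d / b

def simulated_extinction(b, d, trials=10_000, cap=200, seed=5):
    rng = np.random.default_rng(seed)
    p_birth = b / (b + d)            # per-capita rates cancel in the jump chain
    extinct = 0
    for _ in range(trials):
        n = 1
        while 0 < n < cap:           # treat reaching `cap` as effectively safe
            n += 1 if rng.random() < p_birth else -1
        extinct += (n == 0)
    return extinct / trials

print(extinction_prob(1.0, 0.5), simulated_extinction(1.0, 0.5))
```

With death rate at or above birth rate, extinction is certain; otherwise a lineage escapes with probability 1 - d/b, which is why the paper's high extinction probability for new MP families is compatible with slow overall growth.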

  5. Probability theory

    CERN Document Server

    Varadhan, S R S

    2001-01-01

    This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent random variables.

  6. Fast Hadamard transforms for compressive sensing of joint systems: measurement of a 3.2 million-dimensional bi-photon probability distribution.

    Science.gov (United States)

    Lum, Daniel J; Knarr, Samuel H; Howell, John C

    2015-10-19

    We demonstrate how to efficiently implement extremely high-dimensional compressive imaging of a bi-photon probability distribution. Our method uses fast-Hadamard-transform Kronecker-based compressive sensing to acquire the joint space distribution. We list, in detail, the operations necessary to enable fast-transform-based matrix-vector operations in the joint space to reconstruct a 16.8 million-dimensional image in less than 10 minutes. Within a subspace of that image exists a 3.2 million-dimensional bi-photon probability distribution. In addition, we demonstrate how the marginal distributions can aid in the accuracy of joint space distribution reconstructions.
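The core primitive here, a matrix-free Walsh-Hadamard transform, can be sketched as follows (Sylvester ordering; the 256-point example is purely illustrative, far below the paper's 16.8-million-dimensional setting):

```python
import numpy as np
from scipy.linalg import hadamard

def fwht(x):
    """Fast Walsh-Hadamard transform in Sylvester (natural) order:
    O(n log n) butterflies instead of the O(n^2) dense matrix-vector product."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

x = np.random.default_rng(6).standard_normal(256)
print(np.max(np.abs(fwht(x) - hadamard(256) @ x)))   # agreement with dense H @ x
```

Because H is its own inverse up to a factor of n, the same routine serves for both the measurement operator and its adjoint in a compressive sensing reconstruction, which is what makes the Kronecker-structured joint-space matvecs tractable.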

  7. On cavitation instabilities with interacting voids

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2012-01-01

    Voids so far apart that the radius of the plastic zone around each void is less than 1% of the current spacing between the voids can still affect each other at the occurrence of a cavitation instability, such that one void stops growing while the other grows in an unstable manner. On the other hand...

  8. Magnetization curves and probability angular distribution of the magnetization vector in Er2Fe14Si3

    Science.gov (United States)

    Sobh, Hala A.; Aly, Samy H.; Shabara, Reham M.; Yehia, Sherif

    2016-01-01

Specific magnetic and magneto-thermal properties of Er2Fe14Si3, in the temperature range of 80-300 K, have been investigated using basic laws of classical statistical mechanics in a simple model. In this model, the constructed partition function was used to derive, and thereby calculate, the temperature and/or field dependence of a host of physical properties. Examples of these properties are: the magnetization, magnetic heat capacity, magnetic susceptibility, probability angular distribution of the magnetization vector, and the associated angular dependence of energy. We highlight a correlation between the energy of the system, its magnetization behavior and the angular location of the magnetization vector. Our results show that Er2Fe14Si3 is an easy-axis system in the temperature range 80-114 K, but switches to an easy-plane system at T≥114 K. This transition is also supported by both the temperature dependence of the magnetic heat capacity, which develops a peak at a temperature ~114 K, and the probability landscape, which shows, in zero magnetic field, a prominent peak in the basal plane at T=113.5 K.
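As a toy illustration of the partition-function approach, the probability angular distribution of the magnetization vector can be computed from a Boltzmann weight over the polar angle. The uniaxial anisotropy energy and its constant below are hypothetical placeholders, not the actual Er2Fe14Si3 Hamiltonian:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def angular_distribution(theta, energy, temperature):
    """Normalized probability density of the magnetization polar angle.

    P(theta) is proportional to sin(theta) * exp(-E(theta)/(k_B*T));
    the sum over the uniform theta grid plays the role of the
    partition function Z.
    """
    w = np.sin(theta) * np.exp(-energy / (K_B * temperature))
    dtheta = theta[1] - theta[0]
    return w / (np.sum(w) * dtheta)

# Toy easy-axis anisotropy energy (hypothetical constant, for illustration)
theta = np.linspace(0.0, np.pi, 2001)
E = 5e-22 * np.sin(theta) ** 2   # J per ion, easy axis along theta = 0
P = angular_distribution(theta, E, temperature=100.0)
```

A field term such as -mu*B*cos(theta) added to E would tilt the landscape toward the field direction, reproducing the kind of peak migration with temperature and field described in the abstract.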

  9. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
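The core binomial idea can be sketched as the tail probability of matching k or more peaks purely by chance: a small tail probability suggests a genuine identification. This is a generic illustration of the binomial model, not ProVerB's full intensity-aware scoring function:

```python
from math import comb

def binomial_match_tail(n_peaks, n_matched, p_match):
    """P(X >= n_matched) for X ~ Binomial(n_peaks, p_match).

    n_peaks:   number of theoretical fragment peaks for a candidate peptide
    n_matched: how many of them matched the experimental spectrum
    p_match:   probability of a single peak matching by chance

    A small tail probability means the observed number of matches is
    unlikely to arise randomly, so -log10 of it can serve as a score.
    """
    return sum(comb(n_peaks, i) * p_match**i * (1 - p_match) ** (n_peaks - i)
               for i in range(n_matched, n_peaks + 1))
```

Intensity information, as used by ProVerB, would enter by weighting each match rather than counting all matches equally.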

  10. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    Science.gov (United States)

    Porto, Markus; Roman, H Eduardo

    2002-04-01

We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
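A minimal simulation of the linear-variance ARCH process described above can be written in a few lines; the parameter values are illustrative only, and verifying the exponential tail P(y) ~ exp(−2|y|/b) would require a histogram of a long run:

```python
import random

def simulate_linear_arch(a, b, n, seed=0):
    """Simulate y_t = sigma_t * xi_t with sigma_t^2 = a + b*|y_{t-1}|,
    where xi_t are i.i.d. standard normal innovations.

    For this linear-variance ARCH the stationary PDF decays
    exponentially, in contrast to the power-law tails of the
    standard quadratic ARCH model.
    """
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(n):
        sigma = (a + b * abs(y)) ** 0.5
        y = sigma * rng.gauss(0.0, 1.0)
        path.append(y)
    return path
```

Replacing `a + b * abs(y)` with `a + b * y * y` recovers the classical ARCH(1) recursion and its fat power-law tails.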

  11. Influence of Coloured Correlated Noises on Probability Distribution and Mean of Tumour Cell Number in the Logistic Growth Model

    Institute of Scientific and Technical Information of China (English)

    HAN Li-Bo; GONG Xiao-Long; CAO Li; WU Da-Jin

    2007-01-01

An approximate Fokker-Planck equation for the logistic growth model which is driven by coloured correlated noises is derived by applying the Novikov theorem and the Fox approximation. The steady-state probability distribution (SPD) and the mean of the tumour cell number are analysed. It is found that the SPD has a single-extremum configuration when the degree of correlation between the multiplicative and additive noises, λ, is in −1 < λ ≤ 0, and can have double extrema in 0 < λ < 1. A configuration transition occurs because of the variation of noise parameters. A minimum appears in the curve of the mean of the steady-state tumour cell number, 〈x〉, versus λ. The position and the value of the minimum are controlled by the noise correlation times.

  12. The Effect of Probability Distributions in a Failure Criterion for the Reliability of a Passive Safety System

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok-Jung; Yang, Joon-Eon; Lee, Won-Jea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

A key safety issue for a Very High Temperature Reactor (VHTR) is estimating the Reliability of a Passive safety System (RoPS). The Stress-Strength Interference (SSI) approach is widely adopted to estimate the RoPS. Major efforts on the RoPS have addressed the quantification of the operational uncertainty of a passive safety system given a postulated accident scenario. However, another important problem is determining the failure criteria of a passive safety system, because there is an ambiguity in the failure criteria for a VHTR due to its inherent safety characteristics. This paper focuses on an investigation of the reliability characteristics due to a change of the probability distribution in a failure criterion for the quantification of the RoPS.
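The Stress-Strength Interference (SSI) approach can be sketched as a Monte Carlo estimate of the probability that the load ("stress") exceeds the capacity ("strength"); the distributions below are illustrative stand-ins, not VHTR data:

```python
import random

def failure_probability_ssi(load_sampler, capacity_sampler, n=100_000, seed=1):
    """Stress-Strength Interference by Monte Carlo.

    The system fails in a trial when a sampled load exceeds the sampled
    capacity; the returned fraction estimates P(load > capacity).
    Changing `capacity_sampler` is exactly the 'change of probability
    distribution in a failure criterion' studied in the paper.
    """
    rng = random.Random(seed)
    failures = sum(load_sampler(rng) > capacity_sampler(rng)
                   for _ in range(n))
    return failures / n
```

For normal load N(0, 1) and capacity N(2, 1) the analytic answer is Phi(-2/sqrt(2)), about 0.079, which the estimator reproduces to Monte Carlo accuracy.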

  13. Forecasting the Stock Market with Linguistic Rules Generated from the Minimize Entropy Principle and the Cumulative Probability Distribution Approaches

    Directory of Open Access Journals (Sweden)

    Chung-Ho Su

    2010-12-01

To forecast a complex and non-linear system, such as a stock market, advanced artificial intelligence algorithms, like neural networks (NNs) and genetic algorithms (GAs), have been proposed as new approaches. However, for the average stock investor, two major disadvantages are argued against these advanced algorithms: (1) the rules generated by NNs and GAs are difficult to apply in investment decisions; and (2) the time complexity of the algorithms to produce forecasting outcomes is very high. Therefore, to provide understandable rules for investors and to reduce the time complexity of forecasting algorithms, this paper proposes a novel model for the forecasting process, which combines two granulating methods (the minimize entropy principle approach and the cumulative probability distribution approach) and a rough set algorithm. The model verification demonstrates that the proposed model surpasses the three listed conventional fuzzy time-series models and a multiple regression model (MLR) in forecast accuracy.

  14. Comparison between the probability distribution of returns in the Heston model and empirical data for stock indexes

    Science.gov (United States)

    Silva, A. Christian; Yakovenko, Victor M.

    2003-06-01

    We compare the probability distribution of returns for the three major stock-market indexes (Nasdaq, S&P500, and Dow-Jones) with an analytical formula recently derived by Drăgulescu and Yakovenko for the Heston model with stochastic variance. For the period of 1982-1999, we find a very good agreement between the theory and the data for a wide range of time lags from 1 to 250 days. On the other hand, deviations start to appear when the data for 2000-2002 are included. We interpret this as a statistical evidence of the major change in the market from a positive growth rate in 1980s and 1990s to a negative rate in 2000s.

  15. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    Energy Technology Data Exchange (ETDEWEB)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)

    2012-07-06

Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on a given value, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life

  17. Precision cosmography with stacked voids

    CERN Document Server

    Lavaux, Guilhem

    2011-01-01

We present a purely geometrical method for probing the expansion history of the Universe from the observation of the shape of stacked voids in spectroscopic redshift surveys. Our method is an Alcock-Paczynski test based on the average sphericity of voids, premised on the local isotropy of the Universe. It works by comparing the temporal extent of cosmic voids along the line of sight with their angular, spatial extent. We describe the algorithm that we use to detect and stack voids in redshift shells on the light cone and test it on mock light cones produced from N-body simulations. We establish a robust statistical model for estimating the average stretching of voids in redshift space and quantify the contamination by peculiar velocities. Finally, we assess the capability of this approach to constrain dark energy parameters in terms of the figure of merit (FoM) of the dark energy task force and in particular of the proposed Euclid mission, which is particularly suited for this technique since it is a spectrosc...
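The geometry of the Alcock-Paczynski test can be sketched as follows: a comoving sphere of radius R at redshift z subtends a redshift extent Δz = 2R·H(z)/c along the line of sight and an angle Δθ = 2R/D_C(z) on the sky, so their ratio probes H(z)·D_C(z). The flat ΛCDM parameters below (H0 = 70, Ωm = 0.3) are toy values, and this is a generic illustration, not the paper's estimator:

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def hubble(z, h0=70.0, om=0.3, ol=0.7):
    """H(z) in km/s/Mpc for a flat LCDM cosmology (toy parameters)."""
    return h0 * math.sqrt(om * (1.0 + z) ** 3 + ol)

def comoving_distance(z, n=10_000, **cosmo):
    """Comoving distance D_C(z) in Mpc via trapezoidal integration of c/H."""
    dz = z / n
    s = 0.5 * (C_KMS / hubble(0.0, **cosmo) + C_KMS / hubble(z, **cosmo))
    for i in range(1, n):
        s += C_KMS / hubble(i * dz, **cosmo)
    return s * dz

def ap_void_extents(radius_mpc, z, **cosmo):
    """(line-of-sight redshift extent, angular extent in radians) of a
    comoving sphere; their ratio is the Alcock-Paczynski observable."""
    dz = 2.0 * radius_mpc * hubble(z, **cosmo) / C_KMS
    dtheta = 2.0 * radius_mpc / comoving_distance(z, **cosmo)
    return dz, dtheta
```

Assuming the wrong cosmology when converting redshifts and angles to distances makes intrinsically spherical stacked voids appear stretched, which is the distortion the method measures.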

  18. Modelling the void deformation and closure by hot forging of ingot castings

    DEFF Research Database (Denmark)

    Christiansen, Peter; Hattel, Jesper Henri; Kotas, Petr;

    2012-01-01

After solidification and cooling, cast ingots contain voids due to improper feeding and volume shrinkage. Such voids are normally unwanted, so besides forming the ingot to the desired shape, one purpose of post-processing the ingot by hot forging is to close such voids by mechanical deformation. This paper focuses on how the voids deform depending on their size and distribution in the ingot, as well as how the forging forces are applied.

  19. Self-similarity and universality of void density profiles in simulation and SDSS data

    CERN Document Server

    Nadathur, S; Diego, J M; Iliev, I T; Gottlöber, S; Watson, W A; Yepes, G

    2014-01-01

The stacked density profile of cosmic voids in the galaxy distribution provides an important tool for the use of voids for precision cosmology. We study the density profiles of voids identified using the ZOBOV watershed transform algorithm in realistic mock luminous red galaxy (LRG) catalogues from the Jubilee simulation, as well as in void catalogues constructed from the SDSS LRG and Main Galaxy samples. We compare different methods for reconstructing density profiles scaled by the void radius and show that the most commonly used method, based on counts in shells and simple averaging, is statistically flawed as it underestimates the density in void interiors. We provide two alternative methods that do not suffer from this effect; one, based on Voronoi tessellations, is also easily able to account for artefacts due to finite survey boundaries and so is more suitable when comparing simulation data to observation. Using this method we show that voids in simulation are exactly self-similar, meaning that their avera...
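The two counts-in-shells estimators contrasted above can be sketched as follows: the naive estimator averages per-void shell densities, while the alternative pools counts and volumes across voids before dividing. This is a generic illustration of that distinction, not the paper's Voronoi-tessellation method:

```python
import numpy as np

def shell_density_profiles(centers, radii, tracers, nbins=10, rmax=2.0):
    """Stacked void density profile from counts in shells of r/R_v.

    Returns two estimators per radial bin:
      - 'naive': the simple mean over voids of (counts / shell volume),
        the commonly used but biased approach;
      - 'pooled': total counts over total shell volume across all voids.
    """
    edges = np.linspace(0.0, rmax, nbins + 1)
    counts = np.zeros(nbins)
    volumes = np.zeros(nbins)
    per_void = []
    for c, R in zip(centers, radii):
        d = np.linalg.norm(tracers - c, axis=1) / R   # distances in void radii
        h, _ = np.histogram(d, bins=edges)
        v = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3) * R ** 3
        counts += h
        volumes += v
        per_void.append(h / v)
    return np.mean(per_void, axis=0), counts / volumes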

  20. Critical Finite Size Scaling Relation of the Order-Parameter Probability Distribution for the Three-Dimensional Ising Model on the Creutz Cellular Automaton

    Institute of Scientific and Technical Information of China (English)

    B. Kutlu; M. Civi

    2006-01-01

We study the order parameter probability distribution at the critical point for the three-dimensional spin-1/2 and spin-1 Ising models on the simple cubic lattice under periodic boundary conditions.

  1. IGM Constraints from the SDSS-III/BOSS DR9 Ly-alpha Forest Flux Probability Distribution Function

    CERN Document Server

    Lee, Khee-Gan; Spergel, David N; Weinberg, David H; Hogg, David W; Viel, Matteo; Bolton, James S; Bailey, Stephen; Pieri, Matthew M; Carithers, William; Schlegel, David J; Lundgren, Britt; Palanque-Delabrouille, Nathalie; Suzuki, Nao; Schneider, Donald P; Yeche, Christophe

    2014-01-01

The Ly$\alpha$ forest flux probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the flux PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from SDSS Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS flux PDFs, measured at $\langle z \rangle = [2.3,2.6,3.0]$, are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, $\gamma$, and temperature at mean density, $T_0$, where $T(\Delta) = T_0 \Delta^{\gamma-1}$. We find that a significant population of partial Lyman-limit systems with a column-density distribution slope of $\beta_\mathrm{pLLS} \sim -2$ is required to explain the data at the low-flux end of the flux PDF, while uncertainties in the mean Ly$\alpha$ forest transmission affect the...

  2. The dipole moment of a wall-charged void in a bulk dielectric

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson

    1993-01-01

The dipole moment of a wall-charged void is examined with reference to the spatial extent of the surface charge density σ and the distribution of this charge. The salient factors influencing the void dipole moment are also examined. From a study of spherical voids, it is shown that, although the σ-distribution influences the dipole moment, the spatial extent of σ has a greater influence. This behavior is not unexpected: for a void of fixed dimensions, the smaller the charged surface area, the greater the charge density, and thus the greater the dipole moment.

  3. "Compu-Void II": the computerized voiding diary.

    Science.gov (United States)

    Rabin, J M; McNett, J; Badlani, G H

    1996-02-01

We have previously described an electronic voiding diary, "Compu-Void" (Copyright, 1990), developed to automate recording of bladder symptoms (Rabin et al., 1993). Our objectives in this, the second phase of the study, were to examine a group of subject and control patients' preference and compliance with regard to the "Compu-Void" (CV) compared to the standard written voiding diary (WD), to compare the two methods with respect to the amount and type of information obtained, and to determine whether or not the order of use of each recording method influenced results in the subject group. Thirty-six women between the ages of 20 and 84 with bladder symptomatology were compared to a group of 36 age-matched women. In 100% of subjects and 95% of control patients, CV entries exceeded the number made with the WD in voiding events and, in subjects, in incontinent episodes recorded (p < 0.0005 and p < 0.005, respectively). Over 98% of subjects and over 80% of control patients preferred CV over the WD (p < 0.0005). The order of use of each recording method in subjects made no significant difference with regard to the volume of information obtained (p < 0.407), number of urinary leakage events recorded (p < 0.494), and fluid intake patterns (p < 0.410). Patient impressions of, and compliance with, each method were not affected by order of use. The only difference regarding order of use was that most subjects who used the CV first also found the WD to be tedious (61% vs 14%). Our results suggest an increased volume of data and of patient compliance in reporting bladder symptoms and events using CV, and that order of use is not an important factor in determining patient impressions of the two methods. The majority of subject and control patients preferred CV over traditional methods. An updated version of the software and hardware is also included.

  4. 3D Property Modeling of Void Ratio by Cokriging

    Institute of Scientific and Technical Information of China (English)

    Yao Lingqing; Pan Mao; Cheng Qiuming

    2008-01-01

Void ratio measures the compactness of ground soil in geotechnical engineering. When samples are collected in a certain area for mapping void ratios, other relevant properties, such as water content, may also be analyzed. To map the spatial distribution of void ratio in the area based on these types of point observation data, interpolation is often needed. Owing to the variance of sampling density along the horizontal and vertical directions, special consideration is required to handle the anisotropy of the estimator. 3D property modeling aims at predicting the overall distribution of property values from limited samples, and geostatistical methods can be employed naturally here because they help to minimize the mean square error of estimation. To construct the 3D property model of void ratio, cokriging was used, considering its mutual correlation with water content, which is another important soil parameter. Moreover, a K-D tree was adopted to organize the samples to accelerate neighbor queries in 3D space during the modeling process. Finally, the spatial configuration of the void ratio distribution in an engineering body was modeled through 3D visualization, which provides important information for civil engineering purposes.

  5. On the void explanations of the Cold Spot

    CERN Document Server

    Marcos-Caballero, A; Martínez-González, E; Vielva, P

    2015-01-01

    The integrated Sachs-Wolfe (ISW) contribution induced on the cosmic microwave background by the presence of a supervoid as the one detected by Szapudi et al. (2015) is reviewed in this letter in order to check whether it could explain the Cold Spot (CS) anomaly. Two different models, previously used for the same purpose, are considered to describe the matter density profile of the void: a top hat function and a compensated profile produced by a Gaussian potential. The analysis shows that, even enabling ellipticity changes or different values for the dark-energy equation of state parameter $\\omega$, the ISW contribution due to the presence of the void does not reproduce the properties of the CS. Finally, the probability of alignment between the void and the CS is also questioned as an argument in favor of a physical connection between these two phenomena.

  6. IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Khee-Gan; Hennawi, Joseph F. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Spergel, David N. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Weinberg, David H. [Department of Astronomy and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, Meyer Hall of Physics, New York, NY 10003 (United States); Viel, Matteo [INAF, Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34131 Trieste (Italy); Bolton, James S. [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom); Bailey, Stephen; Carithers, William; Schlegel, David J. [E.O. Lawrence Berkeley National Lab, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Pieri, Matthew M. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth PO1 3FX (United Kingdom); Lundgren, Britt [Department of Astronomy, University of Wisconsin, Madison, WI 53706 (United States); Palanque-Delabrouille, Nathalie; Yèche, Christophe [CEA, Centre de Saclay, Irfu/SPP, F-91191 Gif-sur-Yvette (France); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwano-ha 5-1-5, Kashiwa-shi, Chiba (Japan); Schneider, Donald P., E-mail: lee@mpia.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2015-02-01

The Lyα forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and temperature at mean density, T₀, where T(Δ) = T₀Δ^(γ−1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ∼ −2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean-transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T₀ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.

  7. CMB lensing beyond the power spectrum: Cosmological constraints from the one-point probability distribution function and peak counts

    Science.gov (United States)

    Liu, Jia; Hill, J. Colin; Sherwin, Blake D.; Petri, Andrea; Böhm, Vanessa; Haiman, Zoltán

    2016-11-01

    Unprecedentedly precise cosmic microwave background (CMB) data are expected from ongoing and near-future CMB stage III and IV surveys, which will yield reconstructed CMB lensing maps with effective resolution approaching several arcminutes. The small-scale CMB lensing fluctuations receive non-negligible contributions from nonlinear structure in the late-time density field. These fluctuations are not fully characterized by traditional two-point statistics, such as the power spectrum. Here, we use N -body ray-tracing simulations of CMB lensing maps to examine two higher-order statistics: the lensing convergence one-point probability distribution function (PDF) and peak counts. We show that these statistics contain significant information not captured by the two-point function and provide specific forecasts for the ongoing stage III Advanced Atacama Cosmology Telescope (AdvACT) experiment. Considering only the temperature-based reconstruction estimator, we forecast 9 σ (PDF) and 6 σ (peaks) detections of these statistics with AdvACT. Our simulation pipeline fully accounts for the non-Gaussianity of the lensing reconstruction noise, which is significant and cannot be neglected. Combining the power spectrum, PDF, and peak counts for AdvACT will tighten cosmological constraints in the Ωm-σ8 plane by ≈30 %, compared to using the power spectrum alone.
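The peak-count statistic described above can be sketched with a simple local-maximum counter on a pixelized convergence map; this is a generic illustration, not the authors' ray-tracing pipeline:

```python
import numpy as np

def peak_counts(kappa, thresholds):
    """Count local maxima of a 2D convergence map above each threshold.

    A pixel is a peak if it is strictly higher than all 8 of its
    neighbours; pixels on the map edge are excluded. Returns the
    number of peaks exceeding each threshold in `thresholds`.
    """
    k = np.asarray(kappa, dtype=float)
    n, m = k.shape
    centre = k[1:-1, 1:-1]
    is_peak = np.ones_like(centre, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # shifted view aligning each neighbour with the centre pixels
            is_peak &= centre > k[1 + di:n - 1 + di, 1 + dj:m - 1 + dj]
    peaks = centre[is_peak]
    return [int(np.sum(peaks > t)) for t in thresholds]
```

The one-point PDF of the same map is just a histogram of the pixel values, so both higher-order statistics can be read off a single reconstructed convergence map.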

  8. Void/Pore Distributions and Ductile Fracture.

    Science.gov (United States)

    1985-11-01

three holes was never observed remote from the final fracture surface indicates that the imperfection consists of three linked holes plus a zig-zag ... the sheet and plane strain for the plate). Specimen preparation was performed in a numerically controlled milling machine, with the holes being

  9. The degree of compression of spherical granular solids controls the evolution of microstructure and bond probability during compaction.

    Science.gov (United States)

    Nordström, Josefina; Persson, Ann-Sofie; Lazorova, Lucia; Frenning, Göran; Alderborn, Göran

    2013-02-14

The effect of degree of compression on the evolution of tablet microstructure and bond probability during compression of granular solids has been studied. Microcrystalline cellulose pellets of low (about 11%) and of high (about 32%) porosity were used. Tablets were compacted at 50, 100 and 150 MPa applied pressures and the degree of compression and the tensile strength of the tablets determined. The tablets were subjected to mercury intrusion measurements and from the pore size distributions, a void diameter and the porosities of the voids and the intra-granular pores were calculated. The pore size distributions of the tablets had peaks associated with the voids and the intra-granular pores. The void and intra-granular porosities of the tablets were dependent on the original pellet porosity while the total tablet porosity was independent. The separation distance between pellets was generally lower for tablets formed from high porosity pellets and the void size related linearly to the degree of compression. Tensile strength of tablets was higher for tablets of high porosity pellets and a scaled tablet tensile strength related linearly to the degree of compression above a percolation threshold. In conclusion, the degree of compression controlled the separation distance and the probability of forming bonds between pellets in the tablet.

  10. Coupling effects of void size and void shape on the growth of prolate ellipsoidal microvoid

    Institute of Scientific and Technical Information of China (English)

    Minsheng Huang; Zhenhuan Li; Cheng Wang

    2005-01-01

    The combined effects of void size and void shape on the void growth are studied by using the classical spectrum method. An infinite solid containing an isolated prolate spheroidal void is considered to depict the void shape effect and the Fleck-Hutchinson phenomenological strain gradient plasticity theory is employed to capture the size effects. It is found that the combined effects of void size and void shape are mainly controlled by the remote stress triaxiality. Based on this, a new size-dependent void growth model similar to the Rice-Tracey model is proposed and an important conclusion about the size-dependent void growth is drawn: the growth rate of the void with radius smaller than a critical radius rc may be ignored. It is interesting that rc is a material constant independent of the initial void shape and the remote stress triaxiality.
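The paper's qualitative conclusion, that voids smaller than a critical radius r_c effectively stop growing, can be sketched with a toy modification of the classical Rice-Tracey growth rate. The suppression factor below is an illustrative assumption for demonstration, not the size-dependent model derived by the authors:

```python
import math

def rice_tracey_rate(triaxiality, alpha=0.283):
    """Classical (size-independent) Rice-Tracey void growth rate:
    (1/r) dr/d(eps) = alpha * exp(1.5 * T), with T the stress triaxiality."""
    return alpha * math.exp(1.5 * triaxiality)

def size_dependent_rate(r, r_c, triaxiality):
    """Toy size-dependent growth rate: the Rice-Tracey rate suppressed by
    a factor that vanishes for r <= r_c, mimicking the strain-gradient
    strengthening of small voids. The (1 - (r_c/r)^2) form is a
    hypothetical choice, not the model of the paper.
    """
    if r <= r_c:
        return 0.0
    return rice_tracey_rate(triaxiality) * (1.0 - (r_c / r) ** 2)
```

In this sketch the growth rate is zero at and below r_c and recovers the classical, size-independent rate as r grows much larger than r_c, consistent with the abstract's statement that growth of voids below the critical radius may be ignored.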

  11. Constraints on Cosmology and Gravity from the Dynamics of Voids

    Science.gov (United States)

    Hamaus, Nico; Pisani, Alice; Sutter, P. M.; Lavaux, Guilhem; Escoffier, Stéphanie; Wandelt, Benjamin D.; Weller, Jochen

    2016-08-01

    The Universe is mostly composed of large and relatively empty domains known as cosmic voids, whereas its matter content is predominantly distributed along their boundaries. The remaining material inside them, either dark or luminous matter, is attracted to these boundaries and causes voids to expand faster and to grow emptier over time. Using the distribution of galaxies centered on voids identified in the Sloan Digital Sky Survey and adopting minimal assumptions on the statistical motion of these galaxies, we constrain the average matter content Ωm=0.281 ±0.031 in the Universe today, as well as the linear growth rate of structure f /b =0.417 ±0.089 at median redshift z ¯=0.57 , where b is the galaxy bias (68% C.L.). These values originate from a percent-level measurement of the anisotropic distortion in the void-galaxy cross-correlation function, ɛ =1.003 ±0.012 , and are robust to consistency tests with bootstraps of the data and simulated mock catalogs within an additional systematic uncertainty of half that size. They surpass (and are complementary to) existing constraints by unlocking cosmological information on smaller scales through an accurate model of nonlinear clustering and dynamics in void environments. As such, our analysis furnishes a powerful probe of deviations from Einstein's general relativity in the low-density regime which has largely remained untested so far. We find no evidence for such deviations in the data at hand.

  12. Constitutive modeling of rate dependence and microinertia effects in porous-plastic materials with multi-sized voids (MSVs)

    KAUST Repository

    Liu, Jinxing

    2012-11-27

    Micro-voids of varying sizes exist in most metals and alloys. Both experiments and numerical studies have demonstrated the critical influence of initial void sizes on void growth. The classical Gurson-Tvergaard-Needleman model summarizes the influence of voids with a single parameter, namely the void-volume fraction, excluding any possible effects of the void-size distribution. We extend our newly proposed model, which includes the multi-sized void (MSV) effect and the void-interaction effect, to work for both moderate and high loading-rate cases, where either rate dependence or microinertia becomes considerable or even dominant. Parametric studies show that the MSV-related competitive mechanism among growing voids leads to the dependence of the void growth rate on void size, which directly influences the void's contribution to the total energy composition. We finally show that the stress-strain constitutive behavior is also affected by this MSV-related competitive mechanism. The stabilizing effect due to rate sensitivity and microinertia is emphasized. © 2013 IOP Publishing Ltd.

  13. Combining scenarios in a calculation of the overall probability distribution of cumulative releases of radioactivity from the Waste Isolation Pilot Plant, southeastern New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, M.S.

    1991-11-01

    The Waste Isolation Pilot Plant (WIPP), in southeastern New Mexico, is a research and development facility to demonstrate safe disposal of defense-generated transuranic waste. The US Department of Energy will designate WIPP as a disposal facility if it meets the US Environmental Protection Agency's standard for disposal of such waste; the standard includes a requirement that estimates of cumulative releases of radioactivity to the accessible environment be incorporated in an overall probability distribution. The WIPP Project has chosen an approach to calculation of an overall probability distribution that employs the concept of scenarios for release and transport of radioactivity to the accessible environment. This report reviews the use of Monte Carlo methods in the calculation of an overall probability distribution and presents a logical and mathematical foundation for use of the scenario concept in such calculations. The report also draws preliminary conclusions regarding the shape of the probability distribution for the WIPP system; these conclusions are based on the possible occurrence of three events and the presence of one feature: namely, the events "attempted boreholes over rooms and drifts," "mining alters ground-water regime," and "water-withdrawal wells provide alternate pathways," and the feature "brine pocket below room or drift." Calculations of the WIPP system's overall probability distribution are presented for only five of the sixteen possible scenario classes that can be obtained by combining the four postulated events and features.

  14. Testing the imprint of nonstandard cosmologies on void profiles using Monte Carlo random walks

    Science.gov (United States)

    Achitouv, Ixandra

    2016-11-01

    Using Monte Carlo random walks of a log-normal distribution, we show how to qualitatively study void properties for nonstandard cosmologies. We apply this method to an f(R) modified gravity model and recover the N-body simulation results of I. Achitouv, M. Baldi, E. Puchwein, and J. Weller, Phys. Rev. D 93, 103522 (2016), for the void profiles and their deviation from GR. This method can potentially be extended to study other properties of the large scale structures such as the abundance of voids or overdense environments. We also introduce a new way to identify voids in the cosmic web, using only a few measurements of the density fluctuations around random positions. This algorithm allows us to select voids with specific profiles and radii. As a consequence, we can target classes of voids with higher differences between f(R) and standard gravity void profiles. Finally, we apply our void criteria to galaxy mock catalogues and discuss how the flexibility of our void finder can be used to reduce systematic errors when probing the growth rate in the galaxy-void correlation function.
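The few-measurements void-selection idea described above can be illustrated with a toy Monte Carlo sketch; everything here (a uniform mock tracer catalogue, the probe radii, and the -0.5 underdensity threshold) is an assumption for illustration, not the paper's actual algorithm:

```python
# Toy sketch: flag candidate voids by sampling the enclosed density contrast
# at a few radii around random positions in a synthetic 3D point set.
import numpy as np

rng = np.random.default_rng(0)
box, n_pts = 100.0, 20000
pts = rng.uniform(0, box, size=(n_pts, 3))   # mock tracers (assumed uniform)
mean_density = n_pts / box**3

def enclosed_contrast(center, radius):
    """delta(<r) = n(<r)/n_mean - 1 from simple counts in a sphere."""
    d = np.linalg.norm(pts - center, axis=1)
    vol = 4.0 / 3.0 * np.pi * radius**3
    return np.count_nonzero(d < radius) / (vol * mean_density) - 1.0

def find_voids(n_trials=200, radii=(5.0, 10.0, 15.0), threshold=-0.5):
    """Keep random centres whose contrast stays below `threshold` at every
    probed radius -- only a few density measurements per position."""
    voids = []
    for _ in range(n_trials):
        c = rng.uniform(20, 80, size=3)      # stay away from box edges
        if all(enclosed_contrast(c, r) < threshold for r in radii):
            voids.append(c)
    return voids

voids = find_voids()
print(f"{len(voids)} candidate voids from 200 random positions")
```

Selecting on the contrast profile at several radii, rather than a single smoothing scale, is what lets a finder of this type target voids with specific profiles and sizes.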

  15. Testing the imprint of non-standard cosmologies on void profiles using Monte Carlo random walks

    CERN Document Server

    Achitouv, Ixandra

    2016-01-01

    Using Monte Carlo random walks of a log-normal distribution, we show how to qualitatively study void properties for non-standard cosmologies. We apply this method to an f(R) modified gravity model and recover the N-body simulation results of (Achitouv et al. 2016) for the void profiles and their deviation from GR. This method can potentially be extended to study other properties of the large scale structures such as the abundance of voids or overdense environments. We also introduce a new way to identify voids in the cosmic web, using only a few measurements of the density fluctuations around random positions. This algorithm allows us to select voids with specific profiles and radii. As a consequence, we can target classes of voids with higher differences between f(R) and standard gravity void profiles. Finally, we apply our void criteria to galaxy mock catalogues and discuss how the flexibility of our void finder can be used to reduce systematic errors when probing the growth rate in the galaxy-void correlation function.

  16. Size-Effects in Void Growth

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof

    2005-01-01

    The size-effect on ductile void growth in metals is investigated. The analysis is based on unit cell models both of arrays of cylindrical voids under plane strain deformation, as well as arrays of spherical voids using an axisymmetric model. A recent finite strain generalization of two higher order...

  17. Numerical Simulation of Dust Void Evolution in Complex Plasmas with Ionization Effect

    Institute of Scientific and Technical Information of China (English)

    LIU Yue; WANG Zheng-Xiong; WANG Xiao-Gang

    2006-01-01

    We develop the nonlinear theory of dust voids [Phys. Rev. Lett. 90 (2003) 075001], focusing particularly on the effects of ionization, to investigate numerically the void evolution in cylindrical coordinates [Phys. Plasmas 13 (2006) 064502]. The ion velocity profile is solved by a more accurate ion motion equation that includes the ion convection and ionization terms. It is shown that, in the void formation process, the differences between the previous result and the one obtained with ionization included are significant for the distributions of the ion and dust velocities, the dust density, etc. Furthermore, the ionization can slow down the void formation process effectively.

  18. Constitutive description of casting aluminum alloy based on cylindrical void-cell model

    Institute of Scientific and Technical Information of China (English)

    CHEN Bin; PENG Xiang-he; ZENG Xiang-guo; WU Xin-yan; SUN Shi-tao

    2006-01-01

    Casting aluminum alloys are highly heterogeneous materials with different types of voids that affect the mechanical properties of the material. Through the analysis of a cylindrical void-cell model, the evolution equation of the voids was obtained. The evolution equation was embedded into a nonclassical elastoplastic constitutive relation, yielding an elastoplastic constitutive relation that accounts for void evolution. A corresponding finite element procedure was developed and applied to the analyses of the distributions of the axial stress and porosity of notched cylindrical specimens of casting aluminum alloy A101. The computed results show good agreement with experimental data.

  19. Influence of void ratio on phase change of thermal energy storage for heat pipe receiver

    Directory of Open Access Journals (Sweden)

    Xiaohong Gui

    2015-01-01

    Full Text Available In this paper, the influence of void ratio on the phase change of a thermal storage unit for a heat pipe receiver under microgravity is numerically simulated, and a corresponding mathematical model is set up. A solidification-melting model based on the enthalpy-porosity method is provided to deal with phase changes. The liquid fraction distribution in the thermal storage unit of the heat pipe receiver is shown, and the fluctuation of the melting ratio in the PCM canister is indicated. Numerical results are compared with experimental ones obtained in Japan. The results show that the void cavity greatly impedes the phase-change process: as the void ratio increases, the PCM melts more slowly during sunlight periods and freezes more slowly during eclipse periods. The utilization ratio of the PCM during both sunlight and eclipse periods decreases markedly with increasing void ratio. The thermal resistance of the void cavity is much higher than that of the PCM canister wall, so the void cavity impedes heat transfer between the PCM zone and the canister wall.

  20. Benchmark of Subchannel Code VIPRE-W with PSBT Void and Temperature Test Data

    Directory of Open Access Journals (Sweden)

    Y. Sung

    2012-01-01

    Full Text Available This paper summarizes comparisons of VIPRE-W thermal-hydraulic subchannel code predictions with measurements of fluid temperature and void from pressurized water reactor subchannel and bundle tests. Using an existing turbulent mixing model, the empirical coefficient derived from code predictions in comparison to the fluid temperature measurement is similar to those from previous mixing tests of similar bundle configurations. The predicted steady-state axial void distributions and time-dependent void profiles based on the Lellouche and Zolotar model generally agree well with the test data. The void model tends to predict lower void at the upper elevation under bulk boiling. The void predictions are in closer agreement with the measurements from the power increase, temperature increase, and flow reduction transients than the depressurization transient. Additional model sensitivity studies showed no significant improvement in the code predictions as compared to the published test data.

  1. VIPRE-W benchmark with PSBT void and temperature test data

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Oelrich, R.L.; Lee, C.C., E-mail: sungy@westinghouse.com, E-mail: oelricrl@westinghouse.com, E-mail: leecc@westinghouse.com [Westinghouse Electric Co. LLC, Pittsburgh, Pennsylvania (United States); Ruiz-Esquide, N.; Gambetta, M.; Mazufri, C.M., E-mail: nruiz@invap.com.ar, E-mail: gambetta@invap.com.ar, E-mail: mazufri@invap.com.ar [INVAP, San Carlos de Bariloche (Argentina)

    2011-07-01

    This paper summarizes comparisons of VIPRE-W thermal-hydraulic subchannel code predictions with measurements of fluid temperature and void from Pressurized Water Reactor subchannel and bundle tests. Using an existing turbulent mixing model, the empirical coefficient derived from code predictions in comparison to the fluid temperature measurement is similar to those from previous mixing tests of similar bundle configurations. The predicted steady state axial void distributions and time-dependent void profiles based on the Lellouche and Zolotar model generally agree well with the test data. The void model tends to predict lower void at the upper elevation under bulk boiling. The void predictions are in closer agreement with the measurements from the power increase, temperature increase and flow reduction transients than the depressurization transient. (author)

  2. Sodium voiding analysis in Kalimer

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Won-Pyo; Jeong, Kwan-Seong; Hahn, Dohee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2001-07-01

    A sodium boiling model has been developed for calculations of the void reactivity feedback as well as the fuel and cladding temperatures in the KALIMER core after the onset of sodium boiling. Boiling in liquid metal reactors that use sodium as coolant must be modeled explicitly because the phenomena differ from those observed in light water reactor systems. The developed model is a multiple-bubble slug ejection model: it allows a finite number of bubbles in a channel at any time. Voiding is assumed to result from the formation of bubbles that fill the whole cross section of the coolant channel except for a liquid film left on the cladding surface. The vapor pressure is currently assumed to be uniform within a bubble. The present study focuses not only on demonstrating the sodium voiding behavior predicted by the developed model, but also on confirming its qualitative acceptability. The model captures the important phenomena of sodium boiling, although further effort is needed for a complete analysis. (author)

  3. Elements of probability theory

    CERN Document Server

    Rumshiskii, L Z

    1965-01-01

    Elements of Probability Theory presents the methods of the theory of probability. This book is divided into seven chapters that discuss the general rule for the multiplication of probabilities, the fundamental properties of the subject matter, and the classical definition of probability. The introductory chapters deal with the functions of random variables; continuous random variables; numerical characteristics of probability distributions; the center of the probability distribution of a random variable; the definition of the law of large numbers; and the stability of the sample mean and the method of moments.

  4. Redshift-space distortions around voids

    Science.gov (United States)

    Cai, Yan-Chuan; Taylor, Andy; Peacock, John A.; Padilla, Nelson

    2016-11-01

    We have derived estimators for the linear growth rate of density fluctuations using the cross-correlation function (CCF) of voids and haloes in redshift space. In linear theory, this CCF contains only monopole and quadrupole terms. At scales greater than the void radius, linear theory is a good match to voids traced out by haloes; small-scale random velocities are unimportant at these radii, only tending to cause small and often negligible elongation of the CCF near its origin. By extracting the monopole and quadrupole from the CCF, we measure the linear growth rate without prior knowledge of the void profile or velocity dispersion. We recover the linear growth parameter β to 9 per cent precision from an effective volume of 3 (h^-1 Gpc)^3 using voids with radius > 25 h^-1 Mpc. Smaller voids are predominantly sub-voids, which may be more sensitive to the random velocity dispersion; they introduce noise and do not help to improve measurements. Adding velocity dispersion as a free parameter allows us to use information at radii as small as half of the void radius. The precision on β is reduced to 5 per cent. Voids show diverse shapes in redshift space, and can appear either elongated or flattened along the line of sight. This can be explained by the competing amplitudes of the local density contrast, plus the radial velocity profile and its gradient. The distortion pattern is therefore determined solely by the void profile and is different for void-in-cloud and void-in-void. This diversity of redshift-space void morphology complicates measurements of the Alcock-Paczynski effect using voids.

  5. Void growth to coalescence in a non-local material

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof

    The size-effect in metals containing distributed spherical voids is analyzed numerically using a finite strain generalization of a length scale dependent plasticity theory. Results are obtained for stress-triaxialities relevant in front of a crack tip in an elastic-plastic metal. The influence...

  6. Void growth to coalescence in a non-local material

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof

    2008-01-01

    The size-effect in metals containing distributed spherical voids is analyzed numerically using a finite strain generalization of a length scale dependent plasticity theory. Results are obtained for stress-triaxialities relevant in front of a crack tip in an elastic-plastic metal. The influence...

  7. An H I survey of the Bootes void .2. The analysis

    NARCIS (Netherlands)

    Szomoru, A; vanGorkom, JH; Gregg, MD; Strauss, MA

    1996-01-01

    We discuss the results of a VLA(2) [Napier et al., Proc. IEEE 71, 1295 (1983)] H I survey of the Bootes void and compare the distribution and H I properties of the void galaxies to those of galaxies found in a survey of regions of mean cosmic density. The Bootes survey covers 1100 Mpc(3), or similar

  8. Electromigration of intergranular voids in metal films for microelectronic interconnects

    CERN Document Server

    Averbuch, A; Ravve, I

    2003-01-01

    Voids and cracks often occur in the interconnect lines of microelectronic devices. They increase the resistance of the circuits and may even lead to a fatal failure. Voids may occur inside a single grain, but often they appear on the boundary between two grains. In this work, we model and analyze numerically the migration and evolution of an intergranular void subjected to surface diffusion forces and external voltage applied to the interconnect. The grain-void interface is considered one-dimensional, and the physical formulation of the electromigration and diffusion model results in two coupled fourth-order one-dimensional time-dependent PDEs. The boundary conditions are specified at the triple points, which are common to both neighboring grains and the void. The solution of these equations uses a finite difference scheme in space and a Runge-Kutta integration scheme in time, and is also coupled to the solution of a static Laplace equation describing the voltage distribution throughout the grain. Since the v...

  9. The life and death of cosmic voids

    CERN Document Server

    Sutter, P M; Falck, Bridget; Onions, Julian; Hamaus, Nico; Knebe, Alexander; Srisawat, Chaichalit; Schneider, Aurel

    2014-01-01

    We investigate the formation, growth, merger history, movement, and destruction of cosmic voids detected via the watershed transform in a cosmological N-body dark matter {\\Lambda}CDM simulation. By adapting a method used to construct halo merger trees, we are able to trace individual voids back to their initial appearance and record the merging and evolution of their progenitors at high redshift. For the scales of void sizes captured in our simulation, we find that the void formation rate peaks at scale factor 0.3, which coincides with a growth in the void hierarchy and the emergence of dark energy. Voids of all sizes appear at all scale factors, though the median initial void size decreases with time. When voids become detectable they have nearly their present-day volumes. Almost all voids have relatively stable growth rates and suffer only infrequent minor mergers. Dissolution of a void via merging is very rare. Instead, most voids maintain their distinct identity as annexed subvoids of a larger parent. The...

  10. Dwarf Galaxies in Voids: Dark Matter Halos and Gas Cooling

    CERN Document Server

    Hoeft, Matthias

    2010-01-01

    Galaxy surveys have shown that luminous galaxies are mainly distributed in large filaments and galaxy clusters, while the remaining large volumes are virtually devoid of luminous galaxies. This is in concordance with the formation of large-scale structure in the Universe as derived from cosmological simulations. However, the numerical results indicate that cosmological voids are abundantly populated with dark matter haloes which may in principle host dwarf galaxies. Observational efforts have, in contrast, revealed that voids are apparently devoid of dwarf galaxies. We investigate the formation of dwarf galaxies in voids by hydrodynamical cosmological simulations. Due to the cosmic ultraviolet background radiation, low-mass haloes generally show a reduced baryon fraction. We determine the characteristic mass below which dwarf galaxies are baryon deficient. We show that the circular velocity below which the accretion of baryons is suppressed is approximately 40 km/s. The suppressed baryon accretion is caused by the...

  11. Size-Effects in Void Growth

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof

    2005-01-01

    The size-effect on ductile void growth in metals is investigated. The analysis is based on unit cell models both of arrays of cylindrical voids under plane strain deformation, as well as arrays of spherical voids using an axisymmetric model. A recent finite strain generalization of two higher order strain gradient plasticity models is implemented in a finite element program, which is used to study void growth numerically. The results based on the two models are compared. It is shown how gradient effects suppress void growth on the micron scale when compared to predictions based on conventional models. This increased resistance to void growth, due to gradient hardening, is accompanied by an increase in the overall strength for the material. Furthermore, for increasing initial void volume fraction, it is shown that the effect of gradients becomes more important to the overall response but less...

  12. PROBABILITY AND STATISTICS.

    Science.gov (United States)

    (*STATISTICAL ANALYSIS, REPORTS), (*PROBABILITY, REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION

  13. Constitutive modeling of rate dependence and microinertia effects in porous-plastic materials with multi-sized voids (MSVs)

    Science.gov (United States)

    Liu, J. X.; El Sayed, T.

    2013-01-01

    Micro-voids of varying sizes exist in most metals and alloys. Both experiments and numerical studies have demonstrated the critical influence of initial void sizes on void growth. The classical Gurson-Tvergaard-Needleman model summarizes the influence of voids with a single parameter, namely the void-volume fraction, excluding any possible effects of the void-size distribution. We extend our newly proposed model including the multi-sized void (MSV) effect and the void-interaction effect for the capability of working for both moderate and high loading rate cases, where either rate dependence or microinertia becomes considerable or even dominant. Parametric studies show that the MSV-related competitive mechanism among void growth leads to the dependence of the void growth rate on void size, which directly influences the void's contribution to the total energy composition. We finally show that the stress-strain constitutive behavior is also affected by this MSV-related competitive mechanism. The stabilizing effect due to rate sensitivity and microinertia is emphasized.

  14. Determining the Probability Distribution of Hillslope Peak Discharge Using an Analytical Solution of Kinematic Wave Time of Concentration

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2016-04-01

    extended to the case of pervious hillslopes, accounting for infiltration. In particular, an analytical solution for the time of concentration for overland flow on a rectangular plane surface was derived using the kinematic wave equation under the Green-Ampt infiltration (Baiamonte and Singh, 2015). The objective of this work is to apply the latter solution to determine the probability distribution of hillslope peak discharge by combining it with the familiar rainfall duration-intensity-frequency approach. References Agnese, C., Baiamonte, G., and Corrao, C. (2001). "A simple model of hillslope response for overland flow generation". Hydrol. Process., 15, 3225-3238, ISSN: 0885-6087, doi: 10.1002/hyp.182. Baiamonte, G., and Agnese, C. (2010). "An analytical solution of kinematic wave equations for overland flow under Green-Ampt infiltration". J. Agr. Eng., vol. 1, p. 41-49, ISSN: 1974-7071. Baiamonte, G., and Singh, V.P., (2015). "Analytical solution of kinematic wave time of concentration for overland flow under Green-Ampt Infiltration." J Hydrol E - ASCE, DOI: 10.1061/(ASCE)HE.1943-5584.0001266. Robinson, J.S., and Sivapalan, M. (1996). "Instantaneous response functions of overland flow and subsurface stormflow for catchment models". Hydrol. Process., 10, 845-862. Singh, V.P. (1976). "Derivation of time of concentration". J. of Hydrol., 30, 147-165. Singh, V.P., (1996). Kinematic-Wave Modeling in Water Resources: Surface-Water Hydrology. John Wiley & Sons, Inc., New York, 1399 pp.
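The derived-distribution idea summarized above (combining a time-of-concentration/kinematic-wave discharge relation with a rainfall duration-intensity-frequency law) can be sketched with a Monte Carlo propagation; the power-law IDF form and every parameter value below are illustrative assumptions, not the authors' calibration:

```python
# Hedged sketch: propagate an assumed IDF law through a kinematic-wave
# equilibrium peak-discharge relation to get a distribution of hillslope
# peak discharge. All parameters are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)

def idf_intensity(return_period, duration_hr, a=30.0, n=0.3, m=0.7):
    """Assumed power-law IDF relation: i = a * T^n / d^m  (mm/h)."""
    return a * return_period**n / duration_hr**m

def peak_discharge(intensity_mm_h, length_m=100.0, width_m=1.0, runoff_coeff=0.6):
    """Equilibrium kinematic-wave peak discharge Q = c * i * A  (m^3/s)."""
    i = intensity_mm_h / 1000.0 / 3600.0   # mm/h -> m/s
    return runoff_coeff * i * length_m * width_m

# Sample return periods via exceedance probability u ~ U(0,1), T = 1/u,
# then build the empirical distribution of peak discharge.
u = rng.uniform(1e-3, 1.0, size=10000)
T = 1.0 / u
Q = peak_discharge(idf_intensity(T, duration_hr=0.5))
print(f"median Qp = {np.median(Q):.2e} m^3/s, 99th pct = {np.percentile(Q, 99):.2e}")
```

In the paper the duration entering the IDF law is tied to the analytically derived time of concentration under Green-Ampt infiltration; here a fixed duration stands in for that step.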

  15. Linear redshift space distortions for cosmic voids based on galaxies in redshift space

    CERN Document Server

    Chuang, Chia-Hsun; Liang, Yu; Font-Ribera, Andreu; Zhao, Cheng; McDonald, Patrick; Tao, Charling

    2016-01-01

    Cosmic voids found in galaxy surveys are defined based on the galaxy distribution in redshift space. We show that the large scale distribution of voids in redshift space traces the fluctuations in the dark matter density field δ(k) (in Fourier space, with μ being the line-of-sight projection of the k-vector): δ_v^s(k) = (1 + β_v μ^2) b_v^s δ(k), with a beta factor that will in general be different from the one describing the distribution of galaxies. Only if voids could be assumed to be quasi-local transformations of the linear (Gaussian) galaxy redshift-space field does one get equal beta factors β_v = β_g = f/b_g, with f being the growth rate and b_g, b_v^s being the galaxy and void bias on large scales defined in redshift space. Indeed, in our mock void catalogs we measure void beta factors in good agreement with the galaxy one. Further work needs to be done to confirm the level of accuracy of the beta factor equality between voids and galaxies, but in general the void beta factor...
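As a minimal numerical check of the linear relation δ_v^s(k) = (1 + β_v μ^2) b_v^s δ(k) quoted above, the sketch below evaluates the anisotropic factor along and across the line of sight; the growth rate, galaxy bias, and void bias values are assumed for illustration, not taken from the paper:

```python
# Sketch (assumed values): evaluate the linear void RSD amplitude
# delta_v^s(k) = (1 + beta_v * mu^2) * b_v^s * delta(k).
def void_rsd_factor(mu, beta_v, b_v):
    """Anisotropic amplitude multiplying the matter field delta(k)."""
    return (1.0 + beta_v * mu**2) * b_v

f, b_g = 0.78, 2.0   # illustrative growth rate and galaxy bias (assumed)
beta_v = f / b_g     # beta_v = beta_g = f/b_g under the quasi-local assumption
b_v = -0.2           # voids are typically anti-biased; value assumed

along = void_rsd_factor(mu=1.0, beta_v=beta_v, b_v=b_v)   # along the line of sight
across = void_rsd_factor(mu=0.0, beta_v=beta_v, b_v=b_v)  # transverse
print(along / across)  # ratio is exactly 1 + beta_v, independent of b_v
```

The along-to-transverse ratio isolating 1 + β_v is why the anisotropy of the void distribution constrains the growth rate independently of the (unknown) void bias amplitude.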

  16. Systemic atherosclerosis and voiding symptom.

    Science.gov (United States)

    Yeniel, A Ozgur; Ergenoglu, A Mete; Meseri, Reci; Ari, Anıl; Sancar, Ceren; Itil, Ismail Mete

    2017-03-01

    To evaluate the effect of atherosclerosis on the storage and voiding symptoms of the bladder in women with overactive bladder (OAB), we retrospectively reviewed the charts of women with OAB who were evaluated between 2013 and 2015 in our urogynecology unit. Charts were assessed for history, examination findings, urinary diary, quality of life (QOL) questionnaires, urodynamic studies (UDSs), and four main risk factors for atherosclerosis: hypertension, diabetes mellitus, smoking, and hyperlipidemia. In a previous study, these were defined as vascular risk factors. Cases were excluded for insufficient data, diabetes mellitus with dysregulated blood glucose, or prolapse greater than 1 cm, to avoid confounding by bladder outlet obstruction. We included 167 eligible cases in this study. We evaluated storage and voiding symptoms such as frequency, nocturia, residual urine volume, and voiding difficulties, and UDS findings such as maximum bladder capacity, first desire, strong desire, detrusor overactivity, and bladder contractility index. The vascular risk score was categorized as "no risk" if the woman did not have any of the four risk factors and "at risk" if she had any of them. Independent-sample t-tests and chi-square tests were performed for the analyses. Among the participants (n=167), 71.9% had at least one vascular risk factor. Those at risk experienced significantly more wet-type OAB (p=0.003) and nocturia (p=0.023). Moreover, mean age (p=0.008) and mean gravidity (p=0.020) were significantly higher in the at-risk group, whereas mean total nocturia QOL questionnaire scores (p=0.029) were significantly lower. Our findings suggest that aging and atherosclerosis may be associated with severe OAB and poorer QOL. Nocturia and the related decline in quality of life may be explained by impaired bladder neck perfusion. Future trials need to assess vascular and molecular changes in women with OAB. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Nocturia: The circadian voiding disorder

    Directory of Open Access Journals (Sweden)

    Jin Wook Kim

    2016-05-01

    Full Text Available Nocturia is a prevalent condition of waking to void during the night. The concept of nocturia has evolved from being a symptomatic aspect of disease associated with the prostate or bladder to a form of lower urinary tract disorder. However, recent advances in circadian biology and sleep science suggest that it might be important to consider nocturia as a form of circadian dysfunction. In the current review, nocturia is reexamined with an introduction to sleep disorders and recent findings in circadian biology in an attempt to highlight the importance of rediscovering nocturia as a problem of chronobiology.

  18. Large-scale clustering of cosmic voids

    Science.gov (United States)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias b_c is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for b_c is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 h^-1 Mpc, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background split results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  19. Numerical renormalization group study of probability distributions for local fluctuations in the Anderson-Holstein and Holstein-Hubbard models.

    Science.gov (United States)

    Hewson, Alex C; Bauer, Johannes

    2010-03-24

    We show that information on the probability density of local fluctuations can be obtained from a numerical renormalization group calculation of a reduced density matrix. We apply this approach to the Anderson-Holstein impurity model to calculate the ground state probability density ρ(x) for the displacement x of the local oscillator. From this density we can deduce an effective local potential for the oscillator and compare its form with that obtained from a semiclassical approximation as a function of the coupling strength. The method is extended to the infinite dimensional Holstein-Hubbard model using dynamical mean field theory. We use this approach to compare the probability densities for the displacement of the local oscillator in the normal, antiferromagnetic and charge ordered phases.

  20. The Beckoning Void in Moravagine

    Directory of Open Access Journals (Sweden)

    Stephen K. Bellstrom

    1979-01-01

    Full Text Available The chapter «Mascha,» lying at the heart of Cendrars's Moravagine, contains within it a variety of images and themes suggestive of emptiness. The philosophy of nihilism is exemplified in the motivations and actions of the group of terrorists seeking to plunge Russia into revolutionary chaos. Mascha's anatomical orifice, symbolizing both a biological and a psychological fault, and the abortion of her child, paralleled by the abortion of the revolutionary ideal among her comrades, are also emblematic of the chapter's central void. Moreover, Cendrars builds the theme of hollowness by describing Moravagine with images of omission, such as «empan» (space or span), «absent,» and «étranger» (stranger). Moravagine's presence, in fact, characteristically causes an undercurrent of doubt and uncertainty about the nature of reality to become overt. It is this paradoxical presence which seems to cause the narrator (and consequently the narrative) to «lose» a day at the most critical moment of the story. By plunging the reader into the narrator's lapsus memoriae, Cendrars aims at creating a feeling of the kind of mental and cosmic disorder for which Moravagine is the strategist and apologist. This technique of insufficiency is an active technique, even though it relies on the passive idea of removing explanation and connecting details. The reader is invited, or lured, into the central void of the novel and, faced with unresolvable dilemmas, becomes involved in the same disorder that was initially produced.

  1. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.
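Several of the topics listed (binomial distribution, mathematical expectation, law of large numbers) are easy to illustrate concretely; a minimal sketch, not taken from the book:

```python
import random
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Mathematical expectation: E[X] = n*p for X ~ Binomial(n, p).
expectation = sum(k * binomial_pmf(k, 10, 0.5) for k in range(11))
print(round(expectation, 6))  # 5.0

# Law of large numbers: the running mean of die rolls approaches E[X] = 3.5.
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))  # close to 3.5
```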

  2. The Effect of Random Voids in the Modified Gurson Model

    Science.gov (United States)

    Fei, Huiyang; Yazzie, Kyle; Chawla, Nikhilesh; Jiang, Hanqing

    2012-02-01

    The porous plasticity model (usually referred to as the Gurson-Tvergaard-Needleman model or modified Gurson model) has been widely used in the study of microvoid-induced ductile fracture. In this paper, we studied the effects of random voids on the porous plasticity model. Finite-element simulations were conducted to study a copper/tin/copper joint bar under uniaxial tension using the commercial finite-element package ABAQUS. A randomly distributed initial void volume fraction with different types of distribution was introduced, and the effects of this randomness on the crack path and macroscopic stress-strain behavior were studied. It was found that consideration of the random voids is able to capture more detailed and localized deformation features, such as different crack paths and different ultimate tensile strengths, and meanwhile does not change the macroscopic stress-strain behavior. It seems that the random voids are able to qualitatively explain the scattered observations in experiments while keeping the macroscopic measurements consistent.

  3. Excursion Sets and Non-Gaussian Void Statistics

    CERN Document Server

    D'Amico, Guido; Noreña, Jorge; Paranjape, Aseem

    2010-01-01

Primordial non-Gaussianity (NG) affects the large scale structure (LSS) of the universe by leaving an imprint on the distribution of matter at late times. Much attention has been focused on using the distribution of collapsed objects (i.e. dark matter halos and the galaxies and galaxy clusters that reside in them) to probe primordial NG. An equally interesting and complementary probe, however, is the abundance of extended underdense regions or voids in the LSS. The calculation of the abundance of voids using the excursion set formalism in the presence of primordial NG is subject to the same technical issues as the one for halos, which were discussed e.g. in arXiv:1005.1203. However, unlike the excursion set problem for halos which involved random walks in the presence of one barrier $\delta_c$, the void excursion set problem involves two barriers $\delta_v$ and $\delta_c$. This leads to a new complication introduced by what is called the "void-in-cloud" effect discussed in the literature, which is unique to the...

  4. Universal density profile for cosmic voids.

    Science.gov (United States)

    Hamaus, Nico; Sutter, P M; Wandelt, Benjamin D

    2014-06-27

    We present a simple empirical function for the average density profile of cosmic voids, identified via the watershed technique in ΛCDM N-body simulations. This function is universal across void size and redshift, accurately describing a large radial range of scales around void centers with only two free parameters. In analogy to halo density profiles, these parameters describe the scale radius and the central density of voids. While we initially start with a more general four-parameter model, we find two of its parameters to be redundant, as they follow linear trends with the scale radius in two distinct regimes of the void sample, separated by its compensation scale. Assuming linear theory, we derive an analytic formula for the velocity profile of voids and find an excellent agreement with the numerical data as well. In our companion paper [Sutter et al., arXiv:1309.5087 [Mon. Not. R. Astron. Soc. (to be published)
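The empirical function described here is the HSW profile; the sketch below implements its published functional form, with all numerical parameter values chosen purely for illustration (not the paper's fitted values):

```python
def hsw_density_contrast(r, delta_c, r_s, R_v, alpha, beta):
    """HSW void profile: delta(r) = rho(r)/rho_bar - 1
    = delta_c * (1 - (r/r_s)^alpha) / (1 + (r/R_v)^beta).
    delta_c: central density contrast (negative for underdense centers),
    r_s: scale radius where the contrast crosses zero, R_v: void radius."""
    return delta_c * (1 - (r / r_s) ** alpha) / (1 + (r / R_v) ** beta)

# Illustrative parameters, in units of the void radius R_v = 1:
params = dict(delta_c=-0.8, r_s=0.8, R_v=1.0, alpha=2.0, beta=9.0)
print(hsw_density_contrast(0.0, **params))  # -0.8: central contrast at r = 0
print(hsw_density_contrast(0.8, **params))  # crosses zero at r = r_s
```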

  5. Air void structure and frost resistance

    DEFF Research Database (Denmark)

    Hasholt, Marianne Tange

    2014-01-01

This article compiles results from 4 independent laboratory studies. In each study, the same type of concrete is tested at least 10 times, the air void structure being the only variable. For each concrete mix both air void analysis of the hardened concrete and a salt frost scaling test...... will take place in the air void, being fed from the capillary, but without pressure build-up in the capillary. If the capillary is not connected to an air void, ice formation will take place in the capillary pore, where it can generate substantial pressure. Thus, frost resistance depends...... is proportional to the product of total air content and specific surface. In all 4 cases, the conclusion is concurrent that the parameter of total surface area of air voids performs equally well or better than the spacing factor when linking air void characteristics to frost resistance (salt frost scaling)......

  6. Void Profile from Planck Lensing Potential Map

    Science.gov (United States)

    Chantavat, Teeraparb; Sawangwit, Utane; Wandelt, Benjamin D.

    2017-02-01

We use the lensing potential map from Planck CMB lensing reconstruction analysis and the “Public Cosmic Void Catalog” to measure the stacked void lensing potential. We have made an attempt to fit the HSW void profile parameters from the stacked lensing potential. In this profile, four parameters are needed to describe the shape of voids with different characteristic radii R_V. However, we have found that after reducing the background noise by subtracting the average background, there is a residual lensing power left in the data. The inclusion of the environment shifting parameter, γ_V, is necessary to get a better fit to the data with the residual lensing power. We divide the voids into two redshift bins: cmass1 (0.45 … Digital Sky Survey voids reside in an underdense region.

  7. The probability distribution of side-chain conformations in [Leu] and [Met]enkephalin determines the potency and selectivity to mu and delta opiate receptors

    DEFF Research Database (Denmark)

    Nielsen, Bjørn Gilbert; Jensen, Morten Østergaard; Bohr, Henrik

    2003-01-01

The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity......

  8. Estimation of most probable power distribution in BWRs by least squares method using in-core measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ezure, Hideo

    1988-09-01

Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. The least squares method is used to combine the relationship between the power distribution and the measured values with the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved that the method provides reliable results.
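The idea of blending a model prediction with sparse in-core measurements via least squares can be sketched generically; the node counts, detector placement, and noise levels below are invented for illustration and are not from the FLARE or JPDR-1 analysis:

```python
import numpy as np

# Hedged sketch: fuse a model-predicted power distribution x_model with sparse
# in-core detector readings y = H @ x + noise by weighted least squares.
rng = np.random.default_rng(0)
n_nodes, n_det = 6, 3
x_true = np.array([1.0, 1.2, 1.4, 1.3, 1.1, 0.9])   # "true" nodal powers
x_model = x_true + rng.normal(0.0, 0.05, n_nodes)   # imperfect model prediction
H = np.zeros((n_det, n_nodes))
H[0, 1] = H[1, 3] = H[2, 5] = 1.0                   # detectors read 3 of 6 nodes
y = H @ x_true + rng.normal(0.0, 0.01, n_det)       # noisy in-core measurements

# Minimize ||x - x_model||^2 / s_m^2 + ||H @ x - y||^2 / s_d^2 over x:
s_m2, s_d2 = 0.05**2, 0.01**2
A = np.eye(n_nodes) / s_m2 + H.T @ H / s_d2
b = x_model / s_m2 + H.T @ y / s_d2
x_hat = np.linalg.solve(A, b)                       # most probable distribution
print(x_hat)
```

The weights are the inverse variances of the model and the detectors, so nodes with a direct measurement are pulled strongly toward the reading while unmeasured nodes stay close to the model.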

  9. Probability distribution function values in mobile phones;Valores de funciones de distribución de probabilidad en el teléfono móvil

    Directory of Open Access Journals (Sweden)

Luis Vicente Chamorro Marcillo

    2013-06-01

Full Text Available Engineering, in both its academic and applied forms, like any formal research work, requires the use of statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available in tables. Management of those tables has physical problems (wasteful transport and consultation) and operational ones (incomplete lists and limited accuracy). The study, “Probability distribution function values in mobile phones”, permitted determining – through a needs survey applied to students involved in statistics studies at Universidad de Nariño – that the best known and most used values correspond to the Chi-square, binomial, Student’s t, and standard normal distributions. Similarly, it showed users’ interest in having the values in question within an alternative medium to correct, at least in part, the problems presented by “the famous tables”. To try to contribute to the solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.
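The table values in question are just evaluations of CDFs; a self-contained sketch for two of the four distributions the survey identified (standard normal and binomial), using only the standard library:

```python
from math import erf, sqrt, comb

def norm_cdf(x):
    """Standard normal CDF, Phi(x), via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(round(norm_cdf(1.96), 4))         # 0.975  (the classic table entry)
print(round(binom_cdf(3, 10, 0.2), 4))  # 0.8791
```

Chi-square and Student's t require the incomplete gamma and beta functions, which is where a library (or an app like the one described) earns its keep.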

  10. On the use of area-averaged void fraction and local bubble chord length entropies as two-phase flow regime indicators

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, Leonor; Julia, J.E. [Universitat Jaume I, Departamento de Ingenieria Mecanica y Construccion Campus de Riu Sec, Castellon (Spain); Paranjape, Sidharth; Hibiki, Takashi; Ishii, Mamoru [Purdue University, Nuclear Engineering Department, West Lafayette, IN (United States)

    2010-11-15

In this work, the use of the area-averaged void fraction and bubble chord length entropies is introduced as flow regime indicators in two-phase flow systems. The entropy provides quantitative information about the disorder in the area-averaged void fraction or bubble chord length distributions. The CPDF (cumulative probability distribution function) of void fractions and bubble chord lengths obtained by means of impedance meters and conductivity probes are used to calculate both entropies. Entropy values for 242 flow conditions in upward two-phase flows in 25.4 and 50.8-mm pipes have been calculated. The measured conditions cover ranges from 0.13 to 5 m/s in the superficial liquid velocity j_f and from 0.01 to 25 m/s in the superficial gas velocity j_g. The physical meaning of both entropies has been interpreted using the visual flow regime map information. The capability of the area-averaged void fraction and bubble chord length entropies as flow regime indicators has been checked against other statistical parameters and also with different input signal durations. The area-averaged void fraction and bubble chord length entropies provide better or at least similar results to those obtained with other indicators that include more than one parameter. The entropy is capable of reducing the relevant information of the flow regimes into only one significant and useful parameter. In addition, the entropy computation time is shorter than that of the majority of the other indicators. The use of one parameter as input also yields faster predictions. (orig.)
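The entropy indicator itself is straightforward to compute from a measured distribution; a minimal sketch with invented void-fraction signals (not the paper's data):

```python
import math

def shannon_entropy(samples, n_bins=20, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of the empirical distribution of samples,
    histogrammed over [lo, hi]. A narrow (ordered) distribution gives low
    entropy; a broad (disordered) one gives high entropy."""
    counts = [0] * n_bins
    for x in samples:
        i = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[i] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

# Illustrative area-averaged void-fraction signals:
bubbly = [0.10, 0.12, 0.11, 0.10, 0.13, 0.12, 0.11, 0.10]  # narrow -> low entropy
churn  = [0.05, 0.60, 0.30, 0.90, 0.15, 0.75, 0.45, 0.25]  # broad  -> high entropy
print(shannon_entropy(bubbly) < shannon_entropy(churn))  # True
```

A classifier then only needs thresholds on this one scalar per signal, which is the "one significant and useful parameter" point made above.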

  11. A Finding Method of Business Risk Factors Using Characteristics of Probability Distributions of Effect Ratios on Qualitative and Quantitative Hybrid Simulation

    Science.gov (United States)

    Samejima, Masaki; Negoro, Keisuke; Mitsukuni, Koshichiro; Akiyoshi, Masanori

We propose a method for finding business risk factors using qualitative and quantitative hybrid simulation in time series. Effect ratios of qualitative arcs in the hybrid simulation vary the output values of the simulation, so we define effect ratios causing risk as business risk factors. Searching for business risk factors over the entire range of effect ratios is time-consuming. Since the probability distributions of effect ratios in the present time step are similar to those in the previous time step, the distributions in the present time step can be estimated; our method then searches for business risk factors only within the estimated ranges. Experimental results show that the precision rate and the recall rate are 86%, and that search time is decreased by at least 20%.

  12. The darkness that shaped the void: dark energy and cosmic voids

    NARCIS (Netherlands)

    Bos, E. G. Patrick; van de Weygaert, Rien; Dolag, Klaus; Pettorino, Valeria

    2012-01-01

    We assess the sensitivity of void shapes to the nature of dark energy that was pointed out in recent studies and also investigate whether or not void shapes are useable as an observational probe in galaxy redshift surveys. Our focus is on the evolution of the mean void ellipticity and its underlying

  14. The difference between the joint probability distributions of apparent wave heights and periods and individual wave heights and periods

    Institute of Scientific and Technical Information of China (English)

ZHENG Guizhen; JIANG Xiulan; HAN Shuzong

    2004-01-01

The joint distribution of wave heights and periods of individual waves is usually approximated by the joint distribution of apparent wave heights and periods. However, there is a difference between them. This difference is addressed, and the theoretical joint distributions of apparent wave heights and periods due to Longuet-Higgins and Sun are modified to give more reasonable representations of the joint distribution of wave heights and periods of individual waves. The modification overcomes an inherent drawback of these joint PDFs, namely that the mean wave period is infinite. A comparison is made between the modified formulae and the field data of Goda, which shows that the new formulae agree with the measurements better than their original counterparts.

  15. Protein-Protein Interaction Site Predictions with Three-Dimensional Probability Distributions of Interacting Atoms on Protein Surfaces

    Science.gov (United States)

    Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. 
The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with
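The benchmark metrics quoted above (Matthews correlation coefficient, sensitivity, specificity) follow directly from confusion-matrix counts; a minimal sketch with illustrative counts, not the paper's data:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def sensitivity_specificity(tp, tn, fp, fn):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative residue-level counts for an interface-site predictor:
tp, tn, fp, fn = 70, 180, 40, 30
print(round(matthews_corrcoef(tp, tn, fp, fn), 3))
print(sensitivity_specificity(tp, tn, fp, fn))
```

MCC is preferred over plain accuracy for this task because surface residues are heavily imbalanced toward non-interface sites.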

  16. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    Science.gov (United States)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  17. Diagnostics of Rovibrational Distribution of H2 in Low Temperature Plasmas by Fulcher-α band Spectroscopy - on the Reaction Rates and Transition Probabilities

    Institute of Scientific and Technical Information of China (English)

    Xiao Bingjia; Shinichiro Kado; Shin Kajita; Daisuge Yamasaki; Satoru Tanaka

    2005-01-01

A novel fitting procedure is proposed for a better determination of the H2 rovibrational distribution from Fulcher-α band spectroscopy. We have recalculated the transition probabilities, and the results show that they deviate from the Franck-Condon approximation, especially for the non-diagonal transitions. We also calculated the complete sets of vibrationally resolved cross sections for the electron impact d3Πu-X3Σg transition based on the semi-classical Gryzinski theory. An example of an experimental study confirms that the current approach provides a tool for better diagnostics of the H2 rovibrational distribution in the electronic ground state.

  18. Kinetic Monte Carlo simulations of void lattice formation during irradiation

    Science.gov (United States)

    Heinisch, H. L.; Singh, B. N.

    2003-11-01

    Over the last decade, molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades and that they migrate one-dimensionally along close-packed directions with extremely low activation energies. Occasionally, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. The recently developed production bias model (PBM) of microstructure evolution under irradiation has been structured specifically to take into account the unique properties of the vacancy and interstitial clusters produced in the cascades. Atomic-scale kinetic Monte Carlo (KMC) simulations have played a useful role in understanding the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes. This has made it possible to incorporate the migration properties of crowdion clusters and changes in reaction kinetics into the PBM. In the present paper we utilize similar KMC simulations to investigate the significant role that crowdion clusters can play in the formation and stability of void lattices. The creation of stable void lattices, starting from a random distribution of voids, is simulated by a KMC model in which vacancies migrate three-dimensionally and self-interstitial atom (SIA) clusters migrate one-dimensionally, interrupted by directional changes. The necessity of both one-dimensional migration and Burgers vectors changes of SIA clusters for the production of stable void lattices is demonstrated, and the effects of the frequency of Burgers vector changes are described.
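The key kinetic ingredient described here, one-dimensional glide interrupted by occasional Burgers vector changes, can be caricatured as a random walk; the sketch below uses invented probabilities and a square lattice as a stand-in for the close-packed directions:

```python
import random

# Stand-ins for close-packed glide directions of an SIA (crowdion) cluster:
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walk_sia_cluster(steps, p_change):
    """Migrate one-dimensionally; with probability p_change per step the
    cluster picks a new glide direction (a Burgers vector change)."""
    x = y = 0
    d = random.choice(DIRECTIONS)
    for _ in range(steps):
        if random.random() < p_change:
            d = random.choice(DIRECTIONS)
        x += d[0]
        y += d[1]
    return x, y

random.seed(42)
# Pure 1D motion (p_change = 0) stays on a single line; occasional Burgers
# vector changes produce the mixed 1D/3D kinetics the PBM requires.
print(walk_sia_cluster(1000, 0.0))
print(walk_sia_cluster(1000, 0.05))
```

In a full KMC model each event (vacancy hop, cluster glide step, direction change) would be chosen with a rate-weighted selection; this sketch isolates only the migration geometry.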

  19. An HI survey of the Bootes void; 2, the analysis

    CERN Document Server

    Szomoru, A; Gregg, M D; Strauss, M A

    1995-01-01

We discuss the results of a VLA HI survey of the Bootes void and compare the distribution and HI properties of the void galaxies to those of galaxies found in a survey of regions of mean cosmic density. The Bootes survey covers 1100 Mpc^{3}, or \sim 1\% of the volume of the void, and consists of 24 cubes of typically 2 Mpc * 2 Mpc * 1280 km/s, centered on optically known galaxies. Sixteen targets were detected in HI; 18 previously uncataloged objects were discovered directly in HI. The control sample consists of 12 cubes centered on IRAS selected galaxies with FIR luminosities similar to those of the Bootes targets and located in regions of 1 to 2 times the cosmic mean density. In addition to the 12 targets 29 companions were detected in HI. We find that the number of galaxies within 1 Mpc of the targets is the same to within a factor of two for void and control samples, and thus that the small scale clustering of galaxies is the same in regions that differ by a factor of \sim 6 in density on larger scales. A ...

  20. Answers from the Void: VIDE and its Applications

    Science.gov (United States)

    Sutter, P. M.; Hamaus, N.; Pisani, A.; Lavaux, G.; Wandelt, B. D.

    2016-10-01

We discuss various applications of vide, the Void IDentification and Examination toolkit, an open-source Python/C++ code for finding cosmic voids in galaxy redshift surveys and $N$-body simulations. Based on a substantially enhanced version of ZOBOV, vide not only finds voids, but also summarizes their properties, extracts statistical information, and provides a Python-based platform for more detailed analysis, such as manipulating void catalogs and particle members, filtering, plotting, computing clustering statistics, stacking, comparing catalogs, and fitting density profiles. vide also provides significant additional functionality for pre-processing inputs: for example, vide can work with volume- or magnitude-limited galaxy samples with arbitrary survey geometries, or dark matter particles or halo catalogs in a variety of common formats. It can also randomly subsample inputs and includes a Halo Occupation Distribution model for constructing mock galaxy populations. vide has been used for a wide variety of applications, from discovering a universal density profile to estimating primordial magnetic fields, and is publicly available at http://bitbucket.org/cosmicvoids/vide_public and http://www.cosmicvoids.net.

  1. Simulation of dust voids in complex plasmas

    NARCIS (Netherlands)

Goedheer, W. J.; Land, V.

    2008-01-01

    In dusty radio-frequency (RF) discharges under micro-gravity conditions often a void is observed, a dust free region in the discharge center. This void is generated by the drag of the positive ions pulled out of the discharge by the electric field. We have developed a hydrodynamic model for dusty RF

  2. Assembly of filamentary void galaxy configurations

    NARCIS (Netherlands)

Rieder, Steven; van de Weygaert, Rien; Cautun, Marius; Beygu, Burcu; Portegies Zwart, Simon

    2013-01-01

    We study the formation and evolution of filamentary configurations of dark matter haloes in voids. Our investigation uses the high-resolution Lambda cold dark matter simulation CosmoGrid to look for void systems resembling the VGS_31 elongated system of three interacting galaxies that was recently

  3. The Hierarchical Structure and Dynamics of Voids

    CERN Document Server

    Aragon-Calvo, M A

    2012-01-01

Contrary to the common view, voids have very complex internal structure and dynamics. Here we show how the hierarchy of structures in the density field inside voids is reflected by a similar hierarchy of structures in the velocity field. Voids defined by dense filaments and clusters can be described as simple expanding domains with coherent flows everywhere except at their boundaries. At scales smaller than the void radius the velocity field breaks into expanding sub-domains corresponding to sub-voids. These sub-domains break into even smaller sub-sub-domains at smaller scales, resulting in a nesting hierarchy of locally expanding domains. The ratio between the magnitude of the velocity field responsible for the expansion of the void and the velocity field defining the sub-voids is approximately one order of magnitude. The small-scale components of the velocity field play a minor role in the shaping of the voids, but they define the local dynamics, directly affecting the processes of galaxy formation and evoluti...

  4. Atomistic insights into dislocation-based mechanisms of void growth and coalescence

    Science.gov (United States)

    Mi, Changwen; Buttry, Daniel A.; Sharma, Pradeep; Kouris, Demitris A.

    2011-09-01

One of the low-temperature failure mechanisms in ductile metallic alloys is the growth of voids and their coalescence. In the present work we attempt to obtain atomistic insights into the mechanisms underpinning cavitation in a representative metal, namely aluminum. Often the pre-existing voids in metallic alloys such as Al have complex shapes (e.g. corrosion pits) and the deformation/damage mechanisms exhibit a rich size-dependent behavior across various material length scales. We focus on these two issues in this paper through large-scale calculations on specimens of sizes ranging from 18 thousand to 1.08 million atoms. In addition to elucidating the dislocation-propagation-based void growth mechanism, we highlight the observed length scale effect reflected in the effective stress-strain response, stress triaxiality and void fraction evolution. Furthermore, as expected, the conventionally used Gurson model fails to capture the observed size effects, calling for a mechanistic modification that incorporates the mechanisms observed in our (and other researchers') simulations. Finally, in our multi-void simulations, we find that the splitting of a big void into a distribution of small ones increases the load-carrying capacity of specimens. However, no obvious dependence of the void fraction evolution on void coalescence is observed.

  5. Flow pattern and void fraction distribution measurements during ebullition of weakly foaming two-phase mixtures by using conductivity probes; Ermittlung der Stroemungsform und der Dampfgehaltverteilung bei dem Aufwallen von schwach schaeumenden Zweiphasengemischen mit Hilfe von Leitfaehigkeitsonden

    Energy Technology Data Exchange (ETDEWEB)

    Friedel, L. [Technische Univ. Hamburg-Harburg (Germany); Prasser, H.M. [Forschungszentrum Rossendorf (Germany); Schecker, J. [Airbus Deutschland GmbH, Hamburg (Germany)

    2006-02-15

When recalculating pressure relief experiments with foaming systems using multiphysics codes, bubbly flow is regularly assumed as the level swell submodel, since this conforms best with visual impressions. On the basis of measurements of the void fraction during venting of non-foaming as well as weakly foaming isobutanolic hot water, it will be demonstrated that a so-called homogeneous bubbly flow, in the form described in the level swell submodel "bubbly flow" by DIERS for use in dynamic simulations of venting, actually establishes itself. (orig.)

  6. Electrical impedance-based void fraction measurement and flow regime identification in microchannel flows under adiabatic conditions

    OpenAIRE

    Paranjape, Sidharth; Ritchey, Susan N; Garimella, S V

    2012-01-01

    Electrical impedance of a two-phase mixture is a function of void fraction and phase distribution. The difference in the specific electrical conductance and permittivity of the two phases is exploited to measure electrical impedance for obtaining void fraction and flow regime characteristics. An electrical impedance meter is constructed for the measurement of void fraction in microchannel two-phase flow. The experiments are conducted in air–water two-phase flow under adiabatic conditions. A t...

  7. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    Science.gov (United States)

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
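The known closed-form result for the first neighbour in two dimensions, mean distance 1/(2*sqrt(rho)) at point density rho, can be checked by direct Monte Carlo (a sketch; edge effects are ignored):

```python
import math
import random

random.seed(3)
N = 2000  # points in the unit square -> density rho = N
pts = [(random.random(), random.random()) for _ in range(N)]

def nearest_dist(i):
    """Distance from point i to its nearest neighbour."""
    px, py = pts[i]
    return min(math.hypot(px - qx, py - qy)
               for j, (qx, qy) in enumerate(pts) if j != i)

sample = [nearest_dist(i) for i in range(300)]  # subsample for speed
est = sum(sample) / len(sample)
print(est, 0.5 / math.sqrt(N))  # the two values should be close
```

The same experiment generalizes to the nth neighbour by sorting the distances and taking the nth smallest.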

  8. Investigating anthelmintic efficacy against gastrointestinal nematodes in cattle by considering appropriate probability distributions for faecal egg count data

    Directory of Open Access Journals (Sweden)

    J.W. Love

    2017-04-01

Where FEC data were obtained with less sensitive counting techniques (i.e. McMaster 30 or 15 epg), zero-inflated distributions and their associated central tendency were the most appropriate and would be recommended for use, i.e. the arithmetic group mean divided by the proportion of non-zero counts present; otherwise apparent anthelmintic efficacy could be misrepresented.
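The recommended central tendency is simple arithmetic; a sketch with invented faecal egg counts illustrating the zero-inflation correction:

```python
# Recommended central tendency for zero-inflated FEC data: the arithmetic
# group mean divided by the proportion of non-zero counts. Counts below are
# invented, on a McMaster-style grid of 30 epg.
fec = [0, 0, 30, 60, 0, 90, 0, 30, 0, 0]

arith_mean = sum(fec) / len(fec)                        # 21.0 epg
prop_nonzero = sum(1 for c in fec if c > 0) / len(fec)  # 0.4
zi_mean = arith_mean / prop_nonzero
print(round(zi_mean, 6))  # 52.5
```

Dividing by the proportion of non-zero counts recovers the mean of the egg-shedding subpopulation, which the structural zeros would otherwise drag downward.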

  9. Analytical Description of Voids in Majumdar-Papapetrou Spacetimes

    CERN Document Server

    Varela, V

    1999-01-01

    We discuss new Majumdar-Papapetrou solutions for the 3+1 Einstein-Maxwell equations, with charged dust acting as the external source of the fields. The solutions satisfy non-linear potential equations which are related to well-known wave equations of 1+1 soliton physics. Although the matter distributions are not localised, they present central structures which may be identified with voids.

  10. Cosmic voids and void lensing in the Dark Energy Survey Science Verification data

    Energy Technology Data Exchange (ETDEWEB)

    Sánchez, C.; Clampitt, J.; Kovacs, A.; Jain, B.; García-Bellido, J.; Nadathur, S.; Gruen, D.; Hamaus, N.; Huterer, D.; Vielzeuf, P.; Amara, A.; Bonnett, C.; DeRose, J.; Hartley, W. G.; Jarvis, M.; Lahav, O.; Miquel, R.; Rozo, E.; Rykoff, E. S.; Sheldon, E.; Wechsler, R. H.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D' Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Evrard, A. E.; Neto, A. Fausti; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Krause, E.; Kuehn, K.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Melchior, P.; Plazas, A. A.; Reil, K.; Romer, A. K.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Walker, A. R.; Weller, J.

    2016-10-26

Galaxies and their dark matter halos populate a complicated filamentary network around large, nearly empty regions known as cosmic voids. Cosmic voids are usually identified in spectroscopic galaxy surveys, where 3D information about the large-scale structure of the Universe is available. Although an increasing amount of photometric data is being produced, its potential for void studies is limited since photometric redshifts induce line-of-sight position errors of $\sim 50$ Mpc/$h$ or more that can render many voids undetectable. In this paper we present a new void finder designed for photometric surveys, validate it using simulations, and apply it to the high-quality photo-$z$ redMaGiC galaxy sample of the Dark Energy Survey Science Verification (DES-SV) data. The algorithm works by projecting galaxies into 2D slices and finding voids in the smoothed 2D galaxy density field of the slice. Fixing the line-of-sight size of the slices to be at least twice the photo-$z$ scatter, the number of voids found in these projected slices of simulated spectroscopic and photometric galaxy catalogs is within 20% for all transverse void sizes, and indistinguishable for the largest voids of radius $\sim 70$ Mpc/$h$ and larger. The positions, radii, and projected galaxy profiles of photometric voids also accurately match the spectroscopic void sample. Applying the algorithm to the DES-SV data in the redshift range $0.2\ldots$ we find voids with comoving radii spanning the range 18-120 Mpc/$h$, and carry out a stacked weak lensing measurement. With a significance of $4.4\sigma$, the lensing measurement confirms the voids are truly underdense in the matter field and hence not a product of Poisson noise, tracer density effects or systematics in the data. It also demonstrates, for the first time in real data, the viability of void lensing studies in photometric surveys.
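The slice-based finding step, project galaxies to 2D, smooth the density field, flag deep local minima, can be sketched as follows; the mock catalog, grid, smoothing scale, and underdensity threshold are all illustrative choices, not the paper's:

```python
import numpy as np
from scipy import ndimage

# Mock galaxy positions in a 100x100 (Mpc/h) slice, with an empty disk carved
# out to play the role of a void.
rng = np.random.default_rng(1)
gal = rng.uniform(0.0, 100.0, size=(5000, 2))
gal = gal[np.linalg.norm(gal - 50.0, axis=1) > 15.0]  # carve out the "void"

# Project to a 2D density field and smooth it:
dens, _, _ = np.histogram2d(gal[:, 0], gal[:, 1],
                            bins=50, range=[[0, 100], [0, 100]])
smooth = ndimage.gaussian_filter(dens, sigma=2)

# Void-center candidates: local minima that are also strongly underdense.
is_min = smooth == ndimage.minimum_filter(smooth, size=5)
underdense = smooth < 0.5 * smooth.mean()
centers = np.argwhere(is_min & underdense)
print(centers)  # bin indices near (25, 25), i.e. the carved-out region
```

A full finder would then grow each candidate into a radius (e.g. until the enclosed density reaches the mean) and prune overlapping detections.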

  11. Predicting probability of occurrence and factors affecting distribution and abundance of three Ozark endemic crayfish species at multiple spatial scales

    Science.gov (United States)

    Nolen, Matthew S.; Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Wagner, Brian K.

    2014-01-01

    Crayfishes and other freshwater aquatic fauna are particularly at risk globally due to anthropogenic demand, manipulation and exploitation of freshwater resources and yet are often understudied. The Ozark faunal region of Missouri and Arkansas harbours a high level of aquatic biological diversity, especially in regard to endemic crayfishes. Three such endemics, Orconectes eupunctus, Orconectes marchandi and Cambarus hubbsi, are threatened by limited natural distribution and the invasions of Orconectes neglectus.

  12. On the Efficient Simulation of the Distribution of the Sum of Gamma-Gamma Variates with Application to the Outage Probability Evaluation Over Fading Channels

    KAUST Repository

    Ben Issaid, Chaouki

    2016-06-01

    The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion, a well-desired property characterizing the robustness of importance sampling schemes, for both independent identically distributed and independent non-identically distributed cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations.
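A minimal sketch of the importance-sampling idea, under the assumption that each Gamma-Gamma variate is the product of two independent unit-mean Gamma variates: the proposal draws every underlying Gamma variate with its scale shrunk by a factor c < 1 (a mean shift toward the left tail) and reweights with the exact Gamma likelihood ratio. The parameters a, b, c and the estimator details are illustrative, not the paper's calibrated scheme.

```python
import numpy as np

def outage_prob_is(gamma_th, L=2, a=2.0, b=2.0, c=0.7, n=200_000, seed=1):
    """Estimate P(sum of L Gamma-Gamma variates < gamma_th) by importance
    sampling. Each variate is A*B with A ~ Gamma(a, 1/a), B ~ Gamma(b, 1/b)
    (unit mean); the proposal shrinks every scale by c to favor the left tail."""
    rng = np.random.default_rng(seed)
    sa, sb = 1.0 / a, 1.0 / b                  # target scales (unit-mean variates)
    A = rng.gamma(a, c * sa, size=(n, L))      # proposal: shrunken scales
    B = rng.gamma(b, c * sb, size=(n, L))
    # Likelihood ratio for Gamma(k, theta) target vs Gamma(k, c*theta) proposal:
    #   w(x) = c**k * exp((x / theta) * (1/c - 1))
    logw = (L * (a + b) * np.log(c)
            + (A / sa).sum(axis=1) * (1.0 / c - 1.0)
            + (B / sb).sum(axis=1) * (1.0 / c - 1.0))
    hit = (A * B).sum(axis=1) < gamma_th
    return np.mean(np.exp(logw) * hit)

# Deep left tail where naive Monte Carlo would need enormous sample sizes
p_tail = outage_prob_is(0.1)
```

Note that c must exceed 1/2 here, otherwise the second moment of the weighted estimator diverges (the per-variate factor E[w] under the target is c**k * (2 - 1/c)**(-k)).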

  13. Probability distribution of the number of distinct sites visited by a random walk on the finite-size fully-connected lattice

    CERN Document Server

    Turban, L

    2016-01-01

    The probability distribution of the number $s$ of distinct sites visited up to time $t$ by a random walk on the fully-connected lattice with $N$ sites is first obtained by solving the eigenvalue problem associated with the discrete master equation. Then, using generating function techniques, we compute the joint probability distribution of $s$ and $r$, where $r$ is the number of sites visited only once up to time $t$. Mean values, variances and covariance are deduced from the generating functions and their finite-size-scaling behaviour is studied. Introducing properly centered and scaled variables $u$ and $v$ for $r$ and $s$ and working in the scaling limit ($t\\to\\infty$, $N\\to\\infty$ with $w=t/N$ fixed) the joint probability density of $u$ and $v$ is shown to be a bivariate Gaussian density. It follows that the fluctuations of $r$ and $s$ around their mean values in a finite-size system are Gaussian in the scaling limit. The same type of finite-size scaling is expected to hold on periodic lattices above the ...
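The setup is easy to probe numerically. The sketch below simulates the walk on the fully-connected lattice, assuming each step jumps to one of the N sites uniformly at random (self-jumps allowed, and the starting site counted as visited; conventions in the paper may differ), and checks the mean number of distinct sites against the elementary closed form this convention implies.

```python
import numpy as np

def simulate_visits(N, t, trials, seed=0):
    """Random walk on the fully-connected lattice with N sites: each step
    jumps to a uniformly random site (self-jumps allowed -- an assumption).
    Returns s (distinct sites visited) and r (sites visited exactly once)."""
    rng = np.random.default_rng(seed)
    s = np.empty(trials, dtype=int)
    r = np.empty(trials, dtype=int)
    for k in range(trials):
        counts = np.zeros(N, dtype=int)
        counts[0] = 1                          # walk starts at site 0
        np.add.at(counts, rng.integers(0, N, size=t), 1)
        s[k] = np.count_nonzero(counts)
        r[k] = np.count_nonzero(counts == 1)
    return s, r

N, t = 50, 50
s, r = simulate_visits(N, t, trials=5000)
# Under this convention, E[s] = N - (N - 1) * (1 - 1/N)**t
expected_s = N - (N - 1) * (1 - 1 / N) ** t
```

Histogramming `s` and `r` over many trials gives the (joint) distribution whose scaling limit the paper shows to be bivariate Gaussian.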

  14. The Characters of the Probability Maximum Value for Negative Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    丁勇

    2016-01-01

    The properties of the maximum probability value of the negative binomial distribution are explored. This maximum is a function of p and r, where p is the probability of success in each trial and r is the required number of successes. For fixed r, it is a monotonically increasing continuous function of p whose derivative fails to exist only when (r-1)/p is an integer; for fixed p, it is a monotonically decreasing function of r.
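The monotonicity claims are easy to verify numerically. A small sketch using scipy, whose `nbinom` counts the number of failures before the r-th success (a parameterization choice, not necessarily the paper's): the pmf maximum is found by scanning a window around the known mode near (r-1)(1-p)/p.

```python
import numpy as np
from scipy.stats import nbinom

def pmax(r, p):
    """Maximum of the negative binomial pmf (scipy convention: probability
    of k failures before the r-th success), scanned around the mode."""
    k0 = max(int((r - 1) * (1 - p) / p), 0)
    ks = np.arange(max(k0 - 2, 0), k0 + 3)
    return nbinom.pmf(ks, r, p).max()

ps = np.linspace(0.05, 0.95, 19)
vals = [pmax(3, p) for p in ps]   # should increase with p for fixed r = 3
```

For fixed p, comparing `pmax(2, p)`, `pmax(3, p)`, `pmax(6, p)` exhibits the claimed decrease in r.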

  15. Modelling Void Abundance in Modified Gravity

    CERN Document Server

    Voivodic, Rodrigo; Llinares, Claudio; Mota, David F

    2016-01-01

    We use a spherical model and an extended excursion set formalism with drifting diffusive barriers to predict the abundance of cosmic voids in the context of general relativity as well as f(R) and symmetron models of modified gravity. We detect spherical voids from a suite of N-body simulations of these gravity theories and compare the measured void abundance to theory predictions. We find that our model correctly describes the abundance of both dark matter and galaxy voids, providing a better fit than previous proposals in the literature based on static barriers. We use the simulation abundance results to fit the free parameters of the abundance model as a function of modified gravity parameters, and show that counts of dark matter voids can provide interesting constraints on modified gravity. For galaxy voids, more closely related to optical observations, we find that constraining modified gravity from void abundance alone may be significantly more challenging. In the context of current and upcoming galaxy surv...
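The excursion-set picture behind such abundance predictions can be illustrated with a toy Monte Carlo: Markovian random walks in the variance S, recording the first crossing of a linear drifting barrier B(S) = delta_v + beta*S. The barrier parameters below are illustrative (and the diffusive barrier scatter of the paper's model is omitted for brevity).

```python
import numpy as np

def first_crossing_fraction(delta_v=-2.7, beta=0.2, S_max=2.0,
                            n_steps=400, n_walks=20_000, seed=0):
    """Fraction of Markovian walks delta(S) that have crossed the linear
    drifting barrier B(S) = delta_v + beta*S by resolution S_max.
    Toy excursion-set illustration; diffusive barrier scatter omitted."""
    rng = np.random.default_rng(seed)
    dS = S_max / n_steps
    walks = np.zeros(n_walks)
    crossed = np.zeros(n_walks, dtype=bool)
    for i in range(1, n_steps + 1):
        walks += rng.normal(0.0, np.sqrt(dS), size=n_walks)   # Markovian step
        crossed |= walks <= delta_v + beta * i * dS           # barrier test
    return crossed.mean()
```

Binning the first-crossing scale S instead of only flagging crossings yields the first-crossing distribution f(S), which maps onto the predicted void abundance.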

  16. Redshift-space distortions around voids

    CERN Document Server

    Cai, Yan-Chuan; Peacock, John A; Padilla, Nelson

    2016-01-01

    We have derived estimators for the linear growth rate of density fluctuations using the cross-correlation function of voids and haloes in redshift space, both directly and in Fourier form. In linear theory, this cross-correlation contains only monopole and quadrupole terms. At scales greater than the void radius, linear theory is a good match to voids traced out by haloes in N-body simulations; small-scale random velocities are unimportant at these radii, only tending to cause small and often negligible elongation of the redshift-space cross-correlation function near its origin. By extracting the monopole and quadrupole from the cross-correlation function, we measure the linear growth rate without prior knowledge of the void profile or velocity dispersion. We recover the linear growth parameter $\\beta$ to 9% precision from an effective volume of 3(Gpc/h)^3 using voids with radius greater than 25Mpc/h. Smaller voids are predominantly sub-voids, which may be more sensitive to the random velocity dispersion; the...
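Extracting the monopole and quadrupole from a measured cross-correlation is a standard Legendre projection, xi_ell(r) = (2*ell + 1)/2 * integral of xi(r, mu) * P_ell(mu) over mu in [-1, 1]. A sketch using Gauss-Legendre quadrature, tested on a mock xi(r, mu) with known multipoles (the mock profiles are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def multipole(xi, r, ell, n_mu=32):
    """Legendre multipole xi_ell(r) of xi(r, mu) via Gauss-Legendre quadrature."""
    mu, w = leggauss(n_mu)
    vals = xi(r[:, None], mu[None, :])                      # (n_r, n_mu)
    return 0.5 * (2 * ell + 1) * (vals * w * eval_legendre(ell, mu)).sum(axis=1)

# Mock redshift-space correlation with known monopole and quadrupole
xi0 = lambda r: np.exp(-r / 30.0)
xi2 = lambda r: -0.4 * np.exp(-r / 50.0)
def xi_mock(r, mu):
    return xi0(r) + xi2(r) * 0.5 * (3 * mu**2 - 1)          # P2(mu) term

r = np.linspace(5.0, 100.0, 20)
mono = multipole(xi_mock, r, 0)
quad = multipole(xi_mock, r, 2)
```

Because the mock's mu-dependence is a low-degree polynomial, the 32-node quadrature recovers the input multipoles to machine precision; on real stacked void-halo data the same projection is applied to the binned xi(r, mu).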

  17. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
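One of the ingredients, block-maxima extrapolation with a Gumbel tail, can be sketched as follows: split the response series into blocks, fit a Gumbel distribution to the block maxima, and read off the level exceeded on average once per target return period. The block size and return period are illustrative, and the Gaussian series is a stand-in for simulated turbine loads.

```python
import numpy as np
from scipy.stats import gumbel_r

def gumbel_return_level(response, block_size, n_blocks_target):
    """Block-maxima extrapolation: fit a Gumbel distribution to the block
    maxima of the response series and return the level exceeded on average
    once per n_blocks_target blocks."""
    n_blocks = len(response) // block_size
    maxima = response[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    loc, scale = gumbel_r.fit(maxima)                   # maximum-likelihood fit
    return gumbel_r.ppf(1.0 - 1.0 / n_blocks_target, loc=loc, scale=scale)

rng = np.random.default_rng(3)
signal = rng.normal(0.0, 1.0, size=600_000)             # stand-in load series
level_50 = gumbel_return_level(signal, block_size=600, n_blocks_target=50)
```

The peaks-over-threshold and ACER extractions mentioned in the abstract differ only in how the extreme samples are selected before the tail fit.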

  18. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  19. Low-order Probability-weighted Moments Method for Wind Speed Probability Distribution Parameter Estimation

    Institute of Scientific and Technical Information of China (English)

    潘晓春

    2012-01-01

    For offshore wind energy resource assessment and utilization, it is necessary to describe the statistical properties of wind speed with a three-parameter Weibull distribution. Based on the functional relation between the distribution parameters and the probability-weighted moments (PWM), the relation between the shape parameter and the PWMs is fitted with a logistic curve, and two parameter estimation formulae are derived from the two types of low-order PWMs. Accuracy tests show that these formulae retain high precision over a wide parameter range. Comparative analysis with a high-order PWM method on a worked example suggests that the low-order PWM methods presented here are worth adopting.
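The flavor of PWM estimation can be shown on the simpler two-parameter Weibull case (the paper treats the three-parameter distribution; this is the textbook analogue, not its method). For Weibull(k, lam), the PWMs alpha_s = E[X (1 - F(X))^s] satisfy alpha_0 = lam*Gamma(1 + 1/k) and alpha_1 = alpha_0 / 2**(1 + 1/k), so two sample PWMs determine both parameters in closed form.

```python
import numpy as np
from math import gamma as gamma_fn, log2

def weibull_pwm_fit(x):
    """Estimate 2-parameter Weibull shape k and scale lam from the sample
    PWMs alpha_0 = E[X] and alpha_1 = E[X (1 - F(X))], using
    alpha_0 / alpha_1 = 2**(1 + 1/k) and alpha_0 = lam * Gamma(1 + 1/k)."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    a0 = x.mean()
    a1 = np.mean(x * (n - i) / (n - 1))     # unbiased order-statistic estimator
    k = 1.0 / (log2(a0 / a1) - 1.0)
    lam = a0 / gamma_fn(1.0 + 1.0 / k)
    return k, lam

rng = np.random.default_rng(7)
x = 3.0 * rng.weibull(2.0, size=50_000)     # Weibull with k = 2, lam = 3
k_hat, lam_hat = weibull_pwm_fit(x)
```

The appeal of PWM estimators, here as in the three-parameter case, is that they involve only linear combinations of order statistics and avoid iterative likelihood maximization.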

  20. Quantum probability

    CERN Document Server

    Gudder, Stanley P

    2014-01-01

    Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism.Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles.The first two chapters survey the ne