WorldWideScience

Sample records for void probability distribution

  1. Void probability scaling in hadron nucleus interactions

    International Nuclear Information System (INIS)

    Ghosh, Dipak; Deb, Argha; Bhattacharyya, Swarnapratim; Ghosh, Jayita; Bandyopadhyay, Prabhat; Das, Rupa; Mukherjee, Sima

    2002-01-01

    Hegyi, while investigating the rapidity gap probability (which measures the chance of finding no particle in the pseudo-rapidity interval Δη), found that the scaling behavior of the rapidity gap probability has a close correspondence with the scaling of the void probability in galaxy correlation studies. The main aim of this paper is to study the scaling behavior of the rapidity gap probability

  2. Determination of the void nucleation rate from void size distributions

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    A method of estimating the void nucleation rate from one void size distribution and from observation of the maximum void radius at prior times is proposed. Implicit in the method are the assumptions that both variations in the critical radius with dose and vacancy thermal emission processes during post-nucleation quasi-steady-state growth may be neglected. (Auth.)

  3. A NEW STATISTICAL PERSPECTIVE TO THE COSMIC VOID DISTRIBUTION

    International Nuclear Information System (INIS)

    Pycke, J-R; Russell, E.

    2016-01-01

    In this study, we obtain the size distribution of voids as a three-parameter redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on counts in randomly placed cells, the log-normal VPF obtained here is independent of the shape of the voids, thanks to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC generated by Halo Occupation Distribution (HOD) mocks, which are tuned to three mock SDSS samples, to investigate the void distribution statistically and to investigate the effects of the environments on the size distribution. As a result, it is shown that the void size distributions obtained from the HOD mock samples are well described by the three-parameter log-normal distribution. In addition, we find that there may be a relation between the hierarchical formation, skewness, and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the three-parameter distribution from the samples is strikingly similar to the galaxy log-normal mass distribution obtained from numerical studies. This similarity between void size and galaxy mass distributions may possibly indicate evidence of nonlinear mechanisms affecting both voids and galaxies, such as large-scale accretion and tidal effects. Considering that all voids in this study are generated by galaxy mocks and show hierarchical structures at different levels, it may be possible that the same nonlinear mechanisms of mass distribution affect the void size distribution.

  4. The sink strengths of voids and the expected swelling for both random and ordered void distributions

    International Nuclear Information System (INIS)

    Quigley, T.M.; Murphy, S.M.; Bullough, R.; Wood, M.H.

    1981-10-01

    The sink strength of a void has been obtained for a void that is a member of a random or an ordered distribution of voids. The former sink strength derivation employs the embedding model and the latter the cellular model. In each case the spatially varying size-effect interaction between the intrinsic point defects and the voids has been included, together with the presence of other sink types in addition to the voids. The results are compared with previously published sink strengths that used an approximate representation of the size-effect interaction, and they indicate the importance of using the exact form of the interaction. In particular, the bias of small voids for interstitials relative to vacancies is now much reduced, and contamination of the surfaces of such voids no longer appears essential to facilitate their nucleation and growth. These new sink strengths have been used, in conjunction with recently published dislocation sink strengths, to calculate the expected swelling of materials containing network dislocations and voids. Results are presented for both the random and the void lattice situations. (author)

  5. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
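
    COVAL's numerical transform method is not detailed in this record, but the task it solves — obtaining the distribution of a function of random variables from the distributions of the variables themselves — can be sketched with plain Monte Carlo. The reliability example below (a strength-vs-load margin) and all of its distributions and numbers are illustrative assumptions, not COVAL's actual algorithm or test case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reliability problem: structural strength R vs. random load S.
# We want the distribution of the margin M = R - S and P(M < 0).
n = 1_000_000
strength = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)  # hypothetical units
load = rng.normal(loc=35.0, scale=5.0, size=n)

margin = strength - load                 # the function of the random inputs
p_fail = np.mean(margin < 0.0)           # empirical failure probability

# Histogram as a discrete approximation of the output distribution
density, edges = np.histogram(margin, bins=200, density=True)
print(f"P(failure) ~= {p_fail:.2e}, mean margin ~= {margin.mean():.1f}")
```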

  6. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first concerns the continuous distributions and their relations. The second presents the discrete distributions. The third depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.

  7. Effect of main stream void distribution on cavitating hydrofoil

    International Nuclear Information System (INIS)

    Ito, J.

    1993-01-01

    For the safety analysis of a loss of coolant accident in a pressurized water reactor, it is important to establish an analytical method which predicts pump performance under gas-liquid two-phase flow conditions. J.H. Kim briefly reviewed several major two-phase flow pump models and discussed the parameters that could significantly affect two-phase pump behavior. The parameter pointed out to be of the most importance is the void distribution at the pump inlet: a pipe bend near the pump inlet makes the void distribution there nonuniform, which can have a significant effect on impeller blade performance. This paper proposes an analytical method of solution for a partially cavitating hydrofoil placed in a main stream of incompressible, homogeneous bubbly two-phase flow whose void fraction is exponentially distributed normal to the chordline. The paper clarifies the effect of the main-stream void distribution parameter on the characteristics of the partially cavitating hydrofoil

  8. Void distributions in liquid BiBr{sub 3}

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, K [Faculty of Science, Niigata University, Niigata 950-2181 (Japan); Endo, H [Faculty of Science, Kyoto University, Kyoto 606-8224 (Japan); Hoshino, H [Faculty of Education, Hirosaki University, Hirosaki 036-8560 (Japan); Kawakita, Y [Faculty of Sciences, Kyushu University, Fukuoka 810-8560 (Japan); Kohara, S; Itou, M [Japan Synchrotron Radiation Research Institute(JASRI), Sayo-cho 679-5198 (Japan)

    2008-02-15

    X-ray diffraction experiments and reverse Monte Carlo analysis of liquid BiBr{sub 3} have been performed to clarify the distribution of Bi and Br ions around voids, in comparison with previous results derived from neutron diffraction experiments. Hexagonal cages involving voids are formed by the corner-sharing of trigonal pyramidal BiBr{sub 3} blocks. The neighboring cages are linked together in a highly correlated fashion. The observed pre-peak in S(Q) at 1.3 A{sup -1} is related to the pre-peak of the void-based S'{sub CC}(Q), which arises from an intermediate-range chemical order in the structure. The pre-peak intensity increases with increasing temperature. This characteristic change in the pre-peak intensity is discussed by considering modifications of the topology and stacking of the hexagonal cages.

  9. A void distribution model-flashing flow

    International Nuclear Information System (INIS)

    Riznic, J.; Ishii, M.; Afgan, N.

    1987-01-01

    A new model for flashing flow based on wall nucleation is proposed here and the model predictions are compared with experimental data. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites was used, which makes it possible to avoid the usual assumption of a constant bubble number density. Comparison of the model with the data shows that a model based on the nucleation site density correlation appears acceptable for describing vapor generation in flashing flow. For the limited data examined, the comparisons show rather satisfactory agreement without using a floating parameter to adjust the model. This result indicates that, at least for the experimental conditions considered here, a mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model

  10. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. The code is written in Fortran 90 and has some user-friendly features such as a graphical user interface. MATRA was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for best estimation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar results of uncertainties but did not account for the nonlinear effects on the
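
    The Monte-Carlo approach mentioned for the validation standard uncertainty can be sketched as input-uncertainty propagation: sample the uncertain boundary conditions, run the prediction for each sample, and take the standard deviation of the predicted void fraction. The stand-in model and every number below are assumptions for illustration, not MATRA physics or the benchmark's actual uncertainty budget:

```python
import numpy as np

rng = np.random.default_rng(42)

def predicted_void_fraction(pressure, mass_flux, power):
    """Stand-in for a subchannel-code prediction at one axial level."""
    return 0.25 + 0.004 * (power - 100.0) - 0.0002 * (mass_flux - 1000.0)

# Assumed 1-sigma uncertainties of the boundary conditions (illustrative).
n = 20_000
pressure = rng.normal(15.5, 0.05, n)      # MPa
mass_flux = rng.normal(1000.0, 15.0, n)   # kg/m^2/s
power = rng.normal(100.0, 1.0, n)         # % of nominal

alpha = predicted_void_fraction(pressure, mass_flux, power)
u_input = alpha.std(ddof=1)   # input-propagation part of the validation uncertainty
print(f"standard uncertainty from inputs ~= {u_input:.4f} (void fraction)")
```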

  11. Enthalpy and void distributions in subchannels of PHWR fuel bundles

    Energy Technology Data Exchange (ETDEWEB)

    Park, J W; Choi, H; Rhee, B W [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    Two different types of CANDU fuel bundles have been modeled for ASSERT-IV code subchannel analysis. From calculated values of the mixture enthalpy and void fraction distribution in the fuel bundles, it is found that the net buoyancy effect is pronounced in the central region of the DUPIC fuel bundle when compared with the standard CANDU fuel bundle. It is also found that the central region of the DUPIC fuel bundle can be cooled more efficiently than that of the standard fuel bundle. From the calculated mixture enthalpy distribution at the exit of the fuel channel, it is found that the mixture enthalpy and void fraction can be highest in the peripheral region of the DUPIC fuel bundle. On the other hand, the enthalpy and void fraction were found to be highest in the central region of the standard CANDU fuel bundle at the exit of the fuel channel. This study shows that subchannel analysis is very useful in assessing the thermal behavior of fuel bundles that could be used in CANDU reactors. 10 refs., 4 figs., 2 tabs. (Author)

  13. Effects of Void Uncertainties on Pin Power Distributions and the Void Reactivity Coefficient for a 10X10 BWR Assembly

    International Nuclear Information System (INIS)

    Jatuff, F.; Krouthen, J.; Helmersson, S.; Chawla, R.

    2004-01-01

    A significant source of uncertainty in Boiling Water Reactor physics is associated with the precise characterisation of the axially-dependent neutron moderation properties of the coolant inside the fuel assembly channel, and the corresponding effects on reactor physics parameters such as the lattice neutron multiplication, the neutron migration length, and the pin-by-pin power distribution. In this paper, the effects of particularly relevant void fraction uncertainties on reactor physics parameters have been studied for a BWR assembly of type Westinghouse SVEA-96 using the CASMO-4, HELIOS/PRESTO-2 and MCNP4C codes. The SVEA-96 geometry is characterised by the sub-division of the assembly into four different sub-bundles by means of an inner bypass with a cruciform shape. The study has covered the following issues: (a) the effects of different cross-section data libraries on the void coefficient of reactivity, for a wide range of void fractions; (b) the effects due to a heterogeneous vs. homogeneous void distribution inside the sub-bundles; and (c) the consequences of partly inserted absorber blades producing different void fractions in different sub-bundles. (author)

  14. Mechanistic model for void distribution in flashing flow

    International Nuclear Information System (INIS)

    Riznic, J.; Ishii, M.; Afgan, N.

    1987-01-01

    The problem of discharging an initially subcooled liquid from a high-pressure condition into a low-pressure environment is quite important in several industrial systems, such as nuclear and chemical reactors. A new model for the flashing process is proposed here based on wall nucleation theory, a bubble growth model, and a drift-flux bubble transport model. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites is used. The model predictions in terms of the void fraction are compared to Moby Dick and BNL experimental data, showing that satisfactory agreement can be obtained from the present model without any floating parameter adjusted to the data. This result indicates that, at least for the experimental conditions considered here, a mechanistic prediction of the flashing phenomenon is possible based on the present wall nucleation model. 43 refs., 4 figs

  15. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate Gaussian distribution that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical ...
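
    Two standard identities such a reference manual covers are easy to demonstrate: evaluating the log-density via a Cholesky factor (avoiding an explicit matrix inverse) and sampling with x = mu + L·eps. A minimal sketch of these textbook results (not code from the report itself):

```python
import numpy as np

def mvn_logpdf(x, mu, cov):
    """Log-density of N(mu, cov) at x, via the Cholesky factor of cov."""
    d = len(mu)
    L = np.linalg.cholesky(cov)              # cov = L @ L.T
    z = np.linalg.solve(L, x - mu)           # whitened residual
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + z @ z)

def mvn_sample(mu, cov, n, rng):
    """Draw n samples as x = mu + L @ eps with eps ~ N(0, I)."""
    L = np.linalg.cholesky(cov)
    eps = rng.standard_normal((n, len(mu)))
    return mu + eps @ L.T

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
xs = mvn_sample(mu, cov, 100_000, rng)
print(xs.mean(axis=0))      # should approach mu
print(np.cov(xs.T))         # should approach cov
print(mvn_logpdf(mu, mu, cov))
```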

  16. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
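
    A three-parameter log-normal of the kind used here (shape, location, scale) is directly available in SciPy, so the fit itself is short to sketch. The synthetic radii below stand in for catalog data, which is not bundled with this record:

```python
import numpy as np
from scipy import stats

# Stand-in "void effective radii": synthetic draws, since the actual
# catalog data is not reproduced here (units and values are arbitrary).
radii = stats.lognorm.rvs(s=0.5, loc=2.0, scale=8.0, size=5000,
                          random_state=np.random.default_rng(1))

# scipy's lognorm is exactly a three-parameter log-normal:
# shape s (log-space sigma), location loc (shift), scale exp(mu).
s, loc, scale = stats.lognorm.fit(radii)
dist = stats.lognorm(s, loc, scale)

print(f"s={s:.3f}, loc={loc:.3f}, scale={scale:.3f}")
print(f"skewness={float(dist.stats(moments='s')):.3f}, "
      f"excess kurtosis={float(dist.stats(moments='k')):.3f}")
```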

  17. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
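
    The core loop — fit a Gaussian process to the points evaluated so far, then evaluate next where the acquisition function is extremal — can be sketched in a few lines. The toy log-posterior and the upper-confidence-bound acquisition below are illustrative choices; the paper's actual acquisition function and physical models are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def log_posterior(x):
    """Stand-in for an expensive-to-evaluate log probability (1-D toy)."""
    return -0.5 * (x - 1.3) ** 2 + 0.3 * np.sin(5.0 * x)

grid = np.linspace(-3.0, 3.0, 601).reshape(-1, 1)
X = rng.uniform(-3.0, 3.0, 5).reshape(-1, 1)      # a few initial evaluations
y = log_posterior(X).ravel()

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    acq = mean + 2.0 * std                        # upper confidence bound
    x_next = grid[np.argmax(acq)].reshape(1, 1)   # next training point
    X = np.vstack([X, x_next])
    y = np.append(y, log_posterior(x_next).ravel())

print("best maximizer found:", float(X[np.argmax(y), 0]), "value:", float(y.max()))
```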

  18. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate multidimensional data analysis (i.e., approximate processing of probabilistic OLAP queries over probability distributions).

  19. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated by a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.

  20. Measurement of void fraction distribution in two-phase flow by impedance CT with neural network

    International Nuclear Information System (INIS)

    Hayashi, Hideaki; Sumida, Isao; Sakai, Sinji; Wakai, Kazunori

    1996-01-01

    This paper describes a new method for measurement of void distribution using impedance CT with a hierarchical neural network. The present method consists of four processes. First, output electric currents are calculated by simulation for various distributions of void fraction; the relationship between the void fraction distribution and the electric currents is called 'teaching data'. Second, the neural network learns the teaching data by the back-propagation method. Third, output electric currents are measured for an actual two-phase flow. Finally, the distribution of void fraction is calculated by the trained neural network using the measured electric currents. In this paper, measurement and learning parameters are adjusted, and experimental results obtained using the impedance CT method are compared with data obtained by the impedance probe method. The results show that our method is effective for measurement of void fraction distribution. (author)
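
    The four-step procedure maps directly onto a small script: simulate (void profile → currents) pairs, train a network by back-propagation, then invert measured currents. Everything below — the linear-plus-tanh surrogate for the electrode physics, the layer sizes, the learning rate — is an illustrative assumption, not the paper's actual sensor model or network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward simulation: 8 boundary currents from a 4-zone void profile.
A = rng.normal(size=(8, 4))
def simulate_currents(alpha):
    return np.tanh(alpha @ A.T)                 # invented surrogate for the physics

# Step 1: "teaching data" from simulated void fraction profiles.
alpha_train = rng.uniform(0.0, 1.0, size=(2000, 4))
I_train = simulate_currents(alpha_train)

# Step 2: one-hidden-layer network trained by back-propagation.
W1 = rng.normal(scale=0.3, size=(8, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.3, size=(32, 4)); b2 = np.zeros(4)
lr, n = 0.05, len(I_train)
for epoch in range(500):
    h = np.tanh(I_train @ W1 + b1)              # hidden activations
    out = h @ W2 + b2                           # predicted void fractions
    err = out - alpha_train                     # gradient of 0.5*MSE w.r.t. out
    gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)          # back-propagate through tanh
    gW1 = I_train.T @ dh / n; gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Steps 3-4: "measured" currents from an unseen profile -> reconstruction.
alpha_true = np.array([[0.1, 0.4, 0.7, 0.2]])
alpha_est = np.tanh(simulate_currents(alpha_true) @ W1 + b1) @ W2 + b2
print(alpha_true.round(2), alpha_est.round(2))
```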

  1. Void fraction distribution in a heated rod bundle under flow stagnation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Herrero, V.A.; Guido-Lavalle, G.; Clausse, A. [Centro Atomico Bariloche and Instituto Balseiro, Bariloche (Argentina)

    1995-09-01

    An experimental study was performed to determine the axial void fraction distribution along a heated rod bundle under flow stagnation conditions. The development of the flow pattern was investigated for different heat flow rates. It was found that in general the void fraction is overestimated by the Zuber & Findlay model while the Chexal-Lellouche correlation produces a better prediction.

  2. Sensitivity analysis of an impedance void meter to the void distribution in annular flow: A theoretical study

    Energy Technology Data Exchange (ETDEWEB)

    Lemonnier, H.; Nakach, R.; Favreau, C.; Selmer-Olsen, S. (CEA Centre d' Etudes Nucleaires de Grenoble, 38 (France). Service d' Etudes Thermohydrauliques)

    1991-04-01

    Impedance void meters are frequently used to measure the area-averaged void fraction in pipes. This is primarily for two reasons: firstly, the method is non-intrusive, since the measurement can be made by electrodes flush mounted in the walls, and secondly, the signal processing equipment is simple. Impedance probes may be calibrated by using a pressure drop measurement or a quick closing valve system. In general, little attention is paid to void distribution effects. It can be proved that in annular flow the departure from radial symmetry has a strong influence on the measured mean film thickness. This can be easily demonstrated by solving the Laplace equation for the electrical potential by simple analytical methods. When certain spatial symmetry conditions are met, it is possible to calculate the conductance of the two-phase medium directly, without a complete calculation of the potential. A solution of this problem using the separation of variables technique is also presented. The main difficulty with this technique is the mixed nature of the boundary conditions: the boundary condition is of both Neumann and Dirichlet type on the same coordinate curve. This formulation leads to a non-separable problem, which is solved by truncating an infinite algebraic set of linear equations. The results, although strictly valid in annular flow, may give the correct trends when applied to bubbly flow. Finally, the theory provides an error estimate and a design criterion to improve the probe reliability. (orig.)

  4. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
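
    For the two-phase case the moment-matching rates have a closed form: with target mean m and coefficient of variation c, the phase means are u = m(1 ± sqrt(2c² − 1))/2, which is only real for c² ≥ 1/2 (smaller coefficients of variation need more phases, Erlang-style). A sketch with a Monte Carlo check:

```python
import numpy as np

def hypoexp_rates(mean, cv):
    """Rates (lam1, lam2) of a two-phase hypoexponential X = X1 + X2 that
    matches E[X] = mean and the given coefficient of variation cv.
    Two phases only cover cv**2 in [0.5, 1)."""
    if not (0.5 <= cv ** 2 < 1.0):
        raise ValueError("two phases only cover cv^2 in [0.5, 1)")
    s = np.sqrt(2.0 * cv ** 2 - 1.0)
    u1, u2 = mean * (1 + s) / 2, mean * (1 - s) / 2   # the two phase means
    return 1.0 / u1, 1.0 / u2

rng = np.random.default_rng(0)
lam1, lam2 = hypoexp_rates(mean=2.0, cv=0.8)
x = rng.exponential(1.0 / lam1, 200_000) + rng.exponential(1.0 / lam2, 200_000)
print(x.mean(), x.std() / x.mean())   # ~2.0 and ~0.8
```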

  5. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
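
    The update the abstract describes — a log-normal prior on a positive true value, combined with a single measurement — can be sketched on a grid. The prior parameters, measurement value, and Gaussian likelihood below are illustrative assumptions, not the Los Alamos code's actual models:

```python
import numpy as np

vals = np.linspace(1e-3, 50.0, 20_000)     # grid over the true (positive) quantity
dv = vals[1] - vals[0]

gm, gsd = 2.0, 0.9                         # assumed log-normal prior parameters
prior = np.exp(-0.5 * ((np.log(vals) - np.log(gm)) / gsd) ** 2) / vals

measured, sigma = 3.5, 1.0                 # one bioassay result and its 1-sigma error
likelihood = np.exp(-0.5 * ((measured - vals) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * dv          # normalize on the grid

post_mean = (vals * posterior).sum() * dv
print(f"posterior mean ~= {post_mean:.2f} (raw measurement: {measured})")
```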

  6. A subchannel and CFD analysis of void distribution for the BWR fuel bundle test benchmark

    International Nuclear Information System (INIS)

    In, Wang-Kee; Hwang, Dae-Hyun; Jeong, Jae Jun

    2013-01-01

    Highlights: We analyzed subchannel void distributions using subchannel, system and CFD codes. The mean error and standard deviation at steady states were compared. The deviation of the CFD simulation was greater than those of the others. The large deviation of the CFD prediction is due to interface model uncertainties. Abstract: The subchannel-grade and microscopic void distributions in the NUPEC (Nuclear Power Engineering Corporation) BFBT (BWR Full-Size Fine-Mesh Bundle Tests) facility have been evaluated with the subchannel analysis code MATRA, the system code MARS and the CFD code CFX-10. Sixteen test series from five different test bundles were selected for the analysis of the steady-state subchannel void distributions. Four test cases for a high burn-up 8 × 8 fuel bundle with a single water rod were simulated using CFX-10 for the microscopic void distribution benchmark. Two transient cases, a turbine trip without bypass as a typical power transient and a re-circulation pump trip as a flow transient, were also chosen for this analysis. It was found that the steady-state void distributions calculated by both the MATRA and MARS codes coincided well with the measured data in the range of thermodynamic qualities from 5 to 25%. The results of the transient calculations were also similar to each other and very reasonable. The CFD simulation reproduced the overall radial void distribution trend, which produces less vapor in the central part of the bundle and more vapor in the periphery. However, the predicted variation of the void distribution inside the subchannels is small, while the measured one is large, showing a very high concentration in the center of the subchannels. The variations of the void distribution between the center of the subchannels and the subchannel gap are estimated to be about 5–10% for the CFD prediction and more than 20% for the experiment

  7. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on St..."

  8. A variational constitutive model for the distribution and interactions of multi-sized voids

    KAUST Repository

    Liu, Jinxing

    2013-07-29

    The evolution of defects or voids, generally recognized as the basic failure mechanism in most metals and alloys, has been intensively studied. Most investigations have been limited to spatially periodic cases with non-random distributions of the void radii. In this study, we use a new form of the incompressibility of the matrix to propose a formula for the volumetric plastic energy of a void inside a porous medium. As a consequence, we are able to account for the weakening effect of the surrounding voids and to propose a general model for the distribution and interactions of multi-sized voids. We found that the single parameter in classical Gurson-type models, namely the void volume fraction, is not sufficient for the model. The relative growth rates of voids of different sizes, which can in principle be obtained through physical or numerical experiments, are required. To demonstrate the feasibility of the model, we analyze two cases. The first case represents exactly the same assumption hidden in the classical Gurson model, while the second embodies the competitive mechanism due to void size differences, albeit in a much simpler manner than the general case. Coalescence is implemented by allowing accelerated void growth after an empirical critical porosity, in the same way as the Gurson-Tvergaard-Needleman model. The constitutive model presented here is validated through good agreement with experimental data. Its capacity for reproducing realistic failure patterns is shown by simulating a tensile test on a notched round bar. © 2013 The Author(s).

  9. Microstructural characterization of XLPE electrical insulation in power cables: determination of void size distributions using TEM

    International Nuclear Information System (INIS)

    Markey, L; Stevens, G C

    2003-01-01

    In an effort to advance our understanding of the ageing mechanisms of high voltage cables subjected to electrical and thermal stresses, we present a quantitative study of voids, the defects which are considered to be partly responsible for cable failure. We propose a method based on large data sets of transmission electron microscopy (TEM) observations of replicated samples, allowing the determination of the void concentration distribution as a function of void size in the mesoscopic to microscopic range at any point in the cable insulation. A theory is also developed to calculate the effect of etching on the apparent size of the observed voids. We present the first results of this sort obtained on two industrial cables, one of which was aged in an AC field. The results clearly indicate that a much larger concentration of voids occurs near the inner semiconductor than in the bulk of the insulation, independently of ageing. An effect of ageing can also be seen near the inner semiconductor, resulting in an increase in the total void internal surface area and a slight shift of the concentration curve towards larger voids, with the peak moving from about 40 nm to about 50 nm

  10. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm in which a generated configuration is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
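
    A classic concrete instance of a model driven purely by conditional distributions is heat-bath (Gibbs) sampling of the 2D Ising model: each update draws a spin from its conditional probability given its neighbors, with no global Hamiltonian evaluation. This is a standard illustration, not the specific models of the report:

```python
import numpy as np

rng = np.random.default_rng(0)

L, beta, sweeps = 32, 0.4, 200
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps):
    for i in range(L):
        for j in range(L):
            # Sum of the four neighbors (periodic boundaries).
            h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # P(s = +1 | neighbors)
            spins[i, j] = 1 if rng.random() < p_up else -1

print("magnetization per spin:", spins.mean())
```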

  11. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    The cumulative-binomial computer program CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557) are used independently of one another. Reliabilities and availabilities of k-out-of-n systems are analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts for calculations of reliability and availability. Program written in C.
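
    CUMBIN itself is a C program whose exact interface is not given in this record; the quantity it computes — e.g. the probability that at least k of n independent components work, i.e. the reliability of a k-out-of-n system — can be sketched with a factorial-free recurrence:

```python
def cumulative_binomial(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): at least k successes in n trials.
    Uses the term recurrence P(X=i+1) = P(X=i) * (n-i)/(i+1) * p/(1-p),
    which avoids large factorials. Assumes 0 < p < 1."""
    term = (1.0 - p) ** n            # P(X = 0)
    below_k = 0.0
    for i in range(k):
        below_k += term
        term *= (n - i) / (i + 1) * p / (1.0 - p)
    return 1.0 - below_k

# Reliability of a 2-out-of-3 system with component reliability 0.95:
print(cumulative_binomial(2, 3, 0.95))   # 0.99275
```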

  12. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
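
    A minimal sketch of a Poisson TCP model with inter-patient spread in the radiosensitivity parameter, in the spirit of the abstract: surviving clonogens per DVH bin are N_i·exp(−a·D_i), TCP for one patient is exp(−Σ N_i·exp(−a·D_i)), and the population TCP averages over a Gaussian spread in a. All parameter values (and the a-only linear model) are illustrative assumptions, not the paper's fitted numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def tcp_population(dose_bins, vol_fractions, n_clonogens=1e7,
                   alpha_mean=0.30, alpha_sd=0.08, n_patients=5000):
    """Population-averaged Poisson TCP for a DVH given as (dose, volume
    fraction) bins; alpha is spread across patients as a clipped Gaussian."""
    alphas = rng.normal(alpha_mean, alpha_sd, n_patients).clip(min=1e-6)
    # Surviving clonogens per patient: sum over bins of N_i * exp(-alpha * D_i)
    surviving = (n_clonogens * vol_fractions
                 * np.exp(-np.outer(alphas, dose_bins))).sum(axis=1)
    return np.exp(-surviving).mean()

# Uniform 64 Gy vs. the same mean dose with a 20% cold spot at 56 Gy:
uniform = tcp_population(np.array([64.0]), np.array([1.0]))
cold = tcp_population(np.array([56.0, 66.0]), np.array([0.2, 0.8]))
print(f"TCP uniform: {uniform:.3f}, with cold spot: {cold:.3f}")
```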

  14. Quantifying the distribution of paste-void spacing of hardened cement paste using X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Tae Sup, E-mail: taesup@yonsei.ac.kr [School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-749 (Korea, Republic of); Kim, Kwang Yeom, E-mail: kimky@kict.re.kr [Korea Institute of Construction Technology, 283 Goyangdae-ro, Ilsanseo-gu, Goyang, 411-712 (Korea, Republic of); Choo, Jinhyun, E-mail: jinhyun@stanford.edu [Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305 (United States); Kang, Dong Hun, E-mail: timeriver@naver.com [School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-749 (Korea, Republic of)

    2012-11-15

    The distribution of paste-void spacing in cement-based materials is an important feature related to the freeze-thaw durability of these materials, but its reliable estimation remains an unresolved problem. Herein, we evaluate the capability of X-ray computed tomography (CT) for reliable quantification of the distribution of paste-void spacing. Using X-ray CT images of three mortar specimens having different air-entrainment characteristics, we calculate the distributions of paste-void spacing of the specimens by applying previously suggested methods for deriving the exact spacing of air-void systems. This methodology is assessed by comparing the 95th percentile of the cumulative distribution function of the paste-void spacing with spacing factors computed by applying the linear-traverse method to the 3D air-void system and reconstructing an equivalent air-void distribution in 3D. Results show that the distributions of equivalent void diameter and paste-void spacing follow lognormal and normal distributions, respectively, and the ratios between the 95th percentile paste-void spacing value and the spacing factors reside within the ranges reported by previous numerical studies. This experimental finding indicates that the distribution of paste-void spacing quantified using X-ray CT has the potential to be the basis for a statistical assessment of the freeze-thaw durability of cement-based materials. Highlights: The paste-void spacing in 3D can be quantified by X-ray CT. The distribution of the paste-void spacing follows a normal distribution. The spacing factor and the 95th percentile of the CDF of paste-void spacing are correlated.

  15. X-ray Computed Tomography Assessment of Air Void Distribution in Concrete

    Science.gov (United States)

    Lu, Haizhu

    Air void size and spatial distribution have long been regarded as critical parameters in the frost resistance of concrete. In cement-based materials, entrained air void systems play an important role in performance as related to durability, permeability, and heat transfer. Many efforts have been made to measure air void parameters in a more efficient and reliable manner in the past several decades. Standardized measurement techniques based on optical microscopy and stereology on flat cut and polished surfaces are widely used in research as well as in quality assurance and quality control applications. Other more automated methods using image processing have also been utilized, but still starting from flat cut and polished surfaces. The emergence of X-ray computed tomography (CT) techniques provides the capability of capturing the inner microstructure of materials at the micrometer and nanometer scale. X-ray CT's less demanding sample preparation and capability to measure 3D distributions of air voids directly provide ample prospects for its wider use in air void characterization in cement-based materials. However, due to the huge number of air voids that can exist within a limited volume, errors can easily arise in the absence of a formalized data processing procedure. In this study, air void parameters in selected types of cement-based materials (lightweight concrete, structural concrete elements, pavements, and laboratory mortars) have been measured using micro X-ray CT. The focus of this study is to propose a unified procedure for processing the data and to provide solutions to deal with common problems that arise when measuring air void parameters: primarily the reliable segmentation of objects of interest, uncertainty estimation of measured parameters, and the comparison of competing segmentation parameters.

  16. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
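
    A short worked illustration of the anomaly discussed: for a log-normal with large log-space sigma, "mean minus one standard deviation" goes negative even though the variable is strictly positive, while a probability interval stays sensible. The equal-tailed interval and the parameters below are illustrative choices (the paper's symmetric construction may differ):

```python
import numpy as np
from scipy import stats

mu, sigma = 0.0, 1.5                        # underlying normal parameters (assumed)
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

mean, sd = dist.mean(), dist.std()
lo, hi = dist.ppf(0.025), dist.ppf(0.975)   # central 95% probability interval

print(f"mean +/- sd : {mean:.2f} +/- {sd:.2f}  (mean - sd = {mean - sd:.2f} < 0!)")
print(f"95% interval: [{lo:.3f}, {hi:.3f}]")
print(f"median = {dist.median():.3f}, mode = {np.exp(mu - sigma**2):.3f}")
```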

  17. Implementation of drift-flux model in ARTIST and assessment against THETIS void distribution

    International Nuclear Information System (INIS)

    Kim, H. C.; Yun, B. J.; Moon, S. K.; Jeong, J. J.; Lee, W. J.

    1998-01-01

    A system transient analysis code, ARTIST, based on the drift-flux model is being developed to enhance the capability of predicting the two-phase flow void distribution at low pressure and low flow conditions. The governing equations of the ARTIST code consist of three continuity equations (mixture, liquid, and noncondensibles), two energy equations (gas and mixture) and one mixture momentum equation constituted with the drift-flux model. Area-averaged one-dimensional conservation equations are established using the flow quality expressed in terms of the relative velocity. The relative velocity is obtained from the drift-flux relationship. The Chexal-Lellouche void fraction correlation is used to provide the drift velocity and the concentration parameter. The implicit one-step method and the block elimination technique are employed as the numerical solution scheme for the node-flowpath thermal-hydraulic network. In order to validate the ARTIST code, the steady-state void distributions of the THETIS boil-off tests are simulated. The axial void distributions calculated with the Chexal-Lellouche void fraction correlation at low pressure and low flow are better than those of both the two-fluid model of the RELAP5/MOD3 code and the homogeneous model. The drift-flux model of the ARTIST code is an efficient tool for predicting the void distribution of two-phase flow at low pressure and low flow conditions
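
    The drift-flux closure referred to here has the standard Zuber-Findlay form: the area-averaged void fraction follows from alpha = j_g / (C0·j + Vgj), with j = j_g + j_f. A minimal sketch; the fixed C0 and Vgj values are placeholders, not the Chexal-Lellouche correlation (which is flow-regime and pressure dependent):

```python
def drift_flux_void_fraction(j_g, j_f, c0=1.13, v_gj=0.25):
    """Area-averaged void fraction from the drift-flux relation.
    j_g, j_f: gas/liquid superficial velocities (m/s);
    c0: distribution parameter; v_gj: drift velocity (m/s).
    c0 and v_gj here are illustrative constants, not Chexal-Lellouche fits."""
    j = j_g + j_f                      # total superficial velocity
    return j_g / (c0 * j + v_gj)

print(drift_flux_void_fraction(j_g=0.5, j_f=1.0))   # ~0.26
```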

  18. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

  19. Dependence of hotspot initiation on void distribution in high explosive crystals simulated with molecular dynamics

    Science.gov (United States)

    Herring, Stuart Davis

    Microscopic defects may dramatically affect the susceptibility of high explosives to shock initiation. Such defects redirect the shock's energy and become hotspots (concentrations of stress and heat) that can initiate chemical reactions. Sufficiently large or numerous defects may produce a self-sustaining deflagration or even detonation from a shock notably too weak to detonate defect-free samples. The effects of circular or spherical voids on the shock sensitivity of a model (two- or three-dimensional) high explosive crystal are considered. We simulate a piston impact using molecular dynamics with a Reactive Empirical Bond Order (REBO) model potential for a sub-micron, sub-ns exothermic reaction in a diatomic molecular solid. In both dimensionalities, the probability of initiating chemical reactions rises more suddenly with increasing piston velocity for larger voids, which collapse more deterministically. A void of even 10 nm radius (˜39 interatomic spacings) reduces the minimum initiating velocity by a factor of 4 (8 in 3D). The transition at larger velocities to detonation is studied in micron-long samples with a single void (and its periodic images). Reactions during the shock traversal increase rapidly with velocity, then become a reliable detonation. In 2D, a void of radius 2.5 nm reduces the critical velocity by 10% from the perfect crystal; a Pop plot of the detonation delays at higher velocities shows a characteristic pressure dependence. 3D samples are more likely to react but less likely to detonate. In square lattices of voids, reducing the (common) void radius or increasing the porosity without changing the other parameter causes the hotspots to consume the material faster and detonation to occur sooner and at lower velocities. The early behavior is seen to follow a very simple ignition and growth model; the pressure exponents are more realistic than with single voids. The hotspots collectively develop a broad pressure wave (a sonic, diffuse deflagration front

  20. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field, and it has also recently been used in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone is used as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.

  1. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)

  2. Why a steady state void size distribution in irradiated UO{sub 2}? A modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Maillard, S., E-mail: serge.maillard@cea.fr [CEA, DEN, SESC, LLCC, F-13108 St Paul lez Durance (France); Martin, G. [CEA, DEN, SESC, LLCC, F-13108 St Paul lez Durance (France); CEA, DEN, SPRC, LECy, F-13108 St Paul lez Durance (France); Sabathier, C. [CEA, DEN, SESC, LLCC, F-13108 St Paul lez Durance (France)

    2016-05-01

    In UO{sub 2} pellets irradiated in standard water reactors, Xe nano-bubbles nucleate, grow, coarsen and finally reach a quasi-steady-state size distribution: transmission electron microscope (TEM) observations typically report a concentration around 10{sup −4} nm{sup −3} and a radius around 0.5 nm. This phenomenon is often considered a consequence of radiation-enhanced diffusion, precipitation of gas atoms and ballistic mixing. However, in UO{sub 2} thin foils irradiated with energetic ions at room temperature, a nano-void population whose size distribution reaches a similar steady state can be observed, although almost no foreign atoms are implanted and no significant cation vacancy diffusion is expected under these conditions. Atomistic simulations performed at low temperature only address the first stage of the process, supporting the assumption of heterogeneous void nucleation: 25 keV sub-cascades directly produce defect aggregates (loops and voids) even in the absence of gas atoms and thermal diffusion. In this work a semi-empirical stochastic model is proposed to extend the time scale covered by simulation up to damage levels where every point in the material undergoes the superposition of a large number of sub-cascade impacts. To account for the accumulation of these impacts, simple rules inferred from the atomistic simulation results are used. The model satisfactorily reproduces the TEM observations of nano-void size and concentration, which paves the way for the introduction of a more realistic damage term in rate theory models.

  3. Measurement of void fraction and bubble size distribution in two-phase flow system

    International Nuclear Information System (INIS)

    Huahun, G.

    1987-01-01

    The importance of studying two-phase flow parameters and microstructure has grown with the development of the two-phase flow discipline. In this paper, methods for measuring several important microstructure parameters in a vertical two-phase flow channel are studied. Using a conductance probe, the two-phase flow pattern and the average void fraction had been measured previously by the authors. This paper concerns the microstructure of the bubble size distribution and the local void fraction. The authors studied methods of measuring bubble velocity, size distribution and local void fraction using double conductance probes and a set of apparatus. Based on our experiments and Yoshihiro's work, a formula for the calculated local void fraction has been deduced using the statistical characteristics of bubbles in two-phase flow, and the relation between calculated bubble size and voltage has been determined. Finally, the authors checked the results using photography and a fast valve method, which is classical but reliable. The results agree with what had been obtained before

  4. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long times as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified by examining two classical examples and is compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  5. Joint probability distributions and fluctuation theorems

    International Nuclear Information System (INIS)

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, the unveiling of symmetries of correlation functions appearing in fluctuation-dissipation relations recently generalized to non-equilibrium steady states, and the mapping of averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we endow the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators.
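
    As a reference point for this family of results, one standard member (stated in its textbook form, not in the paper's specific decomposition) is the detailed fluctuation theorem for the total entropy production ΔS_tot accumulated along a trajectory in a steady state:

        \frac{P(\Delta S_{\mathrm{tot}} = +A)}{P(\Delta S_{\mathrm{tot}} = -A)} = e^{A/k_B}

    so a positive entropy production is exponentially more probable than the corresponding negative fluctuation.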

  6. Influence of the voids fraction in the power distribution for two different types of fuel assemblies

    International Nuclear Information System (INIS)

    Jacinto C, S.; Del Valle G, E.; Alonso V, G.; Martinez C, E.

    2017-09-01

    In this work an analysis of the influence of the void fraction on the power distribution was carried out, in order to better understand the fission process and the energy produced by a BWR-type fuel assembly. The fast neutron flux was analyzed considering neutrons with energies between 0.625 eV and 10 MeV. Subsequently, the thermal neutron flux was analyzed in the range between 0.005 eV and 0.625 eV. Likewise, the possible implications for the power distribution of the fuel cell were analyzed. These analyses were carried out for different void fraction values: 0.2, 0.4 and 0.8. The variations at different burnup steps were also studied: 20, 40 and 60 MWd/kg. These values were studied in two different types of fuel cells, Ge-12 and SVEA-96, with an average initial enrichment of 4.11%. (Author)

  7. Temperature distribution, porosity migration and formation of the central void in cylindrical fuel rods

    International Nuclear Information System (INIS)

    Cotta, R.M.; Roberty, N.C.

    1982-01-01

    The porosity and temperature distributions in cylindrical fuel rods were studied by numerical solution of the mass and energy equations, and the evolution of the central void radius was determined. The finite difference method was used, with an implicit formulation for the heat conduction equation and an explicit formulation for the continuity equation. The Nichols model was used to determine the constitutive equation for the porosity migration velocity. (E.G.) [pt

  8. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow, as identified through borehole flow meter surveys, are defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because it is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the 'Saturated Zone Flow and Transport Model Abstraction' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ, and the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known; therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined.

  9. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    Science.gov (United States)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use cosmic void counts to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under-massive regions.

  10. 3D DEM simulation and analysis of void fraction distribution in a pebble bed high temperature reactor

    International Nuclear Information System (INIS)

    Yang, Xingtuan; Gui, Nan; Tu, Jiyuan; Jiang, Shengyao

    2014-01-01

    Highlights: • We show a detailed analysis of void fraction (VF) in HTR-10 of China using DEM. • Radial distribution (RD) of VF is uniform in the core and oscillates near the wall. • Axial distribution (AD) varies linearly with height due to the effect of gravity. • Steady RD of VF in the conical base is Gaussian-like, larger than in the packed bed. • The joint linear and normal distribution of VF is analyzed and explained. - Abstract: The current work analyzes the radial and axial distributions of void fraction in a pebble-bed high-temperature reactor. A three-dimensional pebble bed corresponding to our test facility for the pebble-bed gas-cooled high-temperature reactor (HTR-10) at Tsinghua University is simulated via the discrete element method, and the radial and axial void fraction profiles are calculated. The simulation validates the oscillating character of the radial void fraction near the wall. Detailed calculations show the differences in void fraction profiles between the stationary packed bed and the dynamically discharging bed. Based on the vertically and circumferentially averaged radial distribution and the horizontally averaged axial distribution of void fraction, a fully three-dimensional analytical distribution of void fraction throughout the bed is established, as sketched below. The results show the combined effects of gravity and void variation in the pebble bed caused by pebble discharging: a linearly increasing packing effect caused by gravity in the vertical (axial) direction and a normal distribution of void in the horizontal (radial) direction caused by pebble drainage. These two effects coexist in the conical base of the bed, whereas only the former exists in the cylindrical volume of the bed
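
    A toy composite of the two reported trends (the functional form and every coefficient are assumptions for illustration, not the paper's fitted profile; the near-wall oscillation is ignored):

        import numpy as np

        def void_fraction(r: np.ndarray, z: np.ndarray,
                          eps0: float = 0.42, a: float = 0.02,
                          amp: float = 0.05, sigma: float = 0.5) -> np.ndarray:
            """r: radius normalized to bed radius; z: depth in bed diameters."""
            axial = eps0 - a * z                      # linear packing under gravity
            radial = amp * np.exp(-(r / sigma) ** 2)  # Gaussian-like drainage term
            return axial + radial

        print(void_fraction(np.array([0.0, 0.5, 0.9]), np.array([0.0, 2.0, 4.0])))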

  11. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly...... report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...... to distort reports. We characterize the comparable implications of the general case of a risk averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can...

  12. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
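
    A quick sketch of the construction described above, assuming NumPy/SciPy as tooling: the scalar rate of an exponential density is replaced by a sub-intensity matrix S, giving f(x) = α exp(Sx) s with exit-rate vector s = −S·1. The 2-phase example below (an Erlang-2 with rate 1.5) is illustrative.

        import numpy as np
        from scipy.linalg import expm

        alpha = np.array([1.0, 0.0])       # initial phase probabilities
        S = np.array([[-1.5, 1.5],
                      [0.0, -1.5]])        # sub-intensity matrix
        s = -S @ np.ones(2)                # exit-rate vector

        def ph_pdf(x: float) -> float:
            """Phase-type density f(x) = alpha @ expm(S x) @ s."""
            return float(alpha @ expm(S * x) @ s)

        # Erlang-2 check: f(x) = 1.5**2 * x * exp(-1.5 * x)
        x = 2.0
        print(ph_pdf(x), 1.5**2 * x * np.exp(-1.5 * x))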

  13. The probability distribution of extreme precipitation

    Science.gov (United States)

    Korolev, V. Yu.; Gorshenin, A. K.

    2017-12-01

    On the basis of the negative binomial distribution of the duration of wet periods (measured in days), an asymptotic model is proposed for the distribution of the maximum daily rainfall volume during a wet period; it has the form of a mixture of Frechet distributions and coincides with the distribution of a positive power of a random variable having the Fisher-Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and methods of their statistical analysis is demonstrated by the example of estimating the extreme distribution parameters from real data.
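
    For reference, the Frechet (type II extreme value) distribution entering such mixtures has the standard three-parameter form (a textbook parametrization, not necessarily the one used in the paper):

        F(x) = \exp\!\left[-\left(\frac{x - m}{s}\right)^{-\alpha}\right], \qquad x > m,\ s > 0,\ \alpha > 0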

  14. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field-tracing research on 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of operation and maintenance for machining centers was built. The failure data were fitted to the Weibull distribution and the exponential distribution, the goodness of the fits was tested, and the failure distribution pattern of machining centers was found. Finally, reliability characterizations for machining centers are proposed
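
    A sketch of this kind of fit using SciPy (the time-between-failure hours below are synthetic, not the paper's field data): fit both candidate models and compare them with a Kolmogorov-Smirnov test.

        import numpy as np
        from scipy import stats

        tbf_hours = np.array([110., 180., 240., 320., 410., 520., 640., 800.,
                              95., 150., 270., 390., 480., 700., 60., 350.])

        shape, loc, scale = stats.weibull_min.fit(tbf_hours, floc=0.0)
        lam = tbf_hours.mean()                   # exponential MLE scale

        print("Weibull shape=%.2f scale=%.0f" % (shape, scale))
        print("KS Weibull:", stats.kstest(tbf_hours, "weibull_min",
                                          args=(shape, 0.0, scale)))
        print("KS exponential:", stats.kstest(tbf_hours, "expon", args=(0.0, lam)))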

  15. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  16. Sensitivity analysis of an impedance void distribution in annular and bubbly flow: A theoretical study

    International Nuclear Information System (INIS)

    Lemonnier, H.; Nakach, R.; Favreau, C.; Selmer-Olsen, S.

    1989-01-01

    Impedance void meters are frequently used to measure the area-averaged void fraction in pipes, primarily for two reasons: first, the method is non-intrusive, since the measurement can be made from electrodes flush-mounted in the walls; second, the signal-processing equipment is simple. Impedance probes may be calibrated using a pressure drop measurement or a quick-closing-valve system, and little attention is generally paid to void distribution effects. It can be shown that in annular flow the departure from radial symmetry has a strong influence on the measured mean film thickness. This is easily demonstrated by solving the Laplace equation for the electrical potential by simple analytical methods. When certain spatial symmetry conditions are met, it is possible to calculate the conductance of the two-phase medium directly, without computing the potential completely. A solution of this problem using the separation-of-variables technique is also presented. There, the main difficulty is the mixed character of the boundary conditions: the boundary condition is of both Neumann and Dirichlet type on the same coordinate curve. This formulation leads to a non-separable problem, which is solved by truncating an infinite algebraic set of linear equations. (orig.)

  17. Eliciting Subjective Probability Distributions with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2015-01-01

    We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment....

  18. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2017-01-01

    significantly due to risk aversion. We characterize an approach for eliciting the entire subjective belief distribution that is minimally biased due to risk aversion. We offer simulated examples to demonstrate the intuition of our approach. We also provide theory to formally characterize our framework. And we...... provide experimental evidence which corroborates our theoretical results. We conclude that for empirically plausible levels of risk aversion, one can reliably elicit most important features of the latent subjective belief distribution without undertaking calibration for risk attitudes providing one...

  19. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the exponent ... partition function Z of the network as the sum over all degree distributions, with given energy.

  20. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that applying an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution in the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
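
    A sketch of the comparison described (synthetic data; the lognormal is an assumed asymmetric choice, since the abstract does not name one): fit Gaussian and lognormal models and compare their modes.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rms_nm = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=500)  # right-skewed

        mu, sd = stats.norm.fit(rms_nm)
        shape, loc, scale = stats.lognorm.fit(rms_nm, floc=0.0)

        gaussian_mode = mu                           # for a Gaussian, mean == mode
        lognormal_mode = scale * np.exp(-shape**2)   # mode of the lognormal fit
        print(f"Gaussian mode {gaussian_mode:.2f} nm vs "
              f"lognormal mode {lognormal_mode:.2f} nm")  # Gaussian sits higher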

  1. Cosmic void clumps

    Science.gov (United States)

    Lares, M.; Luparello, H. E.; Garcia Lambas, D.; Ruiz, A. N.; Ceccarelli, L.; Paz, D.

    2017-10-01

    Cosmic voids are of great interest given their relation to the large-scale distribution of mass and the way they trace the cosmic flows shaping the cosmic web. Here we show that the distribution of voids has, in consonance with the distribution of mass, a characteristic scale at which void pairs are preferentially located. We identify clumps of voids with similar environments and use them to define second-order underdensities. We also characterize their properties and analyze their impact on the cosmic microwave background.

  2. Probability distributions in risk management operations

    CERN Document Server

    Artikis, Constantinos

    2015-01-01

    This book is about the formulations, theoretical investigations, and practical applications of new stochastic models for fundamental concepts and operations of the discipline of risk management. It also examines how these models can be useful in the descriptions, measurements, evaluations, and treatments of risks threatening various modern organizations. Moreover, the book makes clear that such stochastic models constitute very strong analytical tools which substantially facilitate strategic thinking and strategic decision making in many significant areas of risk management. In particular the incorporation of fundamental probabilistic concepts such as the sum, minimum, and maximum of a random number of continuous, positive, independent, and identically distributed random variables in the mathematical structure of stochastic models significantly supports the suitability of these models in the developments, investigations, selections, and implementations of proactive and reactive risk management operations. The...

  3. A comparison of void distributions in Abell cluster catalogue with numerical simulations

    International Nuclear Information System (INIS)

    Jing Yipeng.

    1989-08-01

    We measured the void probability functions (VPF) for two virtually redshift-complete samples of Abell clusters, and used them to test four current theoretical models. The VPF of R ≥ 1 clusters is lower than that of the explosion model by about one order of magnitude. The open CDM model (Ω = 0.2) is also excluded at ≥ 3 σ, likewise due to excessive superclustering. The VPF of the flat neutrino-dominated universe is consistent with the observations, and the flat CDM model can be marginally accepted at 2 σ in matching the observational VPF. However, all four models are unsuccessful if we combine the two-point correlation function and the VPF. (author). 15 refs, 3 figs

  4. Void hierarchy and cosmic structure

    International Nuclear Information System (INIS)

    Weygaert, Rien van de; Ravi Sheth

    2004-01-01

    Within the context of hierarchical scenarios of gravitational structure formation we describe how an evolving hierarchy of voids evolves on the basis of two processes, the void-in-void process and the void-in-cloud process. The related analytical formulation in terms of a two-barrier excursion problem leads to a self-similarly evolving peaked void size distribution

  5. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  6. Implementation of drift-flux correlations in ARTIST and its assessment in comparison with THETIS void distribution

    International Nuclear Information System (INIS)

    Yun, B. J.; Kim, H. C.; Moon, S. K.; Lee, W. J.

    1998-01-01

    A non-homogeneous, non-equilibrium drift-flux model was developed in the ARTIST code to enhance its capability of predicting the two-phase flow void distribution at low pressure and low flow conditions. The governing equations of the ARTIST code consist of three continuity equations (mixture, liquid, and noncondensibles), two energy equations (gas and mixture) and one mixture momentum equation, closed with the drift-flux model. In order to provide the Co and Vgj of the drift-flux model, four drift-flux correlations are implemented: the Chexal-Lellouche, Ohkawa-Lahey, GE Ramp and Dix models. In order to evaluate the accuracy of the drift-flux correlations, the steady-state void distributions of the THETIS boil-off tests are simulated. The results show that the drift-flux model is quite satisfactory in terms of accuracy and computational efficiency. Among the four drift-flux correlations, the Chexal-Lellouche model showed wide applicability in the prediction of void fraction from low- to high-pressure conditions. In particular, the prediction of the axial void distribution at low pressure and low flow is far better than that of both the two-fluid model of the RELAP5/MOD3 code and the homogeneous model. Thus, the drift-flux model of the ARTIST code can be used as an efficient tool for predicting the void distribution of two-phase flow at low pressure and low flow conditions
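
    A sketch of the drift-flux closure such codes use: the area-averaged void fraction follows from the gas superficial velocity jg, the mixture superficial velocity j = jg + jf, a distribution parameter Co and a drift velocity Vgj. The constants below mimic a simple round-tube bubbly-flow correlation (an assumption; the Chexal-Lellouche correlation itself is far more elaborate).

        def void_fraction(jg: float, jf: float, c0: float = 1.13,
                          vgj: float = 0.24) -> float:
            """alpha = jg / (Co * (jg + jf) + Vgj), velocities in m/s."""
            return jg / (c0 * (jg + jf) + vgj)

        print(void_fraction(jg=0.5, jf=1.0))   # ~0.26 at low-flow conditions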

  7. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  8. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems small for any probability. For a given set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing the possibility of preparing in advance the runoff coefficient tables used by the rational method, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil, were carried out.
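
    A sketch of the Green-Ampt component of the coupled model (soil parameters are illustrative, not the paper's calibration): cumulative infiltration F(t) under ponding satisfies K·t = F − ψΔθ·ln(1 + F/(ψΔθ)), and the infiltration rate is f = K·(1 + ψΔθ/F).

        import numpy as np
        from scipy.optimize import brentq

        K = 1.0e-6      # saturated hydraulic conductivity, m/s (assumed)
        PSI = 0.11      # wetting-front suction head, m (assumed)
        DTHETA = 0.3    # moisture deficit; this is where ASMC enters (assumed)

        def cumulative_infiltration(t_s: float) -> float:
            pd = PSI * DTHETA
            g = lambda F: F - pd * np.log(1.0 + F / pd) - K * t_s
            return brentq(g, 1e-9, 10.0)      # solve the implicit equation for F

        def infiltration_rate(t_s: float) -> float:
            return K * (1.0 + PSI * DTHETA / cumulative_infiltration(t_s))

        print(infiltration_rate(600.0), infiltration_rate(3600.0))  # decays in time

    A wetter antecedent state lowers DTHETA, reducing infiltration and raising the simulated peak discharge, which is the ASMC effect the study quantifies.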

  9. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  10. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by experts. The problem formulation is presented together with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  11. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
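
    To make the Onemax case concrete, here is a direct computation of the exact offspring fitness distribution under uniform bit-flip mutation (a brute-force double-binomial convolution, not the paper's Krawtchouk-polynomial derivation; the instance below is arbitrary). For a parent with k ones out of n bits, mutation flips Bin(k, p) ones and Bin(n−k, p) zeros, so the offspring fitness is k − X + Y.

        import math

        def binom_pmf(m: int, n: int, p: float) -> float:
            return math.comb(n, m) * p**m * (1 - p) ** (n - m)

        def onemax_fitness_pmf(n: int, k: int, p: float) -> dict[int, float]:
            pmf: dict[int, float] = {}
            for x in range(k + 1):            # ones flipped to zero
                for y in range(n - k + 1):    # zeros flipped to one
                    f = k - x + y
                    pmf[f] = pmf.get(f, 0.0) + binom_pmf(x, k, p) * binom_pmf(y, n - k, p)
            return pmf

        dist = onemax_fitness_pmf(n=10, k=7, p=0.1)
        print({f: round(q, 4) for f, q in sorted(dist.items())})   # sums to 1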

  12. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as of Particulate Matter with an aerodynamic diameter < 10 μm (PM 10 ). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The standard EAQLV limits for TSP and PM 10 concentrations are 24-h averages of 230 μg/m 3 and 70 μg/m 3 , respectively. Five frequency distribution functions combined with seven plotting-position formulas (empirical cumulative distribution functions) are compared in fitting the daily average TSP and PM 10 concentrations for Ain Sokhna city in 2014. The Quantile-Quantile plot (Q-Q plot) is used to assess how closely a data set fits a particular distribution. A proper probability distribution representing TSP and PM 10 is chosen based on the values of statistical performance indicators. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gives the best fit for TSP and PM 10 concentrations, followed by the Burr distribution with the same plotting position. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution: in 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM 10 concentration is found to exceed the threshold limit on 174 days

  13. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  14. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into the appropriate subroutine. The code generates a Monte Carlo sample from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sample of the combined distribution. When the desired number of samples has been obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
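
    The STADIC idea in modern form (the combining function below is a stand-in for the user-supplied FORTRAN routine, and the input distributions are arbitrary): sample each input, push the samples through the function, and summarize the result.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000

        a = rng.lognormal(mean=0.0, sigma=0.5, size=N)   # input distribution 1
        b = rng.uniform(low=0.5, high=1.5, size=N)       # input distribution 2

        combined = a * b                                  # user-defined combination

        mean = combined.mean()
        std = combined.std(ddof=1)
        lo, hi = np.percentile(combined, [5, 95])         # 90% confidence limits
        print(f"mean={mean:.3f} std={std:.3f} 90% interval=({lo:.3f}, {hi:.3f})")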

  15. A variational constitutive model for the distribution and interactions of multi-sized voids

    KAUST Repository

    Liu, Jinxing; El Sayed, Tamer S.

    2013-01-01

    of the radii of the voids. In this study, we use a new form of the incompressibility of the matrix to propose the formula for the volumetric plastic energy of a void inside a porous medium. As a consequence, we are able to account for the weakening effect

  16. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  17. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  18. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S{sub st}, L{sub st}, p{sub st}) for stochastic uncertainty, a probability space (S{sub su}, L{sub su}, p{sub su}) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S{sub st}, L{sub st}, p{sub st}) and (S{sub su}, L{sub su}, p{sub su}). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  19. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S{sub st}, L{sub st}, P{sub st}) for stochastic uncertainty, a probability space (S{sub su}, L{sub su}, P{sub su}) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S{sub st}, L{sub st}, P{sub st}) and (S{sub su}, L{sub su}, P{sub su}). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  20. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. A number of statistical methods are available to help select the best-fitting model. Some graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit; the main idea is to measure the 'distance' between the data and the tested distribution and compare that distance with some threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
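
    A sketch of the model-ranking workflow described (synthetic data, and the candidate set is an assumption): fit several distributions, then order them by Kolmogorov-Smirnov distance and log-likelihood.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        data = rng.gamma(shape=3.0, scale=2.0, size=1000)

        candidates = ["gamma", "lognorm", "norm", "expon"]
        results = []
        for name in candidates:
            dist = getattr(stats, name)
            params = dist.fit(data)
            ks = stats.kstest(data, name, args=params).statistic
            loglik = np.sum(dist.logpdf(data, *params))
            results.append((ks, -loglik, name))

        for ks, nll, name in sorted(results):   # smaller KS distance = better fit
            print(f"{name:8s} KS={ks:.4f} negloglik={nll:.1f}")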

  1. Estimation of air void and aggregate spatial distributions in concrete under uniaxial compression using computer tomography scanning

    International Nuclear Information System (INIS)

    Wong, R.C.K.; Chau, K.T.

    2005-01-01

    Normal- and high-strength concrete cylinders (designed compressive strengths of 30 and 90 MPa at 28 days) were loaded uniaxially. A computer tomography (CT) scanning technique was used to examine the evolution of air voids inside the specimens at various loading states, up to 85% of the ultimate compressive strength. The normal-strength concrete exhibited very different changes in internal microstructure compared with the high-strength concrete: there was significant nucleation and growth of air voids in the normal-strength specimen, while the increase in air voids in the high-strength specimen was insignificant. In addition, the CT images were used to map the aggregate spatial distributions within the specimens. No intrinsic anisotropy was detected from the fabric analysis

  2. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  3. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
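
    A simple grid-based sketch of the kind of numerical Bayes update mentioned above (the prior, likelihood and numbers are illustrative assumptions, not the report's worked example): update a prior on a failure rate λ after observing k failures over exposure T, with a Poisson likelihood.

        import numpy as np

        lam = np.linspace(1e-4, 1e-1, 2000)             # grid over the parameter
        dx = lam[1] - lam[0]

        prior = 1.0 / lam                                # assumed vague prior ~ 1/lambda
        prior /= prior.sum() * dx

        k, T = 2, 150.0                                  # observed failures, exposure
        likelihood = (lam * T) ** k * np.exp(-lam * T)   # Poisson(k; lam*T), unnormalized

        posterior = prior * likelihood                   # Bayes' theorem
        posterior /= posterior.sum() * dx                # normalize on the grid

        mean = (lam * posterior).sum() * dx
        print(f"posterior mean failure rate ~ {mean:.4f} per unit time")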

  4. Development of measurement method of void fraction distribution on subcooled flow boiling using neutron radiography

    International Nuclear Information System (INIS)

    Kureta, Masatoshi; Matsubayashi, Masahito; Akimoto, Hajime

    1999-03-01

    In relation to the development of solid targets for high-intensity neutron sources, plasma-facing components of fusion reactors and so forth, it is indispensable to estimate the void fraction in high-heat-load subcooled flow boiling of water. Since the existing prediction methods for void fraction are based on databases for tubes, it is necessary to investigate their extensibility to the narrow-gap rectangular channels used in high-heat-load devices. However, a method for measuring void fraction in narrow-gap rectangular channels had not been established, because of the difficulty of the measurement. The objectives of this investigation are the development of a new system for bubble visualization and void fraction measurement in subcooled flow boiling in narrow-gap rectangular channels using neutron radiography, and the establishment of a void fraction database using this measurement system. This report describes the void fraction measurement method based on the neutron radiography technique, and summarizes the measured void fraction data for one-side-heated narrow-gap rectangular channels under subcooled boiling conditions. (author)

  5. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

    Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has consistently been shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log normal, log logistic and Birnbaum-Saunders distributions.

  6. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.

    Science.gov (United States)

    Chan, Stephanie C Y; Niv, Yael; Norman, Kenneth A

    2016-07-27

    The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations we see. A range of high-level cognitive processes require inference of a probability distribution (or "belief distribution") over the possible latent causes that might be generating our current observations. This is true for reinforcement learning and decision making (where the latent cause comprises the true "state" of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or "schema"). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex, an area that has been separately implicated in the representations of both states and schemas.
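
    A toy version of the inference the task requires (the prior and likelihood values are invented for illustration): a posterior over four latent causes given one observation, via Bayes' rule, plus the log transform the study found to match OFC patterns best.

        import numpy as np

        prior = np.array([0.25, 0.25, 0.25, 0.25])       # P(cause)

        # P(observation | cause) for the current observation (assumed values)
        likelihood = np.array([0.60, 0.20, 0.15, 0.05])

        posterior = prior * likelihood                    # Bayes' rule
        posterior /= posterior.sum()                      # normalize

        log_posterior = np.log(posterior)                 # log-transformed belief
        print(posterior, log_posterior)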

  7. STRUCTURE IN THE 3D GALAXY DISTRIBUTION. II. VOIDS AND WATERSHEDS OF LOCAL MAXIMA AND MINIMA

    International Nuclear Information System (INIS)

    Way, M. J.; Gazis, P. R.; Scargle, Jeffrey D.

    2015-01-01

    The major uncertainties in studies of the multi-scale structure of the universe arise not from observational errors but from the variety of legitimate definitions and detection methods for individual structures. To facilitate the study of these methodological dependencies, we have carried out 12 different analyses defining structures in various ways. This has been done in a purely geometrical way by utilizing the HOP algorithm as a unique parameter-free method of assigning groups of galaxies to local density maxima or minima. From three density estimation techniques (smoothing kernels, Bayesian blocks, and self-organizing maps) applied to three data sets (the Sloan Digital Sky Survey Data Release 7, the Millennium simulation, and randomly distributed points) we tabulate information that can be used to construct catalogs of structures connected to local density maxima and minima. We also introduce a void finder that utilizes a method to assemble Delaunay tetrahedra into connected structures and characterizes regions empty of galaxies in the source catalog

  8. The characteristics of void distribution in spalled high purity copper cylinder under sweeping detonation

    Science.gov (United States)

    Yang, Yang; Jiang, Zhi; Chen, Jixinog; Guo, Zhaoliang; Tang, Tiegang; Hu, Haibo

    2018-03-01

    The effects of different peak compression stresses (2-5 GPa) on the spallation behaviour of high-purity copper cylinders during sweeping detonation were examined by electron backscatter diffraction, a Doppler pin system and optical microscopy. The velocity history of the inner surface and the characteristics of the void distributions in the spalled copper cylinders were investigated. The results indicated that the spall strength of copper in these experiments was lower than that reported previously for plate-impact loading; the geometry of the cylindrical copper and the obliquity of the incident shock during sweeping detonation may be the main reasons. The loading stress appears to be responsible for the characteristics of the resultant damage fields, and the maximum damage degree increased with increasing shock stress. Spall planes were found in different cross-sections of a sample loaded with the same shock stress of 3.29 GPa, and the distance from the initiation end had little effect on the maximum damage degree (which ranged from 12 to 14%), which means that the spallation behaviour was stable along the direction parallel to the detonation propagation under the same shock stress.

  9. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit for the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the 'density-gram' is employed instead of the usual histogram. With this method, a more sensible graphical view of the empirical density may be obtained for cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities

  10. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the probability distributions best suited to model extreme share returns in Malaysia. To achieve this, weekly and monthly maxima of daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used for parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.

  11. Multimode Interference: Identifying Channels and Ridges in Quantum Probability Distributions

    OpenAIRE

    O'Connell, Ross C.; Loinaz, Will

    2004-01-01

    The multimode interference technique is a simple way to study the interference patterns found in many quantum probability distributions. We demonstrate that this analysis not only explains the existence of so-called "quantum carpets," but can explain the spatial distribution of channels and ridges in the carpets. With an understanding of the factors that govern these channels and ridges we have a limited ability to produce a particular pattern of channels and ridges by carefully choosing the ...

  12. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions of route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
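
    A minimal sketch of the convolution-with-conditional-probability idea, on discretized travel times: the route PMF is built by summing the link-1 PMF against a link-to-link conditional matrix. The PMF values below are invented; the real method would estimate the conditional matrix from FPM data.

```python
import numpy as np

# Discretized travel-time PMF for link 1 (1-minute bins), illustrative values.
p1 = np.array([0.1, 0.3, 0.4, 0.2])          # P(T1 = t), t = 0..3

# Link-to-link conditional PMF: row t1, column t2 gives P(T2 = t2 | T1 = t1),
# encoding the upstream-downstream speed correlation described in the record.
p2_given_1 = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.2, 0.3, 0.5],
])

# Route PMF: P(T = t) = sum over t1 of P(T1 = t1) * P(T2 = t - t1 | T1 = t1).
n = len(p1)
route = np.zeros(2 * n - 1)
for t1 in range(n):
    for t2 in range(n):
        route[t1 + t2] += p1[t1] * p2_given_1[t1, t2]
print(route, route.sum())   # route travel-time distribution; sums to 1
```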

  13. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently introduced fractional Poisson probability distribution are presented. As a physical application, a new family of quantum coherent states is introduced and studied. As mathematical applications, we develop the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of the newly introduced quantum coherent states. Fractional Stirling numbers of the second kind are introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind is found. In the limiting case, when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the developments listed above reduce to the well-known results of quantum optics and the theory of combinatorial numbers.

  14. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for the higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics.
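
    The record does not reproduce the three coupled nonlinear equations, so the sketch below shows only the simplest version of the idea: draw Maxwellian samples and renormalize so the low-order sample moments are exact, which reduces statistical noise in a particle-in-cell loading. The full procedure would also correct higher velocity moments.

```python
import numpy as np

rng = np.random.default_rng(42)
v = rng.normal(0.0, 1.0, size=100_000)   # raw 1D Maxwellian velocity samples

# Renormalize so the *sample* mean and variance match the target exactly;
# the paper's procedure goes further and corrects higher velocity moments
# by solving three coupled nonlinear equations.
v = (v - v.mean()) / v.std()

print(v.mean(), v.var(), np.mean(v**3))  # exactly 0, 1, and a residual 3rd moment
```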

  15. Quantum Fourier transform, Heisenberg groups and quasi-probability distributions

    International Nuclear Information System (INIS)

    Patra, Manas K; Braunstein, Samuel L

    2011-01-01

    This paper aims to explore the inherent connection between Heisenberg groups, quantum Fourier transform (QFT) and (quasi-probability) distribution functions. Distribution functions for continuous and finite quantum systems are examined from three perspectives and all of them lead to Weyl-Gabor-Heisenberg groups. The QFT appears as the intertwining operator of two equivalent representations arising out of an automorphism of the group. Distribution functions correspond to certain distinguished sets in the group algebra. The marginal properties of a particular class of distribution functions (Wigner distributions) arise from a class of automorphisms of the group algebra of the Heisenberg group. We then study the reconstruction of the Wigner function from the marginal distributions via inverse Radon transform giving explicit formulae. We consider some applications of our approach to quantum information processing and quantum process tomography.

  16. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model, Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution, but the values of the parameters describing each distribution depend on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution depend on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The parameters describing the distribution of minimum relative humidity depend on rainfall occurrence on the previous and current days. Parameter values for total solar radiation depend on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. Possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
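
    The precipitation component described above (first-order Markov chain for occurrence, gamma distribution for amounts) is easy to sketch; the transition probabilities and gamma parameters below are invented, not the fitted Geneva, New York values.

```python
import numpy as np

rng = np.random.default_rng(7)

# First-order Markov chain for precipitation occurrence (illustrative values):
# P(wet today | dry yesterday) and P(wet today | wet yesterday).
p_wet_given_dry, p_wet_given_wet = 0.25, 0.60
# Gamma distribution for rain amount on wet days (shape, scale in mm), assumed.
gamma_shape, gamma_scale = 0.8, 8.0

wet = False
rain = []
for day in range(365):
    p_wet = p_wet_given_wet if wet else p_wet_given_dry
    wet = rng.random() < p_wet
    rain.append(rng.gamma(gamma_shape, gamma_scale) if wet else 0.0)

rain = np.array(rain)
print(f"wet days: {(rain > 0).sum()}, total: {rain.sum():.0f} mm")
```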

  17. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for cumulative production based on the experience-curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Given its wide applicability to industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of production processes.

  18. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  19. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
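
    The construction behind the bivariate Weibull model can be sketched directly: correlated, isotropic, mean-zero Gaussian velocity components give Rayleigh-distributed speeds at each site, and a power-law transform maps these to Weibull marginals. The correlation and shape values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.6   # sample size and component correlation between sites

# Mean-zero, isotropic, Gaussian velocity components at two sites, with the
# same cross-site correlation r for u and for v (the bivariate Weibull setup).
cov = np.array([[1.0, r], [r, 1.0]])
u = rng.multivariate_normal([0, 0], cov, size=n)
v = rng.multivariate_normal([0, 0], cov, size=n)

w1 = np.hypot(u[:, 0], v[:, 0])   # wind speed at site 1 (Rayleigh marginal)
w2 = np.hypot(u[:, 1], v[:, 1])   # wind speed at site 2

# A power-law transform w -> w**(2/k) turns the Rayleigh marginals into
# Weibull(k) marginals, giving a draw from a bivariate Weibull model.
k = 2.2
w1_wb, w2_wb = w1 ** (2.0 / k), w2 ** (2.0 / k)
print(np.corrcoef(w1_wb, w2_wb)[0, 1])   # induced speed-speed correlation
```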

  20. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying the power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution or the q-exponential family by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (maximum a posteriori probability) estimator.

  1. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g^(2) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g^(2)(0) > 2]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.

  2. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis, based on the so-called "restricted second moment of the probability distribution", can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face-centred cubic Cu deformed in uniaxial tension, a body-centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.

  3. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.

  4. Probabilities of filaments in a Poissonian distribution of points -I

    International Nuclear Information System (INIS)

    Betancort-Rijo, J.

    1989-01-01

    Statistical techniques are devised to assess the likelihood that a Poisson sample of points in two or three dimensions contains specific filamentary structures. For that purpose, the expression of Otto et al. (1986, Astrophys. J., 304) for the probability density of clumps in a Poissonian distribution of points is generalized to any value of the density contrast. A way of counting filaments differing from that of Otto et al. is proposed, because at low density contrast the filaments counted by Otto et al. are distributed in a clumpy fashion, each clump of filaments corresponding to a distinct observed filament.

  5. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics among different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence are presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed at the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics have also been observed in other physical systems that are characterized by convection, such as the X-ray fluctuations emitted by the Cygnus X-1 accretion disc plasma and sea surface temperature fluctuations.

  6. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary-to-secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the number of tubes with detectable flaws of various types as a function of time, and (2) the size distributions of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  7. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that this quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  8. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

    Highlights: ► Wind power stochastic characteristics, namely the standard deviation and the dimensionless skewness, are derived. ► Perturbation theory yields expressions for the wind power statistics from the Weibull probability distribution function (PDF) of wind speed. ► Comparisons with the corresponding characteristics of the wind speed PDF show consistency with the Weibull PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in an air current is directly proportional to the cube of the wind speed. In practice, wind speeds are recorded as a time series, so it is necessary to develop a formulation that takes into account the statistical parameters of such a series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any specific wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure is presented for this purpose in this paper.
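
    For the Weibull case the key ingredient is the raw-moment formula E[v^n] = c^n Γ(1 + n/k), from which the wind power expectation follows in a few lines. The parameter values below are assumed for illustration, and the paper's perturbation-theory expressions for higher statistics are not reproduced here.

```python
import math

# Weibull wind-speed parameters (assumed): shape k and scale c in m/s.
k, c = 2.0, 7.0
rho, area = 1.225, 100.0              # air density (kg/m^3), rotor area (m^2)

# Raw moments of a Weibull variable: E[v^n] = c**n * Gamma(1 + n/k).
m1 = c * math.gamma(1 + 1 / k)        # mean wind speed
m3 = c**3 * math.gamma(1 + 3 / k)     # mean cube of wind speed

mean_power = 0.5 * rho * area * m3    # expected available wind power (W)
print(f"mean speed {m1:.2f} m/s, mean power {mean_power/1e3:.1f} kW")
# Note: mean power exceeds 0.5*rho*area*m1**3 because E[v^3] > (E[v])^3.
```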

  9. Void lattices

    International Nuclear Information System (INIS)

    Chadderton, L.T.; Johnson, E.; Wohlenberg, T.

    1976-01-01

    Void lattices in metals apparently owe their stability to elastically anisotropic interactions. An ordered array of voids on the anion sublattice in fluorite does not fit so neatly into this scheme of things. Crowdions may play a part in the formation of the void lattice, and stability may derive from other sources. (Auth.)

  10. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  11. Scarred resonances and steady probability distribution in a chaotic microcavity

    International Nuclear Information System (INIS)

    Lee, Soo-Young; Rim, Sunghwan; Kim, Chil-Min; Ryu, Jung-Wan; Kwon, Tae-Yoon

    2005-01-01

    We investigate scarred resonances of a stadium-shaped chaotic microcavity. It is shown that two components with different chirality in the scarring pattern are slightly rotated, in opposite senses, from the underlying unstable periodic orbit when the incident angles of the scarring pattern are close to the critical angle for total internal reflection. In addition, the correspondence between the emission pattern and the scarring pattern disappears when the incident angles are much larger than the critical angle. The steady probability distribution gives a consistent explanation of these interesting phenomena and makes it possible to predict the emission pattern in the latter case.

  12. A Bernstein-Von Mises Theorem for discrete probability distributions

    OpenAIRE

    Boucheron, S.; Gassiat, E.

    2008-01-01

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when the model dimension increases with sample size. We consider a probability mass function θ0 on ℕ∖{0} and a sequence of truncation levels (k_n)_n satisfying k_n^3 ≤ n inf_{i≤k_n} θ0(i). Let θ̂ denote the maximum likelihood estimate of (θ0(i))_{i≤k_n} and let Δ_n(θ0) denote the k_n-dimensional vector whose i-th coordinate is defined by √n(θ̂_n(i) − θ0(i)) for 1 ≤ i ≤ k_n. We check that under mild ...

  13. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics.

  14. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and are triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
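
    A Monte Carlo version of the derived-distributions idea can be sketched as follows: sample storms from the RPPP (exponential intensity and duration), push each through a deterministic failure model, and estimate the exceedance probability of FOS < 1. The infinite-slope model and every numerical value below are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Rectangular-pulse storms: exponential mean intensity (mm/h) and duration (h).
intensity = rng.exponential(5.0, n)
duration = rng.exponential(6.0, n)

# Illustrative deterministic failure model (all values assumed): the wetting
# front deepens with infiltrated depth, and an infinite-slope factor of
# safety is evaluated at a fixed failure-surface depth z.
z_w = np.minimum(1.5, 1e-3 * intensity * duration / 0.3)   # wetted depth (m)
c, phi, beta = 5e3, np.radians(30), np.radians(35)         # Pa, rad, rad
gamma_s, gamma_w, z = 19e3, 9.81e3, 1.2                    # N/m^3, N/m^3, m
u = gamma_w * np.maximum(z_w - 0.2, 0.0)                   # crude pore pressure
fos = (c + (gamma_s * z * np.cos(beta)**2 - u) * np.tan(phi)) / (
    gamma_s * z * np.sin(beta) * np.cos(beta))

print(f"P(FOS < 1) ~ {np.mean(fos < 1):.4f}")   # landslide probability estimate
```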

  15. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Changes in the probability distributions of individual words and word types were investigated within two samples of daily press spanning fifty years. Two samples of daily press were used in this study: one derived from the Corpus of Serbian Language (CSL) (Kostić, Đ., 2001), which covers the period between 1945 and 1957, and the other derived from the Ebart Media Documentation (EBR), which was compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The obtained results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease in sentence length over the last five decades. Conspicuous changes in the probability distribution of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. Such an outcome suggests that nouns and adjectives are most susceptible to diachronic changes, while verbs and prepositions appear to be resistant to such changes.

  16. Probability Distribution and Projected Trends of Daily Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    CAO Li-Ge; ZHONG Jun; SU Bu-Da; ZHAI Jian-Qing; Marco GEMMER

    2013-01-01

    Based on observed daily precipitation data from 540 stations and 3,839 gridded data points from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the ability of CCLM to simulate daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that, except for the western Qinghai-Tibetan Plateau and South China, the distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM captures the distribution characteristics of daily precipitation over China well. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increases in maximum daily rainfall and in the longest non-precipitation period during the flood season in the aforementioned regions also indicate increasing trends of droughts and floods in the next 40 years.

  17. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem, including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA, and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat à l'énergie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise I-4) and the third exercise of Phase II (Exercise II-3), as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models and/or to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) takes us one step further in investigating modelling capabilities by bringing uncertainty analysis into the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification), as well as the uncertainties in code models, can be accounted for to produce results with calculational uncertainties and to compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected

  18. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    The probability distributions of field differences Δx(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation; they are enhanced for large, reduced for intermediate and enhanced for very small values of Δx. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scales τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of Δx. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.

  19. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of the expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations of the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation that a simple two-state kinetic model is consistent with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful for extracting rates from processes exhibiting dynamic FRET, and for testing models of conformational dynamics against experimental data.
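
    Analytic PDA is involved, but a Monte Carlo stand-in conveys the idea: within each observation bin a molecule interconverts between two FRET states, and the apparent efficiency is drawn with binomial shot noise. All rates, efficiencies and photon counts below are assumed, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

k12, k21 = 2.0, 3.0          # interconversion rates (1/ms), assumed
e1, e2 = 0.25, 0.80          # state FRET efficiencies, assumed
t_bin, n_photons, n_bins = 1.0, 50, 50_000   # bin (ms), photons/bin, bins

def fraction_in_state1(t_total):
    """Time fraction spent in state 1 during one bin (simple Gillespie walk)."""
    t, t1 = 0.0, 0.0
    state = rng.random() < k21 / (k12 + k21)   # start from equilibrium
    while t < t_total:
        rate = k12 if state else k21
        dwell = min(rng.exponential(1.0 / rate), t_total - t)
        t1 += dwell if state else 0.0
        t, state = t + dwell, not state
    return t1 / t_total

e_apparent = np.empty(n_bins)
for i in range(n_bins):
    f1 = fraction_in_state1(t_bin)
    e_mean = f1 * e1 + (1 - f1) * e2           # time-averaged efficiency
    e_apparent[i] = rng.binomial(n_photons, e_mean) / n_photons  # shot noise

hist, edges = np.histogram(e_apparent, bins=40, range=(0, 1))  # smFRET histogram
```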

  20. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

    The computational procedure for hyperspectral imagery (HSI) is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure. The need for effective processing and analysis of HSI has met many difficulties. It has been evidenced that dimensionality reduction is a powerful tool for high-dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  1. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness of the station's infrastructure layout and plan of operation. However, these three methods do not take the timetable at the station into consideration. Therefore, two methods are introduced in this paper, making it possible to estimate the robustness of different timetables at a station, or of different infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time...

  2. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  3. Multiscale probability distribution of pressure fluctuations in fluidized beds

    International Nuclear Information System (INIS)

    Ghasemi, Fatemeh; Sahimi, Muhammad; Reza Rahimi Tabar, M; Peinke, Joachim

    2012-01-01

    Analysis of flow in fluidized beds, a common type of chemical reactor, is of much current interest due to its fundamental as well as industrial importance. Experimental data for the successive increments of the pressure-fluctuation time series in a fluidized bed are analyzed by computing a multiscale probability density function (PDF) of the increments. The results demonstrate the evolution of the shape of the PDF from short to long time scales. The deformation of the PDF across time scales may be modeled by the log-normal cascade model. The results are also in contrast to previously proposed PDFs for the pressure fluctuations, which include a Gaussian distribution and a PDF with a power-law tail. To better understand the properties of the pressure fluctuations, we also construct shuffled and surrogate time series for the data and analyze them with the same method. It turns out that long-range correlations play an important role in the structure of the time series that represents the pressure fluctuations.
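
    The multiscale PDF computation, and the shuffled surrogate used to isolate the role of correlations, can be sketched as below; a correlated random walk stands in for the pressure signal.

```python
import numpy as np

rng = np.random.default_rng(11)
# Correlated stand-in signal for the pressure time series (illustrative).
x = np.cumsum(rng.standard_normal(2**16)) * 0.01

def increment_pdf(series, tau, bins=60):
    """Normalized histogram of standardized increments series[t+tau] - series[t]."""
    dx = series[tau:] - series[:-tau]
    dx = (dx - dx.mean()) / dx.std()      # compare PDF *shapes* across scales
    pdf, edges = np.histogram(dx, bins=bins, range=(-6, 6), density=True)
    return pdf, edges

# PDFs of increments from short to long time scales.
pdfs = {tau: increment_pdf(x, tau)[0] for tau in (1, 16, 256, 4096)}

# Shuffling destroys temporal correlations but keeps the amplitude
# distribution, isolating the role of long-range correlations.
pdf_shuffled, _ = increment_pdf(rng.permutation(x), 16)
```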

  4. Experimental investigation of void distribution in Suppression Pool during the initial blowdown period of a Loss of Coolant Accident using air–water two-phase mixture

    International Nuclear Information System (INIS)

    Rassame, Somboon; Griffiths, Matthew; Yang, Jun; Lee, Doo Yong; Ju, Peng; Choi, Sung Won; Hibiki, Takashi; Ishii, Mamoru

    2014-01-01

    Highlights: • Basic understanding of the venting phenomena in the SP during a LOCA was obtained. • A series of experiments was carried out using the PUMA-E test facility. • Two experimental phases, namely an initial and a quasi-steady phase, were observed. • The maximum void penetration depth occurred during the initial phase. - Abstract: During the initial blowdown period of a Loss of Coolant Accident (LOCA), the non-condensable gas initially contained in the BWR containment is discharged to the pressure suppression chamber through the blowdown pipes. The performance of the Emergency Core Cooling System (ECCS) can be degraded by ingestion of the released gas into the suction intakes of the ECCS pumps. An understanding of the relevant phenomena in the pressure suppression chamber is important in analyzing potential gas intrusion into the suction intakes of ECCS pumps. To obtain a basic understanding of the relevant phenomena and generic data on the void distribution in the pressure suppression chamber during the initial blowdown period of a LOCA, tests with various blowdown conditions were conducted using the existing Suppression Pool (SP) tank of the integral test facility called the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E), a scaled downcomer pipe installed in the PUMA-E SP, and an air discharge pipe system. Two different air injection pipe diameters (0.076 and 0.102 m), a range of air volumetric fluxes (7.9–24.7 m/s), different initial void conditions in the air injection pipe (fully void, partially void, and fully filled with water) and different air velocity ramp rates (1.0, 1.5, and 2.0 s) were used to investigate the impact of the blowdown conditions on the void distribution in the SP. Two distinct experimental phases, namely an initial and a quasi-steady phase, were observed. The maximum void penetration depth occurred during the initial phase. The quasi-steady phase produced less void penetration depth.

  5. Effect of Wall Friction and Vortex Generation on Radial Void Distribution The Wall - Vortex Effect

    Energy Technology Data Exchange (ETDEWEB)

    Rouhani, Zia

    1974-09-15

    Arguments are presented to prove the existence of rolling vortices in two-phase flow. In the liquid phase they appear in a boundary layer near the walls, while in the continuous vapor phase they occur near the interface with a liquid film. The intensity and size of these vortices are expected to depend on the local velocity gradients normal to the walls. A discussion is given of the interaction between the rotational field associated with such vortices and bubbles in liquid flow or droplets in vapor flow. This interaction is called the wall-vortex effect. It appears that several apparently unrelated phenomena observed in two-phase flow systems may be interpreted in terms of this mechanism. Among these are: radial void peaking near the walls; slip ratios less than unity observed even in vertical upward flow; reduced droplet diffusion near the liquid film; reduced vapor mixing between subchannels at low steam qualities; and an accelerated flashing process in the flow of depressurized liquid. Finally, a comparison with the well-known Magnus effect is also included.

  6. Performance Probability Distributions for Sediment Control Best Management Practices

    Science.gov (United States)

    Ferrell, L.; Beighley, R.; Walsh, K.

    2007-12-01

    Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and condition of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, those expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performance, ranging from greater than 85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 m and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, with less-than-optimal installations, and under no-treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies. The results are then combined with

  7. The effect of the advanced drift-flux model of ASSERT-PV on critical heat flux, flow and void distributions in CANDU bundle subchannels

    International Nuclear Information System (INIS)

    Hammouda, N.; Rao, Y.F.

    2017-01-01

    Highlights: • Presentation of the “advanced” drift-flux model of the subchannel code ASSERT-PV. • Study the effect of the drift-flux model of ASSERT on CHF and flow distribution. • Quantify model component effects with flow, quality and dryout power measurements. - Abstract: This paper studies the effect of the drift flux model of the subchannel code ASSERT-PV on critical heat flux (CHF), void fraction and flow distribution across fuel bundles. Numerical experiments and comparison against measurements were performed to examine the trends and relative behaviour of the different components of the model under various flow conditions. The drift flux model of ASSERT-PV is composed of three components: (a) the lateral component or diversion cross-flow, caused by pressure difference between connected subchannels, (b) the turbulent diffusion component or the turbulent mixing through gaps of subchannels, caused by instantaneous turbulent fluctuations or flow oscillations, and (c) the void drift component that occurs due to the two-phase tendency toward a preferred distribution. This study shows that the drift flux model has a significant impact on CHF, void fraction and flow distribution predictions. The lateral component of the drift flux model has a stronger effect on CHF predictions than the axial component, especially for horizontal flow. Predictions of CHF, void fraction and flow distributions are most sensitive to the turbulent diffusion component of the model, followed by the void drift component. Buoyancy drift can be significant, but it does not have as much influence on CHF and flow distribution as the turbulent diffusion and void drift.

  8. Spectro-ellipsometric studies of sputtered amorphous Titanium dioxide thin films: simultaneous determination of refractive index, extinction coefficient, and void distribution

    CERN Document Server

    Lee, S I; Oh, S G

    1999-01-01

    Amorphous titanium dioxide thin films were deposited onto silicon substrates by RF magnetron sputtering, and the index of refraction, the extinction coefficient, and the void distribution of these films were simultaneously determined from analyses of their ellipsometric spectra. In particular, our novel strategy, which combines the merits of multi-sample fitting, the dual dispersion function, and grid search, proved successful in determining optical constants over a wide energy range, including the energy region where the extinction coefficient is large. Moreover, we found that the void distribution depended on the deposition conditions, such as the sputtering power, the substrate temperature, and the substrate surface.

  9. Optimal design of unit hydrographs using probability distribution and ...

    Indian Academy of Sciences (India)


    The optimization formulation is solved using binary-coded genetic algorithms. The number of variables to ... Keywords: unit hydrograph; rainfall-runoff; hydrology; genetic algorithms; optimization; probability ... Application of the model: data derived from the ...

  10. Alignment of voids in the cosmic web

    NARCIS (Netherlands)

    Platen, Erwin; van de Weygaert, Rien; Jones, Bernard J. T.

    2008-01-01

    We investigate the shapes and mutual alignment of voids in the large-scale matter distribution of a Lambda cold dark matter (Lambda CDM) cosmology simulation. The voids are identified using the novel watershed void finder (WVF) technique. The identified voids are quite non-spherical and slightly

  11. Experimental investigation of void distribution in suppression pool over the duration of a loss of coolant accident using steam–water two-phase mixture

    International Nuclear Information System (INIS)

    Rassame, Somboon; Griffiths, Matthew; Yang, Jun; Ju, Peng; Sharma, Subash; Hibiki, Takashi; Ishii, Mamoru

    2015-01-01

    Highlights: • Experiments were conducted to study the void fraction distribution in the SP during blowdown. • Three experimental phases, namely an initial phase, a quasi-steady phase, and a chugging phase, were observed. • The maximum void penetration depth occurred during the initial phase. • The quasi-steady phase showed less void penetration depth, with oscillations. • The chugging phase occurred at the end of the experiment. - Abstract: Studies are underway to determine whether the large amount of gas discharged through the downcomer pipes into the pressure suppression chamber during the blowdown phase of a Loss of Coolant Accident (LOCA) can potentially be entrained into the Emergency Core Cooling System (ECCS) suction piping of a BWR. This may result in degraded ECCS pump performance, which could affect the ability to maintain or recover the water inventory level in the Reactor Pressure Vessel (RPV) during a LOCA. It is therefore very important to understand the void behavior in the pressure suppression chamber during the blowdown period of a LOCA. To address this issue, a set of experiments was conducted using the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. The geometry of the test apparatus was determined from a basic geometrical scaling analysis of a prototypical BWR containment (Mark I), considering the downcomer size, the downcomer water submergence depth and the Suppression Pool (SP) water level. Several instruments were installed in the test facility to measure the required experimental data, such as the steam mass flow rate, void fraction, pressure and temperature. In the experiments, sequential flows of air, steam-air mixture and pure steam, each under various flow rate conditions, were injected from the Drywell (DW) through a downcomer pipe into the SP. Eight tests with two different downcomer sizes, various initial gas volumetric fluxes at the downcomer, and two different initial non-condensable gas

  12. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of the two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step toward correctly calculating the void distribution due to void drift.
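
    A two-subchannel caricature of the turbulent-diffusion/void-drift exchange term, in the spirit of the Lahey and Moody closure named above, is sketched below. The coefficients and the equilibrium closure are illustrative assumptions, not CTF's actual correlations.

```python
# Minimal sketch of a turbulent-diffusion / void-drift exchange between two
# subchannels. All coefficients and the equilibrium closure are assumed for
# illustration; they are not CTF's implementation.

beta = 0.05          # mixing coefficient (assumed)
gap = 0.003          # gap width between subchannels, m
g_avg = 1500.0       # bundle-average mass flux, kg/m^2-s

alpha_i, alpha_j = 0.45, 0.30    # void fractions in subchannels i and j
g_i, g_j = 1800.0, 1200.0        # subchannel mass fluxes, kg/m^2-s

# Equilibrium (fully developed) void difference assumed proportional to the
# mass-flux difference: vapor drifts toward the higher-velocity subchannel.
k_vd = 0.2
d_alpha_eq = k_vd * (g_i - g_j) / g_avg

# Net lateral vapor exchange per unit length (kg/m-s): turbulent mixing
# drives (alpha_i - alpha_j) toward d_alpha_eq rather than toward zero,
# which is the essential content of the void drift term.
w_mix = beta * gap * g_avg
net_vapor_exchange = w_mix * ((alpha_i - alpha_j) - d_alpha_eq)
print(net_vapor_exchange)
```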

  13. Tools for Bramwell-Holdsworth-Pinton Probability Distribution

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2009-01-01

    This paper is a synthesis of a range of papers presented at various conferences related to the Bramwell-Holdsworth-Pinton distribution. S. T. Bramwell, P. C. W. Holdsworth and J. F. Pinton introduced a new non-parametric distribution (called BHP) after studying some magnetization problems in 2D. The probability density function of the distribution can be approximated as a modified GFT (Gumbel-Fisher-Tippett) distribution.

  14. Influence of nucleon density distribution in nucleon emission probability

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Nandy, Maitreyee; Mohanty, A.K.; Sarkar, P.K.; Gambhir, Y.K.

    2014-01-01

    Different decay modes are observed in heavy-ion reactions at low to intermediate energies. It is interesting to study the total neutron emission in these reactions, which may receive contributions from all or many of these decay modes. In an attempt to understand the importance of the mean field and the entrance-channel angular momentum, we study their influence on the emission probability of nucleons in heavy-ion reactions in this work. This study owes its significance to the fact that, once the populations of the different states are determined, the emission probability governs the double-differential neutron yield.

  15. Estimating the population distribution of usual 24-hour sodium excretion from timed urine void specimens using a statistical approach accounting for correlated measurement errors.

    Science.gov (United States)

    Wang, Chia-Yih; Carriquiry, Alicia L; Chen, Te-Ching; Loria, Catherine M; Pfeiffer, Christine M; Liu, Kiang; Sempos, Christopher T; Perrine, Cria G; Cogswell, Mary E

    2015-05-01

    High US sodium intake and national reduction efforts necessitate developing a feasible and valid monitoring method across the distribution of low-to-high sodium intake. We examined a statistical approach using timed urine voids to estimate the population distribution of usual 24-h sodium excretion. A sample of 407 adults, aged 18-39 y (54% female, 48% black), collected each void in a separate container for 24 h; 133 repeated the procedure 4-11 d later. Four timed voids (morning, afternoon, evening, overnight) were selected from each 24-h collection. We developed gender-specific equations to calibrate total sodium excreted in each of the one-void (e.g., morning) and combined two-void (e.g., morning + afternoon) urines to 24-h sodium excretion. The calibrated sodium excretions were used to estimate the population distribution of usual 24-h sodium excretion. Participants were then randomly assigned to modeling (n = 160) or validation (n = 247) groups to examine the bias in estimated population percentiles. Median bias in predicting selected percentiles (5th, 25th, 50th, 75th, 95th) of usual 24-h sodium excretion with one-void urines ranged from -367 to 284 mg (-7.7 to 12.2% of the observed usual excretions) for men and -604 to 486 mg (-14.6 to 23.7%) for women, and with two-void urines from -338 to 263 mg (-6.9 to 10.4%) and -166 to 153 mg (-4.1 to 8.1%), respectively. Four of the 6 two-void urine combinations produced no significant bias in predicting selected percentiles. Our approach to estimate the population usual 24-h sodium excretion, which uses calibrated timed-void sodium to account for day-to-day variation and covariance between measurement errors, produced percentile estimates with relatively low biases across low-to-high sodium excretions. This may provide a low-burden, low-cost alternative to 24-h collections in monitoring population sodium intake among healthy young adults and merits further investigation in other population subgroups.
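
    The calibration step can be sketched as an ordinary regression of 24-h excretion on a timed-void measurement, followed by percentile estimation from the calibrated values. The synthetic data below are placeholders, and this sketch omits the paper's adjustment for day-to-day variation and correlated measurement errors.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative stand-in data: "observed" 24-h sodium excretion (mg) and a
# timed-void measurement (e.g., overnight sodium, mg) per participant.
true24 = rng.lognormal(np.log(3400), 0.35, size=400)
timed = 0.25 * true24 * rng.lognormal(0, 0.25, size=400)   # noisy fraction

# Calibration equation (one sex shown): linear regression of 24-h
# excretion on the timed-void value.
slope, intercept = np.polyfit(timed, true24, deg=1)
calibrated = intercept + slope * timed

# Population percentiles of usual excretion from the calibrated values.
for q in (5, 25, 50, 75, 95):
    print(q, round(float(np.percentile(calibrated, q))))
```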

  16. Investigation of Probability Distributions Using Dice Rolling Simulation

    Science.gov (United States)

    Lukac, Stanislav; Engel, Radovan

    2010-01-01

    Dice are considered one of the oldest gambling devices and thus many mathematicians have been interested in various dice gambling games in the past. Dice have been used to teach probability, and dice rolls can be effectively simulated using technology. The National Council of Teachers of Mathematics (NCTM) recommends that teachers use simulations…
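    A minimal simulation of the kind such classroom activities use, comparing simulated and exact probabilities for the sum of two fair dice:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rolls = 100_000

# Simulate the sum of two fair dice and compare with the exact distribution.
sums = rng.integers(1, 7, size=n_rolls) + rng.integers(1, 7, size=n_rolls)
for s in range(2, 13):
    exact = (6 - abs(s - 7)) / 36
    print(f"sum {s:2d}: simulated {np.mean(sums == s):.4f}, exact {exact:.4f}")
```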

  17. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  18. Influence of dose distribution homogeneity on the tumor control probability in heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Wen Xiaoqiong; Li Qiang; Zhou Guangming; Li Wenjian; Wei Zengquan

    2001-01-01

    In order to estimate the influence of a non-uniform dose distribution on the clinical treatment result, the influence of dose distribution homogeneity on the tumor control probability was investigated. Based on a formula deduced previously for the survival fraction of cells irradiated by a non-uniform heavy-ion irradiation field, and on the theory of tumor control probability, the tumor control probability was calculated for a tumor model exposed to dose distributions of varying homogeneity. The results show that the tumor control probability for the same total dose decreases as the dose distribution homogeneity gets worse. In clinical treatment, the dose distribution homogeneity should be better than 95%.
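    The qualitative effect can be reproduced with a generic Poisson TCP model (a sketch under an assumed photon-like linear-quadratic survival law and illustrative parameter values, not the paper's heavy-ion survival formula):

```python
import numpy as np

def tcp(dose, cells_per_voxel, alpha=0.2, beta=0.02):
    """Poisson tumor control probability for a voxelized dose distribution,
    with linear-quadratic survival SF(D) = exp(-alpha*D - beta*D**2)."""
    surviving = cells_per_voxel * np.exp(-alpha * dose - beta * dose**2)
    return np.exp(-surviving.sum())

rng = np.random.default_rng(2)
n_vox = 1000
cells = np.full(n_vox, 1.0e4)

uniform = np.full(n_vox, 30.0)            # 30 Gy in every voxel
inhomog = rng.normal(30.0, 3.0, n_vox)    # same mean dose, 10% spread

print("TCP, uniform dose      :", tcp(uniform, cells))
print("TCP, inhomogeneous dose:", tcp(inhomog, cells))  # cold spots dominate
```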

  19. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  20. Modified Stieltjes Transform and Generalized Convolutions of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Lev B. Klebanov

    2018-01-01

    Full Text Available The classical Stieltjes transform is modified in such a way as to generalize both Stieltjes and Fourier transforms. This transform allows the introduction of new classes of commutative and non-commutative generalized convolutions. A particular case of such a convolution for degenerate distributions appears to be the Wigner semicircle distribution.
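    For orientation, one common convention for the classical Stieltjes transform of a probability measure μ, the starting point that the paper modifies, is

```latex
S_\mu(z) = \int_{\mathbb{R}} \frac{\mu(\mathrm{d}x)}{z - x},
\qquad z \in \mathbb{C} \setminus \operatorname{supp}\mu .
```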

  1. Some explicit expressions for the probability distribution of force ...

    Indian Academy of Sciences (India)

    … 96: Art. No. 098001.
    Tighe B P, Socolar J E S, Schaeffer D G, Mitchener W G, Huber M L 2005 Force distributions in a triangular lattice of rigid bars. Phys. Rev. E 72: Art. No. 031306.
    Vargas W L, Murcia J C, Palacio L E, Dominguez D M 2003 Fractional diffusion model for force distribution in static granular media. Phys. Rev. …

  2. Probability Distribution Function of the Upper Equatorial Pacific Current Speeds

    National Research Council Canada - National Science Library

    Chu, Peter C

    2005-01-01

    ...), constructed from hourly ADCP data (1990-2007) at six stations for the Tropical Atmosphere Ocean project satisfies the two-parameter Weibull distribution reasonably well with different characteristics between El Nino and La Nina events...

  3. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
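    The core modification — following the gradient of the log-likelihood rather than of the squared error when outputs are read as probabilities — can be sketched for a single logistic unit (a toy example, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic binary labels

w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # output neuron read as a probability
    # Gradient ascent on the log-likelihood sum of y*log(p) + (1-y)*log(1-p);
    # for a logistic unit this gradient reduces to X^T (y - p).
    w += 0.1 * X.T @ (y - p) / len(y)

print("learned weights:", w)
```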

  4. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarised the first phase of the Nuclear Energy Agency (NEA) and US Nuclear Regulatory Commission benchmark based on the NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises: Exercise 1, a steady-state single sub-channel benchmark; Exercise 2, a steady-state rod bundle benchmark; Exercise 3, a transient rod bundle benchmark; and Exercise 4, a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous media, sub-channel, system thermal-hydraulics and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the X-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are taken), so the experimentally determined void fractions will be lower than the actual void fractions. Some of the best…

  5. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when the proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
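    The consequences of a wrong distributional assumption are easy to reproduce by Monte Carlo (a sketch with an arbitrary sample size and lognormal shape):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, q = 50, 0.95
true_p95 = stats.lognorm.ppf(q, s=0.5)
z = stats.norm.ppf(q)

errors = []
for _ in range(2000):
    x = stats.lognorm.rvs(s=0.5, size=n, random_state=rng)
    wrong = x.mean() + z * x.std(ddof=1)                          # assume normal
    right = np.exp(np.log(x).mean() + z * np.log(x).std(ddof=1))  # assume lognormal
    errors.append((wrong - true_p95, right - true_p95))

bias_wrong, bias_right = np.mean(errors, axis=0)
print(f"95th-percentile bias: normal assumption {bias_wrong:+.3f}, "
      f"lognormal assumption {bias_right:+.3f}")
```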

  6. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; ChenJincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in the literature of Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical to that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics, and to be just equal to the physical inverse temperature. It is expounded that the probability distribution described by the new expression is invariant under uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given and the relationship between the new and the original expressions of the probability distribution is discussed.

  7. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

    International Nuclear Information System (INIS)

    Chu Guanquan; Wang Jinhui

    2012-01-01

    Event tree analysis (ETA) is a frequently used technique to analyze the probabilities of probable fire scenarios. The event probability is usually characterized by a definite value. It is not appropriate to use definite values, as these estimates may be the result of poor quality statistics and limited knowledge. Without addressing uncertainties, ETA will give imprecise results and the credibility of the risk assessment will be undermined. This paper presents an approach to address event probability uncertainties and to analyze the probability distribution of a probable fire scenario. ETA is performed to construct probable fire scenarios. The activation time of every event is characterized as a stochastic variable by considering uncertainties of the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain is proposed in combination with ETA. To demonstrate the approach, a case study is presented.

  8. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)

  9. Cosmological constraints from the convergence 1-point probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Patton, Kenneth [The Ohio State Univ., Columbus, OH (United States); Blazek, Jonathan [The Ohio State Univ., Columbus, OH (United States); Ecole Polytechnique Federale de Lausanne (EPFL), Versoix (Switzerland); Honscheid, Klaus [The Ohio State Univ., Columbus, OH (United States); Huff, Eric [The Ohio State Univ., Columbus, OH (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Melchior, Peter [Princeton Univ., Princeton, NJ (United States); Ross, Ashley J. [The Ohio State Univ., Columbus, OH (United States); Suchyta, Eric D. [The Ohio State Univ., Columbus, OH (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-06-29

    Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  10. Generalization of Poisson distribution for the case of changing probability of consequential events

    International Nuclear Information System (INIS)

    Kushnirenko, E.

    1995-01-01

    The generalization of the Poisson distribution to the case of changing probabilities of the consequential events is presented. It is shown that the classical Poisson distribution is a special case of this generalized distribution, obtained when the probabilities of the consequential events are constant. Using the generalized Poisson distribution makes it possible in some cases to obtain analytical results instead of performing Monte Carlo calculations.

  11. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitudes M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distribution methods in this region.
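    With scipy, the same comparison can be sketched on hypothetical inter-event times (scipy's invweibull is the Frechet distribution; note that K-S p-values computed with fitted parameters are optimistic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical inter-event times (years) standing in for a real catalogue.
waits = stats.weibull_min.rvs(c=1.4, scale=12.0, size=80, random_state=rng)

candidates = {
    "Weibull (2-p)": (stats.weibull_min, dict(floc=0)),  # location fixed at 0
    "Frechet":       (stats.invweibull, dict(floc=0)),
    "Weibull (3-p)": (stats.weibull_min, {}),            # location left free
}
for name, (dist, kw) in candidates.items():
    params = dist.fit(waits, **kw)
    d, p = stats.kstest(waits, dist.cdf, args=params)
    print(f"{name:14s} K-S D = {d:.3f}, p = {p:.3f}")
```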

  12. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)

  13. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has an immediate physical interpretation in terms of intermittency models. The derived reference probability distribution function is interpreted as the time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.

  14. Approximate solution for the reactor neutron probability distribution

    International Nuclear Information System (INIS)

    Ruby, L.; McSwine, T.L.

    1985-01-01

    Several authors have studied the Kolmogorov equation for a fission-driven chain-reacting system, written in terms of the generating function G(x,y,z,t) where x, y, and z are dummy variables referring to the neutron, delayed neutron precursor, and detector-count populations, n, m, and c, respectively. Pal and Zolotukhin and Mogil'ner have shown that if delayed neutrons are neglected, the solution is approximately negative binomial for the neutron population. Wang and Ruby have shown that if the detector effect is neglected, the solution, including the effect of delayed neutrons, is approximately negative binomial. All of the authors assumed prompt-neutron emission not exceeding two neutrons per fission. An approximate method of separating the detector effect from the statistics of the neutron and precursor populations has been proposed by Ruby. In this weak-coupling limit, it is assumed that G(x,y,z,t) = H(x,y)I(z,t). Substitution of this assumption into the Kolmogorov equation separates the latter into two equations, one for H(x,y) and the other for I(z,t). Solution of the latter then gives a generating function, which indicates that in the weak-coupling limit, the detector counts are Poisson distributed. Ruby also showed that if the detector effect is neglected in the equation for H(x,y), i.e., the detector efficiency is set to zero, then the resulting equation is identical with that considered by Wang and Ruby. The authors present here an approximate solution for H(x,y) that does not set the detector efficiency to zero

  15. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters
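    The paper uses variance-based global sensitivity analysis; the sketch below only illustrates the simpler total-variance decomposition that this kind of method builds on, for a normal variable with arbitrarily chosen parameter distributions:

```python
import numpy as np

rng = np.random.default_rng(6)
n_draws = 100_000

# X ~ Normal(mu, sigma), with the distribution parameters themselves uncertain.
mu = rng.normal(10.0, 1.0, n_draws)       # epistemic uncertainty in the mean
sigma = rng.uniform(1.5, 2.5, n_draws)    # epistemic uncertainty in the spread

# Law of total variance: Var(X) = E[Var(X|mu,sigma)] + Var(E[X|mu,sigma]);
# the first term is natural variability, the second parameter uncertainty.
within = np.mean(sigma**2)
between = np.var(mu)
total = within + between
print(f"variability share: {within/total:.1%}, "
      f"parameter-uncertainty share: {between/total:.1%}")
```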

  16. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities.

  17. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

    In 2010 Kato and Jones described a new family of probability distributions on the circle, obtained as a Möbius transformation of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between directional statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply directional statistics to study the collective dynamics of coupled oscillators.
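    Sampling from such a family is straightforward: draw von Mises angles and push them through a disk Möbius map (the parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

theta = rng.vonmises(mu=0.0, kappa=2.0, size=100_000)
a = 0.4 * np.exp(1j * 0.8)               # Möbius parameter inside the unit disk
z = np.exp(1j * theta)
w = (z + a) / (1 + np.conj(a) * z)       # Möbius transformation of the circle
phi = np.angle(w)                        # transformed angles in (-pi, pi]

r = np.mean(np.exp(1j * phi))
print("circular mean:", np.angle(r), " mean resultant length:", np.abs(r))
```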

  18. CT measurements of SAP voids in concrete

    DEFF Research Database (Denmark)

    Laustsen, Sara; Bentz, Dale P.; Hasholt, Marianne Tange

    2010-01-01

    X-ray computed tomography (CT) scanning is used to determine the SAP void distribution in hardened concrete. Three different approaches are used to analyse a binary data set created from the CT measurement. One approach classifies a cluster of connected, empty voxels (volumetric pixels of a 3D image) as one void, whereas the other two approaches are able to classify a cluster of connected, empty voxels as a number of individual voids. Superabsorbent polymers (SAP) have been used to incorporate air into concrete. An advantage of using SAP is that it enables control of the amount and size of the created air voids. The results indicate the presence of void clusters. To identify the individual voids, special computational approaches are needed. The addition of SAP results in a dominant peak in two of the three air void distributions. Based on the position (void diameter) of the peak, it is possible…

  19. New family of probability distributions with applications to Monte Carlo studies

    International Nuclear Information System (INIS)

    Johnson, M.E.; Tietjen, G.L.; Beckman, R.J.

    1980-01-01

    A new probability distribution is presented that offers considerable potential for providing stochastic inputs to Monte Carlo simulation studies. The distribution includes the exponential power family as a special case. An efficient computational strategy is proposed for random variate generation. An example for testing the hypothesis of unit variance illustrates the advantages of the proposed distribution
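    The exponential power family contained in the proposed distribution as a special case is available in scipy as gennorm (beta=2 recovers the Gaussian shape, beta=1 the Laplace); a sketch of variate generation for simulation inputs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

for beta in (1.0, 2.0, 4.0):
    x = stats.gennorm.rvs(beta, size=50_000, random_state=rng)
    print(f"beta={beta}: variance {x.var():.3f}, excess kurtosis {stats.kurtosis(x):.3f}")
```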

  20. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which are true for common phase-type distributions also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions, and to this end we provide a calibration procedure which works for the approximation…

  1. Experimental investigations of two-phase mixture level swell and axial void fraction distribution under high pressure, low heat flux conditions in rod bundle geometry

    International Nuclear Information System (INIS)

    Anklam, T.M.; White, M.D.

    1981-01-01

    Experimental data are reported from a series of quasi-steady-state two-phase mixture level swell and void fraction distribution tests. Testing was performed at ORNL in the Thermal Hydraulic Test Facility - a large electrically heated test loop configured to produce conditions similar to those expected in a small break loss of coolant accident. Pressure was varied from 2.7 to 8.2 MPa and linear power ranged from 0.33 to 1.95 kW/m. Mixture swell was observed to vary linearly with the total volumetric vapor generation rate over the power range of primary interest in small break analysis. Void fraction data were fitted by a drift-flux model, and both the drift velocity and the concentration parameter were observed to decrease with increasing pressure.
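    The drift-flux fit mentioned here is a straight-line fit in disguise, since alpha = j_g / (C0*j + Vgj) implies j_g/alpha = C0*j + Vgj; a sketch with hypothetical data:

```python
import numpy as np

j_g = np.array([0.05, 0.10, 0.20, 0.40, 0.60])    # hypothetical vapor flux, m/s
j_f = np.full(5, 0.50)                            # hypothetical liquid flux, m/s
alpha = np.array([0.07, 0.13, 0.23, 0.38, 0.48])  # hypothetical void fractions

# Plotting j_g/alpha against the total volumetric flux j gives slope C0
# (concentration parameter) and intercept Vgj (drift velocity).
j = j_g + j_f
C0, Vgj = np.polyfit(j, j_g / alpha, 1)
print(f"C0 = {C0:.2f}, Vgj = {Vgj:.2f} m/s")
```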

  2. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  3. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

    …major inequalities due to Shannon, Renyi and Holder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained to quantify the mutual divergence among two or more probability distributions.

  4. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control, using the widely accepted Poisson point process (PPP)…

  5. Cross-sectional void fraction distribution measurements in a vertical annulus two-phase flow by high speed X-ray computed tomography and real-time neutron radiography techniques

    International Nuclear Information System (INIS)

    Harvel, G.D.; Hori, K.; Kawanishi, K.

    1995-01-01

    A Real-Time Neutron Radiography (RTNR) system and a high speed X-ray Computed Tomography (X-CT) system are compared for measurement of two-phase flow. Each system is used to determine the flow regime and the void fraction distribution in a vertical annulus flow channel. A standard optical video system is also used to observe the flow regime. The annulus flow channel is operated as a bubble column and measurements are obtained for gas flow rates from 0.0 to 30.0 l/min. Image analysis of the flow regimes observed by all three measurement systems shows that the two-dimensional void fraction distribution can be obtained. The X-CT system is shown to have a superior temporal resolution, capable of resolving the void fraction distribution in an (r,θ) plane in 33.0 ms. Void fraction distributions for bubbly flow and slug flow are determined.

  7. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    Full Text Available In modeling, usually only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of the sample data in a training sample that contains different degrees of noise and potential outliers, and it helps in developing a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information as weights, PDISVR, is proposed. In the PDISVR model, the probability distribution of each sample is considered as a weight and is then introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model, to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
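    A rough analogue of density-based weighting can be sketched with scikit-learn's SVR, which accepts per-sample weights (this mimics the idea only; the paper modifies the error coefficient and slack variables directly):

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import SVR

rng = np.random.default_rng(9)
X = np.sort(rng.uniform(0, 6, 200))[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)
y[::25] += 2.0                                  # inject a few outliers

# Weight each sample by an estimate of its density, so that low-density
# points (potential outliers) get less influence on the fit.
w = gaussian_kde(y)(y)
model = SVR(C=10.0).fit(X, y, sample_weight=w / w.mean())
print("R^2 against the clean signal:", model.score(X, np.sin(X).ravel()))
```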

  8. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    …Flanking residues seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary…

  9. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)

  10. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ regarding the rules that regulate the energy exchange and boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types and interaction rules that yield not only the exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena including those to be found in complex chemical reactions
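    One of the simplest conservative rules — a random pair pools its energy and splits it uniformly — already reproduces the exponential (Maxwell-Boltzmann-Gibbs) equilibrium; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(10)
n_agents, n_steps = 1_000, 200_000
energy = np.ones(n_agents)          # start equal; total energy is conserved

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pool = energy[i] + energy[j]    # each interaction conserves energy
    split = rng.uniform()
    energy[i], energy[j] = split * pool, (1 - split) * pool

# Compare the histogram with the exponential law exp(-E) (mean energy 1).
hist, edges = np.histogram(energy, bins=30, range=(0.0, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.column_stack((centers[:5], hist[:5], np.exp(-centers[:5]))))
```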

  11. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
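    The single-integral form and its trapezoidal evaluation can be sketched with hypothetical friction distributions (the mixed normal/lognormal choice below is arbitrary):

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Slip occurs when available friction falls below required friction:
# P(slip) = P(A < R) = integral of f_R(u) * F_A(u) du, a single integral.
required = stats.lognorm(s=0.15, scale=0.22)   # hypothetical required COF
available = stats.norm(loc=0.45, scale=0.10)   # hypothetical available COF

u = np.linspace(0.0, 1.2, 4001)
p_slip = trapezoid(required.pdf(u) * available.cdf(u), u)

# Cross-check by direct Monte Carlo sampling.
rng = np.random.default_rng(11)
mc = np.mean(available.rvs(10**6, random_state=rng)
             < required.rvs(10**6, random_state=rng))
print(f"trapezoidal: {p_slip:.4f}   Monte Carlo: {mc:.4f}")
```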

  12. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  13. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener-Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with a random variable concentrated on the transversal axis of absc…

  14. Loaded dice in Monte Carlo : importance sampling in phase space integration and probability distributions for discrepancies

    NARCIS (Netherlands)

    Hameren, Andreas Ferdinand Willem van

    2001-01-01

    Discrepancies play an important role in the study of uniformity properties of point sets. Their probability distributions are a help in the analysis of the efficiency of the quasi-Monte Carlo method of numerical integration, which uses point sets that are distributed more uniformly than sets of…

  15. Probability distribution of long-run indiscriminate felling of trees in ...

    African Journals Online (AJOL)

    The study was undertaken to determine the probability distribution of long-run indiscriminate felling of trees in the northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random…

  16. Feynman quasi probability distribution for spin-(1/2), and its generalizations

    International Nuclear Information System (INIS)

    Colucci, M.

    1999-01-01

    Feynman's paper Negative probability is examined, in which, after a discussion about the possibility of attributing a real physical meaning to quasi-probability distributions, he introduces a new kind of distribution for spin-(1/2), with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments, and discussing their positive and negative aspects.

  17. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  18. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and the analytical results obtained by G.I. Bell when the c2 approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution, which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology, where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions, and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
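    The Fourier-inversion step can be illustrated on a generating function with a known answer (a Poisson example rather than the reactor master equation):

```python
import numpy as np
from scipy.stats import poisson

# Evaluating G(x) = sum_n p_n x^n at x_k = exp(-2*pi*i*k/N) yields the DFT of
# (p_n), so the probabilities are recovered with an inverse FFT.
N, lam = 64, 3.0
x = np.exp(-2j * np.pi * np.arange(N) / N)
G = np.exp(lam * (x - 1.0))          # Poisson(lam) generating function
p = np.fft.ifft(G).real

print(np.allclose(p[:10], poisson.pmf(np.arange(10), lam), atol=1e-12))  # True
```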

  19. The p-sphere and the geometric substratum of power-law probability distributions

    International Nuclear Information System (INIS)

    Vignat, C.; Plastino, A.

    2005-01-01

    Links between power-law probability distributions and marginal distributions of uniform laws on p-spheres in R^n show that a mathematical derivation of the Boltzmann-Gibbs distribution necessarily passes through power-law ones. Results are also given that link the parameters p and n to the value of the non-extensivity parameter q that characterizes these power laws in the context of non-extensive statistics.

  20. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  1. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    International Nuclear Information System (INIS)

    Marshman, Emily; Singh, Chandralekha

    2017-01-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations. (paper)

  2. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters, such as wind speed, solar radiation, temperature and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the ability of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters was similar. The results also showed that the distribution type of the parameters which affect ET0 can affect the distribution of reference evapotranspiration.

  3. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    …to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge, and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented…

  4. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

    We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate, for these anisotropic systems, the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. -- Highlights: ► We calculate magnetization curves and the probability angular distribution of the magnetization. ► The magnetization curves are consistent with probability results for the studied systems. ► Monoclinic and hexagonal systems behave differently due to their different anisotropies.
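    For the uniaxial case with the field along the easy axis, the model reduces to a one-dimensional Boltzmann average (a sketch in dimensionless units with arbitrary parameter values; the monoclinic case requires the full angular energy surface):

```python
import numpy as np

def magnetization(h, k, t):
    """Thermal average <cos(theta)> for a classical moment with reduced
    energy e(theta) = k*sin(theta)**2 - h*cos(theta) at temperature t."""
    theta = np.linspace(0.0, np.pi, 2000)
    boltz = np.exp(-(k * np.sin(theta) ** 2 - h * np.cos(theta)) / t)
    weight = boltz * np.sin(theta)          # solid-angle measure
    return np.trapz(np.cos(theta) * weight, theta) / np.trapz(weight, theta)

for h in (0.0, 0.5, 1.0, 2.0):
    print(f"h = {h}: M = {magnetization(h, k=1.0, t=0.5):.3f}")
```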

  5. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.

  6. The dark matter of galaxy voids

    Science.gov (United States)

    Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.

    2014-03-01

    How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ˜20 per cent, they have somewhat shallower density profiles and they have centres offset by ˜ 0.4Rv rms. However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.

  7. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have been recently computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain that we specify and our results are confirmed by numerical simulations

  8. Reliability Impact of Stockpile Aging: Stress Voiding; TOPICAL

    International Nuclear Information System (INIS)

    ROBINSON, DAVID G.

    1999-01-01

    The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress void aging characterization presented in SAND99-0975, ''Reliability Degradation Due to Stockpile Aging,'' by the same author. The physics of stress voiding, before and after wafer processing, has recently been characterized by F. G. Yost in SAND99-0601, ''Stress Voiding during Wafer Processing''. The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing and initial residual stress, and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS), quasi-Monte Carlo sampling (qMC), and analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits due to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the unique form of the underlying distribution.

  9. The joint probability distribution of structure factors incorporating anomalous-scattering and isomorphous-replacement data

    International Nuclear Information System (INIS)

    Peschar, R.; Schenk, H.

    1991-01-01

    A method to derive joint probability distributions of structure factors is presented which incorporates anomalous-scattering and isomorphous-replacement data in a unified procedure. The structure factors F H and F -H , whose magnitudes are different due to anomalous scattering, are shown to be isomorphously related. This leads to a definition of isomorphism by means of which isomorphous-replacement and anomalous-scattering data can be handled simultaneously. The definition and calculation of the general term of the joint probability distribution for isomorphous structure factors turns out to be crucial. Its analytical form leads to an algorithm by means of which any particular joint probability distribution of structure factors can be constructed. The calculation of the general term is discussed for the case of four isomorphous structure factors in P1, assuming the atoms to be independently and uniformly distributed. A main result is the construction of the probability distribution of the 64 triplet phase sums present in space group P1 amongst four isomorphous structure factors F H , four isomorphous F K and four isomorphous F -H-K . The procedure is readily generalized in the case where an arbitrary number of isomorphous structure factors are available for F H , F K and F -H-K . (orig.)

  10. Elastic wave scattering from multiple voids (porosity)

    International Nuclear Information System (INIS)

    Thompson, D.O.; Rose, J.H.; Thompson, R.B.; Wormley, S.J.

    1983-01-01

    This paper describes the development of an ultrasonic backscatter measurement technique which provides a convenient way to determine certain characteristics of a distribution of voids (porosity) in materials. A typical ultrasonic sample prepared by placing the ''frit'' in a crucible in an RF induction heater is shown. The results of the measurements were Fourier transformed into an amplitude-frequency description, and were then deconvolved with the transducer response function. Several properties needed to characterize a void distribution are obtained from the experimental results, including average void size, the spatial extent of the voids region, the average void separation, and the volume fraction of material contained in the void distribution. A detailed comparison of values obtained from the ultrasonic measurements with visually determined results is also given

  11. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  12. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants, or sub-indexes, are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This helps to minimize the uncertainty in pollution resource estimates and to improve the assessment phase of planning. The conflict among criterion results in selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
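
    A sketch of this kind of model-selection workflow, assuming synthetic API readings and using the Kolmogorov-Smirnov statistic as a single stand-in for the five goodness-of-fit criteria that the paper combines via the weight-of-ranks method:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    api = rng.gamma(shape=3.0, scale=20.0, size=500)    # stand-in for API readings

    # The four candidate models named in the paper.
    candidates = {"log-normal": stats.lognorm, "exponential": stats.expon,
                  "gamma": stats.gamma, "Weibull": stats.weibull_min}
    for name, dist in candidates.items():
        params = dist.fit(api)                           # maximum-likelihood fit
        ks = stats.kstest(api, dist.cdf, args=params)    # one fit criterion
        print(f"{name:12s} KS statistic = {ks.statistic:.4f}")
    ```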

  13. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution, using as data points either the lower, mid or upper bounds of the sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) from the original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations, ranging from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates.
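
    The best-performing approach, fitting the parametric CDF to the cumulative distribution of observed values, can be sketched as follows; the lognormal retention-time law, interval grid and least-squares estimator are illustrative assumptions:

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    true = stats.lognorm(s=0.6, scale=8.0)           # "known" retention-time law
    samples = true.rvs(200, random_state=rng)

    # Discretize into pre-established sampling intervals (e.g. hours), as in
    # gut-passage trials, then form the cumulative distribution of observations.
    edges = np.arange(0, 49, 4.0)
    counts, _ = np.histogram(samples, bins=edges)
    ecdf = np.cumsum(counts) / counts.sum()

    # Non-linear least squares fit of the parametric CDF at the interval bounds.
    popt, _ = curve_fit(lambda x, s, scale: stats.lognorm.cdf(x, s, scale=scale),
                        edges[1:], ecdf, p0=(1.0, 5.0),
                        bounds=([0.01, 0.1], [10.0, 100.0]))
    print("estimated (s, scale):", popt)
    ```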

  14. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

    The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can provide information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated by a given entrance channel at a given energy. (authors)

  15. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included, and whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map are included. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  16. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Flood design estimates are key to sizing new water works and to reviewing the hydrological security of existing ones. The most reliable method for estimating flood magnitudes associated with given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since the appropriate model is not known a priori, several models need to be tested in order to select the most suitable one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and their application has therefore been established as a norm or precept. The Johnson system comprises three families of distributions, one of which is the log-normal model with three fit parameters; it also marks the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with that model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods (< 1000 years). Because of its theoretical support, the SJU model is recommended for flood estimation.
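
    As an illustration of how such a fit is used for design floods, here is a hedged sketch with scipy's Johnson SU implementation on synthetic annual maxima; the Gumbel-generated sample and the return periods are assumptions, not the paper's 31 records:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    flows = rng.gumbel(loc=500.0, scale=150.0, size=60)   # stand-in annual maxima

    # Fit the unbounded Johnson (SU) model; its four parameters map the data to
    # a standard normal, so predictions come from the normal via the inverse map.
    params = stats.johnsonsu.fit(flows)
    for T in (100, 1000):                                 # return periods, years
        q = stats.johnsonsu.ppf(1.0 - 1.0 / T, *params)
        print(f"T = {T:5d} yr design flood ~ {q:8.1f}")
    ```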

  17. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' <= t). The limitations on the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  18. Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions

    CSIR Research Space (South Africa)

    Dove, Albert

    2017-01-01

    Spectral shaping while maintaining constraints in a DC-DC converter is investigated. A probability distribution whose aim is to ensure maximal harmonic spreading while maintaining constraints is presented. The PDFs are determined from a direct application of the method of Maximum Entropy.

  19. Extreme points of the convex set of joint probability distributions with ...

    Indian Academy of Sciences (India)

    Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X1 × X2, F1 ⊗ F2) which .... cannot be identically zero when X and Y vary in A1 and u and v vary in H2. Thus.

  20. Providing probability distributions for the causal pathogen of clinical mastitis using naive Bayesian networks

    NARCIS (Netherlands)

    Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.

    2009-01-01

    Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which

  1. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects; error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in contrast, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows the behavior of the error-correcting code to be analyzed in the case of error injection into the encoding device. The complexity of the encoding function, in turn, plays an important role in security-oriented codes. Encoding functions with lower computational complexity and a low probability of masking give the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions of greater complexity have smoothed maxima of error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking probability.

  2. Analysis of Void Fraction Distribution and Departure from Nucleate Boiling in Single Subchannel and Bundle Geometries Using Subchannel, System, and Computational Fluid Dynamics Codes

    Directory of Open Access Journals (Sweden)

    Taewan Kim

    2012-01-01

    In order to assess the accuracy and validity of subchannel, system, and computational fluid dynamics codes, the Paul Scherrer Institut has participated in the OECD/NRC PSBT benchmark with the thermal-hydraulic system code TRACE5.0 developed by the US NRC, the subchannel code FLICA4 developed by CEA, and the computational fluid dynamics code STAR-CD developed by CD-adapco. The PSBT benchmark consists of a series of void distribution exercises and departure from nucleate boiling exercises. The results reveal that the predictions of the subchannel code FLICA4 agree with the experimental data reasonably well in both steady-state and transient conditions. The analyses of single-subchannel experiments by means of the computational fluid dynamics code STAR-CD with the CD-adapco boiling model indicate that the prediction of the void fraction has no significant discrepancy from the experiments. The analyses with TRACE point out the necessity of additional assessment of the subcooled boiling model and bulk condensation model of TRACE.

  3. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained, and tables of the appropriate constants (percentiles of this statistic) are given in order to facilitate the use of the selection procedure.

  4. Statistics and geometry of cosmic voids

    International Nuclear Information System (INIS)

    Gaite, José

    2009-01-01

    We introduce new statistical methods for the study of cosmic voids, focusing on the statistics of largest size voids. We distinguish three different types of distributions of voids, namely, Poisson-like, lognormal-like and Pareto-like distributions. The last two distributions are connected with two types of fractal geometry of the matter distribution. Scaling voids with Pareto distribution appear in fractal distributions with box-counting dimension smaller than three (its maximum value), whereas the lognormal void distribution corresponds to multifractals with box-counting dimension equal to three. Moreover, voids of the former type persist in the continuum limit, namely, as the number density of observable objects grows, giving rise to lacunar fractals, whereas voids of the latter type disappear in the continuum limit, giving rise to non-lacunar (multi)fractals. We propose both lacunar and non-lacunar multifractal models of the cosmic web structure of the Universe. A non-lacunar multifractal model is supported by current galaxy surveys as well as cosmological N-body simulations. This model suggests, in particular, that small dark matter halos and, arguably, faint galaxies are present in cosmic voids
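
    The diagnostic distinction drawn above can be illustrated numerically: a Pareto (scaling) population of void sizes has a straight log-log survival function, the signature of a lacunar fractal, whereas a lognormal one curves downward. A sketch with synthetic void sizes, not survey data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    lognormal_voids = stats.lognorm(s=0.8, scale=10.0).rvs(5000, random_state=rng)
    pareto_voids = stats.pareto(b=1.5, scale=10.0).rvs(5000, random_state=rng)

    # Compare the local slopes of the log-log survival function: roughly
    # constant for a Pareto population, steepening for a lognormal one.
    for name, sizes in [("lognormal", lognormal_voids), ("pareto", pareto_voids)]:
        s = np.sort(sizes)
        surv = 1.0 - np.arange(1, s.size + 1) / (s.size + 1.0)
        head = np.polyfit(np.log(s[:2500]), np.log(surv[:2500]), 1)[0]
        tail = np.polyfit(np.log(s[2500:]), np.log(surv[2500:]), 1)[0]
        print(f"{name:9s} log-log survival slope: head={head:.2f} tail={tail:.2f}")
    ```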

  5. Cosmology with void-galaxy correlations.

    Science.gov (United States)

    Hamaus, Nico; Wandelt, Benjamin D; Sutter, P M; Lavaux, Guilhem; Warren, Michael S

    2014-01-31

    Galaxy bias, the unknown relationship between the clustering of galaxies and the underlying dark matter density field is a major hurdle for cosmological inference from large-scale structure. While traditional analyses focus on the absolute clustering amplitude of high-density regions mapped out by galaxy surveys, we propose a relative measurement that compares those to the underdense regions, cosmic voids. On the basis of realistic mock catalogs we demonstrate that cross correlating galaxies and voids opens up the possibility to calibrate galaxy bias and to define a static ruler thanks to the observable geometric nature of voids. We illustrate how the clustering of voids is related to mass compensation and show that volume-exclusion significantly reduces the degree of stochasticity in their spatial distribution. Extracting the spherically averaged distribution of galaxies inside voids from their cross correlations reveals a remarkable concordance with the mass-density profile of voids.

  6. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

    The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. The estimates are usually statistical estimates of the precipitation intensity realized over a certain period of time (e.g. 5, 10 min, etc.) with a given return period (e.g. 20, 100 years, etc.). The traditional approach to evaluating these precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors, climate conditions change and the precipitation intensity estimates need regular updates; 2. As far as the extremes of the probability distribution are of particular importance for practice, the methodology of the distribution fitting needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: - The method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; - As above, but with separate modeling of the probability distribution for the middle and high probability quantiles; - A method similar to the first one, but with an intensity threshold of 0.36 mm/min; - Another method, proposed by the Russian hydrologist G. A. Aleksiev for the regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; - The next method considers only

  7. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    Science.gov (United States)

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.

  8. Partial discharges in spheroidal voids: Void orientation

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson

    1997-01-01

    Partial discharge transients can be described in terms of the charge induced on the detecting electrode. The influence of the void parameters upon the induced charge is examined and discussed for spheroidal voids. It is shown that a quantitative interpretation of the induced charge requires...

  9. Archaeology of Void Spaces

    Science.gov (United States)

    Look, Cory

    variety of activity areas that make up a site can imbue a site with an identity of purpose and shed light on how different sites may have served different purposes within a regional framework. Excavations at the site of Indian Creek identified a series of raised middens that enclosed an open space for approximately 1500 years. This research explores this open space, and questions the meaning of 'void' and 'empty' with respect to past human activities. While archaeologists recognize that areas void of material remains are certainly part of the larger site, the question remains: without an understanding of these spaces, what aspects of past life are we possibly masking? The integration of anthrosols alongside archaeological excavations and spatial analysis indicates that the site of Indian Creek contained a ceremonial plaza that formed early on and was maintained until abandonment. The spatial distribution of material objects combined with anthrosol studies provided additional evidence of ritual deposits concentrated in one part of the plaza associated with a nearby creek-bed. The second site, Doigs, represents one of the last intact, undisturbed Early Ceramic Age sites of its kind in the Eastern Caribbean. Since its discovery in the 1970s, Doigs has been partially surveyed and excavated. The identification of residential activity areas, including several potential structures, bead manufacturing loci, and cooking hearths, was used to help test chemical signatures against archaeologically defined activity areas. Findings from this site illustrated the uniqueness of elemental patterns associated with activity areas, and also generated new questions regarding void spaces enriched with elemental patterns associated with concentrations of plant and vegetation debris. It is the hope of this study to contribute to our general knowledge of the identification of ancient activity areas as well as the different places that give sites their identity. These assemblages of activity areas can

  10. On the issues of probability distribution of GPS carrier phase observations

    Science.gov (United States)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice, the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle-slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal-distribution postulate for GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS

  11. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how the tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã_TCP=0.99) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus, for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At was affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus, and to a lesser extent when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã_TCP=0.99 increased as the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally rather than log-normally distributed. When the activity was distributed on the cell membrane, Ã_TCP=0.99 for 103mRh and 211At was not affected as the activity distribution became more heterogeneous. Ã_TCP=0.99 for 90Y and 177Lu was not affected by different activity distributions, whether macroscopic or subcellular.

  12. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    The paper deals with the problem of approximating the probability distributions of random variables defined on the positive real axis with a coefficient of variation different from unity. When queueing systems are used as models for computer networks, the calculation of characteristics is usually performed at the level of expectation and variance. At the same time, one of the main characteristics of multimedia data transmission quality in computer networks is delay jitter. For jitter calculation the distribution function of packet time delay should be known. It is shown that changing the third moment of the packet delay distribution changes the calculated jitter by tens or hundreds of percent, with the same values of the first two moments - the expectation and the delay variation coefficient. This means that delay distribution approximation for the calculation of jitter should be performed in accordance with the third moment of the delay distribution. For random variables with coefficients of variation greater than unity, an iterative approximation algorithm with a hyper-exponential two-phase distribution based on three moments of the approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity, the impact of the third moment of the distribution becomes negligible, and for the approximation of such distributions the Erlang distribution based on the first two moments should be used. This approach makes it possible to obtain upper bounds for the relevant characteristics, particularly the upper bound of delay jitter.
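
    The moment-matching step can be sketched numerically: fit a two-phase hyper-exponential (H2), parameterized by a branch probability p and rates mu1 and mu2, to three given raw moments by root-finding. The moment values below are hypothetical, and this direct solve stands in for the paper's iterative algorithm:

    ```python
    import math
    import numpy as np
    from scipy.optimize import fsolve

    # Target raw moments m1, m2, m3 of a packet-delay distribution (hypothetical,
    # chosen with coefficient of variation > 1 so an H2 fit exists).
    m = np.array([1.0, 3.0, 15.0])

    def moment_gap(params):
        p, mu1, mu2 = params
        # Raw moments of H2: E[X^k] = k! * (p / mu1^k + (1 - p) / mu2^k)
        mk = lambda k: math.factorial(k) * (p / mu1**k + (1 - p) / mu2**k)
        return [mk(1) - m[0], mk(2) - m[1], mk(3) - m[2]]

    p, mu1, mu2 = fsolve(moment_gap, x0=(0.5, 2.0, 0.5))
    print(f"H2 fit: p = {p:.4f}, mu1 = {mu1:.4f}, mu2 = {mu2:.4f}")
    ```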

  13. Pediatric Voiding Cystourethrogram

    Science.gov (United States)

    A children’s (pediatric) voiding cystourethrogram uses fluoroscopy – a form of real-time x-ray – to examine a child’s bladder ...

  14. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional. © 1991.
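
    Since TRIAGG itself is a Turbo Pascal package, the following Python sketch only mirrors the methodology: triangular components defined by (minimum, mode, maximum), fractiles in the P(X > F95) = 0.95 convention, and aggregation under the independence and perfect-correlation cases. The two components are hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    # Two hypothetical provinces: (minimum, mode, maximum) resource estimates.
    components = [(0.0, 2.0, 10.0), (1.0, 3.0, 6.0)]

    means, variances = [], []
    for lo, mode, hi in components:
        d = stats.triang(c=(mode - lo) / (hi - lo), loc=lo, scale=hi - lo)
        means.append(d.mean())
        variances.append(d.var())
        # Fractile convention as in TRIAGG: P(X > F95) = 0.95.
        f95, f5 = d.ppf(0.05), d.ppf(0.95)
        print(f"{(lo, mode, hi)}: mean={d.mean():.2f} F95={f95:.2f} F5={f5:.2f}")

    # Aggregation: under complete independence variances add; under perfect
    # positive correlation the standard deviations add instead.
    print("aggregate mean:", sum(means))
    print("aggregate sd (independence):", np.sqrt(sum(variances)))
    print("aggregate sd (perfect correlation):", sum(np.sqrt(v) for v in variances))
    ```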

  15. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...

  16. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    Directory of Open Access Journals (Sweden)

    C. C. Wu

    2011-04-01

    Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristics of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  17. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final result is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  18. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    International Nuclear Information System (INIS)

    Xu, Xin-Ping; Ide, Yusuke

    2016-01-01

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.
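
    For intuition, a minimal simulation of the time-averaged (limiting) distribution of a DTQW on a cycle is given below. It assumes a Hadamard coin, the moving shift operator and a localized initial state; these are illustrative choices rather than the general coin/swapping-shift setting solved analytically in the paper:

    ```python
    import numpy as np

    # Discrete-time quantum walk on a cycle of N sites with a two-level coin.
    N, T = 11, 2000
    coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin (assumed)

    psi = np.zeros((N, 2), dtype=complex)
    psi[0, 0] = 1.0                                   # localized start, coin "up"

    avg = np.zeros(N)
    for _ in range(T):
        psi = psi @ coin.T                            # coin toss on internal state
        psi = np.stack([np.roll(psi[:, 0], 1),        # moving shift: "up" steps right,
                        np.roll(psi[:, 1], -1)], 1)   #               "down" steps left
        avg += (np.abs(psi) ** 2).sum(axis=1)

    print(avg / T)    # time-averaged (limiting) probability distribution over sites
    ```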

  19. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn [School of Physical Science and Technology, Soochow University, Suzhou 215006 (China); Ide, Yusuke [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa, 221-8686 (Japan)

    2016-10-15

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  20. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    Science.gov (United States)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data along with some limited field data were compared. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  1. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.
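
    The Monte Carlo scheme can be sketched as follows; the power-law pit-growth model and its parameter distributions are hypothetical stand-ins for the authors' soil-dependent model, and scipy's invweibull plays the role of the Frechet distribution:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Hypothetical pit-growth law depth = k * age^nu with random coefficients.
    age = 30.0                                         # pipeline age, years
    k = rng.lognormal(mean=-1.0, sigma=0.4, size=10_000)
    nu = rng.normal(0.45, 0.05, size=10_000)
    depth = k * age ** nu                              # Monte Carlo pit depths, mm

    # Compare the three maximal extreme value fits mentioned above.
    for name, dist in [("Gumbel", stats.gumbel_r),
                       ("Frechet", stats.invweibull),
                       ("Weibull(max)", stats.weibull_max)]:
        params = dist.fit(depth)
        ks = stats.kstest(depth, dist.cdf, args=params)
        print(f"{name:12s} KS = {ks.statistic:.4f}")
    ```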

  2. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400, La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)

    2009-09-15

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.

  3. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed and that the true but unknown pixel values follow a probability mass function over a finite set of non-negative values; since the observed data also assume finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
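
    A hedged sketch of the underlying idea: estimating a non-negative pmf of true pixel values from a histogram of Poisson-corrupted observations by constrained least squares. The forward model and the Gaussian-shaped true pmf are illustrative assumptions, not the paper's data:

    ```python
    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import poisson

    # Ground truth: integer pixel intensities with a Gaussian-shaped pmf.
    true_vals = np.arange(0, 16)
    q_true = np.exp(-0.5 * (true_vals - 6) ** 2 / 4.0)
    q_true /= q_true.sum()

    rng = np.random.default_rng(7)
    truth = rng.choice(true_vals, p=q_true, size=20_000)
    obs = rng.poisson(truth)                     # low-photon-count observations

    # Histogram of observations and the Poisson forward model h = A q,
    # with A[k, j] = P(obs = k | true = j).
    kmax = obs.max()
    h = np.bincount(obs, minlength=kmax + 1) / obs.size
    A = poisson.pmf(np.arange(kmax + 1)[:, None], true_vals[None, :])

    q_hat, residual = nnls(A, h)                 # least squares with q >= 0
    print("recovered pmf mass:", q_hat.sum(), " residual:", residual)
    ```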

  4. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed.
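
    Although the paper derives its approximations analytically, the LCD of a truncated birth-and-death process can be computed directly as the dominant left eigenvector of the generator restricted to the transient states. A sketch with hypothetical competition-limited rates (not the paper's parameterization):

    ```python
    import numpy as np

    # Birth-death process on {0, 1, ..., N}; state 0 absorbing (extinction).
    N = 200
    i = np.arange(1, N + 1)
    birth = 2.0 * i / (1.0 + 0.01 * i)     # assumed competition-limited births
    death = 1.0 * i                        # linear death rates

    # Generator restricted to the transient states {1, ..., N}; the death
    # rate of state 1 leaks to the absorbing state and so leaves the matrix.
    Q = (np.diag(-(birth + death))
         + np.diag(birth[:-1], k=1)
         + np.diag(death[1:], k=-1))

    # LCD = normalized left eigenvector of Q for the eigenvalue closest to 0.
    vals, vecs = np.linalg.eig(Q.T)
    lcd = np.abs(vecs[:, np.argmax(vals.real)].real)
    lcd /= lcd.sum()
    print("LCD mode at population size n =", 1 + np.argmax(lcd))
    ```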

  5. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x, y, z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero-mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  6. The probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels in high temperature, high purity water

    International Nuclear Information System (INIS)

    Akashi, Masatsune; Kenjyo, Takao; Matsukura, Shinji; Kawamoto, Teruaki

    1984-01-01

    In order to discuss the probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels, a series of creviced bent beam (CBB) and uni-axial constant load tests were carried out in oxygenated high temperature, high purity water. The following conclusions were drawn: (1) The initiation process of intergranular stress corrosion cracking can be approximated by a Poisson stochastic process, based on the CBB test results. (2) The probability distribution of intergranular stress corrosion cracking life may consequently be approximated by the exponential probability distribution. (3) The experimental data could be fitted to the exponential probability distribution. (author)

  7. Analytical models of probability distribution and excess noise factor of solid state photomultiplier signals with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2012-01-01

    Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger-mode avalanche breakdown limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of their significance for SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of the probability distribution and excess noise factor (ENF) of SSPM signals based on the Borel distribution, as an advance on the geometric distribution models, are presented and discussed. The models are found to be in good agreement with the experimental probability distributions for dark counts and few-photon spectra over a wide range of fired-pixel numbers, as well as with the observed super-linear behavior of the crosstalk ENF.
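
    The Borel distribution at the heart of this model is simple to evaluate. A sketch, where mu is the assumed mean number of crosstalk-triggered discharges per primary discharge (subcritical for mu < 1):

    ```python
    import numpy as np
    from scipy.special import gammaln

    def borel_pmf(n, mu):
        """Borel pmf: P(N = n) = (mu*n)^(n-1) * exp(-mu*n) / n!, for n >= 1.

        N is the total number of fired pixels produced by one primary Geiger
        discharge when crosstalk is a branching Poisson process with mean mu
        offspring per discharge.
        """
        n = np.asarray(n, dtype=float)
        return np.exp((n - 1) * np.log(mu * n) - mu * n - gammaln(n + 1))

    n = np.arange(1, 11)
    print(borel_pmf(n, mu=0.2))
    print("P(at least one crosstalk pixel):", 1 - borel_pmf(1, 0.2))
    ```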

  8. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
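
    The entropy-maximization step can be illustrated numerically. In the toy sketch below (an assumption-laden illustration, not the Yucca Mountain analysis), the constraints are taken to be target values of E[ln X] and E[ln(1 − X)] on a discretized unit interval; the maximum-entropy density under these constraints has the form x^a (1 − x)^b, i.e. a beta density, consistent with the form of distribution reported above:

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    # Discretized unit interval and two logarithmic constraint functions; the
    # target expectations are hypothetical stand-ins for estimated constraints.
    x = np.linspace(1e-4, 1 - 1e-4, 1000)
    feats = np.vstack([np.log(x), np.log(1 - x)])
    targets = np.array([-1.5, -0.6])          # assumed E[ln X], E[ln(1 - X)]

    def moment_gap(lams):
        # Maximum-entropy form: p(x) proportional to x^lam1 * (1 - x)^lam2,
        # i.e. a (discretized) beta density with exponents lam1, lam2.
        w = np.exp(lams @ feats)
        p = w / w.sum()
        return feats @ p - targets

    lams = fsolve(moment_gap, x0=np.zeros(2))
    print("beta exponents (a - 1, b - 1):", lams)
    ```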

  9. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

    A crane is a mechanical device used widely to move materials in modern production. It is reported that the energy consumption in China is at least 5–8 times that in other developing countries. Thus, energy consumption becomes an unavoidable topic. Several factors influence the energy loss, and the camber of the girder is one not to be neglected. In this paper, the problem of the deflections induced by the moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counter-deflection of the girder is proposed in order to obtain minimum energy consumption for the trolley moving along a non-straight support. To this aim, probabilistic payload distributions are considered instead of the fixed or rated loads used in other research. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the energy consumption distribution. The research results provide a design reference for a reasonable camber that obtains the least energy consumption for climbing corresponding to different P0; thus an energy-saving design can be achieved.

  10. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)

  11. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs.

    Science.gov (United States)

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-08-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges like low bandwidth, long propagation delay, high bit error rate, high deployment cost and irregular topological structure. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects network performance. Avoiding void hole creation yields better coverage over an area, lower energy consumption in the network and higher throughput. For this purpose, this paper focuses on minimizing the void hole probability, particularly in locally sparse regions. The two-hop adaptive hop-by-hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid void holes with the help of two-hop neighbor node information. The other protocol, quality-forwarding adaptive hop-by-hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function. QF-AHH-VBF improves network goodput because of optimal forwarder selection, and aims to reduce the void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is performed. Simulation results show that these mechanisms achieve a significant reduction in end-to-end delay and better throughput in the network.

  12. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  13. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, for dealing with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions, which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables and the overall sensitivity measure was based on a subjectively chosen range, which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes which better reflect the real uncertainty in economic studies.
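
    The move from fixed ranges to probability distributions is essentially probabilistic sensitivity analysis. A hedged sketch with a deliberately toy cost model; the response-rate and per-diem distributions and the cost function are assumptions, not the paper's Markov model of depression:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy cost model: drug cost plus hospitalisation for non-responders.
    def cost(response_rate, per_diem, los=12.0, drug=900.0):
        return drug + (1.0 - response_rate) * per_diem * los

    # Propagate full input distributions instead of a subjectively chosen range.
    response = rng.beta(40, 60, size=100_000)          # beta for a probability
    per_diem = rng.gamma(shape=25.0, scale=20.0, size=100_000)
    c = cost(response, per_diem)

    # Rank inputs by correlation with the output, a simple sensitivity measure.
    for name, v in [("response_rate", response), ("per_diem", per_diem)]:
        print(f"{name:14s} correlation with cost: {np.corrcoef(v, c)[0, 1]:+.3f}")
    ```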

  14. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains, as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.

  15. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor (ε) on the uplink performance are analyzed. The results show that decreasing p and/or increasing ε reduces the CP variation around the spatially averaged value.

  16. Probability distribution of wave packet delay time for strong overlapping of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshits, V.L.

    1983-01-01

    The time behaviour of nuclear reactions in the case of high level densities is investigated on the basis of the theory of overlapping resonances. In the framework of a model of n equivalent channels, an analytical expression is obtained for the probability distribution function of the wave packet delay time in compound nucleus production. It is shown that at strong overlapping of the resonance levels, the relative fluctuation of the delay time is small at the stage of compound nucleus production. A possible increase in the duration of nuclear reactions with rising excitation energy is discussed

  17. Concise method for evaluating the probability distribution of the marginal cost of power generation

    International Nuclear Information System (INIS)

    Zhang, S.H.; Li, Y.Z.

    2000-01-01

    In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)

  18. The Bayesian count rate probability distribution in measurement of ionizing radiation by use of a ratemeter

    Energy Technology Data Exchange (ETDEWEB)

    Weise, K.

    2004-06-01

    Recent metrological developments concerning measurement uncertainty, founded on Bayesian statistics, give rise to a revision of several parts of the DIN 25482 and ISO 11929 standard series. These series stipulate detection limits and decision thresholds for ionizing-radiation measurements. Parts 3 and 4, respectively, deal with measurements by use of linear-scale analogue ratemeters. A normal frequency distribution of the momentary ratemeter indication for a fixed count rate value is assumed. The actual distribution, which is first calculated numerically by solving an integral equation, differs considerably from the normal distribution, although the latter approximates it for sufficiently large values of the count rate to be measured. As is shown, the same holds true for the Bayesian probability distribution of the count rate for sufficiently large measured values indicated by the ratemeter. This distribution follows from the first one by means of the Bayes theorem. Its expectation value and variance are needed for the standards to be revised on the basis of Bayesian statistics. Simple expressions are given by the present standards for estimating these parameters and for calculating the detection limit and the decision threshold. As is also shown, the same expressions can be used as sufficient approximations by the revised standards if, roughly, the indicated value exceeds the reciprocal ratemeter relaxation time constant. (orig.)
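
    As a hedged illustration of the setting (a toy simulation, not the standard's calculation), a linear analogue ratemeter can be modeled as a sum of exponentially decaying contributions from Poisson-distributed counts. For this kernel, Campbell's theorem gives mean ρ and variance ρ/(2τ) for the stationary indication, and the normal approximation becomes good when ρτ is large. The rate and relaxation time below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)

      rho = 50.0    # true count rate (1/s), hypothetical
      tau = 0.2     # ratemeter relaxation time constant (s), hypothetical
      t_obs = 20.0  # observe well past the initial transient

      def ratemeter_indication():
          # Poisson arrival times on [0, t_obs].
          n = rng.poisson(rho * t_obs)
          t = rng.uniform(0.0, t_obs, size=n)
          # Indication at t_obs: each count contributes (1/tau)*exp(-(t_obs-t)/tau).
          return np.sum(np.exp(-(t_obs - t) / tau)) / tau

      samples = np.array([ratemeter_indication() for _ in range(5000)])

      # Campbell's theorem for this kernel: mean = rho, variance = rho / (2 tau).
      print("sample mean/var :", samples.mean(), samples.var())
      print("theory mean/var :", rho, rho / (2 * tau))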

  19. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B

    2007-01-01

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process, with a zero mean. It has been experimentally proved that this probability distribution can be adjusted with less error to a Laplacian type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In the case of subjects with lesions to the superior motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function would be the one that best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed

  20. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingenieria de Rehabilitacion e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingenieria, UNER, Oro Verde (Argentina)

    2007-11-15

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process, with a zero mean. It has been experimentally proved that this probability distribution can be adjusted with less error to a Laplacian type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In the case of subjects with lesions to the superior motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function would be the one that best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.

  1. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
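
    Parameter estimation from the empirical characteristic function, as advocated above, can be sketched on a toy case where the model characteristic function is available in closed form; here a gamma law stands in for the process model, and the parameter values and frequency grid are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)

      # Synthetic "signal" samples; a gamma law stands in for the model, whose
      # characteristic function (1 - i*s*theta)^(-k) is simple even when the
      # PDF of a more elaborate model is not available in closed form.
      true_k, true_theta = 2.5, 1.3
      x = rng.gamma(true_k, true_theta, size=20_000)

      s_grid = np.linspace(0.05, 2.0, 40)                     # frequencies
      ecf = np.exp(1j * np.outer(s_grid, x)).mean(axis=1)     # empirical CF

      def model_cf(s, k, theta):
          return (1.0 - 1j * s * theta) ** (-k)

      def loss(params):
          k, theta = params
          if k <= 0 or theta <= 0:
              return np.inf
          return np.sum(np.abs(ecf - model_cf(s_grid, k, theta)) ** 2)

      fit = minimize(loss, x0=[1.0, 1.0], method="Nelder-Mead")
      print("estimated (k, theta):", fit.x)   # should be close to (2.5, 1.3)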

  2. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
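
    A minimal sketch of the two-stage construction described in the abstract: white Gaussian noise is first colored in Fourier space to impose a chosen power spectrum, and the resulting Gaussian field is then mapped pointwise through its CDF and the inverse CDF of the target marginal. The power-law spectrum and gamma marginal are arbitrary examples; note that the pointwise transform slightly distorts the spectrum, a correction the full method accounts for but this sketch omits.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n = 256

      # Stage 1: color white Gaussian noise with a chosen power spectrum
      # (an isotropic power law; the exponent is an arbitrary example).
      white = rng.standard_normal((n, n))
      kx = np.fft.fftfreq(n)
      k = np.hypot(*np.meshgrid(kx, kx))
      k[0, 0] = 1.0            # avoid division by zero; DC amplitude zeroed below
      amplitude = k ** -1.5
      amplitude[0, 0] = 0.0    # no DC component
      field = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
      field = (field - field.mean()) / field.std()    # standardize to N(0, 1)

      # Stage 2: pointwise transform of the colored Gaussian field to the
      # desired marginal PDF via its inverse CDF (a gamma marginal here).
      u = stats.norm.cdf(field)                       # uniform marginals
      target = stats.gamma(a=2.0, scale=1.0)
      non_gaussian_field = target.ppf(u)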

  3. Probability distribution of magnetization in the one-dimensional Ising model: effects of boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Antal, T [Physics Department, Simon Fraser University, Burnaby, BC V5A 1S6 (Canada); Droz, M [Departement de Physique Theorique, Universite de Geneve, CH 1211 Geneva 4 (Switzerland); Racz, Z [Institute for Theoretical Physics, Eoetvoes University, 1117 Budapest, Pazmany setany 1/a (Hungary)

    2004-02-06

    Finite-size scaling functions are investigated both for the mean-square magnetization fluctuations and for the probability distribution of the magnetization in the one-dimensional Ising model. The scaling functions are evaluated in the limit of the temperature going to zero (T → 0) and the size of the system going to infinity (N → ∞), while N[1 − tanh(J/k_B T)] is kept finite (J being the nearest neighbour coupling). Exact calculations using various boundary conditions (periodic, antiperiodic, free, block) demonstrate explicitly how the scaling functions depend on the boundary conditions. We also show that the block (small part of a large system) magnetization distribution results are identical to those obtained for free boundary conditions.

  4. Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process R_t where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  5. PHOTOMETRIC REDSHIFT PROBABILITY DISTRIBUTIONS FOR GALAXIES IN THE SDSS DR8

    International Nuclear Information System (INIS)

    Sheldon, Erin S.; Cunha, Carlos E.; Mandelbaum, Rachel; Brinkmann, J.; Weaver, Benjamin A.

    2012-01-01

    We present redshift probability distributions for galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 8 imaging data. We used the nearest-neighbor weighting algorithm to derive the ensemble redshift distribution N(z), and individual redshift probability distributions P(z) for galaxies with r < 21.8 and u < 29.0. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)'s for individual objects by using training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release. The newly added PRIMUS catalog is now the most important training set used in this analysis by a wide margin. We expect the primary sources of error in the N(z) reconstruction to be sample variance and spectroscopic failures: The training sets are drawn from relatively small volumes of space, and some samples have large incompleteness. Using simulations we estimated the uncertainty in N(z) due to sample variance at a given redshift to be ∼10%-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS Web site.

  6. On void nucleation

    International Nuclear Information System (INIS)

    Subbotin, A.V.

    1978-01-01

    Nucleation of viable voids in irradiated materials is considered. The discussion is based on the mechanism of evaporation and absorption of interstitials and vacancies, disregarding the possibility of void merging. The effect of irradiated material structure on void nucleation is separated from the effect of the properties of supersaturated solutions of vacancies and interstitials. An analytical expression for the nucleation rate is obtained and analyzed in different cases. It is concluded that interstitials strongly affect the nucleation rate of viable voids

  7. Understanding the distinctively skewed and heavy tailed character of atmospheric and oceanic probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Sardeshmukh, Prashant D., E-mail: Prashant.D.Sardeshmukh@noaa.gov [CIRES, University of Colorado, Boulder, Colorado 80309 (United States); NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States); Penland, Cécile [NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States)

    2015-03-15

    The probability distributions of large-scale atmospheric and oceanic variables are generally skewed and heavy-tailed. We argue that their distinctive departures from Gaussianity arise fundamentally from the fact that in a quadratically nonlinear system with a quadratic invariant, the coupling coefficients between system components are not constant but depend linearly on the system state in a distinctive way. In particular, the skewness arises from a tendency of the system trajectory to linger near states of weak coupling. We show that the salient features of the observed non-Gaussianity can be captured in the simplest such nonlinear 2-component system. If the system is stochastically forced and linearly damped, with one component damped much more strongly than the other, then the strongly damped fast component becomes effectively decoupled from the weakly damped slow component, and its impact on the slow component can be approximated as a stochastic noise forcing plus an augmented nonlinear damping. In the limit of large time-scale separation, the nonlinear augmentation of the damping becomes small, and the noise forcing can be approximated as an additive noise plus a correlated additive and multiplicative noise (CAM noise) forcing. Much of the diversity of observed large-scale atmospheric and oceanic probability distributions can be interpreted in this minimal framework.

  8. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
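
    The idea behind the proposed tests, that drawing a value where the specified density is small is itself unlikely under that density, can be sketched with a Monte Carlo calibration; the statistic, sample sizes and the alternative used to generate the data are illustrative assumptions, not the paper's exact construction.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      null = stats.norm(0.0, 1.0)              # the specified density f
      data = rng.standard_normal(100) * 1.5    # actually drawn from a wider law

      def statistic(x):
          # Low values indicate draws that land where f is small.
          return np.log(null.pdf(x)).sum()

      # Calibrate the statistic by simulating from the specified density itself.
      t_obs = statistic(data)
      t_null = np.array([statistic(null.rvs(size=data.size, random_state=rng))
                         for _ in range(5000)])
      p_value = np.mean(t_null <= t_obs)
      print("p-value:", p_value)   # small => data unlikely to come from f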

  9. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine the lower or upper bound value in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, entitled 'Two Step Tail Area Sampling', is developed, which assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. This method uses a two-step sampling procedure: first, sampling at points separated by large intervals is carried out, and second, sampling at points separated by small intervals is carried out around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light water nuclear reactors

  10. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.

  11. The effects of radiotherapy treatment uncertainties on the delivered dose distribution and tumour control probability

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2001-01-01

    Uncertainty in the precise quantity of radiation dose delivered to tumours in external beam radiotherapy is present due to many factors, and can result in either spatially uniform (Gaussian) or spatially non-uniform dose errors. These dose errors are incorporated into the calculation of tumour control probability (TCP) and produce a distribution of possible TCP values over a population. We also study the effect of inter-patient cell sensitivity heterogeneity on the population distribution of patient TCPs. This study aims to investigate the relative importance of these three uncertainties (spatially uniform dose uncertainty, spatially non-uniform dose uncertainty, and inter-patient cell sensitivity heterogeneity) on the delivered dose and TCP distribution following a typical course of fractionated external beam radiotherapy. The dose distributions used for patient treatments are modelled in one dimension. Geometric positioning uncertainties during and before treatment are considered as shifts of a pre-calculated dose distribution. Following the simulation of a population of patients, distributions of dose across the patient population are used to calculate mean treatment dose, standard deviation in mean treatment dose, mean TCP, standard deviation in TCP, and TCP mode. These parameters are calculated with each of the three uncertainties included separately. The calculations show that the dose errors in the tumour volume are dominated by the spatially uniform component of dose uncertainty. This could be related to machine specific parameters, such as linear accelerator calibration. TCP calculation is affected dramatically by inter-patient variation in the cell sensitivity and to a lesser extent by the spatially uniform dose errors. The positioning errors with the 1.5 cm margins used cause dose uncertainty outside the tumour volume and have a small effect on mean treatment dose (in the tumour volume) and tumour control.

  12. Probability distributions of placental morphological measurements and origins of variability of placental shapes.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O

    2013-06-01

    While the mean shape of human placenta is round with centrally inserted umbilical cord, significant deviations from this ideal are fairly common, and may be clinically meaningful. Traditionally, they are explained by trophotropism. We have proposed a hypothesis explaining typical variations in placental shape by randomly determined fluctuations in the growth process of the vascular tree. It has been recently reported that umbilical cord displacement in a birth cohort has a log-normal probability distribution, which indicates that the displacement between an initial point of origin and the centroid of the mature shape is a result of accumulation of random fluctuations of the dynamic growth of the placenta. To confirm this, we investigate statistical distributions of other features of placental morphology. In a cohort of 1023 births at term digital photographs of placentas were recorded at delivery. Excluding cases with velamentous cord insertion, or missing clinical data left 1001 (97.8%) for which placental surface morphology features were measured. Best-fit statistical distributions for them were obtained using EasyFit. The best-fit distributions of umbilical cord displacement, placental disk diameter, area, perimeter, and maximal radius calculated from the cord insertion point are of heavy-tailed type, similar in shape to log-normal distributions. This is consistent with a stochastic origin of deviations of placental shape from normal. Deviations of placental shape descriptors from average have heavy-tailed distributions similar in shape to log-normal. This evidence points away from trophotropism, and towards a spontaneous stochastic evolution of the variants of placental surface shape features. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important in estimating the extreme values. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria in frequency analysis. Therefore, the best plotting position formula which can be used to select the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a period of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE
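
    A sketch of the PPCC computation with the Weibull plotting position, using synthetic annual maxima in place of the east Cairo record (which is not reproduced here); the candidate distributions are a subset of the six listed above.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      # Hypothetical annual-maximum wind speeds (km/h), one value per year.
      maxima = np.sort(rng.normal(70.0, 13.0, size=39))
      n = maxima.size

      # Weibull plotting position: non-exceedance probability i / (n + 1).
      p = np.arange(1, n + 1) / (n + 1.0)

      # PPCC for a candidate distribution: correlation between the ordered data
      # and the quantiles the fitted distribution assigns to those probabilities.
      def ppcc(dist, data, probs):
          params = dist.fit(data)
          quantiles = dist.ppf(probs, *params)
          return np.corrcoef(data, quantiles)[0, 1]

      for name, dist in [("normal", stats.norm), ("gumbel", stats.gumbel_r),
                         ("lognormal", stats.lognorm)]:
          print(f"{name:10s} PPCC = {ppcc(dist, maxima, p):.4f}")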

  14. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study is carried out on the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed and away-from-bed surfaces for both no-seepage and seepage flow. Laboratory experiments were conducted in a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimental calculation of the PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events is compared with the theoretical expression obtained by a Gram–Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) method is used to measure the similarity between theoretical and experimental PDFs. The value of JSD for PDFs of velocity fluctuation lies between 0.0005 and 0.003, while the JSD value for PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except that a slight deflection of inward and outward interactions is observed, which may be due to weaker events. The value of JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow either at a single point or at a finite number of points
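
    The Jensen-Shannon divergence between an experimental and a theoretical PDF can be computed as below; the samples and the Gaussian reference are placeholders for the measured turbulence quantities and the Gram-Charlier based distribution used in the paper.

      import numpy as np
      from scipy import stats
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(6)

      # Empirical PDF from (synthetic) velocity-fluctuation samples.
      samples = rng.standard_normal(50_000)
      edges = np.linspace(-5, 5, 101)
      hist, _ = np.histogram(samples, edges, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])

      # Theoretical PDF evaluated on the same bins (a Gaussian placeholder for
      # the Gram-Charlier based exponential distribution used in the paper).
      theory = stats.norm.pdf(centers)

      # scipy returns the JS *distance*; square it to get the divergence.
      # Inputs need not be normalized; jensenshannon normalizes internally.
      jsd = jensenshannon(hist, theory, base=2) ** 2
      print("JSD:", jsd)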

  15. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    Science.gov (United States)

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed in Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, or ε_τ*(r) = h_τ* r^α, in which case a correlation with hydrophobicity

  16. Effect of void cluster on ductile failure evolution

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2016-01-01

    The behavior of a non-uniform void distribution in a ductile material is investigated by using a cell model analysis to study a material with a periodic pattern of void clusters. The special clusters considered consist of a number of uniformly spaced voids located along a plane perpendicular...

  17. Towards a theoretical determination of the geographical probability distribution of meteoroid impacts on Earth

    Science.gov (United States)

    Zuluaga, Jorge I.; Sucerquia, Mario

    2018-06-01

    Tunguska and Chelyabinsk impact events occurred inside a geographical area of only 3.4 per cent of the Earth's surface. Although two events hardly constitute a statistically significant demonstration of a geographical pattern of impacts, their spatial coincidence is at least tantalizing. To understand if this concurrence reflects an underlying geographical and/or temporal pattern, we must aim at predicting the spatio-temporal distribution of meteoroid impacts on Earth. For this purpose we designed, implemented, and tested a novel numerical technique, the `Gravitational Ray Tracing' (GRT), designed to compute the relative impact probability (RIP) on the surface of any planet. GRT is inspired by the so-called ray-casting techniques used to render realistic images of complex 3D scenes. In this paper we describe the method and the results of testing it at the time of large impact events. Our findings suggest a non-trivial pattern of impact probabilities at any given time on the Earth. Locations at 60-90° from the apex are more prone to impacts, especially at midnight. Counterintuitively, sites close to the apex direction have the lowest RIP, while at the antapex the RIP is slightly larger than average. We present here preliminary maps of RIP at the time of the Tunguska and Chelyabinsk events and found no evidence of a spatial or temporal pattern, suggesting that their coincidence was fortuitous. We apply the GRT method to compute theoretical RIP at the location and time of 394 large fireballs. Although the predicted spatio-temporal impact distribution matches the observed events only marginally, we successfully predict their impact speed distribution.

  18. Spatial distribution and occurrence probability of regional new particle formation events in eastern China

    Directory of Open Access Journals (Sweden)

    X. Shen

    2018-01-01

    In this work, the spatial extent of new particle formation (NPF) events and the relative probability of observing particles originating from different spatial origins around three rural sites in eastern China were investigated using the NanoMap method, using particle number size distribution (PNSD) data and air mass back trajectories. The lengths of the datasets used were 7, 1.5, and 3 years at the rural sites Shangdianzi (SDZ) in the North China Plain (NCP), Mt. Tai (TS) in central eastern China, and Lin'an (LAN) in the Yangtze River Delta region in eastern China, respectively. Regional NPF events were observed to occur with a horizontal extent larger than 500 km at SDZ and TS, favoured by the fast transport of northwesterly air masses. At LAN, however, the spatial footprint of NPF events was mostly observed around the site within 100–200 km. The difference in the horizontal spatial distribution of new particle source areas at the different sites was connected to the typical meteorological conditions at the sites. Consecutive large-scale regional NPF events were observed at SDZ and TS simultaneously and were associated with a high surface pressure system dominating over this area. Simultaneous NPF events at SDZ and LAN were seldom observed. At SDZ the polluted air masses arriving over the NCP were associated with higher particle growth rate (GR) and new particle formation rate (J) than air masses from Inner Mongolia (IM). At TS the same phenomenon was observed for J, but GR was somewhat lower in air masses arriving over the NCP compared to those arriving from IM. The capability of NanoMap to capture the NPF occurrence probability depends on the length of the dataset of PNSD measurements but also on the topography around the measurement site and the typical air mass advection speed during NPF events. Thus long-term measurements of PNSD in the planetary boundary layer are necessary in the further study of the spatial extent and the probability of NPF events. The spatial

  19. Void Measurement by the ({gamma}, n) Reaction

    Energy Technology Data Exchange (ETDEWEB)

    Rouhani, S Zia

    1962-09-15

    It is proposed to use the (γ, n) reaction for the measurement of the integrated void volume fraction in two-phase flow of D₂O inside a duct. This method is applicable to different channel geometries, and it is shown to be insensitive to the pattern of void distribution over the cross-sectional area of the channels. The method has been tested on mock-ups of voids in a round duct of 6 mm inside diameter. About 40 mCi of ²⁴Na was used as a source of gamma-rays. The test results show that the maximum measured error in this arrangement is less than 2.5 % (net void) for a range of 2.7 % to 44.44 % actual void volume fractions.

  20. Void Measurement by the (γ, n) Reaction

    International Nuclear Information System (INIS)

    Rouhani, S. Zia

    1962-09-01

    It is proposed to use the (γ, n) reaction for the measurement of the integrated void volume fraction in two-phase flow of D₂O inside a duct. This method is applicable to different channel geometries, and it is shown to be insensitive to the pattern of void distribution over the cross-sectional area of the channels. The method has been tested on mock-ups of voids in a round duct of 6 mm inside diameter. About 40 mCi of ²⁴Na was used as a source of gamma-rays. The test results show that the maximum measured error in this arrangement is less than 2.5 % (net void) for a range of 2.7 % to 44.44 % actual void volume fractions

  1. Temperature controlled 'void' formation

    International Nuclear Information System (INIS)

    Dasgupta, P.; Sharma, B.D.

    1975-01-01

    The nucleation and growth of voids in structural materials during high temperature deformation or irradiation are essentially dependent upon the existence of 'vacancy supersaturation'. The role of temperature dependent diffusion processes in 'void' formation under varying conditions, and the mechanical property changes associated with this microstructure, are briefly reviewed. (author)

  2. Void nucleation at heterogeneities

    International Nuclear Information System (INIS)

    Seyyedi, S.A.; Hadji-Mirzai, M.; Russell, K.C.

    The energetics and kinetics of void nucleation at dislocations and interfaces are analyzed. These are potential void nucleation sites only when they are not point defect sinks. Both kinds of site are found to be excellent catalysts in the presence of inert gas

  3. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  4. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.

  5. Various models for pion probability distributions from heavy-ion collisions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Schlei, B.R.; Strottman, D.

    1998-01-01

    Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description which generates a negative binomial description. The approach developed can be used to discuss other cases which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model are made. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(-m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. copyright 1998 The American Physical Society

  6. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
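
    An empirical FDC and a fitted counterpart can be constructed as follows; the streamflow record is synthetic, and the generalized Pareto law (one of the three-parameter alternatives mentioned above) stands in because the four-parameter kappa distribution is not available in scipy.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # Synthetic "daily streamflow" record (lognormal placeholder, m^3/s).
      flows = rng.lognormal(mean=2.0, sigma=1.0, size=3650)

      # Empirical FDC: exceedance probability of each sorted flow value.
      q = np.sort(flows)[::-1]                        # descending flows
      exceedance = np.arange(1, q.size + 1) / (q.size + 1.0)

      # Fitted FDC from a candidate distribution.
      params = stats.genpareto.fit(flows)

      # Q5: the flow equaled or exceeded on 5% of days.
      print("Q5 empirical:", np.interp(0.05, exceedance, q))
      print("Q5 fitted   :", stats.genpareto.ppf(0.95, *params))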

  7. Constituent quarks as clusters in quark-gluon-parton model. [Total cross sections, probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1976-12-01

    We present a quark-gluon-parton model in which quark-partons and gluons make clusters corresponding to two or three constituent quarks (or anti-quarks) in the meson or in the baryon, respectively. We explicitly construct the constituent quark state (cluster) by employing the Kuti-Weisskopf theory and by requiring scaling. The quark additivity of the hadronic total cross sections and the quark counting rules on the threshold powers of various distributions are satisfied. For small x (Feynman fraction), it is shown that the constituent quarks and quark-partons have quite different probability distributions. We apply our model to hadron-hadron inclusive reactions, and clarify that the fragmentation and the diffractive processes relate to the constituent quark distributions, while the processes in or near the central region are controlled by the quark-partons. Our model gives a reasonable interpretation of the experimental data and much improves the usual ''constituent interchange model'' result near and in the central region (x ≈ x_T ≈ 0).

  8. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    Science.gov (United States)

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance and the shortage of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of severe dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
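
    A sketch of a bivariate joint return period computation of the kind described, using a closed-form Gumbel-Hougaard copula; the marginal distributions, dependence parameter, thresholds and mean inter-arrival time are invented and do not reproduce the Inner Mongolia analysis.

      import numpy as np
      from scipy import stats

      def gumbel_copula(u, v, theta):
          """Gumbel-Hougaard copula C(u, v), theta >= 1."""
          return np.exp(-(((-np.log(u)) ** theta
                           + (-np.log(v)) ** theta) ** (1.0 / theta)))

      # Hypothetical fitted marginals for a dust-storm event:
      # maximum wind speed (m/s) and duration (hours).
      wind = stats.gumbel_r(loc=18.0, scale=4.0)
      duration = stats.gumbel_r(loc=3.0, scale=1.5)
      theta = 2.0        # copula dependence parameter (invented)
      mu = 0.25          # mean inter-arrival time of events in years (invented)

      w, d = 25.0, 6.0   # event thresholds of interest
      u, v = wind.cdf(w), duration.cdf(d)

      # "AND" joint return period: both thresholds exceeded.
      t_and = mu / (1.0 - u - v + gumbel_copula(u, v, theta))
      # "OR" joint return period: at least one threshold exceeded.
      t_or = mu / (1.0 - gumbel_copula(u, v, theta))
      print(f"T_and = {t_and:.1f} yr, T_or = {t_or:.1f} yr")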

  9. The correlation of defect distribution in collisional phase with measured cascade collapse probability

    International Nuclear Information System (INIS)

    Morishita, K.; Ishino, S.; Sekimura, N.

    1995-01-01

    The spatial distributions of atomic displacement at the end of the collisional phase of cascade damage processes were calculated using the computer simulation code MARLOWE, which is based on the binary collision approximation (BCA). The densities of atomic displacement were evaluated in the high dense regions (HDRs) of cascades in several pure metals (Fe, Ni, Cu, Ag, Au, Mo and W). They were compared with the measured cascade collapse probabilities reported in the literature, where TEM observations were carried out using thin metal foils irradiated by low-dose ions at room temperature. We found that there exist minimum, or ''critical'', values of the atomic displacement densities for the HDR to collapse into TEM-visible vacancy clusters. The critical densities are generally independent of the cascade energy in the same metal. Furthermore, the material dependence of the critical densities can be explained by the difference in the vacancy mobility at the melting temperature of the target materials. This critical density calibration, which is extracted from the ion-irradiation experiments and the BCA simulations, is applied to the estimation of cascade collapse probabilities in metals irradiated by fusion neutrons. (orig.)

  10. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Monitoring data on an earth-rockfill dam constitutes a form of spatial data. Such data include much uncertainty owing to the limitation of measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model. So the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description. Then an improved algorithm of cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert certain parcels of accurate data into concepts which can be described by proper qualitative linguistic values. Such qualitative description was addressed as the cloud numerical characteristics {Ex, En, He}, which can represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm was feasible; it could reveal the changing regularity of the piezometric tube's water level, and damage due to seepage in the dam body could be detected.
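
    One common textbook formulation of the backward cloud generator estimates {Ex, En, He} from raw observations as below; this is offered as a sketch of the basic idea, not the improved algorithm proposed in the paper, and the water-level data are synthetic.

      import numpy as np

      def backward_cloud(x):
          """A common variant of the backward cloud generator: estimate the
          cloud numerical characteristics {Ex, En, He} from observations."""
          x = np.asarray(x, dtype=float)
          ex = x.mean()                                      # expectation Ex
          en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()  # entropy En
          s2 = x.var(ddof=1)                                 # sample variance
          he = np.sqrt(max(s2 - en ** 2, 0.0))               # hyper-entropy He
          return ex, en, he

      # Synthetic stand-in for piezometric-tube water levels (metres).
      rng = np.random.default_rng(8)
      levels = rng.normal(52.3, 0.4, size=200) + rng.normal(0, 0.05, size=200)
      print("Ex, En, He:", backward_cloud(levels))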

  11. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  12. Light Scattering of Rough Orthogonal Anisotropic Surfaces with Secondary Most Probable Slope Distributions

    International Nuclear Information System (INIS)

    Li Hai-Xia; Cheng Chuan-Fu

    2011-01-01

    We study the light scattering of an orthogonal anisotropic rough surface with secondary most probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence, the secondary maxima are oriented in a curve on the observation plane, which is called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct the system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory.

  13. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions.

    Science.gov (United States)

    Wenger, Seth J; Freeman, Mary C

    2008-10-01

    Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a better fit to the data than other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
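
    The data-generating process combined in this approach can be sketched by simulation: zero-inflated abundance at each site, then binomial detection on replicate visits. All parameter values are invented; a full analysis would maximize the corresponding zero-inflated N-mixture likelihood rather than the naive summary printed here.

      import numpy as np

      rng = np.random.default_rng(9)

      n_sites, n_visits = 200, 4
      psi, lam, p = 0.6, 3.0, 0.5   # occupancy, mean abundance, detection

      # Zero-inflated Poisson abundance: a site is either unoccupied (N = 0)
      # or occupied with Poisson-distributed abundance.
      occupied = rng.random(n_sites) < psi
      N = np.where(occupied, rng.poisson(lam, n_sites), 0)

      # Replicated counts: each of the N individuals is detected with
      # probability p on every visit, so counts are Binomial(N, p).
      counts = rng.binomial(N[:, None], p, size=(n_sites, n_visits))

      # Naive summaries illustrate why detection must be modeled: the raw
      # proportion of sites with any detection underestimates occupancy.
      print("true occupancy          :", occupied.mean())
      print("naive occupancy estimate:", (counts.sum(axis=1) > 0).mean())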

  14. Probability distribution of distance in a uniform ellipsoid: Theory and applications to physics

    International Nuclear Information System (INIS)

    Parry, Michelle; Fischbach, Ephraim

    2000-01-01

    A number of authors have previously found the probability P_n(r) that two points uniformly distributed in an n-dimensional sphere are separated by a distance r. This result greatly facilitates the calculation of self-energies of spherically symmetric matter distributions interacting by means of an arbitrary radially symmetric two-body potential. We present here the analogous results for P_2(r; ε) and P_3(r; ε) which respectively describe an ellipse and an ellipsoid whose major and minor axes are 2a and 2b. It is shown that for ε = (1 − b²/a²)^(1/2) ≤ 1, P_2(r; ε) and P_3(r; ε) can be obtained as an expansion in powers of ε, and our results are valid through order ε⁴. As an application of these results we calculate the Coulomb energy of an ellipsoidal nucleus, and compare our result to an earlier result quoted in the literature. (c) 2000 American Institute of Physics.
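
    For the spherical special case (ε = 0) the distance distribution is classical, which makes a quick Monte Carlo check possible; the closed form used below is the known result for the unit ball and is not the ellipsoidal expansion derived in the paper.

      import numpy as np

      rng = np.random.default_rng(10)

      def uniform_in_ball(n, R=1.0):
          """n points uniform in a ball of radius R (inverse-CDF radial sampling)."""
          v = rng.standard_normal((n, 3))
          v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions
          r = R * rng.random(n) ** (1.0 / 3.0)            # r^3 uniform in [0, R^3]
          return v * r[:, None]

      # Monte Carlo distances between independent uniform pairs.
      n = 200_000
      d = np.linalg.norm(uniform_in_ball(n) - uniform_in_ball(n), axis=1)

      # Classical closed form for the unit ball:
      # p(r) = 3 r^2 - (9/4) r^3 + (3/16) r^5, for 0 <= r <= 2.
      hist, edges = np.histogram(d, bins=200, range=(0, 2), density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      exact = 3 * centers**2 - 2.25 * centers**3 + (3.0 / 16.0) * centers**5
      print("max abs deviation:", np.max(np.abs(hist - exact)))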

  15. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    International Nuclear Information System (INIS)

    Greenhough, J; Chapman, S C; Dendy, R O; Ward, D J

    2003-01-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour

  16. A statistical model for deriving probability distributions of contamination for accidental releases

    International Nuclear Information System (INIS)

    ApSimon, H.M.; Davison, A.C.

    1986-01-01

    Results generated from a detailed long-range transport model, MESOS, simulating dispersal of a large number of hypothetical releases of radionuclides in a variety of meteorological situations over Western Europe have been used to derive a simpler statistical model, MESOSTAT. This model may be used to generate probability distributions of different levels of contamination at a receptor point 100-1000 km or so from the source (for example, across a frontier in another country) without considering individual release and dispersal scenarios. The model is embodied in a series of equations involving parameters which are determined from such factors as distance between source and receptor, nuclide decay and deposition characteristics, release duration, and geostrophic windrose at the source. Suitable geostrophic windrose data have been derived for source locations covering Western Europe. Special attention has been paid to the relatively improbable extreme values of contamination at the top end of the distribution. The MESOSTAT model and its development are described, with illustrations of its use and comparison with the original more detailed modelling techniques. (author)

  17. Influence of the void fraction on the power distribution for two different types of fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacinto C, S.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. IPN s/n, 07738 Ciudad de Mexico (Mexico); Alonso V, G.; Martinez C, E., E-mail: sid.jcl@gmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2017-09-15

    In this work an analysis of the influence of the void fraction on the power distribution was carried out, in order to understand more about the fission process and the energy produced by a BWR-type fuel assembly. The fast neutron flux was analyzed considering neutrons with energies between 0.625 eV and 10 MeV. Subsequently, the thermal neutron flux analysis was carried out in a range between 0.005 eV and 0.625 eV. Likewise, the possible implications for the power distribution of the fuel cell were also analyzed. These analyses were carried out for different void fraction values: 0.2, 0.4 and 0.8. The variations at different burnup steps were also studied: 20, 40 and 60 MWd/kg. These values were studied in two different types of fuel cells: Ge-12 and SVEA-96, with an average initial enrichment of 4.11%. (Author)

  18. Air void clustering.

    Science.gov (United States)

    2015-06-01

    Air void clustering around coarse aggregate in concrete has been identified as a potential source of low strengths in concrete mixes by several Departments of Transportation around the country. Research was carried out to (1) develop a quantitati...

  19. VIDE: The Void IDentification and Examination toolkit

    Science.gov (United States)

    Sutter, P. M.; Lavaux, G.; Hamaus, N.; Pisani, A.; Wandelt, B. D.; Warren, M.; Villaescusa-Navarro, F.; Zivick, P.; Mao, Q.; Thompson, B. B.

    2015-03-01

    We present VIDE, the Void IDentification and Examination toolkit, an open-source Python/C++ code for finding cosmic voids in galaxy redshift surveys and N-body simulations, characterizing their properties, and providing a platform for more detailed analysis. At its core, VIDE uses a substantially enhanced version of ZOBOV (Neyrinck 2008) to calculate a Voronoi tessellation for estimating the density field and performing a watershed transform to construct voids. Additionally, VIDE provides significant functionality for both pre- and post-processing: for example, VIDE can work with volume- or magnitude-limited galaxy samples with arbitrary survey geometries, or dark matter particles or halo catalogs in a variety of common formats. It can also randomly subsample inputs and includes a Halo Occupation Distribution model for constructing mock galaxy populations. VIDE uses the watershed levels to place voids in a hierarchical tree, outputs a summary of void properties in plain ASCII, and provides a Python API to perform many analysis tasks, such as loading and manipulating void catalogs and particle members, filtering, plotting, computing clustering statistics, stacking, comparing catalogs, and fitting density profiles. While centered around ZOBOV, the toolkit is designed to be as modular as possible and accommodate other void finders. VIDE has been in development for several years and has already been used to produce a wealth of results, which we summarize in this work to highlight the capabilities of the toolkit. VIDE is publicly available at http://bitbucket.org/cosmicvoids/vide_public and http://www.cosmicvoids.net.

  20. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal-to-noise.
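
    The estimator implied by this abstract is compact enough to sketch. Assuming the flux is rescaled to [0, 1], orthogonality of the Legendre polynomials on [−1, 1] gives c_n = (2n+1)/2 · ⟨P_n(x)⟩, a linear combination of the first n moments; the code below is a sketch of that estimator, not the authors' pipeline.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_pdf_coeffs(flux, n_max):
    """Estimate Legendre expansion coefficients of the flux PDF,
    p(x) ~ sum_n c_n P_n(x), after mapping flux in [0, 1] to x in [-1, 1]."""
    x = 2.0 * np.asarray(flux) - 1.0
    coeffs = np.empty(n_max + 1)
    for n in range(n_max + 1):
        e_n = np.zeros(n_max + 1)
        e_n[n] = 1.0                                # selects P_n in legval
        # c_n = (2n+1)/2 * <P_n(x)>, a sample average over all pixels
        coeffs[n] = 0.5 * (2 * n + 1) * legendre.legval(x, e_n).mean()
    return coeffs
```

    Because each coefficient is a sample average, its noise properties follow directly from those of the moments, which is the marginalisation advantage the abstract points to.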

  1. Effects of translation-rotation coupling on the displacement probability distribution functions of boomerang colloidal particles

    Science.gov (United States)

    Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo

    Prior studies have shown that low-symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high-symmetry particles, because convenient tracking points (TPs) are usually inconsistent with the center of hydrodynamic stress (CoH) where the translational and rotational motions are decoupled. In this paper we study the effects of translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arms. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for fixed initial orientation change from an elliptical to a crescent shape, and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observe that at long times the PDFs revert to Gaussian. This crescent shape of the 2D PDF provides a clear physical picture of the non-zero mean displacements observed in boomerang particles.

  2. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possibly biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a higher amount of recoverable information.

  4. Evaluating the suitability of wind speed probability distribution models: A case of study of east and southeast parts of Iran

    International Nuclear Information System (INIS)

    Alavi, Omid; Mohammadi, Kasra; Mostafaeipour, Ali

    2016-01-01

    Highlights: • Suitability of different wind speed probability functions is assessed. • 5 stations distributed in the east and south-east of Iran are considered as case studies. • The Nakagami distribution is tested for the first time and compared with 7 other functions. • Owing to differences in wind features, the best function is not the same for all stations. - Abstract: Precise information on the wind speed probability distribution is truly significant for many wind energy applications. The objective of this study is to evaluate the suitability of different probability functions for estimating wind speed distributions at five stations distributed in the east and southeast of Iran. The Nakagami distribution function is utilized for the first time to estimate the distribution of wind speed. The performance of the Nakagami function is compared with seven typically used distribution functions. The results reveal that the most effective function is not the same for all stations. Wind speed characteristics and the quantity and quality of the recorded wind speed data can be considered influential parameters for the performance of the distribution functions. Also, the skewness of the recorded wind speed data may influence the accuracy of the Nakagami distribution. For the Chabahar and Khaf stations the Nakagami distribution shows the highest performance, while for the Lutak, Rafsanjan and Zabol stations the Gamma, Generalized Extreme Value and Inverse-Gaussian distributions offer the best fits, respectively. Based on the analysis, the Nakagami distribution can generally be considered an effective distribution since it provides the best fits at 2 stations and ranks 3rd to 5th at the remaining stations; however, given the close performance of the Nakagami and Weibull distributions and the widely proven flexibility of the Weibull function, more assessments of the performance of the Nakagami distribution are required.
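
    A comparison of this kind is a few lines with scipy.stats, which ships the Nakagami family alongside the usual candidates. A sketch, assuming maximum-likelihood fits ranked by the Kolmogorov-Smirnov statistic (the paper's exact selection criteria may differ):

```python
import numpy as np
from scipy import stats

CANDIDATES = {
    "Weibull": stats.weibull_min,
    "Nakagami": stats.nakagami,
    "Gamma": stats.gamma,
    "GEV": stats.genextreme,
    "Inverse-Gaussian": stats.invgauss,
}

def rank_fits(wind_speed):
    """Fit each candidate distribution by maximum likelihood and rank the
    fits by the Kolmogorov-Smirnov statistic (smaller D = better fit)."""
    ranking = []
    for name, dist in CANDIDATES.items():
        params = dist.fit(wind_speed)          # MLE of shape/loc/scale
        d_stat = stats.kstest(wind_speed, dist.cdf, args=params).statistic
        ranking.append((d_stat, name))
    return sorted(ranking)
```

    Running this per station makes the abstract's point concrete: the winner of the ranking genuinely changes from one wind regime to another.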

  5. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of real space (R³) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (Ω_1 ∪ Ω_2 ∪ ⋯ ∪ Ω_m = R³), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary: Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer

  6. Measurement of air distribution and void fraction of an upwards air–water flow using electrical resistance tomography and a wire-mesh sensor

    International Nuclear Information System (INIS)

    Olerni, Claudio; Jia, Jiabin; Wang, Mi

    2013-01-01

    Measurements on an upwards air–water flow are reported that were obtained simultaneously with a dual-plane electrical resistance tomograph (ERT) and a wire-mesh sensor (WMS). The ultimate measurement target of both ERT and WMS is the same, the electrical conductivity of the medium. The ERT is a non-intrusive device whereas the WMS requires a net of wires that physically crosses the flow. This paper presents comparisons between the results obtained simultaneously from the ERT and the WMS for evaluation and calibration of the ERT. The length of the vertical testing pipeline section is 3 m with an internal diameter of 50 mm. Two distinct sets of air–water flow rate scenarios, bubble and slug regimes, were produced in the experiments. The fast impedance camera ERT recorded the data at an approximate time resolution of 896 frames per second (fps) per plane in contrast with the 1024 fps of the wire-mesh sensor WMS200. The set-up of the experiment was based on well established knowledge of air–water upwards flow, particularly the specific flow regimes and wall peak effects. The local air void fraction profiles and the overall air void fraction were produced from two systems to establish consistency for comparison of the data accuracy. Conventional bulk flow measurements in air mass and electromagnetic flow metering, as well as pressure and temperature, were employed, which brought the necessary calibration to the flow measurements. The results show that the profiles generated from the two systems have a certain level of inconsistency, particularly in a wall peak and a core peak from the ERT and WMS respectively, whereas the two tomography instruments achieve good agreement on the overall air void fraction for bubble flow. For slug flow, when the void fraction is over 30%, the ERT underestimates the void fraction, but a linear relation between ERT and WMS is still observed. (paper)

  7. Dependence of calculated void reactivity on film-boiling representation

    International Nuclear Information System (INIS)

    Whitlock, J.; Garland, W.

    1992-01-01

    Partial voiding of a fuel channel can lead to complicated neutronic analysis because of highly nonuniform spatial distributions. An investigation of the distribution dependence of void reactivity in a Canada deuterium uranium (CANDU) lattice, specifically in the regime of film boiling, was carried out. Although the core is not expected to be critical at the time of sheath dryout, this study augments current knowledge of void reactivity in this type of lattice

  8. Nucleation of voids - the impurity effect

    International Nuclear Information System (INIS)

    Chen, I-W; Taiwo, A.

    1984-01-01

    Nucleation of voids under irradiation in multicomponent alloys remains an unsolved theoretical problem. Of particular interest are the effects of nonequilibrium solute segregation phenomena on the critical nucleus and the nucleation rate. The resolution of multicomponent nucleation in a dissipative system also has broader implications for the field of irreversible thermodynamics. The present paper describes a recent study of solute segregation effects in void nucleation. We begin with a thermodynamic model for a nonequilibrium void with interfacial segregation. The thermodynamic model is coupled with kinetic considerations of solute/solvent diffusion under a bias, which is itself related to segregation by the coating effect, to assess the stability of void embryos. To determine the nucleation rate, we develop a novel technique by extending the most probable path method in statistical mechanics for nonequilibrium steady states to simulate large fluctuations with nonlinear dissipation. The path of nucleation is determined by solving an analogous problem on particle trajectories in classical dynamics. The results of both the stability analysis and the fluctuation analysis establish the paramount significance of the impurity effect via the mechanism of nonequilibrium segregation. We conclude that over-segregation is probably the most general cause of the apparently low nucleation barriers that are responsible for the nearly ubiquitous occurrence of void swelling in common metals

  9. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  10. Characterisation of seasonal flood types according to timescales in mixed probability distributions

    Science.gov (United States)

    Fischer, Svenja; Schumann, Andreas; Schulte, Markus

    2016-08-01

    When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks which differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that distinguish between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events. A statistical method for such a distinction of events is presented. To demonstrate its applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model accounts for the fact that information about parallel events based only on their maximum values is incomplete, because some of the realisations are overlaid. A statistical method resulting in an amendment of the statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.

  11. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has limited utility. This paper presents the derivation of the probability distribution of maintenance cost when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite-time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
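
    The paper obtains the exact finite-time cost distribution from the characteristic function via a renewal equation and a discrete Fourier transform. As a companion sanity check, the same distribution can be sampled directly; the sketch below assumes a stationary gamma degradation process and a periodically inspected preventive/corrective replacement policy, with all thresholds and costs as made-up illustrative values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cost(horizon=20, dt=1.0, shape=0.5, scale=1.0,
                  prev_level=6.0, fail_level=10.0, c_insp=0.1,
                  c_prev=1.0, c_fail=5.0, n_paths=100_000):
    """Monte Carlo distribution of finite-horizon CBM cost under a
    stationary gamma degradation process, inspected every dt."""
    n_steps = int(horizon / dt)
    cost = np.zeros(n_paths)
    x = np.zeros(n_paths)                            # degradation level
    for _ in range(n_steps):
        x += rng.gamma(shape * dt, scale, n_paths)   # gamma increment
        cost += c_insp
        fail = x >= fail_level
        prev = (~fail) & (x >= prev_level)
        cost += c_fail * fail + c_prev * prev
        x[fail | prev] = 0.0                         # replacement renews the system
    return cost                                      # histogram -> cost distribution
```

    A histogram of the returned costs gives exactly the finite-time distribution whose prediction limits the paper computes analytically.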

  12. Positive void reactivity

    International Nuclear Information System (INIS)

    Diamond, D.J.

    1992-09-01

    This report is a review of some of the important aspects of the analysis of large loss-of-coolant accidents (LOCAs). One important aspect is the calculation of positive void reactivity. To study this subject the lattice physics codes used for void worth calculations and the coupled neutronic and thermal-hydraulic codes used for the transient analysis are reviewed. Also reviewed are the measurements used to help validate the codes. The application of these codes to large LOCAs is studied with attention focused on the uncertainty factor for the void worth used to bias the results. Another aspect of the subject dealt with in the report is the acceptance criteria that are applied. This includes the criterion for peak fuel enthalpy and the question of whether prompt criticality should also be a criterion. To study the former, fuel behavior measurements and calculations are reviewed. (Author) (49 refs., 2 figs., tab.)

  13. Void effects on BWR Doppler and void reactivity feedback

    International Nuclear Information System (INIS)

    Hsiang-Shou Cheng; Diamond, D.J.

    1978-01-01

    The significance of steam voids and control rods for the Doppler feedback in a gadolinia-shimmed BWR is demonstrated. The importance of bypass voids when determining void feedback is also shown. Calculations were done using a point model, i.e., feedback was expressed in terms of reactivity coefficients which were determined for individual four-bundle configurations and then appropriately combined to yield reactor results. For overpower transients, the inclusion of the void effect of control rods reduces the Doppler feedback. For overpressurization transients, the inclusion of the effect of bypass voids will increase the reactivity due to void collapse. (author)

  14. Probability Distribution of Long-run Indiscriminate Felling of Trees in ...

    African Journals Online (AJOL)

    Bright

    conditionally independent of every prior state given the current state (Obodos, ... of events or experiments in which the probability of occurrence for an event ... represent the exhaustive and mutually exclusive outcomes (states) of a system at.

  15. Anderson transition on the Cayley tree as a traveling wave critical point for various probability distributions

    International Nuclear Information System (INIS)

    Monthus, Cecile; Garel, Thomas

    2009-01-01

    For Anderson localization on the Cayley tree, we study the statistics of various observables as a function of the disorder strength W and the number N of generations. We first consider the Landauer transmission T_N. In the localized phase, its logarithm follows the traveling wave form ln T_N ≃ ⟨ln T_N⟩ + ln t*, where (i) the disorder-averaged value moves linearly, ⟨ln T_N⟩ ≃ −N/ξ_loc, and the localization length diverges as ξ_loc ∼ (W − W_c)^(−ν_loc) with ν_loc = 1, and (ii) t* is a fixed random variable with a power-law tail P*(t*) ∼ 1/(t*)^(1+β(W)) for large t*, with 0 < β(W) < 1, so that all moments of T_N are governed by rare events. In the delocalized phase, the transmission T_N remains a finite random variable as N → ∞, and near criticality we measure the essential singularity ⟨ln T_∞⟩ ∼ −|W_c − W|^(−κ_T) with κ_T ∼ 0.25. We then consider the statistical properties of normalized eigenstates Σ_x |ψ(x)|² = 1, in particular the entropy S = −Σ_x |ψ(x)|² ln |ψ(x)|² and the inverse participation ratios (IPR) I_q = Σ_x |ψ(x)|^(2q). In the localized phase, the typical entropy diverges as S_typ ∼ (W − W_c)^(−ν_S) with ν_S ∼ 1.5, whereas it grows linearly, S_typ(N) ∼ N, in the delocalized phase. Finally, for the IPR we explain how closely related variables propagate as traveling waves in the delocalized phase. In conclusion, both the localized phase and the delocalized phase are characterized by the traveling wave propagation of some probability distributions, and the Anderson localization/delocalization transition then corresponds to a traveling/non-traveling critical point. Moreover, our results point toward the existence of several length scales that diverge with different exponents ν at criticality

  16. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks

    International Nuclear Information System (INIS)

    Zhuang Jiancang; Ogata, Yosihiko

    2006-01-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata et al., Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method

  18. Simulation of dust voids in complex plasmas

    Science.gov (United States)

    Goedheer, W. J.; Land, V.

    2008-12-01

    In dusty radio-frequency (RF) discharges under micro-gravity conditions, a void, a dust-free region in the discharge center, is often observed. This void is generated by the drag of the positive ions pulled out of the discharge by the electric field. We have developed a hydrodynamic model of dusty RF discharges in argon to study the behaviour of the void and the interaction between the dust and the plasma background. The model is based on a recently developed theory for the ion drag force and the charging of the dust. With this model, we studied the plasma inside the void and obtained an understanding of the way it is sustained by heat generated in the surrounding dust cloud. When this heating mechanism is suppressed by lowering the RF power, the plasma density inside the void decreases, even below the level where the void collapses, as was recently shown in experiments on board the International Space Station. In this paper we present results of simulations of this collapse. At reduced power levels the collapsed central cloud behaves as an electronegative plasma with correspondingly low time-averaged electric fields. This enables the creation of relatively homogeneous Yukawa balls containing more than 100 000 particles. On Earth, thermophoresis can be used to balance gravity and obtain similar dust distributions.

  19. Simulation of n-qubit quantum systems. IV. Parametrizations of quantum states, matrices and probability distributions

    Science.gov (United States)

    Radtke, T.; Fritzsche, S.

    2008-11-01

    Quantum information science has contributed to our understanding of quantum mechanics and has provided new and efficient protocols based on the use of entangled quantum states. To determine the behavior and entanglement of n-qubit quantum registers, symbolic and numerical simulations need to be applied in order to analyze how these quantum information protocols work and which role the entanglement plays. Solution method: Using the computer algebra system Maple, we have developed a set of procedures that support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (nonunitary) quantum operations that act upon the quantum registers. With the parameterization of various frequently applied objects implemented in the present version, the program now facilitates a wider range of symbolic and numerical studies. All commands can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits. Reasons for new version: In the first version of the FEYNMAN program [1], we implemented the data structures and tools that are necessary to create, manipulate and analyze the state of quantum registers. Later [2,3], support was added to deal with quantum operations (noisy channels) as an ingredient which is essential for studying the effects of decoherence. With the present extension, we add a number of parametrizations of objects frequently utilized in decoherence and entanglement studies, such as Hermitian and unitary matrices, probability distributions, or various kinds of quantum states. This extension therefore provides the basis, for example, for the optimization of a given function over the set of pure states or the simple generation of random objects. Running time: Most commands that act upon quantum registers with five or fewer qubits take ⩽10 seconds of processor time on a Pentium 4 processor

  20. Stochastic Economic Dispatch with Wind using Versatile Probability Distribution and L-BFGS-B Based Dual Decomposition

    DEFF Research Database (Denmark)

    Huang, Shaojun; Sun, Yuanzhang; Wu, Qiuwei

    2018-01-01

    This paper focuses on economic dispatch (ED) in power systems with intermittent wind power, which is a very critical issue in future power systems. A stochastic ED problem is formed based on the recently proposed versatile probability distribution (VPD) of wind power. The problem is then analyzed...

  1. Benchmarking PARTISN with Analog Monte Carlo: Moments of the Neutron Number and the Cumulative Fission Number Probability Distributions

    Energy Technology Data Exchange (ETDEWEB)

    O'Rourke, Patrick Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-27

    The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
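
    The analog Monte Carlo idea is easy to illustrate. The toy sketch below follows a single neutron chain with an assumed fission probability and multiplicity PMF (both illustrative values, not the report's data) and tallies the cumulative fission number, whose sample moments estimate the PDF moments benchmarked in the report.

```python
import numpy as np

rng = np.random.default_rng(2)
p_fission = 0.3                                      # else capture/leakage
nu_pmf = np.array([0.05, 0.2, 0.45, 0.25, 0.05])     # P(nu = 0..4), mean 2.05
# k = p_fission * mean(nu) = 0.615 < 1, so chains terminate (subcritical)

def chain_fissions(max_fissions=10_000):
    """Follow one analog neutron chain; return the cumulative fission number."""
    neutrons, fissions = 1, 0
    while neutrons > 0 and fissions < max_fissions:
        if rng.random() < p_fission:
            fissions += 1
            neutrons += rng.choice(len(nu_pmf), p=nu_pmf) - 1  # -1: parent absorbed
        else:
            neutrons -= 1                            # capture or leakage
    return fissions

samples = np.array([chain_fissions() for _ in range(50_000)])
mean, var = samples.mean(), samples.var()            # first two moments of the PDF
```

    A histogram of the samples is the cumulative fission number PDF; higher moments follow the same way.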

  2. Dislocation and void segregation in copper during neutron irradiation

    DEFF Research Database (Denmark)

    Singh, Bachu Narain; Leffers, Torben; Horsewell, Andy

    1986-01-01

    ); the irradiation experiments were carried out at 250 °C. The irradiated specimens were examined by transmission electron microscopy. At both doses, the irradiation-induced structure was found to be highly segregated; the dislocation loops and segments were present in the form of irregular walls and the voids...... density, the void swelling rate was very high (approximately 2.5% per dpa). The implications of the segregated distribution of sinks for void formation and growth are briefly discussed....

  3. Assessing the Adequacy of Probability Distributions for Estimating the Extreme Events of Air Temperature in Dabaa Region

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2015-01-01

    Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These distributions include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated, as performance indicators, to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicates that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable, accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators shows the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October

  4. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of the void fraction of water in subcooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the axial void fraction distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly subcooled boiling region. It was verified that the net generation of vapour is well represented by the Saha-Zuber model. The selected models for the void fraction calculation give adequate results but with a tendency to overestimate the experimental results, the homogeneous models in particular. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt]
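
    The recommended drift-flux approach reduces to the Zuber-Findlay relation, shown below next to the homogeneous (no-slip) model that the comparison found to overestimate the data; the distribution parameter C0 and drift velocity v_gj are typical illustrative values, not ones fitted in the paper.

```python
def drift_flux_void_fraction(x, G, rho_g, rho_l, C0=1.13, v_gj=0.2):
    """Zuber-Findlay drift-flux void fraction.
    x: flow quality [-], G: mass flux [kg/(m^2 s)],
    rho_g, rho_l: vapour/liquid densities [kg/m^3],
    C0: distribution parameter [-], v_gj: drift velocity [m/s]."""
    return x / (C0 * (x + (1.0 - x) * rho_g / rho_l) + rho_g * v_gj / G)

def homogeneous_void_fraction(x, rho_g, rho_l):
    """Homogeneous (no-slip) model; tends to overestimate the void fraction."""
    return 1.0 / (1.0 + (1.0 - x) / x * rho_g / rho_l)
```

    Because C0 > 1 and v_gj > 0 both reduce the predicted void fraction relative to the homogeneous limit (C0 = 1, v_gj = 0), the ordering of the two models matches the trend reported above.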

  5. Void shape effects and voids starting from cracked inclusion

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2011-01-01

    Numerical, axisymmetric cell model analyses are used to study the growth of voids in ductile metals, until the mechanism of coalescence with neighbouring voids sets in. A special feature of the present analyses is that extremely small values of the initial void volume fraction are considered, dow...

  6. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers mapped as the design variables for global optimization. Much existing research has indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs, from the new perspective of both the probability distribution property and the search speed of chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency with a COA, it is recommended to adopt an appropriate chaotic map generating the desired chaotic sequences with uniform or nearly uniform probability distribution and a large Lyapunov exponent.
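
    Both quantities the paper uses to characterize a chaotic map are short computations for the Logistic map. A sketch: at μ = 4 the invariant PDF is the non-uniform arcsine density 1/(π√(x(1−x))) and the Lyapunov exponent is exactly ln 2, which the time average below reproduces.

```python
import numpy as np

def logistic_sequence(n, x0=0.3, mu=4.0, burn=100):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    x = x0
    for _ in range(burn):                  # discard transient
        x = mu * x * (1.0 - x)
    out = np.empty(n)
    for k in range(n):
        x = mu * x * (1.0 - x)
        out[k] = x
    return out

def lyapunov_logistic(n=100_000, x0=0.3, mu=4.0):
    """Lyapunov exponent as the time average of ln|f'(x)|, f'(x) = mu(1 - 2x);
    for mu = 4 the exact value is ln 2 ~ 0.693."""
    x = logistic_sequence(n, x0, mu)
    return np.mean(np.log(np.abs(mu * (1.0 - 2.0 * x))))
```

    Histogramming logistic_sequence output shows the U-shaped arcsine PDF, which is why the paper recommends maps with flatter invariant densities for search.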

  7. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). These results demonstrate the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.

  8. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China during 1960-2012, by using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate models, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
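
    For an Archimedean copula such as the Clayton, sampling dependent duration/severity pairs needs only the conditional-inversion formula. A sketch with illustrative gamma marginals and θ = 2 (Kendall's τ = θ/(θ+2) = 0.5); none of these values are fitted to the Weihe data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def clayton_sample(n, theta):
    """Sample (u1, u2) from a Clayton copula by conditional inversion:
    u2 = [u1^(-theta) * (t^(-theta/(1+theta)) - 1) + 1]^(-1/theta)."""
    u1 = rng.random(n)
    t = rng.random(n)
    u2 = (u1 ** (-theta) * (t ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

# dependent drought duration/severity with gamma marginals (illustrative values)
u1, u2 = clayton_sample(100_000, theta=2.0)
duration = stats.gamma.ppf(u1, a=2.0, scale=3.0)
severity = stats.gamma.ppf(u2, a=1.5, scale=5.0)
```

    Joint and co-occurrence return periods then follow from the empirical joint exceedance probabilities of the sampled pairs, which is why the choice of copula and marginals propagates directly into the return-period estimates discussed above.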

  9. In favor of general probability distributions: lateral prefrontal and insular cortices respond to stimulus inherent, but irrelevant differences.

    Science.gov (United States)

    Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A

    2016-04-01

    A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected, or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words, even when prior and contextual information biases one interpretation only. These results suggest a probability distribution whereby all outcomes and their associated probabilities of occurrence, even very low ones, are represented and maintained.

  10. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
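
    One special case of such a formula is compact enough to show here: for a single-interval yes/no task with equal priors, the maximum-likelihood observer attains Pc_max = ½ ∫ max(f0(x), f1(x)) dx for arbitrary densities f0 and f1. The sketch below is this textbook special case, not the authors' general formula or their MATLAB code.

```python
import numpy as np

def max_pc_yes_no(f0, f1, dx):
    """Maximum proportion correct of a maximum-likelihood observer in a
    single-interval yes/no task with equal priors:
    Pc_max = 0.5 * integral of max(f0(x), f1(x)) dx."""
    return 0.5 * np.sum(np.maximum(f0, f1)) * dx

x = np.linspace(-8.0, 9.0, 17001)
dx = x[1] - x[0]
f0 = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)           # noise density
f1 = np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2.0 * np.pi)   # signal, d' = 1
print(max_pc_yes_no(f0, f1, dx))   # ~0.6915 = Phi(d'/2), the Gaussian benchmark
```

    Swapping in uniform or other non-Gaussian densities on the same grid illustrates the article's point that the computation needs only a sufficiently fine discretization of the densities, not Gaussian assumptions.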

  11. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber-Shiu functions and dependence.

  12. Combining Marginal Probability Distributions via Minimization of Weighted Sum of Kullback-Leibler Divergences

    Czech Academy of Sciences Publication Activity Database

    Kracík, Jan

    2011-01-01

    Roč. 52, č. 6 (2011), s. 659-671 ISSN 0888-613X R&D Projects: GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords: combining probabilities * Kullback-Leibler divergence * maximum likelihood * expert opinions * linear opinion pool Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.948, year: 2011 http://library.utia.cas.cz/separaty/2011/AS/kracik-0359399.pdf
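
    One orientation of this minimization has a classical closed form: over a common finite space, argmin_p Σᵢ wᵢ KL(qᵢ ‖ p) is the linear opinion pool Σᵢ wᵢ qᵢ mentioned in the keywords. The sketch below checks this numerically; the paper's actual setting, which combines marginals of different variables, is more general than this toy case.

```python
import numpy as np
from scipy.optimize import minimize

def kl(q, p):
    """Kullback-Leibler divergence KL(q || p) for strictly positive pmfs."""
    return np.sum(q * np.log(q / p))

# two expert opinions over 4 states, with weights summing to one
q = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.2, 0.4, 0.2, 0.2]])
w = np.array([0.6, 0.4])

pool = w @ q                                   # closed form: linear opinion pool

def objective(z):
    p = np.exp(z) / np.exp(z).sum()            # softmax keeps p on the simplex
    return sum(wi * kl(qi, p) for wi, qi in zip(w, q))

res = minimize(objective, np.zeros(4))         # unconstrained BFGS on logits
p_hat = np.exp(res.x) / np.exp(res.x).sum()
assert np.allclose(p_hat, pool, atol=1e-4)     # numerical minimum = linear pool
```

    Reversing the divergence orientation, Σᵢ wᵢ KL(p ‖ qᵢ), instead yields the normalized weighted geometric mean, which is why the choice of orientation matters for combining opinions.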

  13. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....

  14. Medium-range correlation of Ag ions in superionic melts of Ag₂Se and AgI by reverse Monte Carlo structural modelling: connectivity and void distribution

    Energy Technology Data Exchange (ETDEWEB)

    Tahara, Shuta; Ohno, Satoru [Faculty of Pharmacy, Niigata University of Pharmacy and Applied Life Sciences, 265-1 Higashijima, Akiha-ku, Niigata 956-8603 (Japan); Ueno, Hiroki; Takeda, Shin'ichi [Department of Physics, Faculty of Sciences, Kyushu University 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581 (Japan); Ohara, Koji; Kohara, Shinji [Research and Utilization Division, Japan Synchrotron Radiation Research Institute (JASRI, SPring-8), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Kawakita, Yukinobu [J-PARC Center, Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan)

    2011-06-15

    High-energy x-ray diffraction measurements on molten Ag₂Se were performed. Partial structure factors and radial distribution functions were deduced by reverse Monte Carlo (RMC) structural modelling on the basis of our new x-ray and earlier published neutron diffraction data. These partial functions were compared with those of molten AgI. Both AgI and Ag₂Se have a superionic solid phase prior to melting. New RMC structural modelling for molten AgI was performed to revise our previous model with a bond-angle restriction to reduce the number of unphysical Ag triangles. The refined model of molten AgI revealed that isolated unbranched chains formed by Ag ions are the cause of the medium-range order of Ag. In contrast with molten AgI, molten Ag₂Se has 'cage-like' structures with approximately seven Ag ions surrounding a Se ion. Connectivity analysis revealed that most of the Ag ions in molten Ag₂Se are located within 2.9 Å of each other and only small voids are found, which is in contrast to the wide distribution of Ag-void radii in molten AgI. It is conjectured that the collective motion of Ag ions through small voids is required to realize the well-known fast diffusion of Ag ions in molten Ag₂Se, which is comparable to that in molten AgI.

  15. Probability distribution of dose rates in the body tissue as a function of the rhythm of Sr-90 administration and the age of animals

    International Nuclear Information System (INIS)

    Rasin, I.M.; Sarapul'tsev, I.A.

    1975-01-01

    The probability distribution of tissue radiation doses in the skeleton was studied in experiments on swine and dogs. When Sr-90 was introduced into the organism from the day of birth up to 90 days of age, the dose rate probability distribution is characterized by one or, for adult animals, by two independent aggregates. Each of these aggregates corresponds to the normal distribution law

  16. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  17. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    Science.gov (United States)

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
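
    The variational step behind the exponential forms can be written in two lines. A sketch for a single mean constraint, with u standing for particle velocity (or travel time) and μ for its equilibrium mean:

```latex
% Maximize Shannon entropy subject to normalization and a fixed mean:
\max_{p}\; -\int_0^{\infty} p(u)\,\ln p(u)\,\mathrm{d}u
\quad \text{s.t.} \quad
\int_0^{\infty} p(u)\,\mathrm{d}u = 1,
\qquad
\int_0^{\infty} u\,p(u)\,\mathrm{d}u = \mu .
% Stationarity of the Lagrangian gives an exponential family, and the
% two constraints fix the multipliers:
p(u) = e^{-1-\lambda_0-\lambda_1 u}
\;\Longrightarrow\;
p(u) = \frac{1}{\mu}\, e^{-u/\mu}.
```

    Constraining instead the mean absolute value of a zero-mean variable yields the Laplace form quoted for accelerations, and the covariance constraint between hop distances and travel times is what displaces hop distances from exponential toward Weibull.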

  18. On the probability distribution of stock returns in the Mike-Farmer model

    Science.gov (United States)

    Gu, G.-F.; Zhou, W.-X.

    2009-02-01

    Recently, Mike and Farmer have constructed a very powerful and realistic behavioral model to mimic the dynamic process of stock price formation based on the empirical regularities of order placement and cancelation in a purely order-driven market, which can successfully reproduce the whole distribution of returns (not only the well-known power-law tails), together with several other important stylized facts. There are three key ingredients in the Mike-Farmer (MF) model: the long memory of order signs characterized by the Hurst index Hs, the distribution of relative order prices x in reference to the same best price described by a Student distribution (or Tsallis' q-Gaussian), and the dynamics of order cancelation. They showed that different values of the Hurst index Hs and the degrees of freedom αx of the Student distribution can always produce power-law tails in the return distribution fr(r) with different tail exponents αr. In this paper, we study the origin of the power-law tails of the return distribution fr(r) in the MF model, based on extensive simulations with different combinations of the left part L(x) for x < 0 and the right part R(x) for x > 0 of fx(x). We find that power-law tails appear only when L(x) has a power-law tail, no matter whether R(x) has a power-law tail or not. In addition, we find that the distributions of returns in the MF model at different timescales can be well modeled by Student distributions, whose tail exponents are close to the well-known cubic law and increase with the timescale.
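
    A hedged sketch of the kind of tail analysis described above, on synthetic data rather than actual MF-model output: fit a Student distribution to simulated returns and read off the fitted degrees of freedom, which set the tail exponent:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic stand-in for model returns: Student-t with df = 3, whose PDF
        # tails decay as |r|**-(df + 1), i.e. close to the empirical "cubic law".
        returns = stats.t.rvs(df=3.0, size=200_000, random_state=rng)

        df_hat, loc_hat, scale_hat = stats.t.fit(returns)
        print(f"fitted degrees of freedom (controls the tail exponent): {df_hat:.2f}")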

  19. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the process of E-M algorithm iteration, the conditional probability distribution plays a very important role to achieve high quality image. This present work proposes an alternative methodology for the generation of the conditional probability distribution associated to the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and with the application of the reciprocity theorem. (author)
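
    The E-M iteration for emission tomography referenced above is commonly implemented as the multiplicative MLEM update; the following generic sketch uses a made-up dense system matrix, not the MCNP5-derived conditional probabilities proposed in the paper:

        import numpy as np

        def mlem(p, y, n_iter=50):
            """MLEM update lam_j <- (lam_j / s_j) * sum_i p_ij * y_i / (P lam)_i,
            where p[i, j] = P(count in detector i | emission in voxel j)."""
            s = p.sum(axis=0)                    # voxel sensitivities
            lam = np.ones(p.shape[1])            # uniform initial image
            for _ in range(n_iter):
                proj = p @ lam                   # expected counts per detector
                lam *= (p.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(s, 1e-12)
            return lam

        # Toy 2-detector, 3-voxel system with made-up probabilities and counts.
        p = np.array([[0.5, 0.3, 0.2],
                      [0.1, 0.4, 0.6]])
        y = np.array([100.0, 80.0])
        print(mlem(p, y))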

  20. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform, and the neglect of the value of decision makers' (DMs') common opinion in evaluating the criteria information. Secondly, the differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  2. Voids and the Cosmic Web: cosmic depression & spatial complexity

    NARCIS (Netherlands)

    van de Weygaert, Rien; Shandarin, S.; Saar, E.; Einasto, J.

    2016-01-01

    Voids form a prominent aspect of the Megaparsec distribution of galaxies and matter. Not only do they represent a key constituent of the Cosmic Web, they also are one of the cleanest probes and measures of global cosmological parameters. The shape and evolution of voids are highly sensitive to the ...

  3. Theory of void swelling, irradiation creep and growth

    International Nuclear Information System (INIS)

    Wood, M.H.; Bullough, R.; Hayns, M.R.

    Recent progress in our understanding of the fundamental mechanisms involved in swelling, creep and growth of materials subjected to irradiation is reviewed. The topics discussed are: the sink types and their strengths in the lossy continuum; swelling and void distribution analysis, including recent work on void nucleation; and irradiation creep and growth, with zirconium and zircaloy taken as examples.

  4. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPEs) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid held at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models separating geometrical and propagation effects.
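
    For the line-source case, which the paper solves analytically, the distance distribution is easy to cross-check by Monte Carlo; a sketch under an assumed geometry (fault along the x-axis; all coordinates and lengths are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        fault_len, rupture_len = 100.0, 20.0     # km, illustrative
        site = np.array([30.0, 15.0])            # site position; fault on x-axis

        # Rupture start uniform over the feasible range along the fault trace.
        starts = rng.uniform(0.0, fault_len - rupture_len, size=100_000)

        # Shortest distance from the site to each rupture segment [s, s + L].
        x_closest = np.clip(site[0], starts, starts + rupture_len)
        dist = np.hypot(site[0] - x_closest, site[1])

        print(f"P(R <= 20 km) = {np.mean(dist <= 20.0):.3f}")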

  5. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and the loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different from the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  6. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with a single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP, calculated using the predicted parameters, to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
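
    The underlying computation marginalizes the single-α Poisson TCP over the assumed radiosensitivity distribution; a minimal sketch with a log-normal α, using illustrative parameter values rather than the fitted ones:

        import numpy as np
        from scipy import stats

        def tcp_lognormal(dose, n_clonogens=1e7, mu=np.log(0.25), sigma=0.4,
                          n_mc=100_000, seed=2):
            """Poisson TCP marginalized over a log-normal radiosensitivity alpha:
            TCP(D) = E_alpha[exp(-N * exp(-alpha * D))]."""
            rng = np.random.default_rng(seed)
            alpha = stats.lognorm.rvs(s=sigma, scale=np.exp(mu), size=n_mc,
                                      random_state=rng)
            return np.mean(np.exp(-n_clonogens * np.exp(-alpha * dose)))

        for d in (40.0, 60.0, 80.0):             # total dose in Gy
            print(f"D = {d:4.0f} Gy  TCP = {tcp_lognormal(d):.3f}")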

  7. Probability Distributions for Cyclone Key Parameters and Cyclonic Wind Speed for the East Coast of Indian Region

    Directory of Open Access Journals (Sweden)

    Pradeep K. Goyal

    2011-09-01

    This paper presents a study of the probabilistic distribution of key cyclone parameters and of cyclonic wind speed, carried out by analyzing the cyclone track records obtained from the India Meteorological Department for the east coast region of India. The dataset of historical landfalling storm tracks in India from 1975-2007, with latitude/longitude and landfall locations, is used to map the cyclone tracks in the region of study. Statistical tests were performed to find the best-fit distribution to the track data for each cyclone parameter. These parameters include the central pressure difference, the radius of maximum wind speed, the translation velocity, and the track angle with the site, and are used to generate digitally simulated cyclones using wind field simulation techniques. For this, different sets of values for all the cyclone key parameters are generated randomly from their probability distributions. Using these simulated values of the cyclone key parameters, the distribution of wind velocity at a particular site is obtained. The same distribution of wind velocity at the site is also obtained from actual track records and using the distributions of the cyclone key parameters as published in the literature. The simulated distribution is compared with the wind speed distributions obtained from actual track records. The findings are useful in cyclone disaster mitigation.

  8. Multiparameter probability distributions for heavy rainfall modeling in extreme southern Brazil

    Directory of Open Access Journals (Sweden)

    Samuel Beskow

    2015-09-01

    New hydrological insights for the region: The Anderson–Darling and Filliben tests were the most restrictive in this study. Based on the Anderson–Darling test, it was found that the Kappa distribution presented the best performance, followed by the GEV. This finding provides evidence that these multiparameter distributions result, for the region of study, in greater accuracy for the generation of intensity–duration–frequency curves and the prediction of peak streamflows and design hydrographs. As a result, this finding can support the design of hydraulic structures and flood management in river basins.

  9. Assessment of fragment projection hazard: probability distributions for the initial direction of fragments.

    Science.gov (United States)

    Tugnoli, Alessandro; Gubinelli, Gianfilippo; Landucci, Gabriele; Cozzani, Valerio

    2014-08-30

    The evaluation of the initial direction and velocity of the fragments generated in the fragmentation of a vessel due to internal pressure provides important information for the assessment of damage caused by fragments, in particular within the quantitative risk assessment (QRA) of chemical and process plants. In the present study an approach is proposed for the identification and validation of probability density functions (pdfs) for the initial direction of the fragments. A detailed review of a large number of past accidents provided the background information for the validation procedure. A specific method was developed for the validation of the proposed pdfs. Validated pdfs were obtained for both the vertical and horizontal angles of projection and for the initial velocity of the fragments.

  10. Tree mortality estimates and species distribution probabilities in southeastern United States forests

    Science.gov (United States)

    Martin A. Spetich; Zhaofei Fan; Zhen Sui; Michael Crosby; Hong S. He; Stephen R. Shifley; Theodor D. Leininger; W. Keith Moser

    2017-01-01

    Stresses to trees under a changing climate can lead to changes in forest tree survival, mortality and distribution.  For instance, a study examining the effects of human-induced climate change on forest biodiversity by Hansen and others (2001) predicted a 32% reduction in loblolly–shortleaf pine habitat across the eastern United States.  However, they also...

  11. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and, consequently, the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets, and to propose the most suitable approach. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the different model parameter sets obtained for continuous simulation of discharge in an independent validation period, and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is the ...

  12. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    Science.gov (United States)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus time-varying mesh stiffness (TVMS) can be treated as an indicator of gear health conditions. This study is devoted to investigating the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height, respectively. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, the influences of tooth pitting on TVMS are analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for the TVMS evaluation of pitting gears.
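
    The pit model described above (uniform across the tooth width, normal across the tooth height) is straightforward to sample; a sketch with hypothetical tooth dimensions and pit counts:

        import numpy as np

        rng = np.random.default_rng(3)
        tooth_width, tooth_height = 20.0, 8.0    # mm, hypothetical
        pitch_line = 0.5 * tooth_height          # pits cluster around this height
        n_pits = 200                             # larger for more severe pitting

        # Pit centres: uniform in the width direction, normal in the height one.
        w = rng.uniform(0.0, tooth_width, n_pits)
        h = rng.normal(loc=pitch_line, scale=0.1 * tooth_height, size=n_pits)
        h = np.clip(h, 0.0, tooth_height)        # keep pits on the tooth flank

        print(f"mean pit height: {h.mean():.2f} mm (pitch line at {pitch_line:.2f} mm)")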

  13. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  14. Displacive stability of a void in a void lattice

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    It has recently been suggested that the stability of the void-lattice structure in irradiated metals may be attributed to the effect of the overlapping of the point-defect diffusion fields associated with each void. It is shown here, however, that the effect is much too weak. When one void is displaced from its lattice site, the displacement is shown to relax to zero as proposed, but a conservative estimate indicates that the characteristic time is equivalent to an irradiation dose of the order of 300 displacements per atom which is generally much greater than the dose necessary for void-lattice formation

  15. Confidence limits with multiple channels and arbitrary probability distributions for sensitivity and expected background

    CERN Document Server

    Perrotta, A

    2002-01-01

    A MC method is proposed to compute upper limits, in a pure Bayesian approach, when the errors associated with the experimental sensitivity and expected background content are not Gaussian distributed or not small enough to apply the usual approximations. It is relatively easy to extend the procedure to the multichannel case (for instance when different decay branchings, luminosities or experiments have to be combined). Some of the searches for supersymmetric particles performed in the DELPHI experiment at the LEP electron-positron collider use such a procedure to propagate systematics into the calculation of cross-section upper limits. One of these searches is described as an example.
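
    A minimal Monte Carlo version of such a procedure (flat prior on the signal, arbitrary user-supplied draws for the sensitivity and background; a sketch, not the DELPHI implementation):

        import numpy as np
        from scipy import stats

        def bayes_upper_limit(n_obs, eff_draws, bkg_draws, cl=0.95):
            """Marginalize the Poisson likelihood over MC draws of the sensitivity
            and background, then return the CL upper limit on the signal s
            assuming a flat prior."""
            s_grid = np.linspace(0.0, 30.0, 601)
            mu = eff_draws[:, None] * s_grid[None, :] + bkg_draws[:, None]
            like = stats.poisson.pmf(n_obs, mu).mean(axis=0)   # marginal likelihood
            post = np.cumsum(like)
            post /= post[-1]
            return s_grid[np.searchsorted(post, cl)]

        rng = np.random.default_rng(4)
        eff = rng.normal(1.0, 0.1, 5_000).clip(min=1e-6)       # any pdf will do
        bkg = rng.gamma(shape=4.0, scale=0.5, size=5_000)      # skewed background
        print(f"95% CL upper limit: {bayes_upper_limit(3, eff, bkg):.2f}")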

  16. The mean distance to the nth neighbour in a uniform distribution of random points: an application of probability theory

    International Nuclear Information System (INIS)

    Bhattacharyya, Pratip; Chakrabarti, Bikas K

    2008-01-01

    We study different ways of determining the mean distance ⟨r_n⟩ between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating ⟨r_n⟩. Next, we describe two alternative means of deriving the exact expression of ⟨r_n⟩: we review the method using absolute probability and develop an alternative method using conditional probability. Finally, we obtain an approximation to ⟨r_n⟩ from the mean volume between the reference point and its nth neighbour and compare it with the heuristic and exact results.
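
    The exact result referred to is the standard expression ⟨r_n⟩ = [Γ(n + 1/D)/Γ(n)] (c_D ρ)^(-1/D), with ρ the point density and c_D the volume of the unit D-ball; a quick Monte Carlo cross-check (finite box, so small edge effects are expected):

        import numpy as np
        from scipy.special import gamma as G

        D, n, rho = 3, 2, 1.0                    # dimension, neighbour rank, density
        c_D = np.pi ** (D / 2) / G(D / 2 + 1)    # volume of the unit D-ball
        exact = G(n + 1 / D) / G(n) * (c_D * rho) ** (-1 / D)

        rng = np.random.default_rng(5)
        L, trials, dists = 10.0, 2000, []
        for _ in range(trials):
            pts = rng.uniform(0.0, L, size=(int(rho * L**D), D))
            ref = np.full(D, L / 2)              # central point limits edge bias
            r = np.sort(np.linalg.norm(pts - ref, axis=1))
            dists.append(r[n - 1])               # distance to the nth neighbour
        print(f"exact {exact:.3f}  vs  Monte Carlo {np.mean(dists):.3f}")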

  17. The Visualization of the Space Probability Distribution for a Particle Moving in a Double Ring-Shaped Coulomb Potential

    Directory of Open Access Journals (Sweden)

    Yuan You

    2018-01-01

    The analytical solutions to a double ring-shaped Coulomb potential (RSCP) are presented. The visualizations of the space probability distribution (SPD) are illustrated for the two-dimensional (contour) and three-dimensional (isosurface) cases. The quantum numbers (n, l, m) are mainly relevant for those quasi-quantum numbers (n′, l′, m′) via the double RSCP parameter c. The SPDs are of circular ring shape in spherical coordinates. The properties of the relative probability values (RPVs) P are also discussed. For example, when we consider the special case (n, l, m) = (6, 5, 0), the SPD moves towards the two poles of the z-axis when P increases. Finally, we discuss the different cases for the potential parameter b, which takes negative and positive values for c > 0. Compared with the particular case b = 0, the SPDs are shrunk for b = -0.5, while they are spread out for b = 0.5.

  18. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the potts model.

    Science.gov (United States)

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = N_c ln 2 = (4 ln 2 / π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.

  19. The probability distribution of the delay time of a wave packet in strong overlap of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshitz, V.L.

    1982-01-01

    The time development of nuclear reactions at a large density of levels is investigated using the theory of overlapping resonances. The analytical expression for the function describing the time-delay probability distribution of a wave packet is obtained in the framework of the model of n equivalent channels. It is shown that the relative fluctuation of the time delay at the stage of the compound nucleus is small. The possibility of increasing the duration of nuclear reactions with rising excitation energy is discussed.

  20. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    Science.gov (United States)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  1. Numerical modelling of local deposition patterns, activity distributions and cellular hit probabilities of inhaled radon progenies in human airways

    International Nuclear Information System (INIS)

    Farkas, A.; Balashazy, I.; Szoeke, I.

    2003-01-01

    The general objective of our research is modelling the biophysical processes of the effects of inhaled radon progenies. This effort is related to the rejection or support of the linear no-threshold (LNT) dose-effect hypothesis, which seems to be one of the most challenging tasks of current radiation protection. Our approximation and results may also serve as a useful tool for lung cancer models. In this study, deposition patterns, activity distributions and alpha-hit probabilities of inhaled radon progenies in the large airways of the human tracheobronchial tree are computed. The airflow fields and related particle deposition patterns strongly depend on the shape of the airway geometry and the breathing pattern. Computed deposition patterns of attached and unattached radon progenies are strongly inhomogeneous, creating hot spots at the carinal regions and downstream of the inner sides of the daughter airways. The results suggest that in the vicinity of the carinal regions the multiple-hit probabilities are quite high even at low average doses and increase exponentially in the low-dose range. Thus, even so-called low doses may represent high doses for large clusters of cells. The cell transformation probabilities are much higher in these regions, and this phenomenon cannot be modelled with average burdens. (authors)

  2. Joint Probability Distribution Function for the Electric Microfield and its Ion-Octupole Inhomogeneity Tensor

    International Nuclear Information System (INIS)

    Halenka, J.; Olchawa, W.

    2005-01-01

    Experiments (see, e.g., [W. Wiese, D. Kelleher, and D. Paquette, Phys. Rev. A 6, 1132 (1972); V. Helbig and K. Nich, J. Phys. B 14, 3573 (1981); J. Halenka, Z. Phys. D 16, 1 (1990); S. Djurovic, D. Nikolic, I. Savic, S. Sorge, and A.V. Demura, Phys. Rev. E 71, 036407 (2005)]) show that the hydrogen lines formed in plasmas with N_e ≥ 10^16 cm^-3 are asymmetrical. The inhomogeneity of the ionic microfield and the higher-order corrections (quadratic and beyond) in perturbation theory are the reason for this asymmetry. So far, the ion-emitter quadrupole interaction and the quadratic Stark effect have been included in calculations. Recent work shows that a significant discrepancy between calculations and measurements occurs in the wings of the H-beta line in dense plasmas. It should be stressed that, e.g., for the energy operator, the correction raised by the quadratic Stark effect is proportional to the same power of the emitter-perturber distance as the corrections caused by the emitter-perturber octupole interaction and the quadratic correction from the emitter-perturber quadrupole interaction. Thus, a model of the profile calculation is consistent only if all the aforementioned corrections are included simultaneously. Such calculations are planned in a future paper. Statistics of the octupole inhomogeneity tensor in a plasma are needed in the first step of such calculations. In this paper, the distribution functions of the octupole inhomogeneity have been calculated for the first time, using the Mayer-Mayer cluster expansion method, similarly to the quadrupole function in the paper [J. Halenka, Z. Phys. D 16, 1 (1990)]. The reduced scale of the microfield strength is defined with respect to the Holtsmark normal field and the mean ion-ion distance, and the screening parameter is defined through the electronic Debye radius. (author)

  3. Introduction and application of non-stationary standardized precipitation index considering probability distribution function and return period

    Science.gov (United States)

    Park, Junehyeong; Sung, Jang Hyun; Lim, Yoon-Jin; Kang, Hyun-Suk

    2018-05-01

    The widely used meteorological drought index, the Standardized Precipitation Index (SPI), basically assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI is proposed that considers not only the modified probability distribution parameters but also the return period under the non-stationary process. The results were evaluated for two severe drought cases during the last 10 years in South Korea. The SPI under the non-stationary hypothesis estimated lower drought severity than the stationary SPI, despite the fact that these two past droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which can make the probability distribution wider than before. This implies that drought as expressed by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels in consideration of climate change.
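
    For reference, the stationary SPI itself is obtained by fitting a distribution (commonly a gamma) to aggregated precipitation and mapping its CDF through the standard normal quantile function; a minimal sketch on synthetic data, without the non-stationary parameters introduced in the paper:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        precip = rng.gamma(shape=2.0, scale=40.0, size=360)   # synthetic monthly totals, mm

        # Stationary SPI: gamma CDF of each observation mapped to N(0,1) quantiles.
        a, loc, scale = stats.gamma.fit(precip, floc=0.0)
        spi = stats.norm.ppf(stats.gamma.cdf(precip, a, loc=loc, scale=scale))

        print(f"driest month SPI: {spi.min():.2f}")           # <= -2: extreme drought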

  4. Exploration of probability distribution of velocities of saltating sand particles based on the stochastic particle-bed collisions

    International Nuclear Information System (INIS)

    Zheng Xiaojing; Xie Li; Zhou Youhe

    2005-01-01

    The wind-blown sand saltating movement is mainly categorized into two mechanical processes, that is, the interaction between the moving sand particles and the wind in the saltation layer, and the collisions of incident particles with the sand bed; the latter produces the lift-off velocity of a sand particle moving into saltation. In this Letter a methodology of phenomenological analysis is presented to obtain the probability density (distribution) function (pdf) of the lift-off velocity of sand particles from the sand bed based on the stochastic particle-bed collision. Treating the sand particles as uniform circular disks and employing a 2D collision between an incident particle and the granular bed, we obtain the analytical formulas for the lift-off velocity of ejected and rebound particles in saltation, which are functions of random parameters such as the angle and magnitude of the incident velocity of the impacting particles, the impact and contact angles between the colliding particles, and the creeping velocity of sand particles. By introducing the probability density functions (pdf's) of these parameters in accordance with all possible patterns of the sand bed and all possible particle-bed collisions, and using the standard calculus of the pdfs of multi-dimensional random variables, the pdf's of the lift-off velocities are deduced and expressed in terms of the pdf's of the random parameters in the collisions. The numerical results for the distributions of lift-off velocities show that they agree well with experimental ones.

  5. Extinction probabilities and stationary distributions of mobile genetic elements in prokaryotes: The birth-death-diversification model.

    Science.gov (United States)

    Drakos, Nicole E; Wahl, Lindi M

    2015-12-01

    Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements is included. To explore the implications of diversification for the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and the growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome.

  6. Structural control of void formation in dual phase steels

    DEFF Research Database (Denmark)

    Azuma, Masafumi

    The objective of this study is to explore the void formation mechanisms and to clarify the influence of the hardness and structural parameters (volume fraction, size and morphology) of martensite particles on the void formation and mechanical properties in dual phase steels composed of ferrite...... and (iii) strain localization. The critical strain for void formation depends on hardness of the martensite, but is independent of the volume fraction, shape, size and distribution of the martensite. The strain partitioning between the martensite and ferrite depends on the volume fraction and hardness...... of the martensite accelerates the void formation in the martensite by enlarging the size of voids both in the martensite and ferrite. It is suggested that controlling the hardness and structural parameters associated with the martensite particles such as morphology, size and volume fraction are the essential...

  7. Determination of the equivalent intergranular void ratio - Application to the instability and the critical state of silty sand

    Directory of Open Access Journals (Sweden)

    Nguyen Trung-Kien

    2017-01-01

    This paper presents an experimental study of the mechanical response of natural Camargue silty sand. The analysis of test results used the equivalent intergranular void ratio instead of the global void ratio. The calculation of the equivalent intergranular void ratio requires the determination of the parameter b, which represents, physically, the fraction of active fines participating in the force-chain network, and hence in the strength of the soil. A new formula for determining the parameter b, using an approach based on the coordination number distribution and probability calculation, is proposed. The validation of the developed relationship was done through back-analysis of datasets published in the literature on the effect of fines content on silty sand behavior. It is shown that the equivalent intergranular void ratio calculated with the b value obtained from the new formula provides strong correlations not only with the critical state but also with the onset of instability of various silty sands, in terms of peak deviator stress, peak stress ratio or cyclic resistance. Therefore, it is suggested that the use of the equivalent void ratio concept and the new formula for calculating b is highly desirable in predicting silty sand behavior.

  8. "Dark energy" in the Local Void

    Science.gov (United States)

    Villata, M.

    2012-05-01

    The unexpected discovery of the accelerated cosmic expansion in 1998 has filled the Universe with the embarrassing presence of an unidentified "dark energy", or cosmological constant, devoid of any physical meaning. While this standard cosmology seems to work well at the global level, improved knowledge of the kinematics and other properties of our extragalactic neighborhood indicates the need for a better theory. We investigate whether the recently suggested repulsive-gravity scenario can account for some of the features that are unexplained by the standard model. Through simple dynamical considerations, we find that the Local Void could host an amount of antimatter (~5×10^15 M_⊙) roughly equivalent to the mass of a typical supercluster, thus restoring the matter-antimatter symmetry. The antigravity field produced by this "dark repulsor" can explain the anomalous motion of the Local Sheet away from the Local Void, as well as several other properties of nearby galaxies that seem to require void evacuation and structure formation much faster than expected from the standard model. At the global cosmological level, gravitational repulsion from antimatter hidden in voids can provide more than enough potential energy to drive both the cosmic expansion and its acceleration, with no need for an initial "explosion" and dark energy. Moreover, the discrete distribution of these dark repulsors, in contrast to the uniformly permeating dark energy, can also explain dark flows and other recently observed excessive inhomogeneities and anisotropies of the Universe.

  9. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Shabbir, Aqsa

    2016-07-07

    In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas, in order to contribute to a systematic analysis of edge-localized modes (ELMs). We employ probabilistic models for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. Hence, we start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. In exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsic probabilistic nature of the data. The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas. In regard to the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold. Every point on the manifold represents a single PDF and the distribution parameters provide local coordinates on the manifold. The Fisher information plays the role of a Riemannian metric tensor, enabling calculation of geodesic curves on the surface. The length of such curves yields the geodesic distance (GD) on probabilistic manifolds, which is a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extrapolate several distance-based pattern recognition methods to the manifold setting. This includes k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional ...
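
    For univariate Gaussians the geodesic distance under the Fisher metric has a closed form (the manifold is a rescaled hyperbolic half-plane), which is enough to drive distance-based classifiers such as kNN; a sketch of that special case, with illustrative parameter values:

        import numpy as np

        def fisher_rao_normal(mu1, s1, mu2, s2):
            """Geodesic (Fisher-Rao) distance between N(mu1, s1^2) and N(mu2, s2^2)."""
            num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
            return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * s1 * s2))

        # Nearest-centroid toy example: assign a PDF to the class whose reference
        # PDF is closest in geodesic distance.
        d_a = fisher_rao_normal(0.0, 1.0, 0.2, 1.1)   # distance to class A centroid
        d_b = fisher_rao_normal(0.0, 1.0, 3.0, 0.5)   # distance to class B centroid
        print("closer to class A" if d_a < d_b else "closer to class B")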

  11. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...

  12. Measurements of void fraction by an improved multi-channel conductance void meter

    International Nuclear Information System (INIS)

    Song, Chul-Hwa; Chung, Moon Ki; No, Hee Cheon

    1998-01-01

    An improved multi-channel Conductance Void Meter (CVM) was developed to measure void fraction. Its measuring principle is based upon the differences in electrical conductance of a two-phase mixture due to the variation of void fraction around a sensor. The sensor is designed to be flush-mounted to the inner wall of the test section to avoid flow disturbances. The signal processor with three channels is specially designed so as to minimize the inherent error due to the phase difference between channels. It is emphasized that the guard electrodes are electrically shielded in order not to affect the measurements of the two-phase mixture conductance, but to make the electric fields evenly distributed in the measuring volume. Void fraction is measured for bubbly and slug flow regimes in a vertical air-water loop, and statistical signal processing techniques are applied to show that the CVM has the good dynamic resolution required to investigate the structural development of bubbly flow and the propagation of void waves in a flow channel. (author)

  13. Dropping Probability Reduction in OBS Networks: A Simple Approach

    KAUST Repository

    Elrasad, Amr

    2016-08-01

    In this paper, we propose and derive a slotted-time model for analyzing the burst blocking probability in Optical Burst Switched (OBS) networks. We evaluated the immediate and delayed signaling reservation schemes. The proposed model compares the performance of both just-in-time (JIT) and just-enough-time (JET) signaling protocols associated with void/non-void filling link scheduling schemes. It also considers scenarios with no wavelength conversion and with limited-range wavelength conversion. Our model is distinguished by being adaptable to different offset-time and burst length distributions. We observed that by applying limited-range wavelength conversion, burst blocking probability is reduced by several orders of magnitude and a better burst delivery ratio is yielded compared with full wavelength conversion.

  14. A Case Series of the Probability Density and Cumulative Distribution of Laryngeal Disease in a Tertiary Care Voice Center.

    Science.gov (United States)

    de la Fuente, Jaime; Garrett, C Gaelyn; Ossoff, Robert; Vinson, Kim; Francis, David O; Gelbard, Alexander

    2017-11-01

    To examine the distribution of clinic and operative pathology in a tertiary care laryngology practice, probability density and cumulative distribution analyses (Pareto analysis) were used to rank-order laryngeal conditions seen in an outpatient tertiary care laryngology practice and those requiring surgical intervention during a 3-year period. Among 3783 new clinic consultations and 1380 operative procedures, voice disorders were the most common primary diagnostic category seen in clinic (n = 3223), followed by airway (n = 374) and swallowing (n = 186) disorders. Within the voice strata, the most common primary ICD-9 code used was dysphonia (41%), followed by unilateral vocal fold paralysis (UVFP) (9%) and cough (7%). Among new voice patients, 45% were found to have a structural abnormality. The most common surgical indications were laryngotracheal stenosis (37%), followed by recurrent respiratory papillomatosis (18%) and UVFP (17%). Nearly 55% of patients presenting to a tertiary referral laryngology practice did not have an identifiable structural abnormality in the larynx on direct or indirect examination. The distribution of ICD-9 codes requiring surgical intervention was disparate from that seen in clinic. Application of the Pareto principle may improve resource allocation in laryngology, but these initial results require confirmation across multiple institutions.
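
    Pareto analysis of this kind is simply a rank-ordered cumulative share; a generic sketch with hypothetical diagnosis counts (not the study's data):

        import numpy as np

        # Hypothetical counts per diagnostic category (not the study's data).
        counts = {"dysphonia": 1321, "UVFP": 290, "cough": 226,
                  "stenosis": 180, "RRP": 95}

        labels = sorted(counts, key=counts.get, reverse=True)   # rank order
        vals = np.array([counts[k] for k in labels], dtype=float)
        cum_share = np.cumsum(vals) / vals.sum()                # cumulative share

        for lab, share in zip(labels, cum_share):
            print(f"{lab:<10s} cumulative share = {share:6.1%}")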

  15. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing identification ability. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
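
    A binomial score of the general kind described can be sketched as the tail probability of matching k of n theoretical fragment ions by chance; the values of n, k and the random-match probability below are illustrative, and ProVerB's actual scoring function additionally weights peak intensities:

        import numpy as np
        from scipy import stats

        def match_score(n_fragments, n_matched, p_random):
            """-log10 P(X >= k) for X ~ Binomial(n, p): smaller chance, higher score."""
            p_tail = stats.binom.sf(n_matched - 1, n_fragments, p_random)
            return -np.log10(max(p_tail, 1e-300))

        # 12 of 20 theoretical fragment ions matched, 5% random-match probability.
        print(f"score = {match_score(20, 12, 0.05):.1f}")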

  16. Forecasting the Stock Market with Linguistic Rules Generated from the Minimize Entropy Principle and the Cumulative Probability Distribution Approaches

    Directory of Open Access Journals (Sweden)

    Chung-Ho Su

    2010-12-01

    To forecast a complex and non-linear system, such as a stock market, advanced artificial intelligence algorithms, like neural networks (NNs) and genetic algorithms (GAs), have been proposed as new approaches. However, for the average stock investor, two major disadvantages are argued against these advanced algorithms: (1) the rules generated by NNs and GAs are difficult to apply in investment decisions; and (2) the time complexity of the algorithms to produce forecasting outcomes is very high. Therefore, to provide understandable rules for investors and to reduce the time complexity of forecasting algorithms, this paper proposes a novel model for the forecasting process, which combines two granulating methods (the minimize entropy principle approach and the cumulative probability distribution approach) and a rough set algorithm. The model verification demonstrates that the proposed model surpasses the three listed conventional fuzzy time-series models and a multiple regression model (MLR) in forecast accuracy.

  17. Influence of Coloured Correlated Noises on Probability Distribution and Mean of Tumour Cell Number in the Logistic Growth Model

    Institute of Scientific and Technical Information of China (English)

    HAN Li-Bo; GONG Xiao-Long; CAO Li; WU Da-Jin

    2007-01-01

    An approximate Fokker-Planck equation for the logistic growth model which is driven by coloured correlated noises is derived by applying the Novikov theorem and the Fox approximation. The steady-state probability distribution (SPD) and the mean of the tumour cell number are analysed. It is found that the SPD is the single-extremum configuration when the degree of correlation between the multiplicative and additive noises, λ, is in -1 < λ ≤ 0, and can have double extrema in 0 < λ < 1. A configuration transition occurs because of the variation of noise parameters. A minimum appears in the curve of the mean of the steady-state tumour cell number, 〈x〉, versus λ. The position and the value of the minimum are controlled by the noise-correlated times.

  18. The effect of fog on the probability density distribution of the ranging data of imaging laser radar

    Science.gov (United States)

    Song, Wenhua; Lai, JianCheng; Ghassemlooy, Zabih; Gu, Zhiyong; Yan, Wei; Wang, Chunyong; Li, Zhenhua

    2018-02-01

    This paper outlines theoretical investigations of the probability density distribution (PDD) of ranging data for an imaging laser radar (ILR) system operating at a wavelength of 905 nm under fog conditions. Based on the physical model of the reflected laser pulses from a standard Lambertian target, a theoretical approximate model of the PDD of the ranging data is developed under different fog concentrations, offering improved precision in target ranging and imaging. An experimental test bed for the ILR system is developed and its performance is evaluated using a dedicated indoor atmospheric chamber under homogeneously controlled fog conditions. We show that the measured results are in good agreement with both the accurate and approximate models within a given margin of error of less than 1%.

  20. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    Science.gov (United States)

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, i.e. σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
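
    The linear-variance case is easy to simulate; the following sketch checks the predicted exponential decay rate α = 2/b against a fitted tail slope (parameter values are arbitrary):

        import numpy as np

        rng = np.random.default_rng(7)
        a, b, n = 0.1, 1.0, 1_000_000
        eps = rng.standard_normal(n)

        y = np.zeros(n)
        for t in range(1, n):
            # Linear variance: sigma^2(y) = a + b*|y|
            y[t] = np.sqrt(a + b * abs(y[t - 1])) * eps[t]

        # Tail of the PDF on a semilog scale: log P(y) ~ -alpha*|y|, alpha = 2/b.
        hist, edges = np.histogram(np.abs(y), bins=45, range=(0.5, 5.0), density=True)
        centers = 0.5 * (edges[1:] + edges[:-1])
        mask = hist > 0
        alpha_fit = -np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
        print(f"fitted alpha = {alpha_fit:.2f}, predicted 2/b = {2.0 / b:.2f}")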

  1. PRECISION COSMOGRAPHY WITH STACKED VOIDS

    International Nuclear Information System (INIS)

    Lavaux, Guilhem; Wandelt, Benjamin D.

    2012-01-01

    We present a purely geometrical method for probing the expansion history of the universe from the observation of the shape of stacked voids in spectroscopic redshift surveys. Our method is an Alcock-Paczyński (AP) test based on the average sphericity of voids, premised on the local isotropy of the universe. It works by comparing the temporal extent of cosmic voids along the line of sight with their angular, spatial extent. We describe the algorithm that we use to detect and stack voids in redshift shells on the light cone and test it on mock light cones produced from N-body simulations. We establish a robust statistical model for estimating the average stretching of voids in redshift space and quantify the contamination by peculiar velocities. Finally, assuming that the void statistics that we derive from N-body simulations are preserved when considering galaxy surveys, we assess the capability of this approach to constrain dark energy parameters. We report this assessment in terms of the figure of merit (FoM) of the Dark Energy Task Force and in particular of the proposed Euclid mission, which is particularly suited to this technique since it is a spectroscopic survey. The FoM due to stacked voids from the Euclid wide survey may double that of all other dark energy probes derived from Euclid data alone (combined with Planck priors). In particular, voids seem to outperform baryon acoustic oscillations by an order of magnitude. This result is consistent with simple estimates based on mode counting. The AP test based on stacked voids may be a significant addition to the portfolio of major dark energy probes, and its potential must be studied in detail.
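
    The AP distortion underlying the method compares the redshift extent of an (on average) spherical void with its angular extent; in a flat ΛCDM cosmology this purely geometrical factor depends only on H(z) and D_A(z). A sketch with assumed fiducial parameters:

        import numpy as np
        from scipy import integrate

        H0, Om = 70.0, 0.3              # flat LCDM; fiducial values, km/s/Mpc
        c = 299_792.458                 # speed of light, km/s

        def H(z):
            return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

        def D_A(z):
            """Angular-diameter distance in Mpc (flat universe)."""
            chi, _ = integrate.quad(lambda x: c / H(x), 0.0, z)
            return chi / (1 + z)

        # A sphere of comoving radius R subtends delta_z = R*H(z)/c along the line
        # of sight and delta_theta = R/((1+z)*D_A(z)) on the sky; their ratio is
        # the AP observable, sensitive only to the expansion history.
        for z in (0.3, 0.7, 1.2):
            ap = (1 + z) * D_A(z) * H(z) / c
            print(f"z = {z}: AP factor = {ap:.3f}")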

  2. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    International Nuclear Information System (INIS)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J.

    2012-01-01

    Greenhouse gas (CO2, CH4 and N2O, hereinafter GHG) and criteria air pollutant (CO, NOx, VOC, PM10, PM2.5 and SOx, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on given values, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.
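    As a hedged illustration of the curve-fitting step described above (not the GREET code; the data are synthetic), one might select a best-fit PDF for an emission factor like this:

```python
# Fit candidate distributions to a sample of emission factors and keep the
# best-fitting one, scored here by the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
emission_factors = rng.lognormal(mean=6.0, sigma=0.3, size=200)  # g CO2/kWh, synthetic

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
              "weibull_min": stats.weibull_min}
best = None
for name, dist in candidates.items():
    params = dist.fit(emission_factors)
    ks = stats.kstest(emission_factors, dist.cdf, args=params).statistic
    if best is None or ks < best[2]:
        best = (name, params, ks)

print(f"best fit: {best[0]} (KS statistic {best[2]:.3f})")
```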

  3. Updated greenhouse gas and criteria air pollutant emission factors and their probability distribution functions for electricity generating units

    Energy Technology Data Exchange (ETDEWEB)

    Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)

    2012-07-06

    Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which describe the relative likelihood that the emission factors and energy efficiencies, treated as random variables, take on given values, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.

  4. A void fraction model for annular two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Tandon, T.N.; Gupta, C.P.; Varma, H.K.

    1985-01-01

    An analytical model has been developed for predicting void fraction in two-phase annular flow. In the analysis, the Lockhart-Martinelli method has been used to calculate two-phase frictional pressure drop, and von Karman's universal velocity profile is used to represent the velocity distribution in the annular liquid film. Void fractions predicted by the proposed model are generally in good agreement with available experimental data. This model appears to be as good as Smith's correlation and better than the Wallis and Zivi correlations for computing void fraction.
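    For context, two of the classical correlations the proposed model is compared against can be coded directly. The sketch below implements the Zivi slip-ratio model and the homogeneous (no-slip) model, not the authors' annular-flow model itself; the fluid properties are rough assumptions for water/steam near atmospheric pressure:

```python
# Void fraction alpha from quality x via slip-ratio models:
# alpha = 1 / (1 + ((1-x)/x) * (rho_g/rho_l) * S)
def void_fraction_zivi(x, rho_l, rho_g):
    s = (rho_l / rho_g) ** (1.0 / 3.0)          # Zivi (1964) slip ratio
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l) * s)

def void_fraction_homogeneous(x, rho_l, rho_g):
    return 1.0 / (1.0 + (1.0 - x) / x * (rho_g / rho_l))  # no slip, S = 1

rho_l, rho_g = 958.0, 0.6   # kg/m^3, water/steam near 1 bar (approximate)
for x in (0.01, 0.05, 0.2):
    print(x, round(void_fraction_zivi(x, rho_l, rho_g), 3),
          round(void_fraction_homogeneous(x, rho_l, rho_g), 3))
```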

  5. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1990-12-01

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)
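    The Maximum Entropy step lends itself to a small worked sketch. Under assumptions the abstract does not spell out (a variable known only to lie in an interval [lo, hi], with an expert-supplied mean), the MaxEnt density is a truncated exponential p(x) ∝ exp(−λx), and λ can be solved for numerically:

```python
# Solve for the rate parameter of the maximum-entropy density on [lo, hi]
# whose mean matches a subject-matter expert's estimate.
import numpy as np
from scipy.optimize import brentq

def maxent_mean(lam, lo, hi):
    # Mean of p(x) ~ exp(-lam*x) truncated to [lo, hi]
    if abs(lam) < 1e-12:
        return 0.5 * (lo + hi)                   # lam -> 0 gives the uniform density
    z = np.exp(-lam * lo) - np.exp(-lam * hi)
    num = (lo * lam + 1) * np.exp(-lam * lo) - (hi * lam + 1) * np.exp(-lam * hi)
    return num / (lam * z)

lo, hi, target_mean = 0.0, 10.0, 3.0             # hypothetical expert inputs
lam = brentq(lambda l: maxent_mean(l, lo, hi) - target_mean, 1e-6, 5.0)
print(f"MaxEnt density p(x) ~ exp(-{lam:.3f} * x) on [{lo}, {hi}]")
```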

  6. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, M S

    1990-12-15

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)

  7. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R.

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information. PMID:25759807

  8. A bioinformatic survey of distribution, conservation, and probable functions of LuxR solo regulators in bacteria.

    Science.gov (United States)

    Subramoni, Sujatha; Florez Salcedo, Diana Vanessa; Suarez-Moreno, Zulma R

    2015-01-01

    LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  9. A bioinformatic survey of distribution, conservation and probable functions of LuxR solo regulators in bacteria

    Directory of Open Access Journals (Sweden)

    Sujatha eSubramoni

    2015-02-01

    Full Text Available LuxR solo transcriptional regulators contain both an autoinducer binding domain (ABD; N-terminal) and a DNA binding Helix-Turn-Helix domain (HTH; C-terminal), but are not associated with a cognate N-acyl homoserine lactone (AHL) synthase coding gene in the same genome. Although a few LuxR solos have been characterized, their distributions as well as their role in bacterial signal perception and other processes are poorly understood. In this study we have carried out a systematic survey of distribution of all ABD containing LuxR transcriptional regulators (QS domain LuxRs) available in the InterPro database (IPR005143), and identified those lacking a cognate AHL synthase. These LuxR solos were then analyzed regarding their taxonomical distribution, predicted functions of neighboring genes and the presence of complete AHL-QS systems in the genomes that carry them. Our analyses reveal the presence of one or multiple predicted LuxR solos in many proteobacterial genomes carrying QS domain LuxRs, some of them harboring genes for one or more AHL-QS circuits. The presence of LuxR solos in bacteria occupying diverse environments suggests potential ecological functions for these proteins beyond AHL and interkingdom signaling. Based on gene context and the conservation levels of invariant amino acids of ABD, we have classified LuxR solos into functionally meaningful groups or putative orthologs. Surprisingly, putative LuxR solos were also found in a few non-proteobacterial genomes which are not known to carry AHL-QS systems. Multiple predicted LuxR solos in the same genome appeared to have different levels of conservation of invariant amino acid residues of ABD questioning their binding to AHLs. In summary, this study provides a detailed overview of distribution of LuxR solos and their probable roles in bacteria with genome sequence information.

  10. Void fraction measurements using neutron radiography

    International Nuclear Information System (INIS)

    Glickstein, S.S.; Vance, W.H.; Joo, H.

    1992-01-01

    Real-time neutron radiography is being evaluated for studying the dynamic behavior of two-phase flow and for measuring void fraction in vertical and inclined water ducts. This technique provides a unique means of visualizing the behavior of fluid flow inside thick metal enclosures. To simulate vapor conditions encountered in a fluid flow duct, an air-water flow system was constructed. Air was injected into the bottom of the duct at flow rates up to 0.47 l/s (1 cfm). The water flow rate was varied between 0 and 3.78 l/min (0–1 gpm). The experiments were performed at the Pennsylvania State University nuclear reactor facility using a real-time neutron radiography camera. With a thermal neutron flux on the order of 10⁶ n/cm²/s directed through the thin duct dimension, the dynamic behavior of the air bubbles was clearly visible through 5 cm (2 in.) thick aluminum support plates placed on both sides of the duct wall. Image analysis techniques were employed to extract void fractions from the data, which were recorded on videotape. This consisted of time averaging 256 video frames and measuring the gray level distribution throughout the region. The distribution of the measured void fraction across the duct was determined for various air/water mixtures. Details of the results of experiments for a variety of air and water flow conditions are presented.
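    A hedged sketch of the gray-level step described above follows; synthetic arrays stand in for the video data, and the reference gray levels are invented. The idea is to time-average the frames and then map gray levels to a chordal void fraction via Lambert-Beer attenuation between all-liquid and all-gas reference images:

```python
# Convert time-averaged gray levels to a chordal void fraction using
# reference images of the all-liquid and all-gas duct.
import numpy as np

def void_fraction_from_gray(g, g_liquid, g_gas):
    # alpha = ln(G/G_l) / ln(G_g/G_l): 0 for all liquid, 1 for all gas
    return np.log(g / g_liquid) / np.log(g_gas / g_liquid)

frames = np.random.default_rng(2).uniform(90.0, 150.0, size=(256, 64))  # 256 frames x 64 pixels
g_mean = frames.mean(axis=0)             # time-average over 256 video frames
alpha = void_fraction_from_gray(g_mean, g_liquid=80.0, g_gas=200.0)
print(alpha.round(2))
```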

  11. Air void clustering : [technical summary].

    Science.gov (United States)

    2015-06-01

    Air void clustering around coarse aggregate in concrete has been : identified as a potential source of low strengths in concrete mixes by : several Departments of Transportation around the country. Research : was carried out to (1) develop a quantita...

  12. (100) faceted anion voids in electron irradiated fluorite

    International Nuclear Information System (INIS)

    Johnson, E.

    1979-01-01

    High fluence electron irradiation of fluorite crystals in the temperature range 150 to 320 K results in the formation of a simple cubic anion void superlattice. Above 320 K the damage structure changes to a random distribution of large [001] faceted anion voids. This voidage behaviour, similar to that observed in a range of irradiated metals, is discussed in terms of point-defect rather than conventional colour-centre terminology. (Auth.)

  13. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added.

  14. On cavitation instabilities with interacting voids

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2012-01-01

    Voids so far apart that the radius of the plastic zone around each void is less than 1% of the current spacing between the voids can still affect each other at the occurrence of a cavitation instability, such that one void stops growing while the other grows in an unstable manner. On the other hand…

  15. Demography of the Early Neolithic Population in Central Balkans: Population Dynamics Reconstruction Using Summed Radiocarbon Probability Distributions.

    Directory of Open Access Journals (Sweden)

    Marko Porčić

    Full Text Available The Central Balkans region is of great importance for understanding the spread of the Neolithic in Europe, but the Early Neolithic population dynamics of the region is unknown. In this study we apply the method of summed calibrated probability distributions to a set of published radiocarbon dates from the Republic of Serbia in order to reconstruct population dynamics in the Early Neolithic in this part of the Central Balkans. The results indicate that there was a significant population growth after ~6200 calBC, when the Neolithic was introduced into the region, followed by a bust at the end of the Early Neolithic phase (~5400 calBC). These results are broadly consistent with the predictions of the Neolithic Demographic Transition theory and the patterns of population booms and busts detected in other regions of Europe. These results suggest that the cultural process that underlies the patterns observed in Central and Western Europe was also in operation in the Central Balkan Neolithic and that the population increase component of this process can be considered as an important factor for the spread of the Neolithic as envisioned in the demic diffusion hypothesis.
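    A much-simplified sketch of the summed-probability method follows; real work calibrates each 14C age against a curve such as IntCal, whereas here each date is approximated by a Gaussian density purely for illustration, and all dates are synthetic:

```python
# Sum per-date probability densities over a calendar grid to form an SPD.
import numpy as np

rng = np.random.default_rng(8)
dates = rng.uniform(5400, 6200, size=60)        # hypothetical calibrated dates (calBC)
sigmas = rng.uniform(30, 80, size=60)           # hypothetical 1-sigma errors

grid = np.arange(6400, 5200, -10.0)             # calendar axis, calBC
spd = np.zeros_like(grid)
for mu, sd in zip(dates, sigmas):
    spd += np.exp(-0.5 * ((grid - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
spd /= len(dates)                               # normalized summed distribution

print(f"SPD peaks near {grid[spd.argmax()]:.0f} calBC")
```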

  16. Effect of Dark Energy Perturbation on Cosmic Voids Formation

    Science.gov (United States)

    Endo, Takao; Nishizawa, Atsushi J.; Ichiki, Kiyotomo

    2018-05-01

    In this paper, we present the effects of dark energy perturbation on the formation and abundance of cosmic voids. We consider dark energy to be a fluid with a negative pressure characterised by a constant equation of state w and speed of sound c_s^2. By solving the fluid equations for two components, namely dark matter and dark energy fluids, we quantify the effects of dark energy perturbation on the sizes of top-hat voids. We also explore the effects on the size distribution of voids based on the excursion set theory. We confirm that dark energy perturbation negligibly affects the size evolution of voids; c_s^2 = 0 varies the size only by 0.1% as compared to the homogeneous dark energy model. We also confirm that dark energy perturbation suppresses the void size when w < -1 and enhances it when w > -1 (Basse et al. 2011). In contrast to the negligible impact on the size, we find that the size distribution function on scales larger than 10 Mpc/h depends strongly on dark energy perturbation; compared to the homogeneous dark energy model, the number of large voids of radius 30 Mpc/h is 25% larger for the model with w = -0.9 and c_s^2 = 0, while they are 20% less abundant for the model with w = -1.3 and c_s^2 = 0.

  17. Combining scenarios in a calculation of the overall probability distribution of cumulative releases of radioactivity from the Waste Isolation Pilot Plant, southeastern New Mexico

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1991-11-01

    The Waste Isolation Pilot Plant (WIPP), in southeastern New Mexico, is a research and development facility to demonstrate safe disposal of defense-generated transuranic waste. The US Department of Energy will designate WIPP as a disposal facility if it meets the US Environmental Protection Agency's standard for disposal of such waste; the standard includes a requirement that estimates of cumulative releases of radioactivity to the accessible environment be incorporated in an overall probability distribution. The WIPP Project has chosen an approach to calculation of an overall probability distribution that employs the concept of scenarios for release and transport of radioactivity to the accessible environment. This report reviews the use of Monte Carlo methods in the calculation of an overall probability distribution and presents a logical and mathematical foundation for use of the scenario concept in such calculations. The report also draws preliminary conclusions regarding the shape of the probability distribution for the WIPP system; these preliminary conclusions are based on the possible occurrence of three events and the presence of one feature: namely, the events ''attempted boreholes over rooms and drifts,'' ''mining alters ground-water regime,'' ''water-withdrawal wells provide alternate pathways,'' and the feature ''brine pocket below room or drift.'' The WIPP system's overall probability distribution is calculated for only five of the sixteen possible scenario classes that can be obtained by combining the four postulated events and features.
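    The Monte Carlo combination of scenario classes can be illustrated with a toy sketch; the scenario probabilities and the conditional release distributions below are invented placeholders, not WIPP values:

```python
# Sample a scenario class per trial, then a cumulative release conditional on
# that class, and report the exceedance ("complementary CDF") curve.
import numpy as np

rng = np.random.default_rng(3)
# (scenario probability, lognormal release parameters given the scenario)
scenarios = [
    (0.70, (-3.0, 0.5)),   # undisturbed performance
    (0.20, (-1.5, 0.7)),   # borehole intrusion
    (0.10, (-0.5, 1.0)),   # borehole plus brine pocket
]
n = 100_000
probs = np.array([p for p, _ in scenarios])
which = rng.choice(len(scenarios), size=n, p=probs / probs.sum())
mu = np.array([m for _, (m, s) in scenarios])
sd = np.array([s for _, (m, s) in scenarios])
releases = rng.lognormal(mu[which], sd[which])

for level in (0.01, 0.1, 1.0):
    print(f"P(release > {level}) = {(releases > level).mean():.3f}")
```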

  18. Void analysis of target residues at SPS energy -evidence of correlation with fractal behaviour

    International Nuclear Information System (INIS)

    Ghosh, Dipak; Deb, Argha; Das, Rupa . E-mail : dipakghosh_in@yahoo.com

    2007-01-01

    This paper presents an analysis of the target residues in 32S–AgBr and 16O–AgBr interactions at 200 AGeV and 60 AGeV, respectively, in terms of fractal moments by the Takagi method and of void probability scaling. The study reveals an interesting feature of the production process. In 16O–AgBr interactions multifractal behaviour is present in both hemispheres and the void probability does not show a scaling behaviour, but at high energy the situation changes. In 32S–AgBr interactions monofractal behaviour is indicated by the data for both hemispheres, and the void probability also shows good scaling behaviour. This suggests a possible correlation of void probability with the fractal behaviour of target residues. (author)

  19. The dipole moment of a wall-charged void in a bulk dielectric

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson

    1993-01-01

    The dipole moment of a wall-charged void is examined with reference to the spatial extent of the surface charge density σ and the distribution of this charge. The salient factors influencing the void dipole moment are also examined. From a study of spherical voids, it is shown that, although the σ-distribution influences the dipole moment, the spatial extent of σ has a greater influence. This behavior is not unexpected. For a void of fixed dimensions and total charge, the smaller the charged surface area, the greater is the charge density, and thus the greater the dipole moment…

  20. Air void structure and frost resistance

    DEFF Research Database (Denmark)

    Hasholt, Marianne Tange

    2014-01-01

    This article compiles results from 4 independent laboratory studies. In each study, the same type of concrete is tested at least 10 times, the air void structure being the only variable. For each concrete mix, both air void analysis of the hardened concrete and a salt frost scaling test are carried out (…). This observation is interesting, as the parameter of total surface area of air voids is normally not included in air void analysis. The following reason for the finding is suggested: in the air voids, conditions are favourable for ice nucleation. When a capillary pore is connected to an air void, ice formation (…) depends on whether capillary pores are connected to air voids. The chance that a capillary pore is connected to an air void depends on the total surface area of air voids in the system, not the spacing factor.

  1. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  2. Finding Brazing Voids by Holography

    Science.gov (United States)

    Galluccio, R.

    1986-01-01

    Vibration-induced interference fringes reveal locations of defects. Holographic apparatus is used to view the object while it is vibrated ultrasonically. Interference fringes in the hologram reveal brazing defects. The holographic technique locates small voids in large brazed joints and identifies unbrazed regions 1 in.² (6 cm²) or less in area.

  3. Modelling the void deformation and closure by hot forging of ingot castings

    DEFF Research Database (Denmark)

    Christiansen, Peter; Hattel, Jesper Henri; Kotas, Petr

    2012-01-01

    by mechanical deformation. The aim of this paper is to analyze numerically if and to what degree the voids are closed by the forging. Using the commercial simulation software ABAQUS, both simplified model ingots and physically manufactured ingots containing prescribed void distributions are deformed and analyzed. The analysis concerns both the void density change and the location of the voids in the part after deformation. The latter can be important for the subsequent reliability of the parts, for instance regarding fatigue properties. The analysis incorporates the Gurson yield criterion for metals containing voids, and focuses on how the voids deform depending on their size and distribution in the ingot as well as how the forging forces are applied.

  4. Path probability distribution of stochastic motion of non dissipative systems: a classical analog of Feynman factor of path integral

    International Nuclear Information System (INIS)

    Lin, T.L.; Wang, R.; Bi, W.P.; El Kaabouchi, A.; Pujos, C.; Calvayrac, F.; Wang, Q.A.

    2013-01-01

    We investigate, by numerical simulation, the path probability of non dissipative mechanical systems undergoing stochastic motion. The aim is to search for the relationship between this probability and the usual mechanical action. The model of simulation is a one-dimensional particle subject to conservative force and Gaussian random displacement. The probability that a sample path between two fixed points is taken is computed from the number of particles moving along this path, an output of the simulation, divided by the total number of particles arriving at the final point. It is found that the path probability decays exponentially with increasing action of the sample paths. The decay rate increases with decreasing randomness. This result supports the existence of a classical analog of the Feynman factor in the path integral formulation of quantum mechanics for Hamiltonian systems
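    A compressed sketch of this style of numerical experiment follows (illustrative parameters; the original paper's discretization details are not reproduced): particles in a harmonic potential take Gaussian random steps around the deterministic motion, paths are coarse-grained into bins, and each bin's visit frequency is compared with its discretized action:

```python
# Bin stochastic trajectories of a particle in a harmonic potential and
# compare how often a path occurs with its (Lagrangian) action.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
k, m, dt, nsteps, npart = 1.0, 1.0, 0.1, 5, 100_000
grid = 0.5                                      # coarse-graining of positions

paths, actions = Counter(), {}
for _ in range(npart):
    x, v, act = 0.0, 1.0, 0.0
    key = []
    for _ in range(nsteps):
        v += -(k / m) * x * dt
        x += v * dt + rng.normal(0.0, 0.2)      # Gaussian random displacement
        act += (0.5 * m * v**2 - 0.5 * k * x**2) * dt   # action increment
        key.append(round(x / grid))
    key = tuple(key)
    paths[key] += 1
    actions[key] = act                          # last sample's action for that bin (noisy)

# Frequently visited path bins should carry smaller action:
for key, count in paths.most_common(5):
    print(f"count={count:6d}  action≈{actions[key]:+.2f}")
```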

  5. Effect of voids-controlled vacancy supersaturations on B diffusion

    International Nuclear Information System (INIS)

    Marcelot, O.; Claverie, A.; Cristiano, F.; Cayrel, F.; Alquier, D.; Lerch, W.; Paul, S.; Rubin, L.; Jaouen, H.; Armand, C.

    2007-01-01

    We present here preliminary results on boron diffusion in the presence of pre-formed voids of different characteristics. The voids were fabricated by helium implantation followed by annealing, allowing the desorption of He prior to boron implantation. We show that under such conditions boron diffusion is always largely reduced and can even be suppressed in some cases. Boron diffusion suppression can be observed in samples not containing nanovoids in the boron-rich region. It is suggested that direct trapping of Si(int)s by the voids is not the mechanism responsible for the reduction of boron diffusion in such layers. Alternatively, our experimental results suggest that this reduction of diffusivity is more probably due to the competition between two Ostwald ripening phenomena taking place at the same time: in the boron-rich region, the competitive growth of the extrinsic defects at the origin of TED and, in the void region, the Ostwald ripening of the voids, which involves large supersaturations of Vs.

  6. Effect of voids-controlled vacancy supersaturations on B diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Marcelot, O. [CEMES/CNRS, 29 rue Jeanne Marvig, 31055 Toulouse (France)]. E-mail: marcelot@cemes.fr; Claverie, A. [CEMES/CNRS, 29 rue Jeanne Marvig, 31055 Toulouse (France); Cristiano, F. [LAAS/CNRS, 7 av. du Col. Roche, 31077 Toulouse (France); Cayrel, F. [LMP, Universite de Tours, 16 rue Pierre et Marie Curie, BP 7155, 37071 Tours (France); Alquier, D. [LMP, Universite de Tours, 16 rue Pierre et Marie Curie, BP 7155, 37071 Tours (France); Lerch, W. [Mattson Thermal Products GmbH, Daimlerstr. 10, D-89160 Dornstadt (Germany); Paul, S. [Mattson Thermal Products GmbH, Daimlerstr. 10, D-89160 Dornstadt (Germany); Rubin, L. [Axcelis Technologies, 108 Cherry Hill Drive, Beverly MA 01915 (United States); Jaouen, H. [STMicroelectronics, 850 rue Jean Monnet, 38926 Crolles (France); Armand, C. [LNMO/INSA, Service analyseur ionique, 135 av. de Rangueil, 31077 Toulouse (France)

    2007-04-15

    We present here preliminary results on boron diffusion in the presence of pre-formed voids of different characteristics. The voids were fabricated by helium implantation followed by annealing, allowing the desorption of He prior to boron implantation. We show that under such conditions boron diffusion is always largely reduced and can even be suppressed in some cases. Boron diffusion suppression can be observed in samples not containing nanovoids in the boron-rich region. It is suggested that direct trapping of Si(int)s by the voids is not the mechanism responsible for the reduction of boron diffusion in such layers. Alternatively, our experimental results suggest that this reduction of diffusivity is more probably due to the competition between two Ostwald ripening phenomena taking place at the same time: in the boron-rich region, the competitive growth of the extrinsic defects at the origin of TED and, in the void region, the Ostwald ripening of the voids, which involves large supersaturations of Vs.

  7. Probability distribution functions of δ15N and δ18O in groundwater nitrate to probabilistically solve complex mixing scenarios

    Science.gov (United States)

    Chrystal, A.; Heikoop, J. M.; Davis, P.; Syme, J.; Hagerty, S.; Perkins, G.; Larson, T. E.; Longmire, P.; Fessenden, J. E.

    2010-12-01

    Elevated nitrate (NO3-) concentrations in drinking water pose a health risk to the public. The dual stable isotopic signatures of δ15N and δ18O in NO3- in surface- and groundwater are often used to identify and distinguish among sources of NO3- (e.g., sewage, fertilizer, atmospheric deposition). In oxic groundwaters where no denitrification is occurring, direct calculations of mixing fractions using a mass balance approach can be performed if three or fewer sources of NO3- are present, and if the stable isotope ratios of the source terms are defined. There are several limitations to this approach. First, direct calculations of mixing fractions are not possible when four or more NO3- sources may be present. Simple mixing calculations also rely upon treating source isotopic compositions as a single value; however these sources themselves exhibit ranges in stable isotope ratios. More information can be gained by using a probabilistic approach to account for the range and distribution of stable isotope ratios in each source. Fitting probability density functions (PDFs) to the isotopic compositions for each source term reveals that some values within a given isotopic range are more likely to occur than others. We compiled a data set of dual isotopes in NO3- sources by combining our measurements with data collected through extensive literature review. We fit each source term with a PDF, and show a new method to probabilistically solve multiple component mixing scenarios with source isotopic composition uncertainty. This method is based on a modified use of a tri-linear diagram. First, source term PDFs are sampled numerous times using a variation of stratified random sampling, Latin Hypercube Sampling. For each set of sampled source isotopic compositions, a reference point is generated close to the measured groundwater sample isotopic composition. This point is used as a vertex to form all possible triangles between all pairs of sampled source isotopic compositions…
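    A hedged sketch of the sampling-plus-mass-balance core of such a method follows; all source means and spreads are invented placeholders, and the tri-linear-diagram bookkeeping is omitted:

```python
# Sample each source's (d15N, d18O) from a fitted normal PDF via Latin
# Hypercube Sampling, then solve the 3-source mass balance for mixing fractions.
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(5)
# (mean d15N, sd, mean d18O, sd) for e.g. sewage, fertilizer, atmospheric sources
sources = np.array([[10.0, 2.0,  3.0, 1.5],
                    [ 0.0, 2.5, 22.0, 3.0],
                    [ 2.0, 1.5, 60.0, 5.0]])
measured = np.array([5.0, 15.0])                # groundwater sample (d15N, d18O)

sampler = qmc.LatinHypercube(d=6, seed=rng)     # 6 dims: 2 isotopes x 3 sources
fracs = []
for row in sampler.random(n=5000):
    d15 = norm.ppf(row[0:3], sources[:, 0], sources[:, 1])
    d18 = norm.ppf(row[3:6], sources[:, 2], sources[:, 3])
    A = np.vstack([d15, d18, np.ones(3)])       # mass balance + closure row
    f = np.linalg.solve(A, np.append(measured, 1.0))
    if np.all((f >= 0) & (f <= 1)):             # keep physically meaningful mixes
        fracs.append(f)

print("mean mixing fractions:", np.array(fracs).mean(axis=0).round(2))
```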

  8. A New Kind of Void Soap-free P(MMA-EA-MAA) Latex Particles

    Institute of Scientific and Technical Information of China (English)

    Kai KANG; Cheng You KAN; Yi DU; Yu Zhong LI; De Shan LIU

    2005-01-01

    Soap-free P(MMA-EA-MAA) particles with narrow size distribution were synthesized by seeded emulsion polymerization of methyl methacrylate (MMA), ethyl acrylate (EA) and methacrylic acid (MAA), and large voids inside the particles were generated by alkali post-treatment in the presence of 2-butanone. Results indicated that the size of the voids and the particle volume were related to the amount of 2-butanone. A generation mechanism for the voids is proposed.

  9. Dependence of calculated void reactivity on film boiling representation in a CANDU lattice

    Energy Technology Data Exchange (ETDEWEB)

    Whitlock, J [McMaster Univ., Hamilton, ON (Canada). Dept. of Engineering Physics

    1994-12-31

    The distribution dependence of void reactivity in a CANDU (CANada Deuterium Uranium) lattice is studied, specifically in the regime of film boiling. A heterogeneous model of this phenomenon predicts a 4% increase in void reactivity over a homogeneous model for fresh fuel, and 11% at discharge. An explanation for this difference is offered, with regard to differing changes in neutron mean free path upon voiding. (author). 9 refs., 4 tabs., 6 figs.

  10. Two-dimensional void reconstruction by neutron transmission

    International Nuclear Information System (INIS)

    Zakaib, G.D.; Harms, A.A.; Vlachopoulos, J.

    1978-01-01

    Contemporary algebraic reconstruction methods are utilized in investigating the two-dimensional void distribution in a water analog from neutron transmission measurements. It is sought to ultimately apply these techniques to the determination of time-averaged void distribution in two-phase flow systems as well as for potential usage in neutron radiography. Initially, projection data were obtained from a digitized model of a hypothetical two-phase representation and later from neutron beam traverses across a voided methacrylate plastic model. From 10 to 15 views were incorporated, and decoupling of overlapped measurements was utilized to afford greater resolution. In general, the additive Algebraic Reconstruction Technique yielded the best reconstructions, with others showing promise for noisy data. Results indicate the need for some further development of the method in interpreting real data
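    The additive ART update that performed best here is essentially the Kaczmarz iteration. Below is a toy sketch with straight rays on a small void map (a deliberate simplification of neutron transmission, not the authors' code):

```python
# Additive ART: sweep the rays, correcting the image along each ray so that
# its projection matches the measurement.
import numpy as np

def art(A, b, n_pix, iters=50, lam=0.5):
    x = np.zeros(n_pix)
    for _ in range(iters):
        for ai, bi in zip(A, b):
            denom = ai @ ai
            if denom > 0:
                x += lam * (bi - ai @ x) / denom * ai   # additive update
        np.clip(x, 0.0, 1.0, out=x)     # void fraction stays in [0, 1]
    return x

n = 8
phantom = np.zeros((n, n))
phantom[2:5, 3:6] = 1.0                 # toy square void region

A = []
for i in range(n):                      # horizontal rays (row sums)
    m = np.zeros((n, n)); m[i, :] = 1.0; A.append(m.ravel())
for j in range(n):                      # vertical rays (column sums)
    m = np.zeros((n, n)); m[:, j] = 1.0; A.append(m.ravel())
A = np.array(A)
b = A @ phantom.ravel()                 # noiseless projections of the phantom

recon = art(A, b, n * n).reshape(n, n)
print(np.round(recon, 1))
```

    With only two view angles the reconstruction is blurred, which is consistent with the paper's use of 10 to 15 views for better resolution.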

  11. The probability distribution of side-chain conformations in [Leu] and [Met]enkephalin determines the potency and selectivity to mu and delta opiate receptors

    DEFF Research Database (Denmark)

    Nielsen, Bjørn Gilbert; Jensen, Morten Østergaard; Bohr, Henrik

    2003-01-01

    The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity of [Leu]enkephalin and [Met]enkephalin to the known mu- and delta-type opiate receptors to which they bind as agonists. Other plausible consequences of these probability distributions are discussed in relation to the way in which they may influence the dynamics of the synapse.

  12. Snow-melt flood frequency analysis by means of copula based 2D probability distributions for the Narew River in Poland

    Directory of Open Access Journals (Sweden)

    Bogdan Ozga-Zielinski

    2016-06-01

    New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective in estimating the 2D probability distribution for multidimensional random variables. Results showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.
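    The 1-parameter Gumbel–Hougaard copula named above is compact enough to state in code. The sketch below computes a joint exceedance probability for peak flow and volume; θ and the marginal distributions are illustrative assumptions, not fitted values from the study:

```python
# Joint exceedance P(Q > q and V > v) from marginals plus a Gumbel-Hougaard copula.
import numpy as np
from scipy import stats

def gumbel_hougaard(u, v, theta):
    # C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

q_dist = stats.gumbel_r(loc=100.0, scale=30.0)   # illustrative marginal for Qmax
v_dist = stats.gumbel_r(loc=50.0, scale=15.0)    # illustrative marginal for V
q, v, theta = 180.0, 90.0, 2.0

u1, u2 = q_dist.cdf(q), v_dist.cdf(v)
p_joint = 1.0 - u1 - u2 + gumbel_hougaard(u1, u2, theta)  # survival identity
print(f"joint exceedance probability = {p_joint:.4f}")
```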

  13. Classroom Research: Assessment of Student Understanding of Sampling Distributions of Means and the Central Limit Theorem in Post-Calculus Probability and Statistics Classes

    Science.gov (United States)

    Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy

    2006-01-01

    We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…

  14. Void Fraction Measurement in Subcooled-Boiling Flow Using High-Frame-Rate Neutron Radiography

    International Nuclear Information System (INIS)

    Kureta, Masatoshi; Akimoto, Hajime; Hibiki, Takashi; Mishima, Kaichiro

    2001-01-01

    A high-frame-rate neutron radiography (NR) technique was applied to measure the void fraction distribution in forced-convective subcooled-boiling flow. The focus was on the experimental technique and the error estimation of the high-frame-rate NR. The results of void fraction measurement in the boiling flow are described. Measurement errors in instantaneous and time-averaged void fractions were evaluated experimentally and analytically. Measurement errors were within 18 and 2% for the instantaneous void fraction (measurement time 0.89 ms) and the time-averaged void fraction, respectively. The void fraction distribution of subcooled boiling was measured using atmospheric-pressure water in rectangular channels with channel width 30 mm, heated length 100 mm, channel gap 3 and 5 mm, inlet water subcooling from 10 to 30 K, and mass velocity ranging from 240 to 2000 kg/(m²·s). One side of the channel was heated homogeneously. The instantaneous void fraction and the time-averaged void fraction distribution were measured parametrically. The effects of flow parameters on void fraction were investigated.

  15. Analysis on void reactivity of DCA lattice

    International Nuclear Information System (INIS)

    Min, B. J.; Noh, K. H.; Choi, H. B.; Yang, M. K.

    2001-01-01

    In the case of a loss-of-coolant accident, the void reactivity of CANDU fuel provides positive reactivity and increases the reactor power rapidly. It is therefore necessary to establish the credibility of the void reactivity for reactor design and analysis, which motivated a study to assess the measurement data of void reactivity. The assessment of the lattice code was performed with the experimental data of void reactivity at 30, 70, 87 and 100% void fraction. The infinite multiplication factors of the four fuel types increase as their void fractions grow. The changes in the infinite multiplication factors of the uranium fuels are almost within 1%, but those of the Pu fuels are over 10%, according to the results of the WIMS-AECL and MCNP-4B codes. Moreover, the coolant void reactivity of the core loaded with plutonium fuel is more negative than that with uranium fuel because of the spectrum hardening resulting from the large void fraction.

  16. Constitutive modeling of rate dependence and microinertia effects in porous-plastic materials with multi-sized voids (MSVs)

    KAUST Repository

    Liu, Jinxing

    2012-11-27

    Micro-voids of varying sizes exist in most metals and alloys. Both experiments and numerical studies have demonstrated the critical influence of initial void sizes on void growth. The classical Gurson-Tvergaard-Needleman model summarizes the influence of voids with a single parameter, namely the void-volume fraction, excluding any possible effects of the void-size distribution. We extend our newly proposed model, which includes the multi-sized void (MSV) effect and the void-interaction effect, to work for both moderate and high loading-rate cases, where either rate dependence or microinertia becomes considerable or even dominant. Parametric studies show that the MSV-related competitive mechanism of void growth makes the void growth rate depend on void size, which directly influences each void's contribution to the total energy composition. We finally show that the stress-strain constitutive behavior is also affected by this MSV-related competitive mechanism. The stabilizing effect of rate sensitivity and microinertia is emphasized. © 2013 IOP Publishing Ltd.
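    For reference, the classical GTN yield function that the paper extends depends on porosity only through the single void-volume fraction f, which is exactly the limitation discussed above. A minimal sketch (standard q1, q2 values; the stress inputs are illustrative):

```python
# Gurson-Tvergaard-Needleman yield function:
# Phi = (sig_eq/sig_y)^2 + 2*q1*f*cosh(3*q2*sig_m/(2*sig_y)) - (1 + (q1*f)^2)
import numpy as np

def gtn_yield(sig_eq, sig_m, sig_y, f, q1=1.5, q2=1.0):
    # Phi < 0: elastic; Phi = 0: on the yield surface
    return ((sig_eq / sig_y) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sig_m / sig_y)
            - 1.0 - (q1 * f) ** 2)

# Higher porosity f lowers the stress needed to reach the yield surface:
for f in (0.0, 0.01, 0.05):
    phi = gtn_yield(sig_eq=250.0, sig_m=100.0, sig_y=300.0, f=f)
    print(f"f={f}: Phi = {phi:+.3f}")
```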

  17. Void migration in fusion materials

    International Nuclear Information System (INIS)

    Cottrell, G.A.

    2002-01-01

    Neutron irradiation in a fusion power plant will cause helium bubbles and voids to form in the armour and blanket structural materials. If sufficiently large densities of such defects accumulate on the grain boundaries of the materials, the strength and the lifetimes of the metals will be reduced by helium embrittlement and grain boundary failure. This Letter discusses void migration in metals, both by random Brownian motion and by biassed flow in temperature gradients. In the assumed five-year blanket replacement time of a fusion power plant, approximate calculations show that the metals most resilient to failure are tungsten and molybdenum, and marginally vanadium. Helium embrittlement and grain boundary failure is expected to be more severe in steel and beryllium

  18. Void migration in fusion materials

    Science.gov (United States)

    Cottrell, G. A.

    2002-04-01

    Neutron irradiation in a fusion power plant will cause helium bubbles and voids to form in the armour and blanket structural materials. If sufficiently large densities of such defects accumulate on the grain boundaries of the materials, the strength and the lifetimes of the metals will be reduced by helium embrittlement and grain boundary failure. This Letter discusses void migration in metals, both by random Brownian motion and by biassed flow in temperature gradients. In the assumed five-year blanket replacement time of a fusion power plant, approximate calculations show that the metals most resilient to failure are tungsten and molybdenum, and marginally vanadium. Helium embrittlement and grain boundary failure is expected to be more severe in steel and beryllium.

  19. Closure behavior of spherical void in slab during hot rolling process

    Science.gov (United States)

    Cheng, Rong; Zhang, Jiongming; Wang, Bo

    2018-04-01

    The mechanical properties of steels are severely degraded by voids. The influence of voids on product quality should be eliminated through rolling processes, so the study of void closure during hot rolling is necessary. In the present work, the closure behavior of voids at the center of a slab at 800 °C during hot rolling has been simulated with a 3D finite element model. The shape of the void and the plastic strain distribution of the slab are obtained from this model. The void shrinks along the slab thickness direction and spreads along the rolling direction but hardly changes along the width direction. The relationship between the closure behavior of voids and the plastic strain at the center of the slab is analyzed. The effects of rolling reduction, slab thickness and roller diameter on the closure behavior of voids are discussed. A larger reduction, a thinner slab and a larger roller diameter all improve the closure of voids during hot rolling. Experimental results on the closure behavior of a void in a slab during hot rolling mostly agree with the simulation results.

  20. Continental-scale simulation of burn probabilities, flame lengths, and fire size distribution for the United States

    Science.gov (United States)

    Mark A. Finney; Charles W. McHugh; Isaac Grenfell; Karin L. Riley

    2010-01-01

    Components of a quantitative risk assessment were produced by simulation of burn probabilities and fire behavior variation for 134 fire planning units (FPUs) across the continental U.S. The system uses fire growth simulation of ignitions modeled from relationships between large fire occurrence and the fire danger index Energy Release Component (ERC). Simulations of 10,...

  1. Actions and Beliefs : Estimating Distribution-Based Preferences Using a Large Scale Experiment with Probability Questions on Expectations

    NARCIS (Netherlands)

    Bellemare, C.; Kroger, S.; van Soest, A.H.O.

    2005-01-01

    We combine the choice data of proposers and responders in the ultimatum game, their expectations elicited in the form of subjective probability questions, and the choice data of proposers ("dictator") in a dictator game to estimate a structural model of decision making under uncertainty. We use a…

  2. Sodium voiding analysis in Kalimer

    International Nuclear Information System (INIS)

    Chang, Won-Pyo; Jeong, Kwan-Seong; Hahn, Dohee

    2001-01-01

    A sodium boiling model has been developed for calculation of the void reactivity feedback as well as the fuel and cladding temperatures in the KALIMER core after the onset of sodium boiling. Sodium boiling in liquid metal reactors using sodium as coolant must be modeled because the phenomena differ from those observed in light water reactor systems. The developed model is a multiple-bubble slug ejection model. It allows a finite number of bubbles in a channel at any time. Voiding is assumed to result from the formation of bubbles that fill the whole cross section of the coolant channel except for a liquid film left on the cladding surface. The vapor pressure is currently assumed to be uniform within a bubble. The present study focuses not only on demonstrating the sodium voiding behavior predicted by the developed model, but also on confirming its qualitative acceptability. The results capture the important phenomena of sodium boiling, while further effort is needed for a complete analysis. (author)

  3. Measurements of void fraction in a heated tube in the rewetting conditions

    International Nuclear Information System (INIS)

    Freitas, R.L.

    1983-01-01

    The methods of void fraction measurement by transmission and diffusion of cold, thermal and epithermal neutrons were studied with cylindrical aluminium pieces simulating the steam. A large set of void fractions found in the wet zone was examined, and particular attention was given to the sensitivity of the method, mainly for high void fractions. Several aspects of the measurement techniques were analyzed, such as the effects of the radial phase distribution, the neutron energy, the water temperature and the axial void gradient. The technique of thermal neutron diffusion measurement was used to measure the axial profile of void fraction in a steady two-phase flow, where the pressure, mass velocity and heat flux are representative of rewetting conditions. Experimental results are presented and compared with different void fraction models. (E.G.) [pt

  4. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  5. Gravity and count probabilities in an expanding universe

    Science.gov (United States)

    Bouchet, Francois R.; Hernquist, Lars

    1992-01-01

    The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
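    Count probabilities of this kind are straightforward to estimate from a point set. The sketch below uses an unclustered Poisson process as a stand-in for simulation output and computes the mean count and the void probability P0 in cubic cells, which for a Poisson process should track exp(−⟨N⟩):

```python
# Count-in-cells statistics: mean count and void probability per cell size.
import numpy as np

rng = np.random.default_rng(6)
box, npts = 100.0, 20_000
pts = rng.uniform(0.0, box, size=(npts, 3))     # Poisson point process in a box

for cell in (2.0, 5.0, 10.0):                   # cell sizes dividing the box evenly
    nc = int(box // cell)
    idx = (pts // cell).astype(int).clip(0, nc - 1)
    flat = np.ravel_multi_index(idx.T, (nc, nc, nc))
    counts = np.bincount(flat, minlength=nc ** 3)
    p0, nbar = (counts == 0).mean(), counts.mean()
    print(f"cell={cell:5.1f}  <N>={nbar:6.2f}  P0={p0:.3f}  "
          f"Poisson exp(-<N>)={np.exp(-nbar):.3f}")
```

    For clustered (gravitationally evolved) point sets, P0 exceeds the Poisson expectation, which is the departure the paper tracks with redshift.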

  6. In memory of Alois Apfelbeck: An Interconnection between Cayley-Eisenstein-Pólya and Landau Probability Distributions

    Directory of Open Access Journals (Sweden)

    Vladimír Vojta

    2013-01-01

    Full Text Available The interconnection between the Cayley-Eisenstein-Pólya distribution and the Landau distribution is studied, and possibly new transform pairs for the Laplace and Mellin transform and integral expressions for the Lambert W function have been found.

  7. Probability distribution function values in mobile phones; Valores de funciones de distribución de probabilidad en el teléfono móvil

    Directory of Open Access Journals (Sweden)

    Luis Vicente Chamorro Marcillo

    2013-06-01

    Full Text Available Engineering, in both its academic and applied forms, as well as any formal research work, requires the use of statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available in tables. Management of those tables generally presents physical problems (wasteful transport and wasteful consultation) and operational ones (incomplete lists and limited accuracy). The study, “Probability distribution function values in mobile phones”, permitted determining – through a needs survey applied to students involved in statistics studies at Universidad de Nariño – that the best known and most used values correspond to the Chi-Square, Binomial, Student’s t, and Standard Normal distributions. Similarly, it showed users' interest in having the values in question within an alternative means to correct, at least in part, the problems presented by “the famous tables”. To try to contribute to the solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.

  8. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions, Branching Processes, Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F…

  9. Electromigration of intergranular voids in metal films for microelectronic interconnects

    CERN Document Server

    Averbuch, A; Ravve, I

    2003-01-01

    Voids and cracks often occur in the interconnect lines of microelectronic devices. They increase the resistance of the circuits and may even lead to a fatal failure. Voids may occur inside a single grain, but often they appear on the boundary between two grains. In this work, we model and analyze numerically the migration and evolution of an intergranular void subjected to surface diffusion forces and external voltage applied to the interconnect. The grain-void interface is considered one-dimensional, and the physical formulation of the electromigration and diffusion model results in two coupled fourth-order one-dimensional time-dependent PDEs. The boundary conditions are specified at the triple points, which are common to both neighboring grains and the void. The solution of these equations uses a finite difference scheme in space and a Runge-Kutta integration scheme in time, and is also coupled to the solution of a static Laplace equation describing the voltage distribution throughout the grain. Since the v...

  10. Controlling Interfacial Separation in Porous Structures by Void Patterning

    Science.gov (United States)

    Ghareeb, Ahmed; Elbanna, Ahmed

    Manipulating interfacial response for enhanced adhesion or fracture resistance is a problem of great interest to scientists and engineers. In many natural materials and engineering applications, an interface exists between a porous structure and a substrate. A question that arises is how the void distribution in the bulk may affect the interfacial response and whether it is possible to alter the interfacial toughness without changing the surface physical chemistry. In this paper, we address this question by studying the effect of void patterning on the interfacial and overall response of an elastic plate glued to a rigid substrate by a bilinear cohesive material. Different patterning categories are investigated: uniform, graded, and binary voids. Each case is subjected to an upward displacement at the upper edge of the plate. We show that the peak force and maximum elongation at failure depend on the void design, and that by changing the void size, alignment or gradation we may control these performance measures. We relate these changes in the measured force-displacement response to the energy release rate as a measure of interfacial toughness. We discuss the implications of our results for the design of bulk heterogeneities for enhanced interfacial behavior.

  11. Nucleation from a cluster of inclusions, leading to void coalescence

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2017-01-01

    A cell model analysis is used to study the nucleation and subsequent growth of voids from a non-uniform distribution of inclusions in a ductile material. Nucleation is modeled as either stress controlled or strain controlled. The special clusters considered consist of a number of uniformly spaced...... inclusions located along a plane perpendicular to the maximum principal tensile stress. A plane strain approximation is used, where the inclusions are parallel cylinders perpendicular to the plane. Clusters with different numbers of inclusions are compared with the nucleation and growth from a single...... inclusion, such that the total initial volume of the inclusions is the same for the clusters and the single inclusion. After nucleation, local void coalescence inside the clusters is accounted for, since this makes it possible to compare the rate of growth of the single larger void that results from...

  12. Size-Effects in Void Growth

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof

    2005-01-01

    The size-effect on ductile void growth in metals is investigated. The analysis is based on unit cell models both of arrays of cylindrical voids under plane strain deformation, as well as arrays of spherical voids using an axisymmetric model. A recent finite strain generalization of two higher order...... strain gradient plasticity models is implemented in a finite element program, which is used to study void growth numerically. The results based on the two models are compared. It is shown how gradient effects suppress void growth on the micron scale when compared to predictions based on conventional...... models. This increased resistance to void growth, due to gradient hardening, is accompanied by an increase in the overall strength for the material. Furthermore, for increasing initial void volume fraction, it is shown that the effect of gradients becomes more important to the overall response but less...

  13. Binomial distribution of Poisson statistics and tracks overlapping probability to estimate total tracks count with low uncertainty

    International Nuclear Information System (INIS)

    Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam

    2015-01-01

    In solid state nuclear track detectors of the chemically etched type, nuclear tracks with center-to-center neighborhood distances shorter than two times the track radius emerge as overlapping tracks. Track overlapping in this type of detector causes track count losses, and it becomes rather severe at high track densities. Therefore, track counting in this condition should include a correction factor for the count losses of the different track overlapping orders, since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the case where imaging the whole area of the detector and counting all tracks is not possible. In these conditions a statistical generalization method is desired that is applicable to counting a segmented area of the detector, such that the results can be generalized to the whole surface of the detector. There is also a challenge in counting densely overlapped tracks, because sufficient geometrical or contextual information is not available. In this paper we present a statistical counting method which gives the user a relation between the track overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method, one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the average track area, the whole detector surface area and some orders of the track overlapping probabilities. It will be shown that this method is applicable to high and ultra-high density track images and that the count loss error can be mitigated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different tracks overlapping orders. • For the cases imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the tracks overlapping probabilities and the total tracks
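
    The overlap-driven count loss described above can be reproduced with a small Monte Carlo sketch. Everything below (detector area, true track count, track radius) is an invented stand-in for the record's data: tracks whose centers fall within two radii of each other merge into one counted blob, and a Poisson estimate gives the expected isolated-track fraction.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative Monte Carlo of track-count losses from overlapping.
rng = np.random.default_rng(0)
n_true, radius = 2000, 0.01          # true track count, track radius
pts = rng.random((n_true, 2))        # centers on a unit detector area

# Union-find over all center pairs closer than two radii: such tracks
# are observed as a single blob.
parent = np.arange(n_true)
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in cKDTree(pts).query_pairs(2 * radius):
    parent[find(i)] = find(j)

n_observed = len({find(i) for i in range(n_true)})

# Poisson estimate of the isolated-track fraction: a track stays single
# if no other center falls inside a disc of radius 2r around it.
p_isolated = np.exp(-n_true * np.pi * (2 * radius) ** 2)
print(f"true {n_true}, observed {n_observed}, "
      f"Poisson-predicted isolated fraction {p_isolated:.3f}")
```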

  14. Protein-protein interaction site predictions with three-dimensional probability distributions of interacting atoms on protein surfaces.

    Directory of Open Access Journals (Sweden)

    Ching-Tai Chen

    Full Text Available Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, and specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted

  15. Protein-Protein Interaction Site Predictions with Three-Dimensional Probability Distributions of Interacting Atoms on Protein Surfaces

    Science.gov (United States)

    Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with

  16. Automated air-void system characterization of hardened concrete: Helping computers to count air-voids like people count air-voids---Methods for flatbed scanner calibration

    Science.gov (United States)

    Peterson, Karl

    Since the discovery in the late 1930s that air entrainment can improve the durability of concrete, it has been important for people to know the quantity, spatial distribution, and size distribution of the air-voids in their concrete mixes in order to ensure a durable final product. The task of air-void system characterization has fallen on the microscopist, who, according to a standard test method laid forth by the American Society for Testing and Materials, must meticulously count or measure about a thousand air-voids per sample as exposed on a cut and polished cross-section of concrete. The equipment used to perform this task has traditionally included a stereomicroscope, a mechanical stage, and a tally counter. Over the past 30 years, with the availability of computers and digital imaging, automated methods have been introduced to perform the same task, but using the same basic equipment. The method described here replaces the microscope and mechanical stage with an ordinary flatbed desktop scanner, and replaces the microscopist and tally counter with a personal computer; two pieces of equipment much more readily available than a microscope with a mechanical stage, and certainly easier to find than a person willing to sit for extended periods of time counting air-voids. Most laboratories that perform air-void system characterization typically have cabinets full of prepared samples with corresponding results from manual operators. Proponents of automated methods often take advantage of this fact by analyzing the same samples and comparing the results. A similar iterative approach is described here where scanned images collected from a significant number of samples are analyzed, the results compared to those of the manual operator, and the settings optimized to best approximate the results of the manual operator. The results of this calibration procedure are compared to an alternative calibration procedure based on the more rigorous digital image accuracy
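
    The iterative calibration loop described above (analyze scans, compare with the manual operator, optimize settings) can be sketched as a one-parameter search. The image, void pattern and manual count below are synthetic placeholders, and scipy.ndimage.label stands in for the full analysis pipeline.

```python
import numpy as np
from scipy import ndimage

# Choose the grayscale threshold so that the automated air-void count
# best matches a manual operator's count on the same scanned image.
rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.1, (512, 512))       # stand-in scanned image
yy, xx = np.mgrid[0:512, 0:512]
for cx, cy in rng.integers(20, 492, (150, 2)):
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < 16] = 0.95  # bright "voids"

manual_count = 150                            # operator's tally (assumed)

def auto_count(threshold):
    _, n = ndimage.label(img > threshold)     # connected bright regions
    return n

# Iterate over candidate thresholds and keep the one whose automated
# count is closest to the manual count.
thresholds = np.linspace(0.6, 0.9, 31)
best = min(thresholds, key=lambda t: abs(auto_count(t) - manual_count))
print(f"calibrated threshold {best:.2f}, automated count {auto_count(best)}")
```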

  17. Nocturia: The circadian voiding disorder

    Directory of Open Access Journals (Sweden)

    Jin Wook Kim

    2016-05-01

    Full Text Available Nocturia is a prevalent condition of waking to void during the night. The concept of nocturia has evolved from being a symptomatic aspect of disease associated with the prostate or bladder to a form of lower urinary tract disorder. However, recent advances in circadian biology and sleep science suggest that it might be important to consider nocturia as a form of circadian dysfunction. In the current review, nocturia is reexamined with an introduction to sleep disorders and recent findings in circadian biology in an attempt to highlight the importance of rediscovering nocturia as a problem of chronobiology.

  18. The Beckoning Void in Moravagine

    Directory of Open Access Journals (Sweden)

    Stephen K. Bellstrom

    1979-01-01

    Full Text Available The Chapter «Mascha,» lying at the heart of Cendrars's Moravagine, contains within it a variety of images and themes suggestive of emptiness. The philosophy of nihilism is exemplified in the motivations and actions of the group of terrorists seeking to plunge Russia into revolutionary chaos. Mascha's anatomical orifice, symbolizing both a biological and a psychological fault, and the abortion of her child, paralleled by the abortion of the revolutionary ideal among her comrades, are also emblematic of the chapter's central void. Moreover, Cendrars builds the theme of hollowness by describing Moravagine with images of omission, such as «empan» (space or span), «absent,» and «étranger.» Moravagine's presence, in fact, characteristically causes an undercurrent of doubt and uncertainty about the nature of reality to become overt. It is this paradoxical presence which seems to cause the narrator (and consequently the narrative) to «lose» a day at the most critical moment of the story. By plunging the reader into the narrator's lapsus memoriae, Cendrars aims at creating a feeling of the kind of mental and cosmic disorder for which Moravagine is the strategist and apologist. This technique of insufficiency is an active technique, even though it relies on the passive idea of removing explanation and connecting details. The reader is invited, or lured, into the central void of the novel and, faced with unresolvable dilemmas, becomes involved in the same disorder that was initially produced.

  19. Evaluation of void fraction measurements from DADINE experience using RELAP4/MOD5 code

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1989-01-01

    The DADINE experiment measures, by neutron diffusion, the axial evolution of the void fraction in two-phase flow in the wet regions of a pressurized water reactor under accident conditions. Since the confrontation of theory with experiment is important for code evaluation, this paper presents the simulation with the RELAP4/MOD5 code of the void fraction results obtained in the DADINE experiment. The simulation showed some deviations, probably associated with the models existing in the code, in particular with the way the two-phase flow is established and with the lack of characterization of the different flow regimes related to the void fractions. (author) [pt]

  20. Assignment of probability distributions for parameters in the 1996 performance assessment for the Waste Isolation Pilot Plant. Part 1: description of process

    International Nuclear Information System (INIS)

    Rechard, Rob P.; Tierney, Martin S.

    2005-01-01

    A managed process was used to consistently and traceably develop probability distributions for parameters representing epistemic uncertainty in four preliminary and the final 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The key to the success of the process was the use of a three-member team consisting of a Parameter Task Leader, PA Analyst, and Subject Matter Expert. This team, in turn, relied upon a series of guidelines for selecting distribution types. The primary function of the guidelines was not to constrain the actual process of developing a parameter distribution but rather to establish a series of well-defined steps where recognized methods would be consistently applied to all parameters. An important guideline was to use a small set of distributions satisfying the maximum entropy formalism. Another important guideline was the consistent use of the log transform for parameters with large ranges (i.e. maximum/minimum > 10^3). A parameter development team assigned 67 probability density functions (PDFs) in the 1989 PA and 236 PDFs in the 1996 PA using these and other guidelines described
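
    The two guidelines quoted above translate into a very small amount of code. The sketch below is a simplification of the WIPP process, not a reproduction of it: the maximum-entropy distribution for a parameter known only to lie within a range is uniform, and a log-uniform distribution is used once the range exceeds three orders of magnitude. The example parameter names and ranges are invented.

```python
from scipy import stats

# Assign a PDF following the two guidelines above (simplified):
# uniform (maximum entropy for a bounded range) by default,
# log-uniform when the range spans more than three decades.
def assign_pdf(lo, hi):
    if hi / lo > 1e3:
        # log-uniform: uniform in log(parameter) over [lo, hi]
        return stats.loguniform(lo, hi)
    return stats.uniform(loc=lo, scale=hi - lo)

permeability = assign_pdf(1e-21, 1e-14)   # wide range -> log-uniform
porosity = assign_pdf(0.05, 0.30)         # narrow range -> uniform
print(permeability.median(), porosity.mean())
```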

  1. Voids, nanochannels and formation of nanotubes with mobile Sn fillings in Sn doped ZnO nanorods

    International Nuclear Information System (INIS)

    Ortega, Y; Dieker, Ch; Jaeger, W; Piqueras, J; Fernandez, P

    2010-01-01

    ZnO nanorods containing different hollow structures have been grown by a thermal evaporation-deposition method with a mixture of ZnS and SnO2 powders as precursor. Transmission electron microscopy shows rods with rows of voids as well as rods with empty channels along the growth axis. The presence of Sn nanoprecipitates associated with the empty regions indicates, in addition, that these are generated by diffusion processes during growth, probably due to an inhomogeneous distribution of Sn. The mechanism of forming voids and precipitates appears to be based on diffusion processes similar to the Kirkendall effect, which can lead to void formation at interfaces of bulk materials or in core-shell nanostructures. In some cases the nanorods are ZnO tubes partially filled with Sn that has been found to melt and expand by heating the nanotubes under the microscope electron beam. Such metal-semiconductor nanostructures have potential applications as thermal nanosensors or as electrical nanocomponents.

  2. Predicting probability of occurrence and factors affecting distribution and abundance of three Ozark endemic crayfish species at multiple spatial scales

    Science.gov (United States)

    Nolen, Matthew S.; Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Wagner, Brian K.

    2014-01-01

    Crayfishes and other freshwater aquatic fauna are particularly at risk globally due to anthropogenic demand, manipulation and exploitation of freshwater resources and yet are often understudied. The Ozark faunal region of Missouri and Arkansas harbours a high level of aquatic biological diversity, especially in regard to endemic crayfishes. Three such endemics, Orconectes eupunctus, Orconectes marchandi and Cambarus hubbsi, are threatened by limited natural distribution and the invasions of Orconectes neglectus.

  3. On the Efficient Simulation of the Distribution of the Sum of Gamma-Gamma Variates with Application to the Outage Probability Evaluation Over Fading Channels

    KAUST Repository

    Ben Issaid, Chaouki

    2016-06-01

    The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion, a well-desired property characterizing the robustness of importance sampling schemes, for both independent identically distributed and independent non-identically distributed cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations.
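
    The motivation stated above, that naive Monte Carlo fails for very small outage probabilities, is easy to demonstrate. The sketch below generates each Gamma-Gamma variate as the product of two independent unit-mean Gamma variates and estimates the left-tail probability of their sum naively; the shape parameters, branch count and threshold are invented, and the record's mean-shift importance sampling estimator is not reproduced here.

```python
import numpy as np

# Naive Monte Carlo baseline for the outage event {sum of L branch
# gains < gamma_th}. A Gamma-Gamma variate is the product of two
# independent unit-mean Gamma variates with shapes alpha and beta.
rng = np.random.default_rng(0)
alpha, beta, L = 2.0, 2.0, 4          # turbulence shapes, branches (assumed)
gamma_th = 0.05                       # deep left-tail threshold (assumed)
n = 10**6

a = rng.gamma(alpha, 1 / alpha, (n, L))   # unit-mean Gamma factors
b = rng.gamma(beta, 1 / beta, (n, L))
s = (a * b).sum(axis=1)                   # sums of Gamma-Gamma variates

p_hat = np.mean(s < gamma_th)
print(f"naive MC: p_hat = {p_hat:.2e} from {int(p_hat * n)} hits of {n}")
# With few or zero hits the relative error blows up; this is exactly
# the failure mode the importance-sampling estimator addresses.
```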

  4. On the Efficient Simulation of the Distribution of the Sum of Gamma-Gamma Variates with Application to the Outage Probability Evaluation Over Fading Channels

    KAUST Repository

    Ben Issaid, Chaouki; Ben Rached, Nadhir; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2016-01-01

    The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion, a well-desired property characterizing the robustness of importance sampling schemes, for both independent identically distributed and independent non-identically distributed cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations.

  5. Development of the impedance void meter

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Moon Ki; Song, Chul Hwa; Won, Soon Yeon; Kim, Bok Deuk [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    An impedance void meter is developed to measure the area-averaged void fraction. Its basic principle is based on the difference in electrical conductivity between phases. Several methods of measuring void fraction are briefly reviewed and the reason why this type of void meter was chosen for development is discussed. The basic principle of the measurement is thoroughly described and several design parameters that affect the overall function are discussed in detail. An example of application is given for vertical air-water flow. It is shown that the current design has good dynamic response as well as very fine spatial resolution. (Author) 47 refs., 37 figs.

  6. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
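
    One of the routes mentioned above, block maxima combined with the record's assumption of an asymptotically Gumbel response tail, can be sketched in a few lines. The stand-in load signal, block size and return horizon below are invented.

```python
import numpy as np
from scipy import stats

# Block-maxima extrapolation under a Gumbel tail assumption.
rng = np.random.default_rng(2)
response = rng.standard_gamma(3.0, size=100 * 600)  # stand-in load signal
maxima = response.reshape(100, 600).max(axis=1)     # 100 block maxima

loc, scale = stats.gumbel_r.fit(maxima)             # fit the Gumbel model

# Load level exceeded on average once per N blocks (assumed horizon,
# roughly 50 years of 10-minute blocks).
N = 2_628_000
extreme = stats.gumbel_r.ppf(1.0 - 1.0 / N, loc, scale)
print(f"Gumbel loc={loc:.2f}, scale={scale:.2f}, return level={extreme:.2f}")
```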

  7. The quantum probability calculus

    International Nuclear Information System (INIS)

    Jauch, J.M.

    1976-01-01

    The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)

  8. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications....
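
    A concrete, standard instance of the CPGF idea is the multinomial logit case: the log-sum-exp surplus function acts as a (log) choice-probability generating function, and its gradient with respect to the utilities is exactly the vector of logit choice probabilities. The sketch below verifies this numerically; the utilities are arbitrary.

```python
import numpy as np

# Multinomial logit: G(u) = log(sum_i exp(u_i)); grad G = choice probs.
def G(u):
    return np.log(np.sum(np.exp(u)))

def choice_probs(u):
    e = np.exp(u - u.max())        # numerically stable softmax
    return e / e.sum()

u = np.array([1.0, 0.5, -0.2])     # utilities of three alternatives
p = choice_probs(u)

# Numerical check that p really is the gradient of G.
eps = 1e-6
grad = np.array([(G(u + eps * np.eye(3)[i]) - G(u)) / eps for i in range(3)])
print(p, grad)                     # the two vectors agree to ~1e-6
```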

  9. Prediction uncertainty assessment of a systems biology model requires a sample of the full probability distribution of its parameters

    Directory of Open Access Journals (Sweden)

    Simon van Mourik

    2014-06-01

    Full Text Available Multi-parameter models in systems biology are typically ‘sloppy’: some parameters or combinations of parameters may be hard to estimate from data, whereas others are not. One might expect that parameter uncertainty automatically leads to uncertain predictions, but this is not the case. We illustrate this by showing that the prediction uncertainty of each of six sloppy models varies enormously among different predictions. Statistical approximations of parameter uncertainty may lead to dramatic errors in prediction uncertainty estimation. We argue that prediction uncertainty assessment must therefore be performed on a per-prediction basis using a full computational uncertainty analysis. In practice this is feasible by providing a model with a sample or ensemble representing the distribution of its parameters. Within a Bayesian framework, such a sample may be generated by a Markov Chain Monte Carlo (MCMC) algorithm that infers the parameter distribution based on experimental data. Matlab code for generating the sample (with the Differential Evolution Markov Chain sampler) and for the subsequent uncertainty analysis using such a sample is supplied as Supplemental Information.
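
    The workflow described above can be sketched with a toy model, in Python rather than the record's Matlab and with a plain random-walk Metropolis sampler rather than Differential Evolution Markov Chain: infer a parameter sample from synthetic data, then push every draw through the model to get per-prediction uncertainty.

```python
import numpy as np

# Toy exponential-decay model fitted to synthetic data.
rng = np.random.default_rng(3)
t = np.linspace(0, 4, 20)
data = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.1, t.size)

def model(theta, t):
    a, k = theta
    return a * np.exp(-k * t)

def log_post(theta):
    if theta[0] <= 0 or theta[1] <= 0:
        return -np.inf                          # flat positive prior
    r = data - model(theta, t)
    return -0.5 * np.sum(r**2) / 0.1**2

# Random-walk Metropolis chain over the two parameters.
theta, lp = np.array([1.0, 1.0]), log_post(np.array([1.0, 1.0]))
sample = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    sample.append(theta)
sample = np.array(sample[5000:])                # drop burn-in

# Per-prediction uncertainty: evaluate a new prediction for every draw.
pred = np.array([model(s, np.array([6.0])) for s in sample]).ravel()
print(f"prediction at t=6: {pred.mean():.3f} +/- {pred.std():.3f}")
```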

  10. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  11. On the use of area-averaged void fraction and local bubble chord length entropies as two-phase flow regime indicators

    International Nuclear Information System (INIS)

    Hernandez, Leonor; Julia, J.E.; Paranjape, Sidharth; Hibiki, Takashi; Ishii, Mamoru

    2010-01-01

    In this work, the use of the area-averaged void fraction and bubble chord length entropies is introduced as flow regime indicators in two-phase flow systems. The entropy provides quantitative information about the disorder in the area-averaged void fraction or bubble chord length distributions. The CPDF (cumulative probability distribution function) of void fractions and bubble chord lengths obtained by means of impedance meters and conductivity probes are used to calculate both entropies. Entropy values for 242 flow conditions in upward two-phase flows in 25.4 and 50.8-mm pipes have been calculated. The measured conditions cover ranges from 0.13 to 5 m/s in the superficial liquid velocity j_f and from 0.01 to 25 m/s in the superficial gas velocity j_g. The physical meaning of both entropies has been interpreted using the visual flow regime map information. The capability of the area-averaged void fraction and bubble chord length entropies as flow regime indicators has been checked against other statistical parameters and also with different input signal durations. The area-averaged void fraction and the bubble chord length entropies provide better, or at least similar, results than those obtained with other indicators that include more than one parameter. The entropy is capable of reducing the relevant information of the flow regimes to a single significant and useful parameter. In addition, the entropy computation time is shorter than that of the majority of the other indicators. The use of one parameter as input also makes for faster predictions. (orig.)
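
    The entropy indicator itself is straightforward to compute. In the sketch below, two synthetic void-fraction records stand in for impedance-meter signals: the PDF is estimated with a histogram, and its Shannon entropy distinguishes a narrow (ordered) distribution from a broad (disordered) one.

```python
import numpy as np

# Shannon entropy of a void-fraction signal's histogram-estimated PDF.
def signal_entropy(signal, bins=50):
    counts, _ = np.histogram(signal, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 = 0)
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
bubbly = np.clip(rng.normal(0.15, 0.03, 5000), 0, 1)  # narrow PDF
churn = np.clip(rng.beta(2, 2, 5000), 0, 1)           # broad PDF
print(signal_entropy(bubbly), signal_entropy(churn))  # low vs high
```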

  12. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  13. Constraints on Cosmology and Gravity from the Dynamics of Voids.

    Science.gov (United States)

    Hamaus, Nico; Pisani, Alice; Sutter, P M; Lavaux, Guilhem; Escoffier, Stéphanie; Wandelt, Benjamin D; Weller, Jochen

    2016-08-26

    The Universe is mostly composed of large and relatively empty domains known as cosmic voids, whereas its matter content is predominantly distributed along their boundaries. The remaining material inside them, either dark or luminous matter, is attracted to these boundaries and causes voids to expand faster and to grow emptier over time. Using the distribution of galaxies centered on voids identified in the Sloan Digital Sky Survey and adopting minimal assumptions on the statistical motion of these galaxies, we constrain the average matter content Ω_m = 0.281 ± 0.031 in the Universe today, as well as the linear growth rate of structure f/b = 0.417 ± 0.089 at median redshift z̄ = 0.57, where b is the galaxy bias (68% C.L.). These values originate from a percent-level measurement of the anisotropic distortion in the void-galaxy cross-correlation function, ϵ = 1.003 ± 0.012, and are robust to consistency tests with bootstraps of the data and simulated mock catalogs within an additional systematic uncertainty of half that size. They surpass (and are complementary to) existing constraints by unlocking cosmological information on smaller scales through an accurate model of nonlinear clustering and dynamics in void environments. As such, our analysis furnishes a powerful probe of deviations from Einstein's general relativity in the low-density regime which has largely remained untested so far. We find no evidence for such deviations in the data at hand.

  14. Influence of second phase dispersion on void formation during irradiation

    International Nuclear Information System (INIS)

    Sundararaman, M.; Banerjee, S.; Krishnan, R.

    Irradiation-induced void formation in alloys has been found to be strongly influenced by the microstructure, the important microstructural parameters being the dislocation density and the nature, density and distribution of second-phase precipitates. The effects of various types of precipitates on void swelling have been examined using the generally accepted model of void formation: void embryos are assumed to grow in a situation where equal numbers of vacancies and interstitials are continuously generated by the incident irradiation, the interstitials being somewhat preferentially absorbed in some sinks present in the material. The mechanism of the trapping of defects by a distribution of precipitates has been discussed and the available experimental results on the suppression of void formation in materials containing coherent precipitates have been reviewed. Experimental results on the microstructure developed in a nickel-base alloy, Inconel-718 (considered to be a candidate material for structural applications in fast reactors), have been presented. The method of determination of the coherency strain associated with the precipitates has been illustrated with the help of certain observations made on this alloy. The major difficulty in using a two-phase alloy in an irradiation environment is associated with the irradiation-induced instability of the precipitates. Several processes such as precipitate dissolution (in which the incident radiation removes the outer layer of precipitates by recoil), enhanced diffusion disordering, fragmentation of precipitates, etc. are responsible for bringing about a significant change in the structure of a two-phase material during irradiation. The effect of these processes on the continued performance of a two-phase alloy subjected to irradiation at an elevated temperature has been discussed. (auth.)

  15. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  16. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  17. Development of quick-response area-averaged void fraction meter

    International Nuclear Information System (INIS)

    Watanabe, Hironori; Iguchi, Tadashi; Kimura, Mamoru; Anoda, Yoshinari

    2000-11-01

    The authors are performing experiments to investigate BWR thermal-hydraulic instability under coupling of neutronics and thermal-hydraulics. To perform the experiments, it is necessary to measure instantaneously the area-averaged void fraction in a rod bundle under high temperature/high pressure gas-liquid two-phase flow conditions. Since there were no void fraction meters suitable for these requirements, we newly developed a practical void fraction meter. The principle of the meter is based on the electrical conductance changing with void fraction in gas-liquid two-phase flow. In this meter, the metal flow channel wall is used as one electrode and an L-shaped line electrode installed at the center of the flow channel is used as the other electrode. This electrode arrangement makes possible the instantaneous measurement of the area-averaged void fraction even within a metal flow channel. We performed experiments with air/water two-phase flow to clarify the void fraction meter performance. Experimental results indicated that the void fraction is approximated by α = 1 - I/I_0, where α and I are the void fraction and current (I_0 is the current at α = 0). This relation holds over the wide void fraction range of 0-70%. The difference between α and 1 - I/I_0 was approximately 10% at maximum. The major reasons for the difference are the void distribution over the measurement area and the electrical insulation of the center electrode by bubbles. The principle and structure of this void fraction meter are very basic and simple. Therefore, the meter can be applied to various fields of gas-liquid two-phase flow studies. (author)
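
    Applying the reported relation is direct; the sketch below evaluates α = 1 - I/I_0 for a hypothetical record of measured currents (the values are invented).

```python
import numpy as np

# Void fraction from the measured current, alpha = 1 - I/I0, using
# invented example values; I0 is the current at zero void fraction.
I0 = 4.2
I = np.array([4.2, 3.6, 2.9, 2.1, 1.4])   # measured currents (assumed)
alpha = 1.0 - I / I0                      # area-averaged void fraction
print(np.round(alpha, 3))                 # 0.0 ... ~0.67, inside the
                                          # 0-70% range validated above
```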

  18. Absence of saturation of void growth in rate theory with anisotropic diffusion

    CERN Document Server

    Hudson, T S; Sutton, A P

    2002-01-01

    We present a first attempt at solving the problem of the growth of a single void in the presence of anisotropically diffusing radiation induced self-interstitial atom (SIA) clusters. In order to treat a distribution of voids we perform ensemble averaging over the positions of the centres of voids using a mean-field approximation. In this way we are able to model physical situations in between the Standard Rate Theory (SRT) treatment of swelling (isotropic diffusion) and the purely 1-dimensional diffusion of clusters in the Production Bias Model. The background absorption by dislocations is however treated isotropically, with a bias for interstitial cluster absorption assumed similar to that of individual SIAs. We find that for moderate anisotropy, unsaturated void growth is characteristic of this anisotropic diffusion of clusters. In addition we obtain a higher initial void swelling rate than predicted by SRT whenever the diffusion is anisotropic.

  19. Towards a Global Water Scarcity Risk Assessment Framework: Incorporation of Probability Distributions and Hydro-Climatic Variability

    Science.gov (United States)

    Veldkamp, T. I. E.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2016-01-01

    Changing hydro-climatic and socioeconomic conditions increasingly put pressure on fresh water resources and are expected to aggravate water scarcity conditions towards the future. Despite numerous calls for risk-based water scarcity assessments, a global-scale framework that includes UNISDR's definition of risk does not yet exist. This study provides a first step towards such a risk-based assessment, applying a Gamma distribution to estimate water scarcity conditions at the global scale under historic and future conditions, using multiple climate change and population growth scenarios. Our study highlights that water scarcity risk, expressed in terms of expected annual exposed population, increases under all future scenarios, up to more than 56.2% of the global population in 2080. Looking at the drivers of risk, we find that population growth outweighs the impacts of climate change at global and regional scales. Using a risk-based method to assess water scarcity, we show the results to be less sensitive than traditional water scarcity assessments to the use of fixed thresholds to represent different levels of water scarcity. This becomes especially important when moving from global to local scales, whereby deviations increase up to 50% of estimated risk levels.
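
    The Gamma-distribution step described above can be sketched for a single site. The 50-year availability record and the scarcity threshold below are invented; the study itself works with global gridded data and multiple scenarios.

```python
import numpy as np
from scipy import stats

# Fit a Gamma distribution to annual water availability and read off
# the probability of falling below a scarcity threshold, instead of
# simply counting threshold crossings in the record.
rng = np.random.default_rng(5)
availability = rng.gamma(4.0, 250.0, 50)      # 50 years, m3/capita/yr
a, loc, scale = stats.gamma.fit(availability, floc=0)

threshold = 1000.0                            # scarcity level (assumed)
p_scarce = stats.gamma.cdf(threshold, a, loc=loc, scale=scale)
print(f"annual probability of scarcity: {p_scarce:.3f}")
```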

  20. Wire-Mesh Tomography Measurements of Void Fraction in Rectangular Bubble Columns

    International Nuclear Information System (INIS)

    Reddy Vanga, B.N.; Lopez de Bertodano, M.A.; Zaruba, A.; Prasser, H.M.; Krepper, E.

    2004-01-01

    Bubble columns are widely used in the process industry and their scale-up from laboratory scale units to industrial units has been a subject of extensive study. The void fraction distribution in the bubble column is affected by the column size, superficial velocity of the dispersed phase, height of the liquid column, size of the gas bubbles, flow regime, sparger design and geometry of the bubble column. The void fraction distribution in turn affects the interfacial momentum transfer in the bubble column. The void fraction distribution in a rectangular bubble column 10 cm wide and 2 cm deep has been measured using Wire-Mesh Tomography. Experiments were performed in an air-water system with the column operating in the dispersed bubbly flow regime. The experiments also serve the purpose of studying the performance of wire-mesh sensors in batch flows. A 'wall peak' has been observed in the measured void fraction profiles for the higher gas flow rates. This 'wall peak' seems to be unique, as this distribution has not been previously reported in bubble column literature. Low gas flow rates yielded the conventional 'center peak' void profile. The effect of column height and superficial gas velocity on the void distribution has been investigated. Wire-mesh tomography also facilitates the measurement of bubble size distribution in the column. This paper presents the measurement principle and the experimental results for a wide range of superficial gas velocities. (authors)

  1. Studies of void formation in pure metals

    International Nuclear Information System (INIS)

    Lanore, J.M.; Glowinski, L.; Risbet, A.; Regnier, P.; Flament, J.L.; Levy, V.; Adda, Y.

    1975-01-01

    Recent experiments on the effect of gases on the final configuration of vacancy clustering (void or loop), and on the local effects at dislocations are described. The contribution of this data to a general knowledge of void formation will be discussed, and Monte Carlo calculations of swelling induced by irradiation with different particles presented [fr]

  2. Void Fraction Instrument operation and maintenance manual

    International Nuclear Information System (INIS)

    Borgonovi, G.; Stokes, T.I.; Pearce, K.L.; Martin, J.D.; Gimera, M.; Graves, D.B.

    1994-09-01

    This Operations and Maintenance Manual (O ampersand MM) addresses riser installation, equipment and personnel hazards, operating instructions, calibration, maintenance, removal, and other pertinent information necessary to safely operate and store the Void Fraction Instrument. Final decontamination and decommissioning of the Void Fraction Instrument are not covered in this document

  3. Studies of void formation in pure metals

    International Nuclear Information System (INIS)

    Lanore, J.M.; Glowinski, L.; Risbet, A.; Regnier, P.; Flament, J.L.

    1975-01-01

    Recent experiments on the effect of gases on the final configuration of vacancy clustering (void or loop), and on the local effects at dislocations are described. The contribution of this data to our general knowledge of void formation will be discussed, and Monte Carlo calculations of swelling induced by irradiation with different particles presented

  4. Void formation in irradiated binary nickel alloys

    International Nuclear Information System (INIS)

    Shaikh, M.A.; Ahmed, M.; Akhter, J.I.

    1994-01-01

    In this work a computer program has been used to compute void radius, void density and swelling parameter for nickel and binary nickel-carbon alloys irradiated with nickel ions of 100 keV. The aim is to compare the computed results with experimental results already reported

  5. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  6. Pores and Void in Asclepiades’ Physical Theory

    Science.gov (United States)

    Leith, David

    2012-01-01

    This paper examines a fundamental, though relatively understudied, aspect of the physical theory of the physician Asclepiades of Bithynia, namely his doctrine of pores. My principal thesis is that this doctrine is dependent on a conception of void taken directly from Epicurean physics. The paper falls into two parts: the first half addresses the evidence for the presence of void in Asclepiades’ theory, and concludes that his conception of void was basically that of Epicurus; the second half focuses on the precise nature of Asclepiadean pores, and seeks to show that they represent void interstices between the primary particles of matter which are the constituents of the human body, and are thus exactly analogous to the void interstices between atoms within solid objects in Epicurus’ theory. PMID:22984299

  7. On the Efficient Simulation of the Distribution of the Sum of Gamma-Gamma Variates with Application to the Outage Probability Evaluation Over Fading Channels

    KAUST Repository

    Ben Issaid, Chaouki

    2017-01-26

    The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channel is of major practical importance. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations.

  8. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability; The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence; Discrete Distributions; Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation; Continuous Probability; From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation; Continuous Distributions; The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables; Asymptotic Theory; Strong and Weak Laws of Large Numbers; Central Limit Theorem; Stochastic Processes and Applications; Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics; Appendix; Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  9. On the abundance of extreme voids II: a survey of void mass functions

    International Nuclear Information System (INIS)

    Chongchitnan, Siri; Hunt, Matthew

    2017-01-01

    The abundance of cosmic voids can be described by an analogue of halo mass functions for galaxy clusters. In this work, we explore a number of void mass functions: from those based on excursion-set theory to new mass functions obtained by modifying halo mass functions. We show how different void mass functions vary in their predictions for the largest void expected in an observational volume, and compare those predictions to observational data. Our extreme-value formalism is shown to be a new practical tool for testing void theories against simulation and observation.
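
    A toy version of the extreme-value formalism mentioned above: if voids larger than radius R occur as a Poisson process with cumulative number density n(>R), then the largest void in a survey volume V satisfies P(R_max < R) = exp(-n(>R)V). The exponential abundance below is an invented stand-in for a real void mass function.

```python
import numpy as np

# Toy cumulative void abundance; n0 and R0 are invented placeholders.
def n_gt(R, n0=1e-3, R0=10.0):        # units: (Mpc/h)^-3, Mpc/h
    return n0 * np.exp(-R / R0)

V = 1e8                               # survey volume, (Mpc/h)^3 (assumed)
R = np.linspace(10, 300, 1000)
cdf_max = np.exp(-n_gt(R) * V)        # CDF of the largest void radius

# Median expected largest void: where the CDF crosses 0.5.
R_med = R[np.searchsorted(cdf_max, 0.5)]
print(f"median largest void radius: {R_med:.0f} Mpc/h")
```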

  10. Hydrological model calibration for flood prediction in current and future climates using probability distributions of observed peak flows and model based rainfall

    Science.gov (United States)

    Haberlandt, Uwe; Wallner, Markus; Radtke, Imke

    2013-04-01

    Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model based precipitation and observed probability distribution of peak flows are used for model calibration.
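
    Calibrating on a distribution of peak flows rather than on a time series amounts to comparing flood frequency curves. The sketch below scores one model run against observations by matching sorted annual maxima at equal empirical probabilities; all numbers are invented.

```python
import numpy as np

# Compare simulated and observed flood frequency curves: sorting both
# annual-maximum series aligns them at the same empirical
# non-exceedance probabilities, so quantiles can be compared one-to-one
# and their RMS distance used as a calibration objective.
def distribution_objective(sim_annual_max, obs_annual_max):
    sim = np.sort(np.asarray(sim_annual_max))
    obs = np.sort(np.asarray(obs_annual_max))
    return np.sqrt(np.mean((sim - obs) ** 2))

rng = np.random.default_rng(6)
obs = rng.gumbel(100.0, 25.0, 30)    # 30 years of observed peak flows
sim = rng.gumbel(110.0, 30.0, 30)    # peaks from one candidate model run
print(f"calibration objective: {distribution_objective(sim, obs):.1f} m3/s")
```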

  11. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  12. Analysis of void reactivity measurements in full MOX BWR physics experiments

    International Nuclear Information System (INIS)

    Ando, Yoshihira; Yamamoto, Toru; Umano, Takuya

    2008-01-01

    In the full MOX BWR physics experiments, FUBILA, four 9x9 test assemblies simulating BWR full MOX assemblies were located in the center of the core. Changing the in-channel moderator condition of the four assemblies from a 0% void to 40% and 70% void mock-ups, the void reactivity was measured using the Amplified Source Method (ASM) in the subcritical cores, in which three fission chambers were located. The ASM correction factors necessary to express the consistency of the detector efficiency between the measured core configurations were calculated using a collision probability cell calculation and a 3D-transport core calculation with the nuclear data library JENDL-3.3. The measured reactivity worth with the ASM correction factor was compared with the calculated results obtained through diffusion, transport and continuous energy Monte Carlo calculations, respectively. It was confirmed that the measured void reactivity worth was reproduced well by the calculations. (author)

  13. Analysis of sodium-void experiments in ZPPR-3 modified phase 3 core

    International Nuclear Information System (INIS)

    Yoshida, T.

    1978-08-01

    In this work, large-zone sodium-void effects are studied in detail in the presence of many singularities, namely, control rods (CRs) and control rod positions (CRPs). The results of measurements and calculations are compared by C/E (calculation/experiment) values, which are 1.07 when the voided core region is free of singularities. When the void region includes CRPs, which are concurrently voided, the C/E value deteriorates and varies from 0.35 to 1.58. The agreement can be improved considerably by correcting the reactivity worth of the sodium contained in the CRPs with the aid of experimental data (C/E = 1.00 ± 0.15). The heterogeneity correction for the fuel elements was performed by the plate-cell collision probability code KAPPER. (GL) [de]

  14. Void formation by annealing of neutron-irradiated plastically deformed molybdenum

    International Nuclear Information System (INIS)

    Petersen, K.; Nielsen, B.; Thrane, N.

    1976-01-01

    The positron annihilation technique has been used in order to study the influence of plastic deformation on the formation and growth of voids in neutron irradiated molybdenum single crystals treated by isochronal annealing. Samples were prepared in three ways: deformed 12-19% before irradiation, deformed 12-19% after irradiation, and - for reference purposes - non-deformed. In addition a polycrystalline sample was prepared in order to study the influence of the grain boundaries. All samples were irradiated at 60 °C with a flux of 2.5 × 10^18 fast neutrons/cm^2. After irradiation the samples were subjected to isochronal annealing. It was found that deformation before irradiation probably enhanced the formation of voids slightly. Deformation after irradiation strongly reduced the void formation. The presence of grain boundaries in the polycrystalline sample had a reducing influence on the growth of voids. (author)

  15. Void reactivity decomposition for the Sodium-cooled Fast Reactor in equilibrium fuel cycle

    Energy Technology Data Exchange (ETDEWEB)

    Sun Kaichao, E-mail: kaichao.sun@psi.ch [Paul Scherrer Institut (PSI), 5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne (Switzerland); Krepel, Jiri; Mikityuk, Konstantin; Pelloni, Sandro [Paul Scherrer Institut (PSI), 5232 Villigen PSI (Switzerland); Chawla, Rakesh [Paul Scherrer Institut (PSI), 5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2011-07-15

    Highlights: > We analyze the void reactivity effect for three ESFR core fuel cycle states. > The void reactivity effect is decomposed by the neutron balance method. > As a novelty, the normalization to the integral flux in the active core is applied. > The decomposition is compared with the perturbation theory based results. > The mechanism and the differences of the void reactivity effect are explained. - Abstract: The Sodium-cooled Fast Reactor (SFR) is one of the most promising Generation IV systems with many advantages, but has one dominating neutronic drawback - a positive sodium void reactivity. The aim of this study is to develop and apply a methodology, which should help better understand the causes and consequences of the sodium void effect. It focuses not only on the beginning-of-life (BOL) state of the core, but also on the beginning of open and closed equilibrium (BOC and BEC, respectively) fuel cycle conditions. The deeper understanding of the principal phenomena involved may subsequently lead to appropriate optimization studies. Various voiding scenarios, corresponding to different spatial zones, e.g. node or assembly, have been analyzed, and the most conservative case - the voiding of both inner and outer fuel zones - has been selected as the reference scenario. On the basis of the neutron balance method, the corresponding SFR void reactivity has been decomposed reaction-, isotope-, and energy-group-wise. Complementary results, based on generalized perturbation theory and sensitivity analysis, are also presented. The numerical analysis for both neutron balance and perturbation theory methods has been carried out using appropriate modules of the ERANOS code system. A strong correlation between the flux worth, i.e. the product of flux and adjoint flux, and the void reactivity importance distributions has been found for the node- and assembly-wise voiding scenarios. The neutron balance based decomposition has shown that the void effect is caused mainly by the

  17. Accurate reactivity void coefficient calculation for the fast spectrum reactor FBR-IME

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Fabiano P.C.; Vellozo, Sergio de O.; Velozo, Marta J., E-mail: fabianopetruceli@outlook.com, E-mail: vellozo@cbpf.br, E-mail: martajann@gmail.com [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Militar

    2017-07-01

    This paper presents an accurate calculation of the void reactivity coefficient for the FBR-IME, a fast spectrum reactor under development at the Military Institute of Engineering (IME). The main design peculiarity lies in the use of mixed oxide [MOX - PuO₂ + U(natural uranium)O₂] as the core fuel. For this task, the SCALE system was used to calculate the reactivity for several void distributions generated by bubbles in the sodium beyond its boiling point. The results show that although the void reactivity coefficient is positive and location dependent, it is offset by other feedback effects, resulting in a negative overall coefficient. (author)
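
    The sign and magnitude of such a coefficient follow directly from the effective multiplication factors of the nominal and voided configurations. A minimal sketch of that arithmetic, with illustrative k-eff values that are assumptions rather than results from the paper:

```python
# Void reactivity worth from the k-eff of nominal and voided states.
# The k values below are invented for illustration only.

def reactivity(k: float) -> float:
    """Static reactivity rho = (k - 1) / k."""
    return (k - 1.0) / k

k_nominal = 1.00250  # assumed k-eff with sodium present
k_voided  = 1.00410  # assumed k-eff with sodium voided

# Positive result -> positive void reactivity (1 pcm = 1e-5 delta-rho).
delta_rho_pcm = (reactivity(k_voided) - reactivity(k_nominal)) * 1e5
print(f"void reactivity: {delta_rho_pcm:+.1f} pcm")
```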

  18. Studies of void growth in a thin ductile layer between ceramics

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    1997-01-01

    The growth of voids in a thin ductile layer between ceramics is analysed numerically, using an axisymmetric cell model to represent an array of uniformly distributed spherical voids at the central plane of the layer. The purpose is to determine the full traction-separation law relevant to crack growth by a ductile mechanism along the thin layer. Plastic flow in the layer is highly constrained by the ceramics, so that a high level of triaxial tension develops, leading in some cases to cavitation instabilities. The computations are continued to a state near the occurrence of void coalescence.

  19. The Santiago-Harvard-Edinburgh-Durham void comparison - I. SHEDding light on chameleon gravity tests

    Science.gov (United States)

    Cautun, Marius; Paillas, Enrique; Cai, Yan-Chuan; Bose, Sownak; Armijo, Joaquin; Li, Baojiu; Padilla, Nelson

    2018-05-01

    We present a systematic comparison of several existing and new void-finding algorithms, focusing on their potential power to test a particular class of modified gravity models - chameleon f(R) gravity. These models deviate from standard general relativity (GR) more strongly in low-density regions, and thus voids are a promising venue to test them. We use halo occupation distribution (HOD) prescriptions to populate haloes with galaxies, and tune the HOD parameters such that the galaxy two-point correlation functions are the same in both f(R) and GR models. We identify both three-dimensional (3D) voids and two-dimensional (2D) underdensities in the plane of the sky, and find the same void abundance and void galaxy number density profiles across all models, which suggests that these statistics do not contain much information beyond galaxy clustering. However, the underlying void dark matter density profiles are significantly different, with f(R) voids being more underdense than GR ones, which leads to f(R) voids having a larger tangential shear signal than their GR analogues. We investigate the potential of each void finder to test f(R) models with near-future lensing surveys such as EUCLID and LSST. The 2D voids have the largest power to probe f(R) gravity: an LSST analysis of tunnel lensing (tunnels being a new type of 2D underdensity introduced here) distinguishes f(R) models with parameters |fR0| = 10⁻⁵ and 10⁻⁶ from GR at 80σ and 11σ (statistical error), respectively.

  20. Probability tales

    CERN Document Server

    Grinstead, Charles M; Snell, J Laurie

    2011-01-01

    This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.

  1. Probability theory

    CERN Document Server

    Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V

    1997-01-01

    This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.

  2. Impact of distributed generation in the probability density of voltage sags; Impacto da geracao distribuida na densidade de probabilidade de afundamentos de tensao

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alessandro Candido Lopes [CELG - Companhia Energetica de Goias, Goiania, GO (Brazil). Generation and Transmission. System' s Operation Center], E-mail: alessandro.clr@celg.com.br; Batista, Adalberto Jose [Universidade Federal de Goias (UFG), Goiania, GO (Brazil)], E-mail: batista@eee.ufg.br; Leborgne, Roberto Chouhy [Universidade Federal do Rio Grande do Sul (UFRS), Porto Alegre, RS (Brazil)], E-mail: rcl@ece.ufrgs.br; Emiliano, Pedro Henrique Mota, E-mail: ph@phph.com.br

    2009-07-01

    This article presents the impact of distributed generation on studies of voltage sags caused by faults in the electrical system. Short circuits to ground were simulated on 62 lines of 230, 138, 69 and 13.8 kV that are part of the electrical system of the city of Goiania, Goias state. For each fault position, the voltage was monitored at the 380 V bus of an industrial consumer sensitive to such sags. Different levels of distributed generation (DG) were then inserted near the consumer, and the short-circuit simulations, with monitoring of the 380 V bus, were performed again. A Monte Carlo stochastic simulation (SMC) study was performed to obtain, for each level of DG, the probability density curves of voltage sags by voltage class. From these curves the average number of sags in each class to which the consumer bus may be subjected annually was obtained. The simulations were performed using the Simultaneous Fault Analysis Program - ANAFAS. In order to overcome the intrinsic limitations of this program's simulation methods and to allow data entry via windows, a computational tool was developed in the Java language. Data processing was done using the MATLAB software.

  3. Analysis of meteorological droughts and dry spells in semiarid regions: a comparative analysis of probability distribution functions in the Segura Basin (SE Spain)

    Science.gov (United States)

    Pérez-Sánchez, Julio; Senent-Aparicio, Javier

    2017-08-01

    Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment, and their consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin located in eastern Spain, marked by the seasonal nature of these latitudes. A daily precipitation data set was used for 29 weather stations over a period of 20 years (1993-2013). Furthermore, four sets of dry spell length (complete series, monthly maxima, seasonal maxima, and annual maxima) are used and simulated for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offers a good fit for all the weather stations, with the Wakeby distribution giving the best result, with a mean p value of 0.9424 for the Kolmogorov-Smirnov test (0.2 significance level). Maps of the probability of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with periods of daily rainfall below 0.1 mm lengthening in the eastern third of the basin, in the proximity of the Mediterranean slope.
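
    The fitting-and-ranking step described above can be reproduced in miniature with scipy.stats: fit each candidate distribution to the annual-maximum dry spell lengths and score it with the Kolmogorov-Smirnov test. The data are synthetic, and since scipy ships no Wakeby distribution, Weibull, GEV and log-logistic stand in for the full candidate list:

```python
# Fit candidate distributions to annual-maximum dry spell lengths and rank
# them by the Kolmogorov-Smirnov statistic (synthetic sample of 20 years).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_max_spells = stats.genextreme.rvs(c=-0.1, loc=30, scale=8,
                                         size=20, random_state=rng)

candidates = {
    "weibull_min": stats.weibull_min,
    "genextreme": stats.genextreme,
    "log-logistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(annual_max_spells)          # maximum likelihood fit
    ks, p = stats.kstest(annual_max_spells, dist.cdf, args=params)
    print(f"{name:13s}  KS={ks:.3f}  p={p:.3f}")
```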

  4. THE MAXIMUM AMOUNTS OF RAINFALL FALLEN IN SHORT PERIODS OF TIME IN THE HILLY AREA OF CLUJ COUNTY - GENESIS, DISTRIBUTION AND PROBABILITY OF OCCURRENCE

    Directory of Open Access Journals (Sweden)

    BLAGA IRINA

    2014-03-01

    The maximum amounts of rainfall are usually characterized by high intensity, and their effects on the substrate are revealed, at slope level, by the deepening of existing forms of torrential erosion, by the formation of new ones, and by landslide processes. For the 1971-2000 period, the highest rainfall amounts fallen in 24, 48 and 72 hours were extracted and analyzed for the weather stations in the hilly area of Cluj County (Cluj-Napoca, Dej, Huedin and Turda), on the basis of which the variation and the spatial and temporal distribution of precipitation were analyzed. The annual probability of exceedance of maximum rainfall amounts fallen in short time intervals (24, 48 and 72 hours), based on thresholds and class values, was determined using climatological practices and the facilities of the Hyfran program.
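
    An annual probability of exceedance for such series can be illustrated with the standard Weibull plotting position P = m/(n+1) applied to a sample of annual maxima; the rainfall values below are hypothetical, not the Cluj data:

```python
# Empirical annual exceedance probabilities and return periods for 24-h
# rainfall maxima, via the Weibull plotting position P = m / (n + 1).
import numpy as np

annual_max_24h = np.array([38.5, 42.0, 55.3, 61.7, 47.2,
                           70.1, 44.8, 52.6, 66.4, 49.9])  # mm, invented

x = np.sort(annual_max_24h)[::-1]      # sort descending
m = np.arange(1, x.size + 1)           # rank m = 1 for the largest value
p_exceed = m / (x.size + 1.0)          # annual probability of exceedance
T = 1.0 / p_exceed                     # return period in years

for xi, p, t in zip(x, p_exceed, T):
    print(f"{xi:5.1f} mm   P = {p:.2f}   T = {t:4.1f} yr")
```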

  5. Evaluation of the Air Void Analyzer

    Science.gov (United States)

    2013-07-01

  6. Experimental investigation of void coalescence in a dual phase steel using X-ray tomography

    International Nuclear Information System (INIS)

    Landron, C.; Bouaziz, O.; Maire, E.; Adrien, J.

    2013-01-01

    In situ tensile tests were carried out during X-ray microtomography imaging of a smooth and a notched specimen of dual phase steel. Void coalescence was first qualitatively observed, and quantitative data concerning this damage step were then acquired. The void coalescence criteria of Brown and Embury and of Thomason were then tested against the experimental data at both the macroscopic and local levels. Although macroscopic implementation of the criteria gave acceptable results, the local approach was probably closest to the real nature of void coalescence, because it takes into account the local coalescence events observed experimentally before final fracture. The correlation between actual coalescing pairs of cavities and local implementation of the two criteria showed that the Thomason criterion is probably the best adapted to predict local coalescence events in the material studied.

  7. Using voids to unscreen modified gravity

    Science.gov (United States)

    Falck, Bridget; Koyama, Kazuya; Zhao, Gong-Bo; Cautun, Marius

    2018-04-01

    The Vainshtein mechanism, present in many models of gravity, is very effective at screening dark matter haloes such that the fifth force is negligible and general relativity is recovered within their Vainshtein radii. Vainshtein screening is independent of halo mass and environment, in contrast to e.g. chameleon screening, making it difficult to test. However, our previous studies have found that the dark matter particles in filaments, walls, and voids are not screened by the Vainshtein mechanism. We therefore investigate whether cosmic voids, identified as local density minima using a watershed technique, can be used to test models of gravity that exhibit Vainshtein screening. We measure density, velocity, and screening profiles of stacked voids in cosmological N-body simulations using both dark matter particles and dark matter haloes as tracers of the density field. We find that the voids are completely unscreened, and the tangential velocity and velocity dispersion profiles of stacked voids show a clear deviation from Λ cold dark matter at all radii. Voids have the potential to provide a powerful test of gravity on cosmological scales.

  8. Interfacial area, velocity and void fraction in two-phase slug flow

    International Nuclear Information System (INIS)

    Kojasoy, G.; Riznic, J.R.

    1997-01-01

    The internal flow structure of air-water plug/slug flow in a 50.3 mm diameter transparent pipeline has been experimentally investigated using a four-sensor resistivity probe. Liquid and gas volumetric superficial velocities ranged from 0.55 to 2.20 m/s and 0.27 to 2.20 m/s, respectively, and area-averaged void fractions ranged from about 10 to 70%. The local distributions of void fraction, interfacial area concentration and interface velocity were measured. Contributions from small spherical bubbles and large elongated slug bubbles toward the total void fraction and interfacial area concentration were differentiated. It was observed that the small bubble contribution to the overall void fraction was small, indicating that the large slug bubbles dominate the total void fraction. However, the small bubble interfacial area contribution was significant in the lower and upper portions of the pipe cross section.

  9. Coolant Void Reactivity Analysis of CANDU Lattice

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Su; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    Models of CANDU-6 and ACR-700 fuel lattices were constructed for a single bundle and a 2 by 2 checkerboard to understand the physics related to the coolant void reactivity (CVR). The familiar four-factor formula was used to predict the specific contributions to the reactivity change in order to achieve an understanding of the physics issues related to the CVR. Because coolant voiding changes the neutron behavior, the spectral changes and neutron currents were also analyzed. The models of the CANDU-6 and ACR-700 fuel lattices were constructed with the Monte Carlo code MCNP6, using the ENDF/B-VII.0 continuous energy cross section library, based on the specifications from AECL. The CANDU fuel lattice was explored through sensitivity studies of design parameters such as fuel enrichment, fuel pitch, and type of burnable absorber, in search of better behavior in terms of CVR. Unlike the single channel coolant voiding case, the ACR-700 bundle has a positive reactivity change upon 2x2 checkerboard coolant voiding. Because of the new path for neutron moderation, neutrons from the voided channel move to the unvoided channel, where they lose energy, and come back to the voided channel as thermal neutrons. This phenomenon causes the positive CVR when checkerboard voiding occurs. The sensitivity study revealed the effects of the moderator to fuel volume ratio, fuel enrichment, and burnable absorber on the CVR. A fuel bundle with a low moderator to fuel volume ratio and high fuel enrichment can help achieve negative CVR.
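
    Because the four-factor formula is a product, k_inf = ε·p·f·η, the contribution of each factor to a voiding-induced reactivity change separates cleanly on a logarithmic scale. A minimal sketch of that bookkeeping, with factor values that are assumptions, not the MCNP6 results:

```python
# Attribute a coolant-void k-infinity change to the four factors via
# ln(k_v / k_c) = sum of ln-ratios of the individual factors.
import math

cooled = {"eps": 1.03, "p": 0.90, "f": 0.94, "eta": 1.20}  # assumed values
voided = {"eps": 1.06, "p": 0.89, "f": 0.95, "eta": 1.21}  # assumed values

k_cooled = math.prod(cooled.values())
k_voided = math.prod(voided.values())
print(f"k_inf: {k_cooled:.5f} -> {k_voided:.5f}")

for name in cooled:
    part = math.log(voided[name] / cooled[name]) * 1e5
    print(f"  {name:3s} contributes {part:+8.1f} pcm")
print(f"total {math.log(k_voided / k_cooled) * 1e5:+8.1f} pcm")
```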

  10. Monte Carlo dose calculations and radiobiological modelling: analysis of the effect of the statistical noise of the dose distribution on the probability of tumour control

    International Nuclear Information System (INIS)

    Buffa, Francesca M.

    2000-01-01

    The aim of this work is to investigate the influence of the statistical fluctuations of Monte Carlo (MC) dose distributions on the dose volume histograms (DVHs) and radiobiological models, in particular the Poisson model for tumour control probability (tcp). The MC matrix is characterized by a mean dose in each scoring voxel, d, and a statistical error on the mean dose, σ_d; whilst the quantities d and σ_d depend on many statistical and physical parameters, here we consider only their dependence on the phantom voxel size and the number of histories from the radiation source. Dose distributions from high-energy photon beams have been analysed. It has been found that the DVH broadens when increasing the statistical noise of the dose distribution, and the tcp calculation systematically underestimates the real tumour control value, defined here as the value of tumour control when the statistical error of the dose distribution tends to zero. When increasing the number of energy deposition events, either by increasing the voxel dimensions or increasing the number of histories from the source, the DVH broadening decreases and tcp converges to the 'correct' value. It is shown that the underestimation of the tcp due to the noise in the dose distribution depends on the degree of heterogeneity of the radiobiological parameters over the population; in particular this error decreases with increasing biological heterogeneity, whereas it becomes significant in the hypothesis of a radiosensitivity assay for single patients, or for subgroups of patients. It has been found, for example, that when the voxel dimension is changed from a cube with sides of 0.5 cm to a cube with sides of 0.25 cm (with a fixed number of histories of 10^8 from the source), the systematic error in the tcp calculation is about 75% in the homogeneous hypothesis, and it decreases to a minimum value of about 15% in a case of high radiobiological heterogeneity. The possibility of using the error on the
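
    The direction of the bias reported here is easy to reproduce with the Poisson tcp model, tcp = exp(−Σᵢ Nᵢ exp(−α·dᵢ)): adding zero-mean noise to the voxel doses raises the expected number of surviving clonogens and thus lowers the computed tcp. A self-contained sketch with assumed radiosensitivity and clonogen numbers:

```python
# Poisson tcp from a voxelized dose distribution, and the systematic
# underestimation introduced by statistical noise on the voxel doses.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3                     # 1/Gy, assumed uniform radiosensitivity
clonogens_per_voxel = 1e4       # assumed clonogen number per voxel
d_true = np.full(1000, 60.0)    # Gy, noiseless uniform dose

def tcp(dose):
    surviving = clonogens_per_voxel * np.exp(-alpha * dose)
    return np.exp(-surviving.sum())

for sigma in (0.0, 0.5, 1.0, 2.0):              # Gy of MC noise per voxel
    d_noisy = d_true + rng.normal(0.0, sigma, d_true.size)
    print(f"sigma = {sigma:.1f} Gy   tcp = {tcp(d_noisy):.3f}")
```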

  11. Multipole analysis of redshift-space distortions around cosmic voids

    Energy Technology Data Exchange (ETDEWEB)

    Hamaus, Nico; Weller, Jochen [Universitäts-Sternwarte München, Fakultät für Physik, Ludwig-Maximilians Universität, Scheinerstr. 1, D-81679 München (Germany); Cousinou, Marie-Claude; Pisani, Alice; Aubert, Marie; Escoffier, Stéphanie, E-mail: hamaus@usm.lmu.de, E-mail: cousinou@cppm.in2p3.fr, E-mail: pisani@cppm.in2p3.fr, E-mail: maubert@cppm.in2p3.fr, E-mail: escoffier@cppm.in2p3.fr, E-mail: jochen.weller@usm.lmu.de [Aix Marseille Univ., CNRS/IN2P3, CPPM, 163 avenue de Luminy, F-13288, Marseille (France)

    2017-07-01

    We perform a comprehensive redshift-space distortion analysis based on cosmic voids in the large-scale distribution of galaxies observed with the Sloan Digital Sky Survey. To this end, we measure multipoles of the void-galaxy cross-correlation function and compare them with standard model predictions in cosmology. Merely considering linear-order theory allows us to accurately describe the data on the entire available range of scales and to probe void-centric distances down to about 2 h⁻¹ Mpc. Common systematics, such as the Fingers-of-God effect, scale-dependent galaxy bias, and nonlinear clustering do not seem to play a significant role in our analysis. We constrain the growth rate of structure via the redshift-space distortion parameter β at two median redshifts, β(z̄ = 0.32) = 0.599 (+0.134/−0.124) and β(z̄ = 0.54) = 0.457 (+0.056/−0.054), with a precision that is competitive with state-of-the-art galaxy-clustering results. While the high-redshift constraint perfectly agrees with model expectations, we observe a mild 2σ deviation at z̄ = 0.32, which increases to 3σ when the data is restricted to the lowest available redshift range of 0.15 < z < 0.33.
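
    The multipoles referred to here are the Legendre moments ξ_ℓ(s) = (2ℓ+1)/2 ∫ ξ(s,μ) P_ℓ(μ) dμ of the anisotropic correlation function. A toy estimator on a gridded ξ(s,μ), checked against an input monopole and quadrupole (synthetic shapes, not the SDSS measurement):

```python
# Legendre multipoles of xi(s, mu) by direct integration over mu.
import numpy as np
from numpy.polynomial import legendre

def multipole(xi_s_mu, mu, ell):
    """xi_s_mu has shape (n_s, n_mu); mu is a uniform grid on [-1, 1]."""
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0
    P_ell = legendre.legval(mu, coeffs)            # P_ell(mu)
    dmu = mu[1] - mu[0]
    w = np.full(mu.size, dmu)                      # trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return (2 * ell + 1) / 2.0 * (xi_s_mu * P_ell) @ w

mu = np.linspace(-1.0, 1.0, 201)
s = np.linspace(5.0, 100.0, 20)
xi0 = (s / 50.0) ** -1.8                           # toy monopole
xi2 = -0.4 * xi0                                   # toy quadrupole
P2 = 0.5 * (3.0 * mu**2 - 1.0)
xi_s_mu = xi0[:, None] + xi2[:, None] * P2[None, :]

print(np.allclose(multipole(xi_s_mu, mu, 0), xi0, rtol=1e-2))  # True
print(np.allclose(multipole(xi_s_mu, mu, 2), xi2, rtol=1e-2))  # True
```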

  14. 38 CFR 3.207 - Void or annulled marriage.

    Science.gov (United States)

    2010-07-01

    Proof that a marriage was void or has been annulled should consist of: (a) ... marriage void, together with such other evidence as may be required for a determination. (b) Annulled. A...

  15. Electromagnetic wave survey on voids behind waterway channel lining; Suiro kaikyo sokuheki haimen kudo no denjiha tansa

    Energy Technology Data Exchange (ETDEWEB)

    Koitabashi, H [Tokyo Electric Power Co. Inc., Tokyo (Japan); Inagaki, M

    1996-10-01

    Voids behind lining were surveyed by applying the electromagnetic wave reflection method to the waterway channel of a hydraulic power plant. Since the waterway channel lining ranges from oblique to vertical, voids are unlikely to form; nevertheless, voids or cavities behind the lining may arise, such as gaps between the ground and the lining due to change with time or consolidation settlement, and voids due to soil loss. Electromagnetic radar reflections suggesting a continuous void were observed behind the terrace concrete lining. Core boring revealed a thin continuous void 2-5 cm thick and more than 100 m long, possibly formed by consolidation settlement over a long time. In some sites, a continuous void signal was observed at the upper part of the side walls, although this signal was smaller than that at the upper part of the terrace. This continuous cavity, 10-20 cm thick and 20 m long, was distinct from the other voids and was unevenly distributed at the upper part of an open channel along the flow surface with a large flow rate. In addition, it is necessary to clarify its relation to cracks. 2 refs., 4 figs.

  16. Measurement of the thermal Sunyaev-Zel'dovich effect around cosmic voids

    Science.gov (United States)

    Alonso, David; Hill, J. Colin; Hložek, Renée; Spergel, David N.

    2018-03-01

    We stack maps of the thermal Sunyaev-Zel'dovich effect produced by the Planck Collaboration around the centers of cosmic voids defined by the distribution of galaxies in the CMASS sample of the Baryon Oscillation Spectroscopic Survey, scaled by the void effective radii. We report a first detection of the associated cross-correlation at the 3.4σ level: voids are under-pressured relative to the cosmic mean. We compare the measured Compton-y profile around voids with a model based solely on the spatial modulation of halo abundance with environmental density. The amplitude of the detected signal is marginally lower than predicted, by an overall factor α_v = 0.67 ± 0.2. We discuss the possible interpretations of this measurement in terms of modeling uncertainties, excess pressure in low-mass halos, or nonlocal heating mechanisms.

  17. Finite Element Analysis of Transverse Compressive Loads on Superconducting Nb3Sn Wires Containing Voids

    Science.gov (United States)

    D'Hauthuille, Luc; Zhai, Yuhu; Princeton Plasma Physics Lab Collaboration; University of Geneva Collaboration

    2015-11-01

    High-field superconductors play an important role in many large-scale physics experiments, particularly particle colliders and fusion devices such as the LHC and ITER. The two most common superconductors used are NbTi and Nb3Sn. Nb3Sn wires are favored because of their significantly higher Jc, allowing them to produce much higher magnetic fields. The main disadvantage is that the superconducting performance of Nb3Sn is highly strain-sensitive, and the material is very brittle. The strain-sensitivity is strongly influenced by two factors: plasticity and cracked filaments. Cracks are induced by large stress concentrations due to the presence of voids. We will attempt to understand the correlation between Nb3Sn's irreversible strain limit and the stress concentrations around the voids. We will develop accurate 2D and 3D finite element models containing detailed filaments and possible distributions of voids in a bronze-route Nb3Sn wire. We will apply a compressive transverse load in the various cases to simulate the stress response of a Nb3Sn wire to the Lorentz force. Doing this will further improve our understanding of the effect voids have on the wire's mechanical properties, and thus of the connection between the shape and distribution of voids and performance degradation.

  18. Software quality assurance plan for void fraction instrument

    International Nuclear Information System (INIS)

    Gimera, M.

    1994-01-01

    Waste Tank SY-101 has been the focus of extensive characterization work over the past few years. The waste continually generates gases, most notably hydrogen, which are periodically released from the waste. Gas can be trapped in tank waste in three forms: as void gas (bubbles), dissolved gas, or absorbed gas. Void fraction is the volume percentage of a given sample that is composed of void gas. The void fraction instrument (VFI) acquires the data necessary to calculate void fraction. This document covers the product Void Fraction Data Acquisition Software. The software being developed will have the ability to control the void fraction instrument hardware and to acquire the data necessary to calculate the void fraction in samples. This document provides the software quality assurance plan, the verification and validation plan, and the configuration management plan for developing the software for the instrumentation that will be used to obtain void fraction data from Tank SY-101.

  19. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For teaching probability theory to postgraduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method, either prior to or as an alternative to a characteristic function presentation. Additionally, considerable emphasis is placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes censored data martingales. The text includes measure theoretic...

  20. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  1. Post-void residual urine under 150 ml does not exclude voiding dysfunction in women

    DEFF Research Database (Denmark)

    Khayyami, Yasmine; Klarskov, Niels; Lose, Gunnar

    2016-01-01

    INTRODUCTION AND HYPOTHESIS: It has been claimed that post-void residual urine (PVR) below 150 ml rules out voiding dysfunction in women with stress urinary incontinence (SUI) and provides license to perform sling surgery. The cut-off of 150 ml seems arbitrary, not evidence-based, and so we sought...

  2. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the most relevant distribution applied to statistical analysis.
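
    As a concrete instance of the "probability distribution" concept reviewed here, the normal distribution's density, cumulative and tail probabilities are one-liners with scipy.stats (the mean and standard deviation below are arbitrary):

```python
# Probabilities of a normally distributed variable X ~ N(100, 15^2).
from scipy.stats import norm

mu, sigma = 100.0, 15.0

print(norm.pdf(120.0, mu, sigma))          # density at x = 120
print(norm.cdf(120.0, mu, sigma))          # P(X <= 120), about 0.909
print(norm.sf(120.0, mu, sigma))           # P(X > 120), about 0.091
print(norm.interval(0.95, mu, sigma))      # central 95% interval
```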

  3. The Metallicity of Void Dwarf Galaxies

    Science.gov (United States)

    Kreckel, K.; Croxall, K.; Groves, B.; van de Weygaert, R.; Pogge, R. W.

    2015-01-01

    The current ΛCDM cosmological model predicts that galaxy evolution proceeds more slowly in lower density environments, suggesting that voids are a prime location to search for relatively pristine galaxies that are representative of the building blocks of early massive galaxies. To test the assumption that void galaxies are more pristine, we compare the evolutionary properties of a sample of dwarf galaxies selected specifically to lie in voids with a sample of similar isolated dwarf galaxies in average density environments. We measure gas-phase oxygen abundances and gas fractions for eight dwarf galaxies (Mr > -16.2), carefully selected to reside within the lowest density environments of seven voids, and apply the same calibrations to existing samples of isolated dwarf galaxies. We find no significant difference between these void dwarf galaxies and the isolated dwarf galaxies, suggesting that dwarf galaxy chemical evolution proceeds independent of the large-scale environment. While this sample is too small to draw strong conclusions, it suggests that external gas accretion is playing a limited role in the chemical evolution of these systems, and that this evolution is instead dominated mainly by the internal secular processes that are linking the simultaneous growth and enrichment of these galaxies.

  4. Failure by void coalescence in metallic materials containing primary and secondary voids subject to intense shearing

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau; Tvergaard, Viggo

    2011-01-01

    Failure under intense shearing at close to zero stress triaxiality is widely observed for ductile metallic materials, and is identified in experiments as smeared-out dimples on the fracture surface. Numerical cell-model studies of equal sized voids have revealed that the mechanism governing this shear failure mode boils down to the interaction between primary voids, which rotate and elongate until coalescence occurs under severe plastic deformation of the internal ligaments. The objective of this paper is to analyze this failure mechanism of primary voids and to study the effect of smaller secondary voids that co-exist with, or nucleate in, the ligaments between the larger voids that coalesce during intense shearing. A numerical cell-model study is carried out to gain a parametric understanding of the overall material response for different initial conditions of the two void populations...

  5. Comparison of 4.2 MeV Fe+ and 46.5 MeV Ni6+ ion irradiation for the study of void swelling

    International Nuclear Information System (INIS)

    Blamires, N.G.; Worth, J.H.

    1975-11-01

    Void formation in pure nickel and in 316 steel containing 10 ppm He has been studied using 4.2 MeV Fe+ ions from the Harwell Van de Graaff accelerator. The dose dependence of swelling in nickel at 525 °C and the dose and temperature dependence of swelling in 316 steel are reported. The results are compared with those of other workers, especially those [13,14] using 46.5 MeV Ni6+ ions. In general, there is good agreement, except for a marked decrease in the swelling of 316 steel at 650 °C and 700 °C compared with the Ni6+ bombardment. The reason for this is thought to be the restricted width of the damaged region in the low energy case, which at high temperatures is comparable with the inter-void spacing. Anomalous void distributions adjacent to grain boundaries are reported and are probably caused by grain boundary movement. Denuded zones at grain boundaries in 316 steel vary in width from approximately 1300 Å at 450 °C to approximately 8800 Å at 700 °C. The region adjacent to the surface of the nickel specimens exhibits abnormally high swelling. Possible explanations are suggested.

  6. Introduction to probability with R

    CERN Document Server

    Baclawski, Kenneth

    2008-01-01

    FOREWORD PREFACE Sets, Events, and Probability The Algebra of Sets The Bernoulli Sample Space The Algebra of Multisets The Concept of Probability Properties of Probability Measures Independent Events The Bernoulli Process The R Language Finite Processes The Basic Models Counting Rules Computing Factorials The Second Rule of Counting Computing Probabilities Discrete Random Variables The Bernoulli Process: Tossing a Coin The Bernoulli Process: Random Walk Independence and Joint Distributions Expectations The Inclusion-Exclusion Principle General Random Variable

  7. Void growth suppression by dislocation impurity atmospheres

    International Nuclear Information System (INIS)

    Weertman, J.; Green, W.V.

    1976-01-01

    A detailed calculation is given of the effect of an impurity atmosphere on void growth under irradiation damage conditions. Norris has proposed that such an atmosphere can suppress void growth. The hydrostatic stress field of a dislocation that is surrounded by an impurity atmosphere was found and used to calculate the change in the effective radius of a dislocation line as a sink for interstitials and vacancies. The calculation of the impurity concentration in a Cottrell cloud takes into account the change in hydrostatic pressure produced by the presence of the cloud itself. It is found that void growth is eliminated whenever dislocations are surrounded by a condensed atmosphere of either oversized substitutional impurity atoms or interstitial impurity atoms. A condensed atmosphere will form whenever the average impurity concentration is larger than a critical concentration

  8. From Voids to Yukawaballs And Back

    International Nuclear Information System (INIS)

    Land, V.; Goedheer, W. J.

    2008-01-01

    When dust particles are introduced in a radio-frequency discharge under micro-gravity conditions, usually a dust free void is formed due to the ion drag force pushing the particles away from the center. Experiments have shown that it is possible to close the void by reducing the power supplied to the discharge. This reduces the ion density and with that the ratio between the ion drag force and the opposing electric force. We have studied the behavior of a discharge with a large amount of dust particles (radius 3.4 micron) with our hydrodynamic model, and simulated the closure of the void for conditions similar to the experiment. We also approached the formation of a Yukawa ball from the other side, starting with a discharge at low power and injecting batches of dust, while increasing the power to prevent extinction of the discharge. Eventually the same situation could be reached.

  9. Development of subchannel void measurement sensor and multidimensional two-phase flow dynamics in rod bundle

    International Nuclear Information System (INIS)

    Arai, T.; Furuya, M.; Kanai, T.; Shirakawa, K.

    2011-01-01

    An accurate subchannel database is crucial for modeling multidimensional two-phase flow in a rod bundle and for validating subchannel analysis codes. Based on the available literature, point-measurement sensors for acquiring void fractions and bubble velocity distributions cannot capture the interactions of subchannel flow dynamics, such as cross flow and flow distribution. In order to acquire multidimensional two-phase flow data in a 10×10 rod bundle with a rod o.d. of 10 mm and a length of 3110 mm, a new sensor consisting of 11-wire by 11-wire and 10-rod by 10-rod electrodes was developed. The electric potential in the proximity region between two wires yields the void fraction in the central subchannel region, as in a so-called wire mesh sensor. A unique aspect of the devised sensor is that the void fraction near the rod surface can be estimated from the electric potential in the proximity region between one wire and one rod. An additional 400 points of void fraction and phasic velocity in the 10×10 bundle can therefore be acquired. The devised sensor exhibits the quasi three-dimensional flow structures, i.e. void fraction, phasic velocity and bubble chord length distributions. These quasi three-dimensional structures exhibit the complexity of two-phase flow dynamics, such as coalescence and breakup of bubbles in transient phasic velocity distributions. (author)

  10. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    Science.gov (United States)

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

    Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models is discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
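
    The core of the hyper-ensemble idea — a local linear combination of model forecasts, trained on past observations so that bias removal comes for free — fits in a few lines of least squares. A synthetic sketch, not the operational NURC implementation:

```python
# Learn weights for combining three model forecasts against past drifter
# observations, then apply them to the models' next forecasts.
import numpy as np

rng = np.random.default_rng(1)
n_past = 48                                     # past hourly samples
truth = np.sin(np.linspace(0.0, 6.0, n_past))   # "observed" drift velocity

# Three imperfect models: scaled, biased, noisy versions of the truth.
models = np.stack([truth * b + c + rng.normal(0.0, 0.1, n_past)
                   for b, c in [(0.8, 0.05), (1.2, -0.1), (0.5, 0.2)]],
                  axis=1)

A = np.hstack([models, np.ones((n_past, 1))])   # constant column = bias term
w, *_ = np.linalg.lstsq(A, truth, rcond=None)   # minimize past squared error

next_forecasts = np.array([0.9, 1.1, 0.4])      # the models' next outputs
combined = next_forecasts @ w[:-1] + w[-1]
print("weights:", np.round(w, 3), " combined:", round(float(combined), 3))
```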

  11. Summed Probability Distribution of 14C Dates Suggests Regional Divergences in the Population Dynamics of the Jomon Period in Eastern Japan.

    Directory of Open Access Journals (Sweden)

    Enrico R Crema

    Recent advances in the use of the summed probability distribution (SPD) of calibrated 14C dates have opened new possibilities for studying prehistoric demography. The degree of correlation between climate change and population dynamics can now be accurately quantified, and divergences in the demographic history of distinct geographic areas can be statistically assessed. Here we contribute to this research agenda by reconstructing the prehistoric population change of Jomon hunter-gatherers between 7,000 and 3,000 cal BP. We collected 1,433 14C dates from three different regions in Eastern Japan (Kanto, Aomori and Hokkaido) and established that the observed fluctuations in the SPDs were statistically significant. We also introduced a new non-parametric permutation test for comparing multiple sets of SPDs that highlights points of divergence in the population history of different geographic regions. Our analyses indicate a general rise-and-fall pattern shared by the three regions but also some key regional differences during the 6th millennium cal BP. The results confirm some of the patterns suggested by previous archaeological studies based on house and site counts, but offer statistical significance and an absolute chronological framework that will enable future studies aiming to establish potential correlations with climatic changes.
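
    Mechanically, an SPD is just the sum of the calibrated probability densities of all dates on a common calendar grid. In the sketch below each calibrated date is faked as a Gaussian; real analyses push the lab measurement through a calibration curve such as IntCal:

```python
# Summed probability distribution (SPD) of many dates on a calendar grid.
import numpy as np

rng = np.random.default_rng(7)
grid = np.arange(3000, 7001)                   # cal BP, 1-year resolution
medians = rng.uniform(3500, 6500, size=200)    # hypothetical date medians
sigmas = rng.uniform(30, 80, size=200)         # hypothetical 1-sigma errors

spd = np.zeros(grid.size)
for m, s in zip(medians, sigmas):
    pdf = np.exp(-0.5 * ((grid - m) / s) ** 2)
    spd += pdf / pdf.sum()                     # each date contributes mass 1

print("total mass:", round(spd.sum()))         # = number of dates (200)
print("peak at", grid[spd.argmax()], "cal BP")
```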

  12. The neolithic demographic transition in Europe: correlation with juvenility index supports interpretation of the summed calibrated radiocarbon date probability distribution (SCDPD) as a valid demographic proxy.

    Directory of Open Access Journals (Sweden)

    Sean S Downey

    Analysis of the proportion of immature skeletons recovered from European prehistoric cemeteries has shown that the transition to agriculture after 9000 BP triggered a long-term increase in human fertility. Here we compare the largest analysis of European cemeteries to date with an independent line of evidence, the summed calibrated date probability distribution (SCDPD) of radiocarbon dates from archaeological sites. Our cemetery reanalysis confirms increased growth rates after the introduction of agriculture; the radiocarbon analysis also shows this pattern, and a significant correlation between both lines of evidence confirms the demographic validity of SCDPDs. We analyze the areal extent of Neolithic enclosures and demographic data from ethnographically known farming and foraging societies, and we estimate differences in population levels at individual sites. We find little effect on the overall shape and precision of the SCDPD, and we observe a small increase in the correlation with the cemetery trends. The SCDPD analysis supports the hypothesis that the transition to agriculture dramatically increased demographic growth, but it was followed within centuries by a general pattern of collapse, even after accounting for higher settlement densities during the Neolithic. The study supports the unique contribution of SCDPDs as a valid demographic proxy for the demographic patterns associated with early agriculture.

  13. Multicritical phase diagrams of the Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: the pair approximation and the path probability method with pair distribution

    International Nuclear Information System (INIS)

    Keskin, Mustafa; Erdinc, Ahmet

    2004-01-01

    As a continuation of the previously published work, the pair approximation of the cluster variation method is applied to study the temperature dependences of the order parameters of the Blume-Emery-Griffiths model with repulsive biquadratic coupling on a body centered cubic lattice. We obtain metastable and unstable branches of the order parameters besides the stable branches and phase transitions of these branches are investigated extensively. We study the dynamics of the model by the path probability method with pair distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagram in addition to the equilibrium phase diagram and also the first-order phase transition line for the unstable branches of the quadrupole order parameter is superimposed on the phase diagrams. It is found that the metastable phase diagram and the first-order phase boundary for the unstable quadrupole order parameter always exist at the low temperatures which are consistent with experimental and theoretical works

  14. Partial discharges in ellipsoidal and spheroidal voids

    DEFF Research Database (Denmark)

    Crichton, George C; Karlsson, P. W.; Pedersen, Aage

    1989-01-01

    Transients associated with partial discharges in voids can be described in terms of the charges induced on the terminal electrodes of the system. The relationship between the induced charge and the properties which are usually measured is discussed. The method is illustrated by applying it to a s...

  15. Voids and overdensities of coupled Dark Energy

    International Nuclear Information System (INIS)

    Mainini, Roberto

    2009-01-01

    We investigate the clustering properties of dynamical Dark Energy, including the possibility of a coupling between Dark Energy and Dark Matter. We find that within matter inhomogeneities, Dark Energy might form voids as well as overdensities, depending on how its background energy density evolves. Consequently, and contrary to what might be expected, Dark Energy fluctuations are found to be slightly suppressed if a coupling with Dark Matter is permitted. When considering density contrasts and scales typical of superclusters, voids and supervoids, perturbation amplitudes range from |δφ| ∼ O(10⁻⁶) to |δφ| ∼ O(10⁻⁴), indicating an almost homogeneous Dark Energy component.

  16. Radar application in void and bar detection

    International Nuclear Information System (INIS)

    Amry Amin Abas; Mohamad Pauzi Ismail; Suhairy Sani

    2003-01-01

    Radar is one of the newer non-destructive testing techniques for the inspection of concrete and structures. Radar uses non-ionizing electromagnetic waves that can penetrate deep into concrete or soil, to depths of up to several tens of meters. Inspection using radar enables high resolution detection, imaging and mapping of subsurface concrete and soil conditions. This paper discusses the use of radar for void and bar detection and sizing. The samples used are custom made, and comparisons are presented to validate the use of radar in detecting, locating and sizing voids and bars. (Author)

  17. Measurement of void fractions by nuclear techniques

    International Nuclear Information System (INIS)

    Hernandez G, A.; Vazquez G, J.; Diaz H, C.; Salinas R, G.A.

    1997-01-01

    In this work a general analysis is made of the techniques used to determine void fractions, and a nuclear technique is chosen for use in the heat transfer circuit of the Physics Department of the Basic Sciences Management. The methods used for the determination of void fractions are: radioactive absorption, acoustic techniques, average velocity measurement, electromagnetic flow measurement, optical methods, oscillating absorption, nuclear magnetic resonance, the relation between pressure and flow oscillations, infrared absorption methods, and sound neutron analysis. This work deals with the radioactive absorption method, which is based on the absorption of gamma rays. (Author)
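
    For a single collimated beam, the gamma absorption method reduces to Beer-Lambert attenuation: with calibration count rates I_L (channel full of liquid) and I_G (channel full of gas), the chordal void fraction of a measured flow follows as α = ln(I/I_L) / ln(I_G/I_L). A sketch with invented count rates:

```python
# Chordal void fraction from single-beam gamma attenuation measurements.
import math

I_L = 12_000.0   # counts/s, channel full of liquid (calibration)
I_G = 30_000.0   # counts/s, channel full of gas (calibration)
I   = 18_500.0   # counts/s, two-phase flow being measured

alpha = math.log(I / I_L) / math.log(I_G / I_L)
print(f"void fraction = {alpha:.3f}")   # ~0.47 for these numbers
```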

  18. Void fraction calculation in a channel containing boiling coolant

    International Nuclear Information System (INIS)

    Norelli, F.

    1978-01-01

    The problem of void fraction calculation was studied for a channel containing boiling coolant, when a slip ratio correlation is used. The use of fitting (e.g. polynomial or rational algebraic) for the slip ratio correlation, together with the method of characteristics, is proposed in this work. In this way the problem is reduced to elementary quadratures. Another problem discussed in the present work concerns what should be considered as the "initial condition" in any initial value problem, in order to take into account different error distributions in steady state and in subsequent time-dependent calculations.
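
    The kinematic relation underlying any slip-ratio-based channel calculation ties the void fraction α to the flow quality x through α = 1 / (1 + S·((1−x)/x)·(ρ_g/ρ_l)). A sketch with saturated steam-water properties near 7 MPa and a constant slip ratio standing in for a fitted correlation:

```python
# Void fraction from flow quality and slip ratio (illustrative values).
rho_l, rho_g = 740.0, 36.5   # kg/m^3, saturated water/steam near 7 MPa

def void_fraction(x: float, S: float) -> float:
    """alpha = 1 / (1 + S * (1 - x)/x * rho_g/rho_l)."""
    return 1.0 / (1.0 + S * (1.0 - x) / x * rho_g / rho_l)

for x in (0.02, 0.05, 0.10, 0.20):
    print(f"x = {x:.2f}   alpha = {void_fraction(x, S=1.5):.3f}")
```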

  19. Void fraction measurement system for high temperature flows

    Energy Technology Data Exchange (ETDEWEB)

    Teyssedou, A; Aube, F; Champagne, P [Montreal Univ., PQ (Canada). Institut de Genie Energetique

    1992-05-01

    A γ-ray absorption technique has been developed for measuring the axial distribution of the void fraction for high-temperature and high-pressure two-phase flows. The system is mounted on a moving platform driven by a high-power stepping motor. A personal computer (IBM AT) connected to a data acquisition system is used to control the displacement of the γ source and detector, and to read the response of the detector. All the measurement procedures are carried out automatically by dedicated software developed for this purpose. (Author).

  20. The evolution of voids in the adhesion approximation

    Science.gov (United States)

    Sahni, Varun; Sathyaprakah, B. S.; Shandarin, Sergei F.

    1994-08-01

    We apply the adhesion approximation to study the formation and evolution of voids in the universe. Our simulations - carried out using 128³ particles in a cubical box with side 128 Mpc - indicate that the void spectrum evolves with time and that the mean void size in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) model with H50 = 1 scales approximately as D̄(z) = D̄₀/(1+z)^(1/2), where D̄₀ ≈ 10.5 Mpc. Interestingly, we find a strong correlation between the sizes of voids and the value of the primordial gravitational potential at void centers. This observation could, in principle, pave the way toward reconstructing the form of the primordial potential from a knowledge of the observed void spectrum. Studying the void spectrum at different cosmological epochs, for spectra with a built-in k-space cutoff, we find that the number of voids in a representative volume evolves with time. The mean number of voids first increases until a maximum value is reached (indicating that the formation of cellular structure is complete), and then begins to decrease as clumps and filaments merge, leading to hierarchical clustering and the subsequent elimination of small voids. The cosmological epoch characterizing the completion of cellular structure occurs when the length scale going nonlinear approaches the mean distance between peaks of the gravitational potential. A central result of this paper is that voids can be populated by substructure, such as mini-sheets and filaments, which run through voids. The number of such mini-pancakes that pass through a given void can be measured by the genus characteristic of an individual void, which is an indicator of the topology of the void in initial (Lagrangian) space. Large voids have on average a larger measure than smaller voids, indicating more substructure within larger voids relative to smaller ones. We find that the topology of individual voids is strongly epoch dependent
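
    The quoted scaling of the mean void size is simple enough to tabulate directly; a one-liner evaluating D̄(z) = D̄₀/(1+z)^(1/2) with D̄₀ = 10.5 Mpc at a few epochs:

```python
# Mean void size versus redshift for the scaling reported above.
D0 = 10.5  # Mpc, mean void size at z = 0

for z in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"z = {z:3.1f}   mean void size = {D0 / (1.0 + z) ** 0.5:5.2f} Mpc")
```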