WorldWideScience

Sample records for two-parameter lognormal distribution

  1. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

A simplified biokinetic model for 137Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with extensive bioassay data. The distribution is found to be multivariate log-normal. Correlations between the different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods of sampling the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin hypercube sampling with a uniform distribution of parameters, and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)
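As a minimal illustration of why the importance-sampling variant wins, the sketch below estimates a posterior mean with a uniform proposal versus a lognormal proposal that approximates the posterior; the one-parameter "biokinetic" stand-in, the pseudo-likelihood and all numbers are hypothetical, not the paper's six-parameter model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D stand-in for a positive biokinetic transfer rate k:
# lognormal(0, 1) prior times a pseudo-likelihood peaked near k = 2.
def unnorm_posterior(k):
    prior = np.exp(-np.log(k) ** 2 / 2.0) / k
    like = np.exp(-(k - 2.0) ** 2 / (2 * 0.2 ** 2))
    return prior * like

n = 10_000

# Uniform sampling over a bounding box (analogue of the uniform-distribution methods).
ku = rng.uniform(0.01, 10.0, n)
wu = unnorm_posterior(ku) * (10.0 - 0.01)          # weight = p(k) / q(k)
mean_u = np.sum(wu * ku) / np.sum(wu)

# Importance sampling from a lognormal that approximates the posterior.
kl = rng.lognormal(np.log(2.0), 0.15, n)
ql = np.exp(-(np.log(kl) - np.log(2.0)) ** 2 / (2 * 0.15 ** 2)) / (kl * 0.15 * np.sqrt(2 * np.pi))
wl = unnorm_posterior(kl) / ql
mean_l = np.sum(wl * kl) / np.sum(wl)

print(mean_u, mean_l)  # same target; the lognormal proposal wastes far fewer samples
```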

  2. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)

    2017-01-20

Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the fraction of voids with nonzero central density in the data sets is of critical importance. If the fraction of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
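For readers who want to try a three-parameter (shifted) lognormal fit on their own catalogs, scipy's lognorm is exactly that three-parameter family (shape s = σ, loc = shift, scale = e^μ); the synthetic radii below are placeholders, not Cosmic Void Catalog data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder "void radii": shifted lognormal with threshold theta = 2
theta, mu, sigma = 2.0, 1.0, 0.5
radii = theta + rng.lognormal(mu, sigma, size=5000)

# scipy's lognorm is the three-parameter lognormal:
s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)
print(s_hat, loc_hat, np.log(scale_hat))   # ≈ sigma, theta, mu

# A KS statistic against the fitted law gives a rough goodness-of-fit check
# (the p-value is optimistic because the parameters were estimated from the data).
print(stats.kstest(radii, 'lognorm', args=(s_hat, loc_hat, scale_hat)))
```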

  3. Neuronal variability during handwriting: lognormal distribution.

    Directory of Open Access Journals (Sweden)

    Valery I Rupasov

Full Text Available We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model in which the signal is normally distributed because it is composed of many summed random sources. We found that the variability of temporal parameters of handwriting (handwriting duration and response time) is also well described by a lognormal distribution. Although the exact mechanism behind the lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.

  4. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types-melanoma, breast, squamous cell carcinoma and nodes-were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets

  5. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
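The core trick the record describes can be sketched in a few lines: impose the correlation on the underlying normal variables, then exponentiate the coordinates that should be lognormal. A minimal two-variable version (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([0.0, 1.0])
sigma = np.array([0.5, 0.8])
rho = 0.6                                    # correlation imposed in normal (log) space

cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mu, cov, size=100_000)   # correlated normals
x = np.exp(z)                                        # correlated lognormals

print(np.corrcoef(z.T)[0, 1])   # ≈ 0.6 in normal space
print(np.corrcoef(x.T)[0, 1])   # systematically smaller in lognormal space (cf. record 16 below)
```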

  7. Estimation of expected value for lognormal and gamma distributions

    International Nuclear Information System (INIS)

    White, G.C.

    1978-01-01

    Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small sample statistical properties of each of these estimators were compared for its own distribution, as well as the other distribution to check the robustness of the estimator. Results indicated that the arithmetic mean provides an unbiased estimator when the underlying density function of the sample is either lognormal or gamma, and that the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of expected value, and the coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two
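A quick Monte Carlo check of the record's headline result: for lognormal data the arithmetic mean stays unbiased, whereas a naive log-space plug-in (shown here in place of the minimum variance unbiased estimator the record refers to) is biased in small samples. Numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 0.0, 1.0, 10, 20_000
true_mean = np.exp(mu + sigma ** 2 / 2)

x = rng.lognormal(mu, sigma, size=(reps, n))

arith = x.mean(axis=1)                        # arithmetic mean of each sample
m = np.log(x).mean(axis=1)
s2 = np.log(x).var(axis=1, ddof=1)
plugin = np.exp(m + s2 / 2)                   # naive plug-in estimator of E[X]

print(true_mean, arith.mean(), plugin.mean())
# the arithmetic mean is unbiased; the plug-in overshoots noticeably at n = 10
```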

  8. Handbook of tables for order statistics from lognormal distributions with applications

    CERN Document Server

    Balakrishnan, N

    1999-01-01

Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

  9. Species Abundance in a Forest Community in South China: A Case of Poisson Lognormal Distribution

    Institute of Scientific and Technical Information of China (English)

    Zuo-Yun YIN; Hai REN; Qian-Mei ZHANG; Shao-Lin PENG; Qin-Feng GUO; Guo-Yi ZHOU

    2005-01-01

Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m×20 m, 5 m×5 m, and 1 m×1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (σ and μ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ may serve as an alternative measure of diversity.
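A numerical fit like the one the authors propose needs the Poisson lognormal pmf, which has no closed form but is a one-dimensional integral that Gauss-Hermite quadrature handles well; a sketch (the quadrature order and parameters are arbitrary choices, not the paper's):

```python
import numpy as np
from scipy import stats

def pln_pmf(k, mu, sigma, order=60):
    """Poisson-lognormal pmf P(N = k) = ∫ Poisson(k; λ) LN(λ; mu, sigma) dλ,
    evaluated with Gauss-Hermite quadrature after the substitution
    λ = exp(mu + √2·σ·t)."""
    t, w = np.polynomial.hermite.hermgauss(order)
    lam = np.exp(mu + np.sqrt(2.0) * sigma * t)
    return (w * stats.poisson.pmf(k, lam)).sum() / np.sqrt(np.pi)

def pln_pmf_zero_truncated(k, mu, sigma):
    """Zero-truncated version: species with zero individuals are unobservable."""
    return pln_pmf(k, mu, sigma) / (1.0 - pln_pmf(0, mu, sigma))

print(sum(pln_pmf(k, 1.0, 1.2) for k in range(5000)))   # ≈ 1 (sanity check)
```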

  10. Use of the lognormal distribution for the coefficients of friction and wear

    International Nuclear Information System (INIS)

    Steele, Clint

    2008-01-01

    To predict the reliability of a system, an engineer might allocate a distribution to each input. This raises a question: how to select the correct distribution? Siddall put forward an evolutionary approach that was intended to utilise both the understanding of the engineer and available data. However, this method requires a subjective initial distribution based on the engineer's understanding of the variable or parameter. If the engineer's understanding is limited, the initial distribution will be misrepresentative of the actual distribution, and application of the method will likely fail. To provide some assistance, the coefficients of friction and wear are considered here. Basic tribology theory, dimensional issues and the central limit theorem are used to argue that the distribution for each of the coefficients will typically be like a lognormal distribution. Empirical evidence from other sources is cited to lend support to this argument. It is concluded that the distributions for the coefficients of friction and wear would typically be lognormal in nature. It is therefore recommended that the engineer, without data or evidence to suggest differently, should allocate a lognormal distribution to the coefficients of friction and wear

  11. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
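The consequence this kind of Monte Carlo study quantifies is easy to reproduce: apply the normal "three standard deviations" rule to data that are actually lognormal and the nominal tail probability is off by an order of magnitude. Parameters below are illustrative, not NURE survey values.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.lognormal(0.0, 1.0, size=200_000)     # data are really lognormal

q_norm = x.mean() + 3 * x.std()               # "+3 sigma" threshold assuming normality
q_logn = np.exp(np.log(x).mean() + 3 * np.log(x).std())   # correct under lognormality

print((x > q_norm).mean())    # ~1-2%: far from the nominal 0.135% of a normal tail
print((x > q_logn).mean())    # ≈ 0.135%, as intended
```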

  12. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
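For reference, the standard quantities behind the record's discussion, for X ~ Lognormal(μ, σ²) (textbook identities, not results specific to the paper):

```latex
\mathrm{mode}=e^{\mu-\sigma^{2}},\qquad
\mathrm{median}=e^{\mu},\qquad
\mathrm{mean}=e^{\mu+\sigma^{2}/2},\qquad
\mathrm{SD}=e^{\mu+\sigma^{2}/2}\sqrt{e^{\sigma^{2}}-1},
```

and a symmetric 100(1-α)% confidence interval in log space,

```latex
\left[\,e^{\mu - z_{1-\alpha/2}\,\sigma},\; e^{\mu + z_{1-\alpha/2}\,\sigma}\,\right],
```

which is asymmetric on the original scale. As σ grows, the factor e^{σ²/2} pulls the mean ever further above the median, which is the anomaly the record warns about when the mean and standard deviation are read as "best value and error".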

  13. Aerosol Extinction Profile Mapping with Lognormal Distribution Based on MPL Data

    Science.gov (United States)

    Lin, T. H.; Lee, T. T.; Chang, K. E.; Lien, W. H.; Liu, G. R.; Liu, C. Y.

    2017-12-01

This study investigates mapping the vertical profile of aerosol extinction with a mathematical function. Given the similarity in distribution pattern, the lognormal distribution is examined for mapping the aerosol extinction profile based on MPL (Micro Pulse LiDAR) in situ measurements. The variables of the lognormal distribution are the log mean (μ) and the log standard deviation (σ), which are correlated with aerosol optical depth (AOD) and planetary boundary layer height (PBLH), together with the altitude of the extinction peak (Mode) defined in this study. On the basis of 10 years of MPL data with a single peak, the mapping results showed that the mean errors of the Mode and σ retrievals are 16.1% and 25.3%, respectively. The mean error of the σ retrieval can be reduced to 16.5% for cases with a larger distance between the PBLH and the Mode. The proposed method is further applied to the MODIS AOD product to map the extinction profile for the retrieval of PM2.5 from satellite observations. The results indicated good agreement between retrievals and ground measurements when aerosols under 525 meters are well mixed. The feasibility of the proposed method for satellite remote sensing is also suggested by the case study. Keywords: Aerosol extinction profile, Lognormal distribution, MPL, Planetary boundary layer height (PBLH), Aerosol optical depth (AOD), Mode

  14. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

Full Text Available This work evaluates the use of the multimodal lognormal function to describe particle size distributions (PSD) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely particle swarm optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function can describe distinctive features of different types of PSDs with accuracy and consistency.
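The "multimodal lognormal function" is a weighted sum of lognormal densities; a minimal definition follows (the bimodal parameters are invented for illustration, and the PSO estimation step itself is not reproduced here):

```python
import numpy as np

def multimodal_lognormal_pdf(d, weights, mus, sigmas):
    """Weighted sum of lognormal densities; weights should sum to 1."""
    d = np.asarray(d, dtype=float)[:, None]
    mu, s = np.asarray(mus), np.asarray(sigmas)
    comp = np.exp(-(np.log(d) - mu) ** 2 / (2 * s ** 2)) / (d * s * np.sqrt(2 * np.pi))
    return comp @ np.asarray(weights)

# e.g. a re-nucleated small-particle mode plus a grown large-particle mode
d = np.linspace(0.01, 10.0, 500)
pdf = multimodal_lognormal_pdf(d, [0.3, 0.7], [np.log(0.1), np.log(2.0)], [0.3, 0.4])
```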

  15. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

This study investigates the statistical properties of character sizes in animation, superhero series, and video games. Using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow the lognormal distribution. As a theoretical mechanism for this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.

  16. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Smith, Donald L.; Capote, Roberto

    2013-01-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue
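The transformation in question has a closed form. For two lognormal variables whose underlying normals have standard deviations σ₁, σ₂ and correlation ρ_N, the lognormal-space correlation is

```latex
\rho_{\mathrm{LN}}
  = \frac{e^{\rho_{N}\sigma_{1}\sigma_{2}} - 1}
         {\sqrt{\left(e^{\sigma_{1}^{2}}-1\right)\left(e^{\sigma_{2}^{2}}-1\right)}} .
```

Setting ρ_N = -1 gives the most negative correlation the lognormal pair can attain; for σ₁ = σ₂ = 1 this is (e⁻¹-1)/(e-1) ≈ -0.37, illustrating why strong anti-correlations are mathematically forbidden at large relative uncertainties.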

  17. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, Gašper, E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, Andrej, E-mail: andrej.trkov@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Smith, Donald L., E-mail: donald.l.smith@anl.gov [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States); Capote, Roberto, E-mail: roberto.capotenoy@iaea.org [NAPC–Nuclear Data Section, International Atomic Energy Agency, PO Box 100, Vienna-A-1400 (Austria)

    2013-11-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.

  18. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    International Nuclear Information System (INIS)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed including geochemical data from the National Uranium Resource Evaluation Program

  19. Competition and fragmentation: a simple model generating lognormal-like distributions

    International Nuclear Information System (INIS)

    Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T

    2009-01-01

    The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power law tails of the real distribution well and give better results for a poorly connected topology of interactions.

  20. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
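A sketch of the Wilks bound the comparison uses: the largest of n runs upper-bounds the q-quantile of the output with confidence γ once 1 − q^n ≥ γ. The three-event "fault tree" below is a toy multilinear stand-in, not the article's model.

```python
import math
import numpy as np

def wilks_sample_size(quantile=0.95, confidence=0.95):
    """Smallest n such that the max of n runs bounds the given quantile
    with the given confidence (first-order, one-sided Wilks criterion)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(quantile))

n = wilks_sample_size()                      # 59 for the classical 95/95 criterion
rng = np.random.default_rng(5)

basic = rng.lognormal(np.log(1e-3), 0.8, size=(n, 3))   # lognormal basic events
top = basic[:, 0] * basic[:, 1] + basic[:, 2] * 1e-3    # toy multilinear top event
print(n, top.max())                          # 95/95 upper bound on the top-event probability
```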

  1. Life prediction for white OLED based on LSM under lognormal distribution

    Science.gov (United States)

    Zhang, Jianping; Liu, Fang; Liu, Yu; Wu, Helen; Zhu, Wenqing; Wu, Wenli; Wu, Liang

    2012-09-01

In order to acquire the reliability information of white organic light emitting display (OLED) devices, three groups of OLED constant stress accelerated life tests (CSALTs) were carried out to obtain failure data for the samples. The lognormal distribution function was applied to describe the OLED life distribution, and the accelerated life equation was determined by the least squares method (LSM). The Kolmogorov-Smirnov test was performed to verify whether the white OLED life follows a lognormal distribution. Author-developed software was employed to predict the average life and the median life. The numerical results indicate that the white OLED life follows a lognormal distribution, and that the accelerated life equation satisfies the inverse power law. The estimated life information of the white OLED provides manufacturers and customers with important guidelines.

  2. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    HIROSE,Hideo

    1998-01-01

Types of the distribution: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended Gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended Log-normal distribution (3-parameter); Generalized ...

  3. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
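The claim is easy to probe numerically: for a moderate number of sufficiently skewed positive summands, a fitted lognormal beats a fitted Gaussian by a wide log-likelihood margin. The summand distribution and sizes below are arbitrary choices for illustration, not the paper's analytical example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, reps = 30, 20_000
sums = rng.lognormal(0.0, 1.5, size=(reps, n)).sum(axis=1)   # sums of positive RVs

s, loc, scale = stats.lognorm.fit(sums, floc=0.0)
ll_lognormal = stats.lognorm.logpdf(sums, s, loc, scale).sum()
ll_gaussian = stats.norm.logpdf(sums, sums.mean(), sums.std()).sum()
print(ll_lognormal - ll_gaussian)   # large and positive: the sum looks lognormal
```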

  4. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
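The model-selection step can be sketched with scipy's maximum-likelihood fits and the AIC; the DPLN family itself is not in scipy, so this sketch compares only the scipy-available subset of the paper's candidates, on synthetic stand-in data.

```python
import numpy as np
from scipy import stats

def aic(dist, data):
    """AIC for a scipy.stats distribution fitted by maximum likelihood."""
    params = dist.fit(data)
    return 2 * len(params) - 2 * dist.logpdf(data, *params).sum()

rng = np.random.default_rng(7)
data = rng.lognormal(0.0, 0.6, size=2000)      # stand-in for phenotypic variation

candidates = {'lognorm': stats.lognorm, 'norm': stats.norm,
              'expon': stats.expon, 'pareto': stats.pareto}
scores = {name: aic(dist, data) for name, dist in candidates.items()}
print(min(scores, key=scores.get), scores)     # lowest AIC = preferred model
```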

  5. The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts

    Directory of Open Access Journals (Sweden)

    Lorenzo Zaninetti

    2016-11-01

Full Text Available The determination of the luminosity function (LF) of gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is modeled first by the lognormal distribution and the four broken power law and, second, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of redshift.

  6. SYVAC3 parameter distribution package

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T; Skeet, A

    1995-01-01

SYVAC3 (Systems Variability Analysis Code, generation 3) is a computer program that implements a method called systems variability analysis to analyze the behaviour of a system in the presence of uncertainty. This method is based on simulating the system many times to determine the variation in behaviour it can exhibit. SYVAC3 specializes in systems representing the transport of contaminants, and has several features to simplify the modelling of such systems. It provides a general tool for estimating environmental impacts from the dispersal of contaminants. This report describes a software object type (a generalization of a data type) called Parameter Distribution. This object type is used in SYVAC3, and can also be used independently. Parameter Distribution has the following subtypes: beta distribution; binomial distribution; constant distribution; lognormal distribution; loguniform distribution; normal distribution; piecewise uniform distribution; triangular distribution; and uniform distribution. Some of these distributions can be altered by correlating two parameter distribution objects. This report provides complete specifications for parameter distributions, and also explains how to use them. It should meet the needs of casual users, reviewers, and programmers who wish to add their own subtypes. (author). 30 refs., 75 tabs., 56 figs.

  7. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    International Nuclear Information System (INIS)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.

    2016-01-01

It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg². We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3–10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Lastly, our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  8. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, so that deviations in solutions caused by unrealistic parameter assumptions are avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.

  9. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  10. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    Science.gov (United States)

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086

  11. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  12. The modelled raindrop size distribution of Skudai, Peninsular Malaysia, using exponential and lognormal distributions.

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

This paper presents the modelled raindrop size parameters of the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the DSD in Malaysia, and this has an underpinning implication for wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of the exclusive occurrence of convective storms in tropical regions and presents a new insight into their concurrent appearance.

  13. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

This paper presents the modelled raindrop size parameters of the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the DSD in Malaysia, and this has an underpinning implication for wet weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of the exclusive occurrence of convective storms in tropical regions and presents a new insight into their concurrent appearance. PMID:25126597

  14. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    Science.gov (United States)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° × 5° × 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been
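The "twofold advantage" rests on a textbook identity: every moment of a lognormal drop size D ~ Lognormal(μ, σ²) satisfies

```latex
E\!\left[D^{n}\right] = e^{\,n\mu + n^{2}\sigma^{2}/2}
\quad\Longrightarrow\quad
\ln E\!\left[D^{n}\right] = n\,\mu + \tfrac{n^{2}}{2}\,\sigma^{2},
```

so the log of any moment, and hence of any radar or rainfall variable expressible (approximately) as a moment, is linear in the parameters (μ, σ²).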

  15. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  16. On Modelling Insurance Data by Using a Generalized Lognormal Distribution || Sobre la modelización de datos de seguros usando una distribución lognormal generalizada

    Directory of Open Access Journals (Sweden)

    García, Victoriano J.

    2014-12-01

Full Text Available In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The distribution proposed is derived from the lognormal distribution by using the Marshall and Olkin procedure. Some basic properties of this new distribution are obtained, and we present situations where this new distribution correctly reflects the sample behaviour of the right-tail probability. An application of the model to dental insurance data is presented and analysed in depth. We conclude that the generalized lognormal distribution proposed should be taken into account, among other possible distributions, for insurance data in which the properties of a heavy-tailed distribution are present. || We present a new lognormal distribution with heavy tails that adapts well to many practical situations in the insurance field. We use the Marshall and Olkin procedure to generate this distribution and study its basic properties. An application to dental insurance data is presented and analysed in depth, concluding that this distribution should be part of the catalogue of distributions to be considered for modelling insurance data in the presence of heavy tails.

  17. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    Science.gov (United States)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
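The log-normal mock idea is compact enough to sketch without the package: draw a Gaussian field with the target spectrum via an FFT, exponentiate, and renormalize to a density contrast. This is a minimal numpy illustration of the method, not powerbox's actual API, and the amplitude normalization is left arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n, boxlength = 256, 100.0

k = 2 * np.pi * np.fft.fftfreq(n, d=boxlength / n)
kx, ky = np.meshgrid(k, k, indexing='ij')
kmag = np.hypot(kx, ky)
pk = np.where(kmag > 0, kmag ** -2.0, 0.0)          # illustrative P(k) ∝ k^-2

white = np.fft.fftn(rng.standard_normal((n, n)))
gauss = np.fft.ifftn(white * np.sqrt(pk)).real      # Gaussian field with ~P(k) shape
delta = np.exp(gauss - gauss.var() / 2) - 1         # log-normal density contrast
print(delta.mean(), delta.min())                    # ≈ 0, and bounded below by -1
```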

  18. Geomagnetic storms, the Dst ring-current myth and lognormal distributions

    Science.gov (United States)

    Campbell, W.H.

    1996-01-01

The definition of geomagnetic storms dates back to the turn of the century when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle- and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with

  19. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  20. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
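To make the variance-reduction point concrete without reproducing the hazard-rate twisting itself, the sketch below compares naive MC against a deliberately simple mean-shift importance sampler for the same lognormal-sum tail probability; the shift value and threshold are ad hoc, and a plain shift lacks the asymptotic optimality the paper proves for its method.

```python
import numpy as np

rng = np.random.default_rng(9)
d, mu, sigma, gamma, N = 10, 0.0, 1.0, 60.0, 200_000   # P(S > gamma), S = sum of 10 lognormals

# naive Monte Carlo
s = rng.lognormal(mu, sigma, size=(N, d)).sum(axis=1)
p_mc = (s > gamma).mean()

# mean-shift importance sampling: sample log-coordinates from N(mu + shift, sigma)
shift = 1.0
z = rng.normal(mu + shift, sigma, size=(N, d))
logw = ((z - mu - shift) ** 2 - (z - mu) ** 2).sum(axis=1) / (2 * sigma ** 2)
p_is = np.mean(np.exp(logw) * (np.exp(z).sum(axis=1) > gamma))

print(p_mc, p_is)   # same target; the IS estimate is typically much less noisy here
```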

  1. On the Laplace transform of the Lognormal distribution

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

Integral transforms of the lognormal distribution are of great importance in statistics and probability, yet closed-form expressions do not exist. A wide variety of methods have been employed to provide approximations, both analytical and numerical. In this paper, we analyze a closed-form approximation L̃(θ) of the Laplace transform L(θ) which is obtained via a modified version of Laplace's method. This approximation, given in terms of the Lambert W(⋅) function, is tractable enough for applications. We prove that L̃(θ) is asymptotically equivalent to L(θ) as θ→∞. We apply this result...
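The approximation is short enough to state and check numerically. Redoing the Laplace-method computation for L(θ) = E[e^(−θX)] with X ~ Lognormal(μ, σ²) gives, with W the Lambert W function, L̃(θ) = exp(−(W(θσ²e^μ)² + 2W(θσ²e^μ))/(2σ²)) / √(1 + W(θσ²e^μ)); the sketch implements this form (rederived here, so treat the exact expression as our reconstruction rather than a quotation of the paper) and compares it against direct quadrature.

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad

def laplace_lognormal_approx(theta, mu=0.0, sigma=1.0):
    """Laplace-method approximation of E[exp(-theta X)], X ~ Lognormal(mu, sigma^2)."""
    w = lambertw(theta * sigma ** 2 * np.exp(mu)).real
    return np.exp(-(w ** 2 + 2 * w) / (2 * sigma ** 2)) / np.sqrt(1.0 + w)

def laplace_lognormal_quad(theta, mu=0.0, sigma=1.0):
    """Reference value by direct numerical integration."""
    pdf = lambda x: np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
    return quad(lambda x: np.exp(-theta * x) * pdf(x), 0.0, np.inf, limit=200)[0]

for theta in (1.0, 10.0, 100.0):
    print(theta, laplace_lognormal_approx(theta), laplace_lognormal_quad(theta))
```

The agreement improves as θ grows, consistent with the asymptotic-equivalence claim.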

  2. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  3. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters; La distribuzione di probabilita` degli inquinanti atmosferici: confronto tra quattro metodi per la determinazione dei parametri della distribuzione log-normale

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility to interpolate the measured concentrations of CO, NO, NO{sub 2}, O{sub 3}, SO{sub 2} during one year (1995) at the 13 stations of the air quality monitoring station network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to choosing the method for the determination of the log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, Kolmogorov-Smirnov index and Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally an example of application is given: the effect of an emission reduction strategy in Lombardy Region (the so-called `bollino blu') is evaluated using a log-normal distribution.
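    Three of the four estimation methods are easy to sketch for a complete (uncensored) sample; the identification of the "natural" method with a direct fit of the log-transformed data is my assumption. An illustrative sketch on synthetic concentrations:

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true = 1.0, 0.8
x = rng.lognormal(mu_true, sigma_true, size=2000)    # synthetic "concentrations"

# (I/IV) "natural" / maximum likelihood: for a complete lognormal sample the MLE is
# simply the sample mean and standard deviation of the log data.
logx = np.log(x)
mu_mle, sigma_mle = logx.mean(), logx.std(ddof=0)

# (II) percentiles: the 15.87th and 84.13th percentiles sit one sigma either side of the median.
p16, p84 = np.percentile(x, [15.87, 84.13])
sigma_pct = 0.5 * np.log(p84 / p16)
mu_pct = 0.5 * np.log(p84 * p16)

# (III) moments: invert E[X] = exp(mu + sigma^2/2), Var[X] = (exp(sigma^2) - 1) exp(2*mu + sigma^2).
m, v = x.mean(), x.var()
sigma_mom = np.sqrt(np.log(1.0 + v / m**2))
mu_mom = np.log(m) - 0.5 * sigma_mom**2

print(mu_mle, sigma_mle, mu_pct, sigma_pct, mu_mom, sigma_mom)
```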

  4. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient. The first estimator applies importance sampling and optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest in two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling...
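    The "largest increment" observation behind the second estimator can be checked directly: for large thresholds, the tail of the sum is dominated by the largest term, so P(S > γ) is approximated by the sum of the marginal tail probabilities. A minimal sketch with an assumed covariance and thresholds chosen for illustration (the approximation is only rough at moderate thresholds):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu = np.zeros(3)
C = 0.5*np.ones((3, 3)) + 0.5*np.eye(3)        # assumed covariance of the Gaussian exponents
L = np.linalg.cholesky(C)
Z = rng.standard_normal((10**6, 3)) @ L.T
S = np.exp(mu + Z).sum(axis=1)                 # sum of correlated lognormals

for gamma in (20.0, 50.0):
    mc = (S > gamma).mean()
    # single-big-increment asymptotic: P(S > gamma) ~ sum_i P(X_i > gamma)
    asym = sum(norm.sf((np.log(gamma) - mu[i]) / np.sqrt(C[i, i])) for i in range(3))
    print(gamma, mc, asym)
```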

  5. Beyond lognormal inequality: The Lorenz Flow Structure

    Science.gov (United States)

    Eliazar, Iddo

    2016-11-01

    Observed from a socioeconomic perspective, the intrinsic inequality of the lognormal law happens to manifest a flow generated by an underlying ordinary differential equation. In this paper we extend this feature of the lognormal law to a general "Lorenz Flow Structure" of Lorenz curves, objects that quantify socioeconomic inequality. The Lorenz Flow Structure establishes a general framework of size distributions that span continuous spectra of socioeconomic states ranging from the pure-communism extreme to the absolute-monarchy extreme. This study introduces and explores the Lorenz Flow Structure, analyzes its statistical properties and its inequality properties, unveils the unique role of the lognormal law within this general structure, and presents various examples of this general structure. Beyond the lognormal law, the examples include the inverse-Pareto and Pareto laws, which often govern the tails of composite size distributions.
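    For the lognormal law itself the Lorenz curve has a simple closed form, L(p) = Φ(Φ⁻¹(p) − σ), and the Gini index is 2Φ(σ/√2) − 1, both standard results; the sketch below evaluates them for a few arbitrary σ values:

```python
import numpy as np
from scipy.stats import norm

def lorenz_lognormal(p, sigma):
    # Lorenz curve of a lognormal law: L(p) = Phi(Phi^{-1}(p) - sigma).
    return norm.cdf(norm.ppf(p) - sigma)

for sigma in (0.5, 1.0, 2.0):
    gini = 2.0 * norm.cdf(sigma / np.sqrt(2.0)) - 1.0     # Gini index of the lognormal
    share = lorenz_lognormal(0.5, sigma)                  # wealth share of the bottom half
    print(f"sigma={sigma}: bottom-50% share = {share:.3f}, Gini = {gini:.3f}")
```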

  6. Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    DEFF Research Database (Denmark)

    Petersen, Peter C.; Berg, Rune W.

    2016-01-01

    Spinal motor neurons comprise a fraction that operates within either a 'mean-driven' or a 'fluctuation-driven' regime. Fluctuation-driven neurons have a 'supralinear' input-output curve, which enhances sensitivity, whereas the mean-driven regime reduces sensitivity. We find a rich diversity of firing rates across the neuronal population, as reflected in a lognormal distribution, and demonstrate that half of the neurons spend at least 50% of the time in the 'fluctuation-driven' regime regardless of behavior. Because of the disparity in input-output properties for these two regimes, this fraction may reflect a fine trade-off between stability...

  7. A physical explanation of the lognormality of pollutant concentrations

    International Nuclear Information System (INIS)

    Ott, W.R.

    1990-01-01

    Investigators in different environmental fields have reported that the concentrations of various measured substances have frequency distributions that are lognormal, or nearly so. That is, when the logarithms of the observed concentrations are plotted as a frequency distribution, the resulting distribution is approximately normal, or Gaussian, over much of the observed range. Examples include radionuclides in soil, pollutants in ambient air, indoor air quality, trace metals in streams, metals in biological tissue, and calcium in human remains. The ubiquity of the lognormal distribution in environmental processes is surprising and has not been adequately explained, since common processes in nature (for example, computation of the mean and the analysis of error) usually give rise to distributions that are normal rather than lognormal. This paper takes the first step toward explaining why lognormal distributions can arise naturally from certain physical processes that are analogous to those found in the environment. In this paper, these processes are treated mathematically, and the results are illustrated in a laboratory beaker experiment that is simulated on the computer.
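    The mechanism the paper points to, successive random dilutions, is easy to reproduce: a concentration subjected to many independent multiplicative steps is approximately lognormal, because its logarithm is a sum of independent terms. A minimal sketch with arbitrary dilution parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Successive random dilutions: concentration after m steps is c0 * prod(D_k),
# with independent dilution factors D_k drawn from (0, 1).
c = 100.0 * np.prod(rng.uniform(0.1, 1.0, size=(50000, 12)), axis=1)

# By the CLT applied to log c = log c0 + sum(log D_k), the log-concentrations
# are approximately Gaussian, i.e. c is approximately lognormal.
print(stats.shapiro(np.log(c[:500])))
```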

  8. Evolution and mass extinctions as lognormal stochastic processes

    Science.gov (United States)

    Maccone, Claudio

    2014-10-01

    In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History into a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b > 0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the `degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller `chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the `trend of progress' in human history. (4) All these results also match with SETI in that the statistical Drake equation (generalization of the ordinary Drake equation to encompass statistics) leads just to the lognormal distribution as the probability distribution for the number of extra-terrestrial civilizations.
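    The lognormality of GBM is immediate to check numerically: N(T) = N₀ exp((μ − σ²/2)T + σW(T)) has a lognormal marginal whose mean is the exponential N₀e^(μT). A minimal sketch with arbitrary drift and volatility:

```python
import numpy as np

rng = np.random.default_rng(5)
mu_drift, sigma, T, n_paths = 0.05, 0.3, 10.0, 100000

# GBM endpoint: N(T) = N0 * exp((mu - sigma^2/2)*T + sigma*W(T)), W(T) ~ N(0, T),
# so N(T) is lognormally distributed.
W_T = rng.normal(0.0, np.sqrt(T), n_paths)
N_T = 1.0 * np.exp((mu_drift - 0.5*sigma**2)*T + sigma*W_T)

# The mean value is the exponential exp(mu*T), the "trend of progress" curve.
print(N_T.mean(), np.exp(mu_drift*T))
```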

  9. Determination of mean rainfall from the Special Sensor Microwave/Imager (SSM/I) using a mixed lognormal distribution

    Science.gov (United States)

    Berg, Wesley; Chase, Robert

    1992-01-01

    Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values with a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution by using a maximum-likelihood estimator. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
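    The mixed-distribution mean follows directly: if p is the probability of positive rainfall and the positive part is Lognormal(μ, σ²), the mean of the mixture is p·exp(μ + σ²/2). A sketch on synthetic data (the D-matrix retrieval step is outside its scope):

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic bin: a spike at zero (dry observations) mixed with lognormal rain rates.
p_wet_true, mu, sigma = 0.3, 0.5, 1.0
rain = np.where(rng.uniform(size=5000) < p_wet_true, rng.lognormal(mu, sigma, 5000), 0.0)

wet = rain[rain > 0.0]
p_wet = wet.size / rain.size                              # probability mass of the lognormal part
mu_hat, sig_hat = np.log(wet).mean(), np.log(wet).std()   # ML fit of the positive values
mixed_mean = p_wet * np.exp(mu_hat + 0.5*sig_hat**2)      # mean of the mixed distribution
print(mixed_mean, rain.mean())
```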

  10. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that cross section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
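    The key step, matching a lognormal to a cross section's mean m and relative standard deviation c so that no negative values can ever be drawn, uses σ² = ln(1 + c²) and μ = ln m − σ²/2. A sketch with made-up cross-section numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
xs_mean, rel_std = 2.7, 0.60                      # barn; deliberately large relative uncertainty

# Normal sampling can return unphysical negative cross sections:
normal = rng.normal(xs_mean, rel_std*xs_mean, 10**5)
print("negative fraction (normal):", (normal < 0).mean())

# Lognormal sampling matched to the same first two moments is positive by construction:
s2 = np.log(1.0 + rel_std**2)                     # variance of log(X)
m = np.log(xs_mean) - 0.5*s2                      # mean of log(X)
lognormal = rng.lognormal(m, np.sqrt(s2), 10**5)
print(lognormal.mean(), lognormal.std()/lognormal.mean())   # ~2.7 and ~0.60, all samples > 0
```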

  11. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that cross section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  12. The lognormal handwriter: learning, performing and declining.

    Directory of Open Access Journals (Sweden)

    Réjean ePlamondon

    2013-12-01

    The generation of handwriting is a complex neuromotor skill requiring the interaction of many cognitive processes. It aims at producing a message to be imprinted as an ink trace left on a writing medium. The generated trajectory of the pen tip is made up of strokes superimposed over time. The Kinematic Theory of rapid human movements and its family of lognormal models provide analytical representations of these strokes, often considered as the basic unit of handwriting. This paradigm has not only been experimentally confirmed in numerous predictive and physiologically significant tests but it has also been shown to be the ideal mathematical description for the impulse response of a neuromuscular system. This latter demonstration suggests that the lognormality of the velocity patterns can be interpreted as reflecting the behaviour of subjects who are in perfect control of their movements. To illustrate this interpretation, we present a short overview of the main concepts behind the Kinematic Theory and briefly describe how its models can be exploited, using various software tools, to investigate these ideal lognormal behaviours. We emphasize that the parameters extracted during various tasks can be used to analyze some underlying processes associated with their realization. To investigate the operational convergence hypothesis, we report on two original studies. First, we focus on the early steps of the motor learning process, seen as a converging behaviour toward the production of more precise lognormal patterns as young children practicing handwriting start to become more fluent writers. Second, we illustrate how aging affects handwriting by pointing out the increasing departure from the ideal lognormal behaviour as the control of fine motor skills begins to decline. Overall, the paper highlights this developmental process of merging toward a lognormal behaviour with learning, mastering this behaviour to succeed in performing a given task

  13. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded: depending on whether q < 1 or q > 1, the distribution enhances the tail at small or at large values of the variable under analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  14. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generate log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful in improving the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
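    NumPy's default Generator draws standard normals with a ziggurat-type algorithm, so a lognormal variate is simply the exponential of such a draw. Comparing against a vectorized Marsaglia polar implementation (which rejects about 21% of candidate pairs) illustrates the comparison made in the paper, though exact speed ratios depend on the implementation:

```python
import time
import numpy as np

rng = np.random.default_rng(8)
n = 10**6

def polar_normals(n, rng):
    # Marsaglia polar method: accept (u, v) inside the unit disc, then transform.
    out = np.empty(0)
    while out.size < n:
        u = rng.uniform(-1, 1, n)
        v = rng.uniform(-1, 1, n)
        s = u*u + v*v
        ok = (s > 0) & (s < 1)
        f = np.sqrt(-2.0*np.log(s[ok])/s[ok])
        out = np.concatenate([out, u[ok]*f, v[ok]*f])
    return out[:n]

t0 = time.perf_counter(); z_zig = rng.standard_normal(n); t1 = time.perf_counter()
z_pol = polar_normals(n, rng);                             t2 = time.perf_counter()
print(f"ziggurat-backed: {t1-t0:.3f}s, polar: {t2-t1:.3f}s")

x = np.exp(1.0 + 0.5*z_zig)    # lognormal variates: exponentiate the normal draws
```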

  15. Testing the Pareto against the lognormal distributions with the uniformly most powerful unbiased test applied to the distribution of cities.

    Science.gov (United States)

    Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier

    2011-03-01

    Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated with the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.

  16. The analysis of annual dose distributions for radiation workers

    International Nuclear Information System (INIS)

    Mill, A.J.

    1984-05-01

    The system of dose limitation recommended by the ICRP includes the requirement that no worker shall exceed the current dose limit of 50 mSv/a. If all workers were continuously exposed at this limit, the corresponding annual death rate would be comparable with that of 'high risk' industries. In practice, there is a distribution of doses with an arithmetic mean lower than the dose limit. In its 1977 report, UNSCEAR defined a reference dose distribution for the purposes of comparison. However, this two-parameter distribution does not show the departure from log-normality typically observed in actual distributions at doses which are a significant proportion of the annual limit. In this report an alternative model is suggested, based on a three-parameter log-normal distribution. The third parameter is an ''effective dose limit'', and such a model fits very well the departure from log-normality observed in actual dose distributions. (author)

  17. Dobinski-type relations and the log-normal distribution

    International Nuclear Information System (INIS)

    Blasiak, P; Penson, K A; Solomon, A I

    2003-01-01

    We consider sequences of generalized Bell numbers B(n), n = 1, 2, ..., which can be represented by Dobinski-type summation formulae, i.e. B(n) = (1/C) Σ_{k=0}^∞ [P(k)]^n/D(k), with P(k) a polynomial, D(k) a function of k and C = const. They include the standard Bell numbers (P(k) = k, D(k) = k!, C = e), their generalizations B_{r,r}(n), r = 2, 3, ..., appearing in the normal ordering of powers of boson monomials (P(k) = (k+r)!/k!, D(k) = k!, C = e), variants of 'ordered' Bell numbers B_o^(p)(n) (P(k) = k, D(k) = ((p+1)/p)^k, C = 1 + p, p = 1, 2, ...), etc. We demonstrate that for α, β, γ, t positive integers (α, t ≠ 0), [B(αn² + βn + γ)]^t is the nth moment of a positive function on (0, ∞) which is a weighted infinite sum of log-normal distributions. (letter to the editor)

  18. Log-normality of indoor radon data in the Walloon region of Belgium

    International Nuclear Information System (INIS)

    Cinelli, Giorgia; Tondeur, François

    2015-01-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but the low-concentration trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data, when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail, which is of no practical interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a “fat tail”. With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups. - Highlights: • Deviations of the distribution of Belgian indoor Rn data from the log-normal trend. • 3-component structure of indoor Rn: subsoil, outdoor air and building materials. • Simulated data generated to provide a theoretical frame for understanding deviations. • Data organised in homogeneous geological groups; better agreement with the log-normal

  19. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

    The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these

  20. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    International Nuclear Information System (INIS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume that the spectrum of PMFs is described by a log-normal distribution which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≅ 10^(-2.5) Mpc^(-1), with the upper limit B ≲ 3 nG.

  1. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    We construct a stochastic model of real estate pricing. The method of the pricing construction is based on a sequential comparison of the supply prices. We prove that under standard assumptions imposed upon the comparison coefficients there exists a unique non-degenerate limit in distribution, and this limit has the lognormal law of distribution. The accordance of empirical distributions of prices to the theoretically obtained log-normal distribution is verified on numerous statistical data of real estate prices from Saint-Petersburg (Russia). For establishing this accordance we essentially apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
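    The overstatement follows from elementary lognormal facts: mode = exp(μ − σ²) < median = e^μ < mean = exp(μ + σ²/2). A tiny illustration with hypothetical log-price parameters:

```python
import numpy as np

mu, sigma = 11.0, 0.45                    # hypothetical log-price parameters
mode = np.exp(mu - sigma**2)              # most probable price
median = np.exp(mu)
mean = np.exp(mu + 0.5*sigma**2)          # exceeds the mode, hence the systematic overstatement
print(f"mode {mode:,.0f}  median {median:,.0f}  mean {mean:,.0f}")
```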

  2. A NEW STATISTICAL PERSPECTIVE TO THE COSMIC VOID DISTRIBUTION

    International Nuclear Information System (INIS)

    Pycke, J-R; Russell, E.

    2016-01-01

    In this study, we obtain the size distribution of voids as a three-parameter redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on the counts in randomly placed cells, the log-normal VPF that we obtain here is independent of the shape of the voids due to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC generated by the Halo Occupation Distribution (HOD) Mocks, which are tuned to three mock SDSS samples to investigate the void distribution statistically and to investigate the effects of the environments on the size distribution. As a result, it is shown that void size distributions obtained from the HOD Mock samples are satisfied by the three-parameter log-normal distribution. In addition, we find that there may be a relation between the hierarchical formation, skewness, and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the three-parameter distribution from the samples is strikingly similar to the galaxy log-normal mass distribution obtained from numerical studies. This similarity between void size and galaxy mass distributions may possibly indicate evidence of nonlinear mechanisms affecting both voids and galaxies, such as large-scale accretion and tidal effects. Considering the fact that in this study, all voids are generated by galaxy mocks and show hierarchical structures in different levels, it may be possible that the same nonlinear mechanisms of mass distribution affect the void size distribution.
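    A three-parameter log-normal (shape σ, location shift, and scale e^μ) can be fitted directly with scipy; the synthetic "void radii" below are placeholders, and the free-location maximum-likelihood fit can be numerically delicate, so starting guesses are supplied:

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(9)
# Synthetic "void radii": a lognormal shifted by a location parameter.
theta_true, mu, sigma = 5.0, 2.0, 0.5
r = theta_true + rng.lognormal(mu, sigma, 4000)

# scipy's lognorm exposes exactly the three parameters: shape (sigma), loc (shift), scale (e^mu).
# The three-parameter MLE can be ill-posed, so reasonable initial guesses help.
s, loc, scale = lognorm.fit(r, 0.6, loc=4.0, scale=7.0)
print(s, loc, np.log(scale))    # approximately 0.5, 5.0, 2.0
```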

  3. EVIDENCE FOR TWO LOGNORMAL STATES IN MULTI-WAVELENGTH FLUX VARIATION OF FSRQ PKS 1510-089

    Energy Technology Data Exchange (ETDEWEB)

    Kushwaha, Pankaj; Misra, Ranjeev [Inter University Center for Astronomy and Astrophysics, Pune 411007 (India); Chandra, Sunil; Singh, K. P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Sahayanathan, S. [Astrophysical Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Baliyan, K. S., E-mail: pankajk@iucaa.in [Physical Research Laboratory, Ahmedabad 380009 (India)

    2016-05-01

    We present a systematic characterization of multi-wavelength emission from blazar PKS 1510-089 using well-sampled data at near-infrared (NIR), optical, X-ray, and γ-ray energies. The resulting flux distributions, except at X-rays, show two distinct lognormal profiles corresponding to a high and a low flux level. The dispersions exhibit energy-dependent behavior except in the LAT γ-ray and optical B-band. During the low level flux states, it is higher toward the peak of the spectral energy distribution, with γ-ray being intrinsically more variable followed by IR and then optical, consistent with mainly being a result of varying bulk Lorentz factor. On the other hand, the dispersions during the high state are similar in all bands except the optical B-band, where thermal emission still dominates. The centers of distributions are a factor of ∼4 apart, consistent with anticipation from studies of extragalactic γ-ray background with the high state showing a relatively harder mean spectral index compared to the low state.

  4. Neutron dosimetry and spectrometry with Bonner spheres. Working out a log-normal reference matrix

    International Nuclear Information System (INIS)

    Zaborowski, Henrick.

    1981-11-01

    From the experimental and theoretical studies made upon the Bonner sphere system with a 6Li(Eu) crystal and with a miniaturized 3He counter, we obtain the normalized energy response functions R*_i(E). This normalization is obtained by the mathematization of the resolution function R*(i,E) under the log-normal distribution hypothesis for monoenergetic neutrons, presented in April 1976 at the International Symposium on Californium-252. The fit of the log-normal hypothesis to the experimental and theoretical data is very satisfactory. The parameters' tabulated values allow a precise interpolation, at all energies between 0.4 eV and 15 MeV and for all sphere diameters between 2 and 12 inches, of the discretized R*_ij reference matrix for applications to neutron dosimetry and spectrometry [fr]

  5. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
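    The transformation-of-moments step behind such sampling is compact: given means m_i and covariance C_ij of the positive parameters, the underlying Gaussian has Σ_ij = ln(1 + C_ij/(m_i m_j)) and μ_i = ln m_i − Σ_ii/2, which reproduces the first two moments exactly. A sketch with assumed means, correlations and relative uncertainties (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(10)
mean = np.array([2.0, 5.0, 1.0])                 # given first moments of the positive parameters
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
rel = np.array([0.3, 0.5, 0.2])                  # assumed relative uncertainties
cov = corr * np.outer(rel*mean, rel*mean)        # covariance of the parameters themselves

# Transformation of correlation coefficients: moments of X map onto moments of log X via
# Sigma_ij = log(1 + C_ij/(m_i m_j)), mu_i = log(m_i) - Sigma_ii/2.
Sigma = np.log(1.0 + cov / np.outer(mean, mean))
mu = np.log(mean) - 0.5*np.diag(Sigma)

samples = np.exp(rng.multivariate_normal(mu, Sigma, size=10**5))   # inherently positive
print(samples.mean(axis=0))                      # reproduces the given means
print(np.corrcoef(samples.T))                    # close to the target correlations
```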

  6. Drug binding affinities and potencies are best described by a log-normal distribution and use of geometric means

    International Nuclear Information System (INIS)

    Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.

    1986-01-01

    (-)-Norepinephrine (NE) is used as an internal standard in the authors' in vitro adrenergic assays, and the concentration of NE which produces a half-maximal inhibition of specific radioligand binding (affinity; K_I), or a half-maximal contractile response (potency; ED50), has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log10-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of 3H-prazosin to rat liver (α1-), 3H-rauwolscine to rat cortex (α2-) and 3H-dihydroalprenolol to rat ventricle (β1-) or rat lung (β2-receptors) was inhibited by NE; the distributions of NE K_I's at all these sites were skewed to the right, with highly significant departures from normality. The same log-normal pattern held for the ED50's of NE in isolated rabbit aorta (α1), phenoxybenzamine-treated dog saphenous vein (α2) and guinea pig atrium (β1). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normality is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses.
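    The recommended statistic is one line of arithmetic: the geometric mean is the exponential of the arithmetic mean of the logs. Hypothetical ED50 values for illustration:

```python
import numpy as np

potencies = np.array([2.1e-7, 5.5e-7, 1.3e-6, 8.0e-7, 3.2e-7])   # hypothetical ED50s (M)
arith = potencies.mean()
geo = np.exp(np.log(potencies).mean())        # geometric mean = exp(mean of logs)
print(f"arithmetic {arith:.2e} M, geometric {geo:.2e} M")         # geometric is less tail-dominated
```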

  7. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    International Nuclear Information System (INIS)

    Agterberg, Frits

    2017-01-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
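    The tail-fitting step described here, a straight line on a log rank-log size plot, can be sketched on synthetic Pareto data; the number of top deposits included in the fit is an assumption of the illustration:

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic deposit sizes with a Pareto tail (alpha = 1.5, minimum size 1).
sizes = np.sort(rng.pareto(1.5, 2000) + 1.0)[::-1]

# Log rank - log size: for a Pareto tail, rank(x) ~ x^{-alpha}, i.e. a straight
# line of slope -alpha in log-log coordinates. Fit the largest 200 deposits.
k = 200
slope, intercept = np.polyfit(np.log(sizes[:k]), np.log(np.arange(1, k + 1)), 1)
print("Pareto coefficient estimate:", -slope)     # close to 1.5
```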

  8. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    Energy Technology Data Exchange (ETDEWEB)

    Agterberg, Frits, E-mail: agterber@nrcan.gc.ca [Geological Survey of Canada (Canada)

    2017-07-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that

  9. A numerical study of the segregation phenomenon of lognormal particle size distributions in the rotating drum

    Science.gov (United States)

    Yang, Shiliang; Sun, Yuhao; Zhao, Ya; Chew, Jia Wei

    2018-05-01

    Granular materials are mostly polydisperse, which gives rise to phenomena such as segregation that has no monodisperse counterpart. The discrete element method is applied to simulate lognormal particle size distributions (PSDs) with the same arithmetic mean particle diameter but different PSD widths in a three-dimensional rotating drum operating in the rolling regime. Despite having the same mean particle diameter, as the PSD width of the lognormal PSDs increases, (i) the steady-state mixing index, the total kinetic energy, the ratio of the active region depth to the total bed depth, the mass fraction in the active region, the steady-state active-passive mass-based exchanging rate, and the mean solid residence time (SRT) of the particles in the active region increase, while (ii) the steady-state gyration radius, the streamwise velocity, and the SRT in the passive region decrease. Collectively, these highlight the need for more understanding of the effect of PSD width on the granular flow behavior in the rotating drum operating in the rolling flow regime.
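    Holding the arithmetic mean diameter fixed while widening a lognormal PSD only requires shifting the log-mean, μ = ln(d̄) − σ²/2. A sketch with arbitrary diameters and widths:

```python
import numpy as np

rng = np.random.default_rng(12)
d_mean = 2.0e-3                                   # fixed arithmetic mean diameter (m)
for sigma in (0.1, 0.3, 0.6):                     # increasing PSD width in log space
    mu = np.log(d_mean) - 0.5*sigma**2            # keeps E[d] = d_mean for every width
    d = rng.lognormal(mu, sigma, 10**5)
    print(sigma, d.mean(), d.std()/d.mean())      # same mean, growing relative spread
```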

  10. ORILAM, a three-moment lognormal aerosol scheme for mesoscale atmospheric model: Online coupling into the Meso-NH-C model and validation on the Escompte campaign

    Science.gov (United States)

    Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert

    2005-09-01

    Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and strengths: the sectional approach is able to describe every kind of distribution, whereas the lognormal one makes an assumption about the form of the distribution with a smaller number of explicit variables. For the latter reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled into three-dimensional mesoscale or CTM models. This paper presents the concepts and hypotheses for a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular strength of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of the lognormal mode is lower than that of two bins of the sectional approach.
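    The three-moment bookkeeping can be inverted in closed form: for a lognormal mode, m_k = m₀ r_g^k exp(k² ln²σ_g/2), so m₀m₆/m₃² = exp(9 ln²σ_g) recovers the geometric standard deviation and then the median radius. A round-trip sketch (not ORILAM code) with assumed aerosol parameters:

```python
import numpy as np

def lognormal_from_moments(m0, m3, m6):
    # Moments of a lognormal number distribution: m_k = m0 * rg^k * exp(k^2 * ln^2(sg) / 2).
    ln2_sg = np.log(m0*m6/m3**2) / 9.0             # from m0*m6/m3^2 = exp(9 * ln^2 sg)
    rg = (m3 / (m0*np.exp(4.5*ln2_sg)))**(1.0/3.0)
    return m0, rg, np.exp(np.sqrt(ln2_sg))         # number, median radius, geometric std dev

# Round-trip check with an assumed mode: N = 1e9 m^-3, rg = 0.05 um, sg = 1.8.
N, rg, sg = 1e9, 0.05e-6, 1.8
L = np.log(sg)**2
m3 = N * rg**3 * np.exp(4.5*L)
m6 = N * rg**6 * np.exp(18.0*L)
print(lognormal_from_moments(N, m3, m6))
```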

  11. Log-normal spray drop distribution...analyzed by two new computer programs

    Science.gov (United States)

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  12. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m² to a normal distribution with a mean of 3.23 kg edible/m² and a standard deviation of 0.442 kg edible/m². The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a geometric standard deviation value of 3.33.

  13. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    Science.gov (United States)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model is selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing ratios concerning biases and MSEs. We also present a published data set to illustrate the study.
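    The mis-specification experiment is easy to reproduce in miniature: draw lognormal samples, estimate the mean under both the correct lognormal MLE and the (wrong) Weibull QMLE, and compare biases. A sketch with arbitrary parameters and a Weibull location fixed at zero:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(13)
mu, sigma = 0.0, 0.5
true_mean = np.exp(mu + 0.5*sigma**2)

biases_mle, biases_qmle = [], []
for _ in range(200):
    x = rng.lognormal(mu, sigma, 100)
    # MLE of the mean under the (correct) lognormal model.
    m, s = np.log(x).mean(), np.log(x).std()
    biases_mle.append(np.exp(m + 0.5*s**2) - true_mean)
    # QMLE of the mean under the (wrong) Weibull model.
    c, loc, scale = weibull_min.fit(x, floc=0.0)
    biases_qmle.append(weibull_min.mean(c, 0.0, scale) - true_mean)

print(np.mean(biases_mle), np.mean(biases_qmle))   # the ratio of biases gauges the impact
```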

  14. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  15. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  16. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. ?? 1988 International Association for Mathematical Geology.

  17. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    Science.gov (United States)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates the multifractal analysis.

  18. Weibull and lognormal Taguchi analysis using multiple linear regression

    International Nuclear Information System (INIS)

    Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.

    2015-01-01

    The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis on both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper the issues of estimating a Weibull family by using a design of experiments (DOE) are first addressed by using an L9(3^4) orthogonal array (OA) in both the TM and the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the used OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family for both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform their ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.
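    One standard way to estimate a Weibull family by linear regression, in the spirit of the regression approach described above though not necessarily the authors' exact formulation, is median-rank regression: ln(−ln(1 − F)) is linear in ln t with slope β and intercept −β ln η. A sketch on synthetic lifetimes:

```python
import numpy as np

rng = np.random.default_rng(14)
t = np.sort(rng.weibull(2.0, 50) * 100.0)          # synthetic lifetimes: beta = 2, eta = 100

# Median-rank regression: approximate F_i by (i - 0.3)/(n + 0.4); the Weibull CDF
# linearizes as ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).
n = t.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
beta, c = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
eta = np.exp(-c / beta)
print(beta, eta)                                   # close to 2 and 100
```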

  19. Small Sample Properties of Asymptotically Efficient Estimators of the Parameters of a Bivariate Gaussian–Weibull Distribution

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2012-01-01

    Two important wood properties are stiffness (modulus of elasticity or MOE) and bending strength (modulus of rupture or MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two or three parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of...

  20. [A study on the departmental distribution of mortality by cause: some evidence concerning two populations].

    Science.gov (United States)

    Damiani, P; Masse, H; Aubenque, M

    1984-01-01

    The distributions of proportions of deaths by cause are analyzed for each department of France by sex for the age group 45 to 64. The data are official French departmental data on causes of death for the period 1968-1970. The authors conclude that these distributions are the sum of two log-normal distributions. They also identify the existence of two populations according to whether the cause of death was endogenous or exogenous. (summary in ENG)
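    A sum (mixture) of two log-normal components can be recovered by fitting a two-component Gaussian mixture to the log-transformed proportions; the sketch below uses synthetic, hypothetical parameters rather than the French departmental data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(15)
# Two sub-populations of death proportions, each lognormal (hypothetical parameters,
# standing in for "endogenous" and "exogenous" causes).
x = np.concatenate([rng.lognormal(-3.0, 0.3, 600), rng.lognormal(-1.5, 0.4, 400)])

# A mixture of two lognormals is a two-component Gaussian mixture in log space.
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(x).reshape(-1, 1))
print(gm.weights_, gm.means_.ravel(), np.sqrt(gm.covariances_).ravel())
```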

  1. Interpretation of the log-normal behaviour of uranium grade in intrusive rocks

    International Nuclear Information System (INIS)

    Valencia, Jacinto; Palacios, Andres; Maguina, Jose

    2015-01-01

    The analysis and processing of uranium-grade data obtained from an intrusive rock by the gamma-spectrometry method are discussed; a better correlation between uranium and thorium is obtained when the logarithms of these analyses are used and represented in a thorium/uranium diagram. This is because the log-normal distribution expression gives a closer fit to the spatial distribution of uranium in a mineral deposit. The representation of a normal distribution and of a log-normal distribution is shown. The interpretive part explains, by means of diagrams, the behavior of the thorium/uranium relation and its relation to potassium, based on direct measurements of grades obtained in the field at sampling points along a section of the San Ramon granite (SR) and the volcanics of the Mitu Group (GM), where the granitic rock of this unit has been identified as a source of uranium. (author)

  2. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution was the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.
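
    A hedged sketch of the sample TL-moment estimator (the Elamir-Seheult formula) on which (t1, 0)-trimmed fitting of this kind relies; the streamflow array is a synthetic stand-in and the function is illustrative rather than the authors' code.

```python
# Sample TL-moments with t1 lowest and t2 highest order statistics trimmed.
import numpy as np
from scipy.special import comb

def sample_tl_moment(x, r, t1=0, t2=0):
    """r-th sample TL-moment; reduces to the ordinary L-moment when t1 = t2 = 0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    total = 0.0
    for k in range(r):
        w = comb(i - 1, r + t1 - 1 - k) * comb(n - i, t2 + k)
        total += (-1) ** k * comb(r - 1, k) * np.sum(w * x)
    return total / (r * comb(n, r + t1 + t2))

rng = np.random.default_rng(1)
flows = rng.lognormal(4.0, 0.8, size=60)      # stand-in annual maxima
l1 = sample_tl_moment(flows, 1, t1=4)          # TL-moments (4, 0)
l2 = sample_tl_moment(flows, 2, t1=4)
print(l1, l2)
```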

  3. Indirect estimation of the Convective Lognormal Transfer function model parameters for describing solute transport in unsaturated and undisturbed soil.

    Science.gov (United States)

    Mohammadi, Mohammad Hossein; Vanclooster, Marnik

    2012-05-01

    Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer model from knowledge of the soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions the air-filled pore volume acts as an inert solid phase, allowing the use of the Arya et al. (1999) pragmatic approach to estimate solute travel time statistics from the saturation degree and SMC parameters. The approach is evaluated using a set of partially saturated transport experiments as presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested BTCs, solute transport predicted with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. Given that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    Science.gov (United States)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively-appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
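
    A sketch of the plain moment-matching baseline (Fenton-Wilkinson style) that approximations of this kind refine: match the exact mean and variance of the weighted sum and read off the parameters of a single approximating lognormal. The paper's polynomial series correction is not reproduced, and independence of the summands is assumed.

```python
# Fenton-Wilkinson moment matching for a weighted sum of independent lognormals.
import numpy as np

def lognormal_sum_fw(mu, sigma, w):
    mu, sigma, w = map(np.asarray, (mu, sigma, w))
    m = w * np.exp(mu + sigma**2 / 2)                             # component means
    v = w**2 * (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)   # component variances
    M, V = m.sum(), v.sum()                                       # independence assumed
    s2 = np.log(1.0 + V / M**2)
    return np.log(M) - s2 / 2, np.sqrt(s2)                        # (mu_S, sigma_S)

mu_S, sigma_S = lognormal_sum_fw([0.1, 0.3, -0.2], [0.4, 0.5, 0.3], [0.5, 0.3, 0.2])
print(mu_S, sigma_S)
```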

  5. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    Science.gov (United States)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ] of S with respect to a conditioning random variable Λ...

  6. Annual rainfall statistics for stations in the Top End of Australia: normal and log-normal distribution analysis

    International Nuclear Information System (INIS)

    Vardavas, I.M.

    1992-01-01

    A simple procedure is presented for the statistical analysis of measurement data where the primary concern is the determination of the value corresponding to a specified average exceedance probability. The analysis employs the normal and log-normal frequency distributions together with a χ²-test and an error analysis. The error analysis introduces the concept of a counting error criterion, or ζ-test, to test whether the data are sufficient to make the χ²-test reliable. The procedure is applied to the analysis of annual rainfall data recorded at stations in the tropical Top End of Australia where the Ranger uranium deposit is situated. 9 refs., 12 tabs., 9 figs
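
    A sketch of the core computation described above: fit normal and log-normal models to annual rainfall totals and read off the value at a specified average exceedance probability (1% here). The data are synthetic stand-ins, not the Top End records.

```python
# Exceedance value under normal and log-normal fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rainfall = rng.lognormal(mean=7.2, sigma=0.25, size=40)   # mm/year, synthetic

p_exceed = 0.01
mu_n, sd_n = rainfall.mean(), rainfall.std(ddof=1)
x_normal = stats.norm.ppf(1 - p_exceed, mu_n, sd_n)

logs = np.log(rainfall)
x_lognormal = np.exp(stats.norm.ppf(1 - p_exceed, logs.mean(), logs.std(ddof=1)))
print(f"1% exceedance value: normal {x_normal:.0f} mm, log-normal {x_lognormal:.0f} mm")
```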

  7. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    Science.gov (United States)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the world's two thousand leading or strongest publicly-traded companies (G-2000) based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  8. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. The literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
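
    A sketch of a stick-breaking mixture for travel times, capped at six components as in the study. Note the assumptions: scikit-learn's BayesianGaussianMixture uses variational inference rather than the paper's MCMC, the data are synthetic, and fitting in log space makes each Gaussian component a lognormal state in travel-time space.

```python
# Dirichlet-process-style mixture of lognormal travel-time states.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
travel_time = np.concatenate([rng.lognormal(1.6, 0.1, 800),   # free flow (synthetic)
                              rng.lognormal(2.1, 0.2, 200)])  # congested (synthetic)

dpmm = BayesianGaussianMixture(
    n_components=6,                                   # upper bound; extras get ~zero weight
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0,
).fit(np.log(travel_time).reshape(-1, 1))
print(np.round(dpmm.weights_, 3))                     # effective number of states
```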

  9. On Riemann zeroes, lognormal multiplicative chaos, and Selberg integral

    International Nuclear Information System (INIS)

    Ostrovsky, Dmitry

    2016-01-01

    Rescaled Mellin-type transforms of the exponential functional of the Bourgade–Kuan–Rodgers statistic of Riemann zeroes are conjecturally related to the distribution of the total mass of the limit lognormal stochastic measure of Mandelbrot–Bacry–Muzy. The conjecture implies that a non-trivial, log-infinitely divisible probability distribution is associated with Riemann zeroes. For application, integral moments, covariance structure, multiscaling spectrum, and asymptotics associated with the exponential functional are computed in closed form using the known meromorphic extension of the Selberg integral. (paper)

  10. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability to model lifetime events better than other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are standard ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability function R(t) = P(X > t) and of P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
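
    A hedged sketch of the maximum likelihood route for the two-parameter exponentiated Rayleigh (Burr type X) distribution, F(x) = (1 - exp(-l*x^2))^a, together with the plug-in estimate of R(t) = P(X > t). The paper's UMVUE construction is not reproduced; data and starting values are illustrative.

```python
# MLE for the exponentiated Rayleigh distribution and plug-in R(t).
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, x):
    a, l = np.exp(theta)                        # optimize on log scale for positivity
    z = 1.0 - np.exp(-l * x**2)
    return -np.sum(np.log(2*a*l) + np.log(x) - l*x**2 + (a - 1)*np.log(z))

rng = np.random.default_rng(4)
u = rng.uniform(size=100)
x = np.sqrt(-np.log(1.0 - u**(1/1.5)) / 0.5)    # inverse-CDF sample with a=1.5, l=0.5

res = minimize(negloglik, x0=np.log([1.0, 1.0]), args=(x,), method="Nelder-Mead")
a_hat, l_hat = np.exp(res.x)
t = 1.0
R_t = 1.0 - (1.0 - np.exp(-l_hat * t**2))**a_hat
print(a_hat, l_hat, R_t)
```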

  11. Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio

    Science.gov (United States)

    Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.

    2017-12-01

    Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian, and moreover a lognormal, distribution. Previously, statistical testing procedures, which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test which incorporated the results of the former tests, were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate the necessity for a Data Assimilation (DA) system to be able to properly use the lognormally-distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
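
    A minimal sketch of the "sliding window" testing idea: run the named tests on a moving window of a (log-transformed) mixing-ratio series. The window length and data are illustrative assumptions, not the GFS processing chain.

```python
# Moving-window Gaussianity tests on a synthetic mixing-ratio series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
q = rng.lognormal(mean=2.0, sigma=0.5, size=1000)   # synthetic mixing ratio, g/kg

window, step = 120, 60
for start in range(0, q.size - window + 1, step):
    w = q[start:start + window]
    p_raw = stats.jarque_bera(w).pvalue             # Gaussian in raw space?
    p_log = stats.shapiro(np.log(w)).pvalue         # Gaussian in log space (lognormal)?
    print(start, round(p_raw, 3), round(p_log, 3))
```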

  12. Reliability Implications in Wood Systems of a Bivariate Gaussian-Weibull Distribution and the Associated Univariate Pseudo-truncated Weibull

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2014-01-01

    Two important wood properties are the modulus of elasticity (MOE) and the modulus of rupture (MOR). In the past, the statistical distribution of the MOE has often been modeled as Gaussian, and that of the MOR as lognormal or as a two- or three-parameter Weibull distribution. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior...

  13. Statistics of the acoustic emission signals parameters from Zircaloy-4 fuel cladding

    International Nuclear Information System (INIS)

    Oliveto, Maria E.; Lopez Pumarega, Maria I.; Ruzzante, Jose E.

    2000-01-01

    A statistical analysis of acoustic emission signal parameters (amplitude, duration and risetime) was carried out. CANDU-type Zircaloy-4 fuel claddings were pressurized up to rupture: one set of five normal pieces and six with included defects; acoustic emission was monitored on-line. The amplitude and duration frequency distributions were fitted with lognormal distribution functions, and the risetime with an exponential one. Using analysis of variance, acoustic emission was found appropriate for distinguishing between the defective and non-defective subsets. Cluster analysis applied to the mean values of the acoustic emission signal parameters was not effective in distinguishing the two sets of fuel claddings studied. (author)

  14. Asymptotics of sums of lognormal random variables with Gaussian copula

    DEFF Research Database (Denmark)

    Asmussen, Søren; Rojas-Nandayapa, Leonardo

    2008-01-01

    Let (Y1, ..., Yn) have a joint n-dimensional Gaussian distribution with a general mean vector and a general covariance matrix, and let Xi = eYi, Sn = X1 + ⋯ + Xn. The asymptotics of P (Sn > x) as n → ∞ are shown to be the same as for the independent case with the same lognormal marginals. In part...

  15. The size distributions of all Indian cities

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen; Danforth, Diana

    2017-05-01

    We apply five distributions (lognormal, double-Pareto lognormal, lognormal-upper tail Pareto, Pareto tails-lognormal, and Pareto tails-lognormal with differentiability restrictions) to estimate the size distribution of all Indian cities. Since India contains numerous small cities, it is important to explicitly model the lower-tail behavior when studying the distribution of all Indian cities. Our results rigorously confirm, using both graphical and formal statistical tests, that among these five distributions, Pareto tails-lognormal is the better suited parametrization of the Indian city size data, verifying that the Indian city size distribution exhibits a strong reverse Pareto in the lower tail, lognormal in the mid-range body, and Pareto in the upper tail.

  16. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.

  17. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    Science.gov (United States)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing correlation coefficients among sub-channels lead to system performance degradation. Moreover, receiver diversity shows better performance in resisting the channel fading caused by spatial correlation.

  18. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
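
    A sketch of the rank-size idea behind the tail-truncated method: regress log(size) on log(rank) for the largest discovered fields; for a Pareto upper tail the slope is approximately -1/k, with k the shape parameter. This is an illustration under synthetic data, not the authors' estimator.

```python
# Pareto shape from a log-log rank-size regression on the largest fields.
import numpy as np

rng = np.random.default_rng(6)
sizes = np.sort((rng.pareto(1.2, size=400) + 1.0) * 5.0)[::-1]  # synthetic pool sizes

top = sizes[:150]                                  # "discovered" largest fields
rank = np.arange(1, top.size + 1)
slope, _ = np.polyfit(np.log(rank), np.log(top), 1)
print("shape estimate k ~", -1.0 / slope)
```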

  19. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers for the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values of <1000 mL/g. (2) Set the range to an order of magnitude for Ksp values of <10⁻⁶ M and to a factor of two for Ksp values of >10⁻⁶ M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from on-going site-specific field/laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007

  20. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.

  1. Lognormal switching times for titanium dioxide bipolar memristors: origin and resolution

    International Nuclear Information System (INIS)

    Medeiros-Ribeiro, Gilberto; Perner, Frederick; Carter, Richard; Abdalla, Hisham; Pickett, Matthew D; Williams, R Stanley

    2011-01-01

    We measured the switching time statistics for a TiO₂ memristor and found that they followed a lognormal distribution, which is a potentially serious problem for computer memory and data storage applications. We examined the underlying physical phenomena that determine the switching statistics and proposed a simple analytical model for the distribution based on the drift/diffusion equation and previously measured nonlinear drift behavior. We designed a closed-loop switching protocol that dramatically narrows the time distribution, which can significantly improve memory circuit performance and reliability.

  2. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    ... the saddlepoint x(θ) determined by E[Xe^(−θX)]/L(θ) = x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of a lognormal sum Sn = X1 + ⋯ + Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we...

  3. Determining prescription durations based on the parametric waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-01-01

    ... a two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users ... in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies ... (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide...

  4. Recovering Parameters of Johnson's SB Distribution

    Science.gov (United States)

    Bernard R. Parresol

    2003-01-01

    A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...

  5. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. - Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment

  6. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  7. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
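
    A short sketch of two of the compared estimator families for the two-parameter (location, scale) exponential distribution: the MLE and its bias-corrected modification. The data are synthetic stand-ins.

```python
# MLE and bias-corrected estimators for the two-parameter exponential.
import numpy as np

rng = np.random.default_rng(7)
x = 10.0 + rng.exponential(scale=4.0, size=50)     # location 10, scale 4

n = x.size
loc_mle = x.min()                                  # MLE of location
scale_mle = x.mean() - x.min()                     # MLE of scale

# Bias-corrected (modified) versions:
scale_unbiased = n * scale_mle / (n - 1)
loc_unbiased = loc_mle - scale_unbiased / n
print(loc_mle, scale_mle, loc_unbiased, scale_unbiased)
```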

  8. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    Science.gov (United States)

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  9. Empirical analysis on the runners' velocity distribution in city marathons

    Science.gov (United States)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City marathon, the Chicago marathon, the Berlin marathon and the London marathon. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers, and of partial finishers in the fastest age group, both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, a transition from the Gaussian distribution back to a log-normal one occurs at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.

  10. Distribution characteristics of interfacial parameter in downward gas-liquid two-phase flow in vertical circular tube

    International Nuclear Information System (INIS)

    Liu Guoqiang; Yan Changqi; Tian Daogui; Sun Licheng

    2014-01-01

    An experimental study was performed on the distribution characteristics of interfacial parameters of downward gas-liquid flow in a vertical circular tube, measured with a two-sensor optical fiber probe. The test section is a circular pipe with an inner diameter of 50 mm and a length of 2000 mm. The superficial velocities of the gas and liquid phases cover the ranges of 0.004-0.077 m/s and 0.43-0.71 m/s, respectively. The results show that the distributions of the interfacial parameters in downward bubbly flows are quite different from those in upward bubbly flows. In upward flow the parameters present 'wall-peak' or 'core-peak' distributions, whereas in downward flow they show 'wall-peak' or 'wide-peak' distributions. The average value of the void fraction in vertical downward flow is about 119.6%-145.0% larger than that in upward flow, and the interfacial area concentration is about 18.8%-82.5% larger than that in upward flow. The distribution of interfacial parameters shows an obvious tendency toward uniformity. (authors)

  11. Methodology for lognormal modelling of malignant pleural mesothelioma survival time distributions: a study of 5580 case histories from Europe and USA

    Energy Technology Data Exchange (ETDEWEB)

    Mould, Richard F [41 Ewhurst Avenue, South Croydon, Surrey CR2 0DH (United Kingdom); Lahanas, Michael [Klinikum Offenbach, Strahlenklinik, 66 Starkenburgring, 63069 Offenbach am Main (Germany); Asselain, Bernard [Institut Curie, Biostatistiques, 26 rue d' Ulm, 75231 Paris Cedex 05 (France); Brewster, David [Director, Scottish Cancer Registry, Information Services (NHS National Services Scotland) Area 155, Gyle Square, 1 South Gyle Crescent, Edinburgh EH12 9EB (United Kingdom); Burgers, Sjaak A [Department of Thoracic Oncology, Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam, The (Netherlands); Damhuis, Ronald A M [Rotterdam Cancer Registry, Rochussenstraat 125, PO Box 289, 3000 AG Rotterdam, The (Netherlands); Rycke, Yann De [Institut Curie, Biostatistiques, 26 rue d' Ulm, 75231 Paris Cedex 05 (France); Gennaro, Valerio [Liguria Mesothelioma Cancer Registry, Etiology and Epidemiology Department, National Cancer Research Institute, Pad. Maragliano, Largo R Benzi, 10-16132 Genoa (Italy); Szeszenia-Dabrowska, Neonila [Department of Occupational and Environmental Epidemiology, National Institute of Occupational Medicine, PO Box 199, Swietej Teresy od Dzieciatka Jezus 8, 91-348 Lodz (Poland)

    2004-09-07

    A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0.

  13. Frequency distribution of Radium-226, Thorium-228 and Potassium-40 concentration in ploughed soils

    International Nuclear Information System (INIS)

    Drichko, V.F.; Krisyuk, B.E.; Travnikova, I.G.; Lisachenko, E.P.; Dubenskaya, M.A.

    1977-01-01

    The results of studying the laws of distribution of Ra-226, Th-228 and K-40 concentrations in podsol, chernozem and saline soils are considered. Radionuclide concentrations were determined by a gamma-spectrometric method in samples taken from the arable soil layer according to the generally accepted agrotechnical procedure. The measuring procedure is described. The results show that the frequency distributions of radionuclide concentrations transform from an asymmetric form in normal coordinates into a symmetric form in logarithmic coordinates. The use of the lognormal law to describe the frequency distributions of concentrations is substantiated, and the values of the distribution parameters are given. Analysis of the data shows that Ra-226 and Th-228 concentrations in soils are distributed lognormally, and K-40 concentrations both normally and lognormally. In order of decreasing mean concentrations of Ra-226 and Th-228, the soils rank: chernozems = chernozem salterns > podsols; in order of decreasing mean quadratic deviation: podsols > chernozems = salterns. Both the mean quadratic deviation and the distribution type must be determined for a full characterization of the radioactivity of the soils studied.

  14. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)

  15. Generating log-normal mock catalog of galaxies in redshift space

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun; Komatsu, Eiichiro [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Jeong, Donghui, E-mail: aniket@mpa-garching.mpg.de, E-mail: makiya@mpa-garching.mpg.de, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: djeong@psu.edu, E-mail: ssaito@mpa-garching.mpg.de, E-mail: komatsu@mpa-garching.mpg.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
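
    A minimal sketch of the log-normal mock recipe on a 3-D grid, under simplifying assumptions (a white Gaussian field for brevity): draw a Gaussian field, exponentiate it into a log-normal density contrast, and Poisson-sample galaxy counts. The public code additionally imposes a target power spectrum and builds velocities and the redshift-space mapping; none of that is reproduced here.

```python
# Log-normal density field with Poisson-sampled galaxies.
import numpy as np

rng = np.random.default_rng(8)
ngrid, nbar_cell = 64, 0.05                        # cells per side, mean galaxies/cell

g = rng.normal(0.0, 0.6, size=(ngrid,) * 3)        # Gaussian field (white, for brevity)
delta = np.exp(g - g.var() / 2.0) - 1.0            # log-normal contrast, mean ~ 0

counts = rng.poisson(nbar_cell * (1.0 + delta))    # Poisson-sample the galaxies
print(counts.sum(), delta.mean())
```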

  16. Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.

    Science.gov (United States)

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. The univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduce model accuracy. The focus on this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) was estimated by crash type and

  17. Simulation of depth distribution of geological strata. HQSW program

    International Nuclear Information System (INIS)

    Czubek, J.A.; Kolakowski, L.

    1987-01-01

    A method for simulating a layered geological formation for a given geological parameter is presented. The geological formation contains at least two types of layers and is given with the depth resolution Δh corresponding to the thickness of a hypothetical elementary layer. Two types of geostatistical distributions of the rock parameters are considered: modified normal and modified lognormal, for which the input data are the expected value and the variance. The HQSW simulation program given in the paper generates in a random way (but in a given repeatable sequence) the thicknesses of a given type of strata, their average specific radioactivity and the variance of specific radioactivity within a given layer. 8 refs., 14 figs., 1 tab. (author)

  18. Localized massive halo properties in BAHAMAS and MACSIS simulations: scalings, log-normality, and covariance

    Science.gov (United States)

    Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.

    2018-05-01

    Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas| Mhalo, z) is accurately described by a multivariate, log-normal distribution, and; 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.

  19. The reliability assessment of the electromagnetic valve of high-speed electric multiple units braking system based on two-parameter exponential distribution

    Directory of Open Access Journals (Sweden)

    Jianwei Yang

    2016-06-01

    In order to address the reliability assessment of a braking system component of high-speed electric multiple units, this article, based on the two-parameter exponential distribution, provides the maximum likelihood estimation and Bayes estimation under a type-I life test. First of all, we evaluate the failure probability value according to the classical estimation method and then obtain the maximum likelihood estimates of the parameters of the two-parameter exponential distribution by using the modified likelihood function. On the other hand, based on Bayesian theory, this article also selects the beta and gamma distributions as the prior distributions, combines them with the modified maximum likelihood function, and applies a Markov chain Monte Carlo algorithm to parameter assessment based on the Bayes estimation method for the two-parameter exponential distribution, so that two reliability mathematical models of the electromagnetic valve are obtained. Finally, through the type-I life test, the failure rates according to the maximum likelihood estimation and the Bayes estimation method based on the Markov chain Monte Carlo algorithm are, respectively, 2.650 × 10⁻⁵ and 3.037 × 10⁻⁵. Compared with the observed failure rate of the electromagnetic valve, 3.005 × 10⁻⁵, this shows that the Bayes method can use a Markov chain Monte Carlo algorithm to estimate the reliability of the two-parameter exponential distribution, and that the Bayes estimate is closer to the value for the electromagnetic valve. Thus, by fully integrating multi-source information, the Bayes estimation method can better modify and precisely estimate the parameters, which can provide a theoretical basis for the safe operation of high-speed electric multiple units.

  20. Temporal Statistical Analysis of Degree Distributions in an Undirected Landline Phone Call Network Graph Series

    Directory of Open Access Journals (Sweden)

    Orgeta Gjermëni

    2017-10-01

    This article aims to provide new results about the intraday degree sequence distribution considering phone call network graph evolution in time. More specifically, it tackles the following problem. Given a large amount of landline phone call data records, what is the best way to summarize the distinct number of calling partners per client per day? In order to answer this question, a series of undirected phone call network graphs is constructed based on data from a local telecommunication source in Albania. All network graphs of the series are simplified. Further, a longitudinal temporal study is made on this network graph series with respect to the degree distributions. Power law and log-normal distribution fittings on the degree sequence are compared on each of the network graphs of the series. The maximum likelihood method is used to estimate the parameters of the distributions, and a Kolmogorov-Smirnov test associated with a p-value is used to identify the plausible models. A direct distribution comparison is made through a Vuong test in the case that both distributions are plausible. Another goal was to describe the shape of the parameters' distributions. A Shapiro-Wilk test is used to test the normality of the data, and measures of shape are used to characterize the distributions. Study findings suggested that the log-normal distribution better models the intraday degree sequence data of the network graphs. It is not possible to say that the distributions of the log-normal parameters are normal.
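
    A sketch of the power-law vs log-normal comparison workflow (maximum likelihood fits plus a likelihood-ratio/Vuong-style test) using the third-party `powerlaw` package; using this particular tool is an assumption, as the authors describe the methodology but not their software, and the degree data here are synthetic.

```python
# Compare power-law and log-normal fits to a degree sequence.
import numpy as np
import powerlaw

rng = np.random.default_rng(9)
degrees = np.rint(rng.lognormal(1.0, 0.8, size=5000)).astype(int) + 1  # synthetic

fit = powerlaw.Fit(degrees, discrete=True)
R, p = fit.distribution_compare("power_law", "lognormal")
print(fit.power_law.alpha, R, p)   # R < 0 with small p favours the log-normal
```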

  1. Multilevel quadrature of elliptic PDEs with log-normal diffusion

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We apply multilevel quadrature methods for the moment computation of the solution of elliptic PDEs with lognormally distributed diffusion coefficients. The computation of the moments is a difficult task since they appear as high dimensional Bochner integrals over an unbounded domain. Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number of quadrature points times the complexity for a single elliptic PDE solve. The multilevel idea is to reduce this complexity by combining quadrature methods with different accuracies with several spatial discretization levels in a sparse grid like fashion.

  2. The PDF of fluid particle acceleration in turbulent flow with underlying normal distribution of velocity fluctuations

    International Nuclear Information System (INIS)

    Aringazin, A.K.; Mazhitov, M.I.

    2003-01-01

    We describe a formal procedure to obtain and specify the general form of a marginal distribution for the Lagrangian acceleration of a fluid particle in developed turbulent flow using a Langevin type equation and the assumption that the velocity fluctuation u follows a normal distribution with zero mean, in accord with the Heisenberg-Yaglom picture. For a particular representation, β = exp[u], of the fluctuating parameter β, we reproduce the underlying log-normal distribution and the associated marginal distribution, which was found to be in very good agreement with the new experimental data by Crawford, Mordant, and Bodenschatz on the acceleration statistics. We discuss possibilities for refining the log-normal model.
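
    A numerical sketch of the marginalization described above: a conditionally Gaussian acceleration whose scale parameter β = exp[u] fluctuates with u ~ N(0, s²), i.e. β is log-normal. Mixing the Gaussian over β yields the heavy-tailed marginal PDF; the width s is an illustrative assumption.

```python
# Marginal acceleration PDF from a log-normal mixture of Gaussians.
import numpy as np
from scipy import stats
from scipy.integrate import quad

s = 1.0                                             # illustrative log-scale width

def marginal_pdf(a):
    integrand = lambda b: stats.norm.pdf(a, scale=b) * stats.lognorm.pdf(b, s)
    return quad(integrand, 0.0, np.inf)[0]

for a in (0.0, 2.0, 10.0):
    print(a, marginal_pdf(a))                       # tails far heavier than Gaussian
```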

  3. Distribution functions for the linear region of the S-N curve

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Christian; Waechter, Michael; Masendorf, Rainer; Esderts, Alfons [TU Clausthal, Clausthal-Zellerfeld (Germany). Inst. for Plant Engineering and Fatigue Analysis

    2017-08-01

    This study establishes a database containing the results of fatigue tests from the linear region of the S-N curve, using sources from the literature. Each set of test results originates from testing metallic components at a single load level. Eighty-nine test series with sample sizes of 14 ≤ n ≤ 500 are included in the database, giving a total of 6,086 individual test results. The test series are tested for the type of distribution function (log-normal or 2-parameter Weibull) using the Shapiro-Wilk test, the Anderson-Darling test and probability plots. The majority of the tested series follow a log-normal distribution.
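
    A sketch of the distribution-type check applied to one test series: the logs of log-normal lifetimes should look Gaussian (Shapiro-Wilk), while the logs of two-parameter Weibull lifetimes follow a smallest-extreme-value (Gumbel) law, testable with Anderson-Darling and dist="gumbel_l". The data are synthetic.

```python
# Log-normal vs Weibull check for one fatigue test series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
cycles = rng.lognormal(mean=11.0, sigma=0.3, size=60)   # one load level, synthetic

log_n = np.log(cycles)
sw = stats.shapiro(log_n)                               # log-normal hypothesis
ad = stats.anderson(log_n, dist="gumbel_l")             # 2-p Weibull hypothesis
print(sw.pvalue, ad.statistic, ad.critical_values)
```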

  4. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modification of the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
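
    A Monte Carlo sketch of the paper's central point under illustrative assumptions (log-normal risk factor, nominal failure probability 1%, sample size 30): when the control threshold is set at an estimated quantile, the realized failure frequency exceeds the nominal level.

```python
# Parameter uncertainty inflates the realized failure frequency.
import numpy as np

rng = np.random.default_rng(11)
n, nominal, trials = 30, 0.01, 20000
z = 2.3263478740408408                     # standard normal 99% quantile

fails = 0
for _ in range(trials):
    sample = rng.lognormal(0.0, 1.0, size=n)
    logs = np.log(sample)
    threshold = np.exp(logs.mean() + z * logs.std(ddof=1))   # estimated 99% point
    fails += rng.lognormal(0.0, 1.0) > threshold             # fresh risk factor
print("realized failure frequency:", fails / trials)         # > 0.01 on average
```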

  5. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Vigorous efforts have been made in the literature to determine the best method for estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared in terms of mean square error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean square error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of Jeffreys' prior.
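
    As a point of reference for the comparison described above, the maximum likelihood fit of a two-parameter Weibull is straightforward with standard tools. The Python sketch below (true parameter values and sample size are hypothetical) estimates the mean square error of the ML scale estimate by simulation; the Bayesian estimators under the three loss functions are not reproduced here:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      alpha_true, beta_true = 2.0, 1.5     # scale and shape (hypothetical)

      sq_errors = []
      for _ in range(2000):
          data = stats.weibull_min.rvs(beta_true, scale=alpha_true,
                                       size=25, random_state=rng)
          # ML fit with the location fixed at zero (two-parameter Weibull)
          beta_hat, _, alpha_hat = stats.weibull_min.fit(data, floc=0)
          sq_errors.append((alpha_hat - alpha_true) ** 2)
      print("MSE of the ML scale estimate:", np.mean(sq_errors))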

  6. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  7. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data lead to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family-of-distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels: first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters.
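
    A simplified numerical illustration of such a decomposition, using the law of total variance rather than the authors' variance-based global sensitivity indices, might look as follows in Python (all distributions and parameter values are invented for the example):

      import numpy as np

      rng = np.random.default_rng(2)

      # Double-loop Monte Carlo: outer loop over the uncertain distribution
      # parameters, inner loop over the natural variability of X.
      n_outer, n_inner = 2000, 2000
      cond_means, cond_vars = [], []
      for _ in range(n_outer):
          mu = rng.normal(10.0, 1.0)      # uncertain location parameter
          sigma = rng.gamma(4.0, 0.25)    # uncertain scale parameter
          x = rng.normal(mu, sigma, n_inner)
          cond_means.append(x.mean())
          cond_vars.append(x.var())

      # Law of total variance: Var(X) = E[Var(X|theta)] + Var(E[X|theta])
      var_from_variability = np.mean(cond_vars)   # natural variability
      var_from_parameters = np.var(cond_means)    # parameter uncertainty
      total = var_from_variability + var_from_parameters
      print(var_from_variability / total, var_from_parameters / total)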

  8. The particle size distribution of fragmented melt debris from molten fuel coolant interactions

    International Nuclear Information System (INIS)

    Fletcher, D.F.

    1984-04-01

    Results are presented of a study of the types of statistical distributions which arise when examining debris from Molten Fuel Coolant Interactions. The lognormal probability distribution and the modifications of this distribution which result from the mixing of two distributions or the removal of some debris are described. Methods of fitting these distributions to real data are detailed. A two-stage fragmentation model has been developed in an attempt to distinguish between the debris produced by coarse mixing and by fine-scale fragmentation. However, attempts to fit this model to real data have proved unsuccessful. It was found that the debris particle size distributions from experiments at Winfrith with thermite-generated uranium dioxide/molybdenum melts were Upper Limit Lognormal. (U.K.)

  9. Microstructural parameters in 8 MeV Electron irradiated Bombyx mori silk fibers by wide-angle X-ray scattering studies (WAXS)

    International Nuclear Information System (INIS)

    Halabhavi, Sangappa

    2009-01-01

    The present work looks into the microstructural modification of Bombyx mori silk fibers induced by electron irradiation. The irradiation was performed in air at room temperature using an 8 MeV electron accelerator at doses of 0, 25, 50, 75 and 100 kGy. Irradiation of a polymer can be used to crosslink or degrade the desired component or to fixate the polymer morphology. The changes in the microstructural parameters of these natural polymer fibers have been studied using the wide-angle X-ray scattering (WAXS) method. The crystal imperfection parameters, such as crystallite size, lattice strain (g in %) and enthalpy (a*), have been determined by line profile analysis (LPA) using the Fourier method of Warren. Exponential, lognormal and Reinhold functions for the column length distributions have been used in the determination of these parameters. The goodness of the fit and the consistency of the results suggest that the exponential distribution gives much better results, even though the lognormal distribution has been widely used to estimate similar stacking faults in metal oxide compounds. (author)

  10. The mathematical formula of the intravaginal ejaculation latency time (IELT) distribution of lifelong premature ejaculation differs from the IELT distribution formula of men in the general male population

    Directory of Open Access Journals (Sweden)

    Paddy K.C. Janssen

    2016-03-01

    Full Text Available Purpose: To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods: We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF); the smaller the GOF, the more accurate the fitness. Results: The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is a different gamma distribution than the IELT distribution of men in the general male population. The Lognormal distribution of the gamma distributions most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE, with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions: Men with lifelong PE have a distinct IELT distribution, i.e., a Gumbel Max IELT distribution, that can only be retrieved from the general male population Lognormal IELT distribution if thousands of men were to participate in an IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research of the IELT.

  11. The mathematical formula of the intravaginal ejaculation latency time (IELT) distribution of lifelong premature ejaculation differs from the IELT distribution formula of men in the general male population

    Science.gov (United States)

    Janssen, Paddy K.C.

    2016-01-01

    Purpose To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF); the smaller the GOF, the more accurate the fitness. Results The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is a different gamma distribution than the IELT distribution of men in the general male population. The Lognormal distribution of the gamma distributions most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE, with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions Men with lifelong PE have a distinct IELT distribution, i.e., a Gumbel Max IELT distribution, that can only be retrieved from the general male population Lognormal IELT distribution if thousands of men were to participate in an IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research of the IELT. PMID:26981594

  12. Statistical distributions as applied to environmental surveillance data

    International Nuclear Information System (INIS)

    Speer, D.R.; Waite, D.A.

    1976-01-01

    Application of the normal, lognormal, and Weibull distributions to radiological environmental surveillance data was investigated for approximately 300 nuclide-medium-year-location combinations. The fit of the data to the distributions was compared through probability plotting (special graph paper provides a visual check) and W test calculations. Results show that 25% of the data fit the normal distribution, 50% fit the lognormal, and 90% fit the Weibull. Demonstration of how to plot each distribution shows that the normal and lognormal distributions are comparatively easy to use, while the Weibull distribution is complicated and difficult to use. Although current practice is to use normal distribution statistics, the normal distribution fit the fewest of the data groups considered in this study.

  13. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide the guidance necessary for selecting an error distribution, by analyzing the influence of the statistical distribution of bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases in which the error distribution is normal or lognormal, and the estimated intakes under the two distributions were compared. According to the results of this study, when the measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has a negligible influence on the results. In contrast, for measurement results for the daily excretion rate, the results obtained under the assumption of a lognormal distribution were 10% higher than those obtained under the assumption of a normal distribution. In view of these facts, where the uncertainty component is governed by counting statistics, the distribution type is considered to have no influence on the intake estimation, whereas where other components are predominant, it is clearly desirable to estimate the intake assuming a lognormal distribution.

  14. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm for fitting the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  15. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir B.

    2017-02-15

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric used to evaluate the performance of wireless systems. One difficulty in assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case in the log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, whose accuracies are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of mean shifting and covariance matrix scaling, to evaluate the OP of the SINR in a log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
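
    A minimal sketch of the mean-shifting component of such an importance sampling scheme (covariance scaling is omitted; the Gaussian parameters, threshold and shift below are illustrative only, not from the paper):

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(3)

      # Rare event: a sum of correlated lognormals exceeding a threshold.
      mu = np.zeros(4)
      Sigma = 0.5 * np.eye(4) + 0.1        # covariance of the Gaussian exponents
      gamma = 30.0                         # threshold
      shift = 1.2 * np.ones(4)             # mean shift toward the rare event

      n = 200_000
      x = rng.multivariate_normal(mu + shift, Sigma, size=n)
      hit = np.exp(x).sum(axis=1) > gamma
      # likelihood ratio between the nominal and the shifted densities
      w = (multivariate_normal.pdf(x, mu, Sigma)
           / multivariate_normal.pdf(x, mu + shift, Sigma))
      print(np.mean(hit * w))              # low-variance tail estimate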

  16. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2017-01-01

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric used to evaluate the performance of wireless systems. One difficulty in assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case in the log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, whose accuracies are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of mean shifting and covariance matrix scaling, to evaluate the OP of the SINR in a log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.

  17. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for the year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes, with biases in long-range transport also contributing. Radiative properties of the dust aerosol are derived from the approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which show good consistency with model estimates from other studies.

  18. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are needed for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  19. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using the WinBUGS software. The study begins with a brief review of these models, starting with the SMR method, followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method: it can overcome the SMR problem when no bladder cancer is observed in an area.
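
    For concreteness, the SMR itself is just the ratio of observed to expected counts per area; the short Python sketch below (with hypothetical counts) shows why small areas and rare diseases make it unstable, which is what the log-normal relative-risk model is meant to repair:

      import numpy as np

      O = np.array([0, 3, 7, 12])            # observed cases (hypothetical)
      E = np.array([1.8, 2.5, 6.1, 9.9])     # expected cases (hypothetical)

      smr = O / E
      print(smr)   # an area with O = 0 gets SMR = 0, and a small E makes
                   # the estimate jump wildly with a single extra case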

  20. Transformation of Bayesian posterior distribution into a basic analytical distribution

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2002-01-01

    Bayesian estimation is a well-known approach that is widely used in Probabilistic Safety Analyses for the estimation of input model reliability parameters, such as component failure rates or probabilities of failure upon demand. In this approach, a prior distribution, which contains some generic knowledge about a parameter, is combined with a likelihood function, which contains plant-specific data about the parameter. Depending on the type of prior distribution, the resulting posterior distribution can be estimated numerically or analytically. In many instances only a numerical Bayesian integration can be performed. In such a case the posterior is provided in the form of a tabular discrete distribution. On the other hand, it is much more convenient for a parameter's uncertainty distribution that is to be input into a PSA model to be provided in the form of some basic analytical probability distribution, such as the lognormal, gamma or beta distribution. One reason is that this enables much more convenient propagation of the parameters' uncertainties through the model up to the so-called top events, such as plant system unavailability or core damage frequency. Additionally, software tools used to run PSA models often require that a parameter's uncertainty distribution be defined as one of several allowed basic types of distributions. In such a case the posterior distribution produced by the Bayesian estimation needs to be transformed into an appropriate basic analytical form. In this paper, some approaches to the transformation of a posterior distribution into a basic probability distribution are proposed and discussed. They are illustrated by an example from the NPP Krsko PSA model. (author)
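
    One simple transformation of this kind, matching the first two moments of a tabular discrete posterior to a lognormal distribution, is sketched below in Python (the grid and weights are invented; the paper's actual approaches may differ):

      import numpy as np

      # Moment-matching a tabular discrete posterior to a lognormal:
      # given support points x with posterior probabilities p, choose the
      # lognormal whose mean and variance match the discrete posterior.
      x = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3])   # hypothetical grid
      p = np.array([0.10, 0.30, 0.35, 0.20, 0.05])   # hypothetical weights

      mean = np.sum(p * x)
      var = np.sum(p * (x - mean) ** 2)

      # Lognormal parameters reproducing this mean and variance:
      sigma2 = np.log(1.0 + var / mean**2)
      mu = np.log(mean) - 0.5 * sigma2
      print(mu, np.sqrt(sigma2))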

  1. Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles

    Science.gov (United States)

    Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya

    2018-03-01

    A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.

  2. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui

    2015-03-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact and approximate BER expressions are presented. We demonstrate that the approximate BER, obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through our asymptotic analysis, we derive a closed-form asymptotic expression for the BER performance loss of MDPSK with respect to MPSK in lognormal turbulence channels. © 2015 IEEE.
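
    A rough Monte Carlo illustration of the approximate-BER idea (a standard conditional MPSK symbol error rate averaged over lognormal fading, then divided by the bits per symbol; the modulation order, SNR, turbulence strength, SNR-irradiance relation and E[I] = 1 normalization are all assumptions of this sketch, not values or models from the paper):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      M, snr_db = 8, 20                     # illustrative values
      snr = 10 ** (snr_db / 10)

      # Lognormal turbulence, normalized so that E[I] = 1 (an assumption):
      # I = exp(2X) with X ~ N(-sigma_x**2, sigma_x**2).
      sigma_x = 0.25
      irr = np.exp(2 * rng.normal(-sigma_x**2, sigma_x, 1_000_000))

      # Conditional MPSK SER approximation, with the instantaneous SNR
      # taken proportional to the irradiance (a simplification), averaged
      # over the fading; then approximate BER = SER / log2(M).
      ser = np.mean(2 * norm.sf(np.sqrt(2 * snr * irr) * np.sin(np.pi / M)))
      print(ser / np.log2(M))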

  3. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty in the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty in a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in decision processes where the uncertainty in the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
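
    The algebra that makes the log-normal convenient here is that independent multiplicative uncertainty factors combine by adding their log-variances. A small Python sketch with invented geometric standard deviations (GSDs):

      import numpy as np

      gsd = np.array([1.5, 2.0, 1.3])    # GSDs of individual factors (invented)
      sigmas = np.log(gsd)               # standard deviations of the log factors

      gsd_total = np.exp(np.sqrt(np.sum(sigmas ** 2)))   # log-variances add
      median = 100.0                     # best-estimate value (illustrative)
      lo, hi = median / gsd_total ** 1.96, median * gsd_total ** 1.96
      print(gsd_total, (lo, hi))         # approximate 95% uncertainty interval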

  4. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment.

  5. X-ray diffraction microstructural analysis of bimodal size distribution MgO nano powder

    International Nuclear Information System (INIS)

    Suminar Pratapa; Budi Hartono

    2009-01-01

    Investigation of the characteristics of X-ray diffraction data for an MgO powdered mixture of nano and sub-nano particles has been carried out to reveal the crystallite-size-related microstructural information. The MgO powders were prepared by the co-precipitation method followed by heat treatment at 500 degree Celsius and 1200 degree Celsius for 1 hour, the difference in temperature being chosen to obtain two powders with distinct crystallite sizes and size distributions. The powders were then blended in air to give a presumably bimodal-size-distribution MgO nano powder. High-quality laboratory X-ray diffraction data for the powders were collected and then analysed with the Rietveld-based MAUD software using the lognormal size distribution. Results show that the single-mode powders exhibit spherical crystallite sizes (R) of 20(1) nm and 160(1) nm for the 500 degree Celsius and 1200 degree Celsius data, respectively, with the nanometric powder displaying a narrower crystallite size distribution, indicated by a lognormal dispersion parameter of 0.21 compared to 0.01 for the sub-nanometric powder. The mixture exhibits relatively more asymmetric peak broadening. Analysing the X-ray diffraction data for the latter specimen using a single-phase approach gives unrealistic results. Introducing a two-phase model for the double-phase mixture to accommodate the bimodal-size-distribution characteristics gives R = 100(6) nm and σ = 0.62 for the nanometric phase and R = 170(5) nm and σ = 0.12 for the sub-nanometric phase. (author)

  6. The Power of Heterogeneity: Parameter Relationships from Distributions

    Science.gov (United States)

    Röding, Magnus; Bradley, Siobhan J.; Williamson, Nathan H.; Dewi, Melissa R.; Nann, Thomas; Nydén, Magnus

    2016-01-01

    Complex scientific data is becoming the norm: many disciplines are growing immensely data-rich, and higher-dimensional measurements are performed to resolve complex relationships between parameters. Inherently multi-dimensional measurements can directly provide information on both the distributions of individual parameters and the relationships between them, such as in nuclear magnetic resonance and optical spectroscopy. However, when data originates from different measurements and comes in different forms, resolving parameter relationships is a matter of data analysis rather than experiment. We present a method for resolving relationships between parameters that are distributed individually and also correlated. In two case studies, we model the relationships between diameter and luminescence properties of quantum dots and the relationship between molecular weight and diffusion coefficient for polymers. Although it is expected that resolving complicated correlated relationships requires inherently multi-dimensional measurements, our method constitutes a useful contribution to the modelling of quantitative relationships between correlated parameters and measurements. We emphasise the general applicability of the method in fields where heterogeneity and complex distributions of parameters are obstacles to scientific insight. PMID:27182701

  7. On the generation of log-Levy distributions and extreme randomness

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2011-01-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Levy distributions. The log-Levy distributions are the Levy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Levy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness. (paper)

  8. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In this paper, the uncertainty analysis of component reliability models for independent failures is shown, and the present approach to the parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior proved to be the most appropriate uncertainty analysis; the posterior can be approximated with an appropriate probability distribution, in this paper a lognormal distribution. (author)

  9. Effects of Initial Values and Convergence Criterion in the Two-Parameter Logistic Model When Estimating the Latent Distribution in BILOG-MG 3.

    Directory of Open Access Journals (Sweden)

    Ingo W Nader

    Full Text Available Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT, but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.

  10. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    Science.gov (United States)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.

  11. Statistical analysis of wind speed using two-parameter Weibull distribution in Alaçatı region

    International Nuclear Information System (INIS)

    Ozay, Can; Celiktas, Melih Soner

    2016-01-01

    Highlights: • Wind speed and direction data from September 2008 to March 2014 have been analyzed. • The mean wind speed for the whole data set has been found to be 8.11 m/s. • The highest wind speed is observed in July, with a monthly mean value of 9.10 m/s. • The wind speed with the most energy has been calculated as 12.77 m/s. • The observed data have been fitted to a Weibull distribution, and the k and c parameters have been calculated as 2.05 and 9.16. - Abstract: The Weibull statistical distribution is a common method for analyzing wind speed measurements and determining wind energy potential. The Weibull probability density function can be used to forecast wind speed, wind density and wind energy potential. In this study, a two-parameter Weibull statistical distribution is used to analyze the wind characteristics of the Alaçatı region, located in Çeşme, İzmir. The data used in the density function were acquired from a wind measurement station in Alaçatı. Measurements were gathered at three heights (70, 50 and 30 m) at 10 min intervals for five and a half years. As a result of this study, the wind speed frequency distribution, wind direction trends, mean wind speed, and the shape and scale (k and c) Weibull parameters have been calculated for the region. The mean wind speed for the entire data set is found to be 8.11 m/s, and the k and c parameters are found to be 2.05 and 9.16, respectively. A wind direction analysis along with a wind rose graph for the region is also provided. The analysis suggests that higher wind speeds, ranging from 6 to 12 m/s, are prevalent in the sectors between 340 and 360°, while lower wind speeds, from 3 to 6 m/s, occur in the sectors between 10 and 29°. The results of this study contribute to the general knowledge of the region's wind energy potential and can be used as a source by investors and academics.
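
    The reported mean wind speed and the speed carrying the most energy follow directly from the fitted Weibull parameters via the standard formulas v_mean = c*Gamma(1 + 1/k) and v_E = c*(1 + 2/k)^(1/k); a quick check in Python:

      from scipy.special import gamma as G

      k, c = 2.05, 9.16                     # fitted shape and scale (m/s)

      mean_speed = c * G(1 + 1 / k)         # expected wind speed
      v_max_energy = c * (1 + 2 / k) ** (1 / k)   # speed carrying most energy

      print(mean_speed)      # ~8.11 m/s, matching the reported mean
      print(v_max_energy)    # ~12.77 m/s, matching the reported value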

  12. An investigation into the population abundance distribution of mRNAs, proteins, and metabolites in biological systems.

    Science.gov (United States)

    Lu, Chuan; King, Ross D

    2009-08-15

    Distribution analysis is one of the most basic forms of statistical analysis. Thanks to improved analytical methods, accurate and extensive quantitative measurements can now be made of the mRNA, protein and metabolite content of biological systems. Here, we report a large-scale analysis of the population abundance distributions of the transcriptomes, proteomes and metabolomes from varied biological systems. We compared the observed empirical distributions with a number of distributions: power law, lognormal, loglogistic, loggamma, right Pareto-lognormal (PLN) and double PLN (dPLN). The best fit for the mRNA, protein and metabolite population abundance distributions was found to be the dPLN. This distribution behaves like a lognormal distribution around the centre, and like a power law distribution in the tails. To better understand the cause of this observed distribution, we explored a simple stochastic model based on geometric Brownian motion. The distribution indicates that multiplicative effects are causally dominant in biological systems. We speculate that these effects arise from chemical reactions: the central limit theorem then explains the central lognormal, while a number of possible mechanisms, such as positive feedback and network topology, could explain the long tails. Many of the components in the central lognormal parts of the empirical distributions are unidentified and/or have unknown function. This indicates that much more biology awaits discovery.
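
    The geometric-Brownian-motion intuition is easy to demonstrate: multiplicative kicks add in the log, so the central limit theorem produces a lognormal body. A minimal Python sketch (the dPLN power-law tails would require additional mechanisms, e.g. feedback, that are not modeled here):

      import numpy as np

      rng = np.random.default_rng(5)

      # Multiplicative growth accumulates additively in the log.
      n_species, n_steps = 50_000, 200
      log_x = np.zeros(n_species)
      for _ in range(n_steps):
          log_x += rng.normal(0.0, 0.05, n_species)

      abundances = np.exp(log_x)
      logs = np.log(abundances)
      # log-abundances are close to N(0, 0.05 * sqrt(200)), i.e. lognormal
      print(logs.mean(), logs.std())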

  13. The shape of terrestrial abundance distributions

    Science.gov (United States)

    Alroy, John

    2015-01-01

    Ecologists widely accept that the distribution of abundances in most communities is fairly flat but heavily dominated by a few species. The reason for this is that species abundances are thought to follow certain theoretical distributions that predict such a pattern. However, previous studies have focused on either a few theoretical distributions or a few empirical distributions. I illustrate abundance patterns in 1055 samples of trees, bats, small terrestrial mammals, birds, lizards, frogs, ants, dung beetles, butterflies, and odonates. Five existing theoretical distributions make inaccurate predictions about the frequencies of the most common species and of the average species, and most of them fit the overall patterns poorly, according to the maximum likelihood–related Kullback-Leibler divergence statistic. Instead, the data support a low-dominance distribution here called the “double geometric.” Depending on the value of its two governing parameters, it may resemble either the geometric series distribution or the lognormal series distribution. However, unlike any other model, it assumes both that richness is finite and that species compete unequally for resources in a two-dimensional niche landscape, which implies that niche breadths are variable and that trait distributions are neither arrayed along a single dimension nor randomly associated. The hypothesis that niche space is multidimensional helps to explain how numerous species can coexist despite interacting strongly. PMID:26601249

  14. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
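
    For orientation, the usual maximum likelihood point estimate of a lognormal mean, which the proposed estimator improves upon, is exp(mu + sigma^2/2) evaluated at the log-scale sample statistics; a short Python sketch with invented data (the paper's improved coefficient-of-variation estimator is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(6)
      data = rng.lognormal(mean=1.0, sigma=0.8, size=30)   # invented data

      logs = np.log(data)
      mu_hat, s2_hat = logs.mean(), logs.var()             # ML estimates
      ml_mean = np.exp(mu_hat + 0.5 * s2_hat)              # exp(mu + s^2/2)
      print(ml_mean, data.mean())   # ML point estimate vs plain sample mean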

  15. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference for the reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with a stochastic (or uncertain) constraint on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as the log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  16. THE IMPACT OF SPATIAL AND TEMPORAL RESOLUTIONS IN TROPICAL SUMMER RAINFALL DISTRIBUTION: PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    Q. Liu

    2017-10-01

    Full Text Available The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations at various spatial and temporal scales are crucial for a lot of decision making processes. Climate models show a warmer and wetter climate due to increases of Greenhouse Gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25 degree, 3 hourly grid rainfall data for a summer are aggregated to 0.5, 1.0, 2.0 and 2.5 degree and to 6, 12, 24 hourly, pentad (five days) and monthly resolutions. The probability distributions (PDFs) and cumulative distribution functions (CDFs) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distribution. These distributions are shown to be distinct. The marginal distributions are fitted with lognormal and gamma distributions, and it is found that the gamma distributions fit much better than the lognormal.
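
    The marginal-fit comparison can be sketched with standard tools; below, gamma and lognormal distributions are fitted to synthetic positive rain amounts and compared by the KS statistic (the mixed-distribution treatment of zero-rain cells is omitted, and the data are invented):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # Synthetic positive rain amounts standing in for the TMPA data.
      rain = stats.gamma.rvs(a=0.7, scale=5.0, size=5000, random_state=rng)

      g_params = stats.gamma.fit(rain, floc=0)
      l_params = stats.lognorm.fit(rain, floc=0)

      ks_gamma = stats.kstest(rain, "gamma", args=g_params).statistic
      ks_lognorm = stats.kstest(rain, "lognorm", args=l_params).statistic
      print(ks_gamma, ks_lognorm)   # smaller statistic = better marginal fit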

  17. The Impact of Spatial and Temporal Resolutions in Tropical Summer Rainfall Distribution: Preliminary Results

    Science.gov (United States)

    Liu, Q.; Chiu, L. S.; Hao, X.

    2017-10-01

    The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations at various spatial and temporal scales are crucial for a lot of decision making processes. Climate models show a warmer and wetter climate due to increases of Greenhouse Gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25 degree, 3 hourly grid rainfall data for a summer are aggregated to 0.5, 1.0, 2.0 and 2.5 degree and to 6, 12, 24 hourly, pentad (five days) and monthly resolutions. The probability distributions (PDFs) and cumulative distribution functions (CDFs) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distribution. These distributions are shown to be distinct. The marginal distributions are fitted with lognormal and gamma distributions, and it is found that the gamma distributions fit much better than the lognormal.

  18. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    Science.gov (United States)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.

  19. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

    The paper discusses the nature of the concentration distributions of trace elements in biomedical samples, which were measured using X-ray fluorescence techniques (XRF, TXRF). Our earlier observation that the lognormal distribution describes the measured concentration distributions well is explained here on more general grounds. In particular, the role of the random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the lognormal distribution, appearing when the multiplicative process is driven by the normal distribution, can be generalized to the so-called log-stable distribution. Such a distribution describes a random multiplicative process driven not by the normal distribution but by the more general stable distribution, known as Levy flights. The presented ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, obtained using the conventional (XRF) and total-reflection (TXRF) X-ray fluorescence methods. In particular, the first observation of a log-stable concentration distribution of trace elements is reported and discussed here in detail.

  20. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui; Yang, Fan; Cheng, Julian; Alouini, Mohamed-Slim

    2015-01-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact and approximate BER expressions are presented.

  1. Outage Performance Analysis of Cooperative Diversity with MRC and SC in Correlated Lognormal Channels

    Directory of Open Access Journals (Sweden)

    Skraparlis D

    2009-01-01

    Full Text Available The study of relaying systems has found renewed interest in the context of cooperative diversity for communication channels suffering from fading. This paper provides analytical expressions for the end-to-end SNR and outage probability of cooperative diversity in correlated lognormal channels, typically found in indoor and specific outdoor environments. The system under consideration utilizes decode-and-forward relaying and Selection Combining or Maximum Ratio Combining at the destination node. The provided expressions are used to evaluate the gains of cooperative diversity compared to noncooperation in correlated lognormal channels, taking into account the spectral and energy efficiency of the protocols and the half-duplex or full-duplex capability of the relay. Our analysis demonstrates that correlation and the lognormal variances play a significant role in the performance gain of cooperative diversity over noncooperation.

  2. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    Science.gov (United States)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot working of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.

  3. Lifetime characterization via lognormal distribution of transformers in smart grids: Design optimization

    International Nuclear Information System (INIS)

    Chiodo, Elio; Lauria, Davide; Mottola, Fabio; Pisani, Cosimo

    2016-01-01

    The authors verified that the transformer's lifetime can be modeled as a lognormal stochastic process. Hence, a novel, closed-form relationship was derived between the transformer's lifetime and the distributional properties of the stochastic load. The usefulness of the closed-form expression is discussed for the sake of design, although a few considerations are also made with respect to operating conditions. The aim of the numerical application was to demonstrate the feasibility and easy applicability of the analytical methodology.

  4. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    Science.gov (United States)

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time to donating blood plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 samples of first-time donors in Shahrekord Blood Transfusion Center, capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these samples, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, place of residence and job were recorded as independent variables. Data analysis was performed based on a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA program in R. Age, job and education had a significant effect on the chance to donate blood; the chance of donation was higher for older donors, clerical workers, workers, the self-employed, students and educated donors, and in return the time intervals between their blood donations were shorter. Due to the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.

  5. Ventilation-perfusion distribution in normal subjects.

    Science.gov (United States)

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases: helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of exercise, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. ρ increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
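
    Given the bivariate log-normal model, the dispersion of the Va/Q ratio follows directly from the reported parameters, since Var[log(V/Q)] = sigma_V^2 + sigma_q^2 - 2*rho*sigma_V*sigma_q; a one-line check in Python using the exercise values quoted above:

      import numpy as np

      sigma_v, sigma_q, rho = 0.43, 0.85, 0.87   # values reported at exercise

      logsd_vaq = np.sqrt(sigma_v**2 + sigma_q**2
                          - 2 * rho * sigma_v * sigma_q)
      print(logsd_vaq)   # dispersion of the Va/Q distribution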

  6. Distribution of age at menopause in two Danish samples

    DEFF Research Database (Denmark)

    Boldsen, J L; Jeune, B

    1990-01-01

    We analyzed the distribution of reported age at natural menopause in two random samples of Danish women (n = 176 and n = 150) to determine the shape of the distribution and to disclose any possible trends in the distribution parameters. It was necessary to correct the frequencies of the reported ages for the effect of differing ages at reporting. The corrected distribution of age at menopause differs from the normal distribution in the same way in both samples. Both distributions could be described by a mixture of two normal distributions. It appears that most of the parameters of the normal distribution mixtures remain unchanged over a 50-year time lag. The position of the distribution, that is, the mean age at menopause, however, increases slightly but significantly.
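
    A two-component normal mixture of this kind can be fitted with scikit-learn; the ages below are synthetic stand-ins, since the original samples are not reproduced here:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Synthetic stand-in for corrected ages at natural menopause (assumed components)
        ages = np.concatenate([rng.normal(44, 4, 60), rng.normal(51, 2.5, 116)])

        gm = GaussianMixture(n_components=2, random_state=0).fit(ages.reshape(-1, 1))
        print(gm.means_.ravel(), np.sqrt(gm.covariances_.ravel()), gm.weights_)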

  7. The magnetized sheath of a dusty plasma with grains size distribution

    International Nuclear Information System (INIS)

    Ou, Jing; Gan, Chunyun; Lin, Binbin; Yang, Jinhong

    2015-01-01

    The structure of a plasma sheath in the presence of a dust grain size distribution (DGSD) is investigated in the multi-fluid framework. It is shown that the effect of dust grains with different sizes on the sheath structure is a collective behavior. The spatial distributions of the electric potential, the electron and ion densities and velocities, and the dust grain surface potential are strongly affected by the DGSD. The dynamics of dust grains with different sizes in the sheath depend not only on the DGSD but also on their radius. By comparison of the sheath structure, it is found that for the same expected value of the DGSD, the sheath length is longer in the case of a lognormal distribution than in the case of a uniform distribution. For the normal and lognormal distributions, the sheath length is almost equal for small variance of the DGSD, and the difference in sheath length then increases gradually with increasing variance.

  8. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary event indicator as response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
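
    The data "explosion" step can be sketched in a few lines of pandas; the cut points and toy records are assumptions, and the Poisson GLMM fit itself (offset log exposure, fixed effect per piece, normal random intercept per cluster) is left to standard software:

        import pandas as pd

        def explode(df, cuts):
            """Split each subject's follow-up into piecewise-exponential rows."""
            rows = []
            for r in df.itertuples():
                for j, (a, b) in enumerate(zip(cuts[:-1], cuts[1:])):
                    if r.time <= a:
                        break
                    exposure = min(r.time, b) - a            # time at risk in piece j
                    event = int(r.event and r.time <= b)     # event indicator in piece j
                    rows.append((r.id, r.cluster, j, exposure, event))
            return pd.DataFrame(rows, columns=["id", "cluster", "piece",
                                               "exposure", "event"])

        toy = pd.DataFrame({"id": [1, 2], "cluster": [1, 1],
                            "time": [3.5, 7.2], "event": [1, 0]})
        print(explode(toy, cuts=[0, 2, 4, 6, 8]))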

  9. High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels

    KAUST Repository

    Song, Xuegui; Cheng, Julian; Alouini, Mohamed-Slim

    2014-01-01

    Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.

  10. CELL AVERAGING CFAR DETECTOR WITH SCALE FACTOR CORRECTION THROUGH THE METHOD OF MOMENTS FOR THE LOG-NORMAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    José Raúl Machado Fernández

    2018-01-01

    The new LN-MoM-CA-CFAR detector is presented, which exhibits a reduced deviation of the operational false alarm probability from its design value. The solution corrects a fundamental problem of CFAR processors that has been ignored in many developments: whereas most previously proposed schemes deal with abrupt changes in the clutter level, the present solution corrects slow statistical changes in the background signal. These have been shown to have a marked influence on the selection of the CFAR multiplicative adjustment (scale) factor, and consequently on the maintenance of the false alarm probability. The authors exploited the high precision achieved in estimating the Log-Normal shape parameter with the method of moments (MoM), and the wide application of this distribution in clutter modeling, to create an architecture that offers precise results at low computational cost. After intensive processing of 100 million Log-Normal samples, a scheme was created that improves the performance of the classical CA-CFAR through continuous correction of its adjustment factor and operates with excellent stability, achieving a deviation of only 0.2884% for a design false alarm probability of 0.01.
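
    The method-of-moments step for a log-normal sample is compact: with sample mean m and variance v, σ² = ln(1 + v/m²) and μ = ln(m) − σ²/2. A quick sanity check:

        import numpy as np

        def lognormal_mom(x):
            """Method-of-moments estimates of (mu, sigma) for log-normal data."""
            m, v = x.mean(), x.var()
            sigma2 = np.log(1.0 + v / m**2)
            return np.log(m) - 0.5 * sigma2, np.sqrt(sigma2)

        rng = np.random.default_rng(2)
        x = rng.lognormal(mean=0.3, sigma=0.8, size=100_000)
        print(lognormal_mom(x))    # approximately (0.3, 0.8)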

  11. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of paramount interest, as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right tail of the sum of log-normal random variables (RVs). Here, we instead consider the left tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
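
    A hedged sketch of the mean-shifting importance-sampling idea for the left tail (without the paper's control variate); the covariance matrix, threshold and shift below are assumptions:

        import numpy as np
        from scipy.stats import multivariate_normal as mvn

        rng = np.random.default_rng(3)
        d = 4
        cov = 0.25 * (0.5 * np.eye(d) + 0.5 * np.ones((d, d)))  # assumed covariance
        mu = np.zeros(d)
        gamma = 1.0                                             # left-tail threshold

        theta = 2.0 * np.sqrt(np.diag(cov))     # assumed downward mean shift
        n = 200_000
        z = rng.multivariate_normal(mu - theta, cov, size=n)
        w = mvn.pdf(z, mean=mu, cov=cov) / mvn.pdf(z, mean=mu - theta, cov=cov)
        hit = np.exp(z).sum(axis=1) <= gamma    # sum of lognormals below threshold
        est = (w * hit).mean()
        print(est, (w * hit).std() / np.sqrt(n))  # estimate and its standard error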

  12. [The analysis of general mortality by age and sex: evidence of two types of mortality].

    Science.gov (United States)

    Damiani, P; Masse, H; Aubenque, M

    1984-01-01

    The departmental distributions of the probabilities of dying by age group and sex are analyzed for France in 1975. It is found that these distributions are the sum of two lognormal distributions. The authors deduce the existence of two populations that are distinguished by whether mortality was endogenous or exogenous. The relative importance of these two types of mortality is considered according to age. (summary in ENG)

  13. On the distribution of plasma parameters in RF glow discharge

    International Nuclear Information System (INIS)

    Ning Cheng; Liu Zuli; Liu Donghui; Han Caiyuan.

    1993-01-01

    A self-consistent numerical model based on the two-fluid equations for describing the transport of charged particles in the RF glow discharge is presented. For a plasma generator filled with low-pressure air and equipped with parallel-plate electrodes, the model is numerically solved. The space-time distributions of parameters and the spatial distributions of some time-averaged parameters in the plasma, which show the physical picture of the RF glow discharge, are obtained.

  14. Probabilistic analysis of glass elements with three-parameter Weibull distribution

    International Nuclear Information System (INIS)

    Ramos, A.; Muniz-Calvente, M.; Fernandez, P.; Fernandez Cantel, A.; Lamela, M. J.

    2015-01-01

    Glass and ceramics exhibit brittle behaviour, so a large scatter is obtained in test results. This dispersion is mainly due to the inevitable presence of micro-cracks on the surface, edge defects or internal defects, which must be taken into account using an appropriate failure criterion that is probabilistic rather than deterministic. Among the existing probability distributions, the two- or three-parameter Weibull distribution is generally used in fitting material strength results, although the method of use is not always correct. First, in this work, a large experimental programme was performed using annealed glass specimens of different dimensions, based on four-point bending and coaxial double-ring tests. Then, the finite element models made for each type of test, the fitting of the parameters of the three-parameter Weibull cumulative distribution function (cdf) (λ: location, β: shape, δ: scale) for a certain failure criterion, and the calculation of the effective areas from the cumulative distribution function are presented. Summarizing, this work aims to generalize the use of the three-parameter Weibull function to structural glass elements with stress distributions not described analytically, allowing the proposed probabilistic model to be applied to general loading distributions. (Author)
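
    A three-parameter Weibull fit of this kind can be sketched with scipy's weibull_min, whose loc parameter plays the role of the location λ; the strengths below are synthetic:

        import numpy as np
        from scipy.stats import weibull_min

        # Synthetic bending strengths (MPa): c = shape, loc = location, scale = scale
        strengths = weibull_min.rvs(c=2.5, loc=30.0, scale=25.0, size=500,
                                    random_state=np.random.default_rng(4))

        c, loc, scale = weibull_min.fit(strengths)       # 3-parameter maximum likelihood
        print(c, loc, scale)
        print(weibull_min.cdf(45.0, c, loc, scale))      # failure probability at 45 MPa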

  15. Explaining the power-law distribution of human mobility through transportation modality decomposition

    Science.gov (United States)

    Zhao, Kai; Musolesi, Mirco; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu

    2015-03-01

    Human mobility has been empirically observed to exhibit Lévy flight characteristics and behaviour with power-law distributed jump size. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we propose to explain the Lévy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation for the emergence of the Lévy walk patterns that characterize human mobility.
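
    A minimal simulation of the decomposition idea, with assumed modal parameters: each mode's jump sizes are log-normal, yet the mixture develops a roughly straight (power-law-like) upper tail on log-log axes:

        import numpy as np

        rng = np.random.default_rng(5)
        # Assumed log-medians (km) and log-sds for Walk, Bike, Car, Train
        mu = np.log(np.array([0.5, 2.0, 8.0, 30.0]))
        sg = np.array([0.6, 0.6, 0.7, 0.7])
        share = [0.5, 0.2, 0.2, 0.1]

        n = 500_000
        mode = rng.choice(4, size=n, p=share)
        jumps = rng.lognormal(mu[mode], sg[mode])

        x = np.sort(jumps)
        ccdf = 1.0 - np.arange(1, n + 1) / n
        tail = slice(int(0.99 * n), n - 1)               # top 1% of jumps
        slope = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)[0]
        print(slope)   # roughly constant tail slope, i.e. power-law-like behaviour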

  16. Book review: A new view on the species abundance distribution

    Science.gov (United States)

    DeAngelis, Donald L.

    2018-01-01

    The sampled relative abundances of species of a taxonomic group, whether birds, trees, or moths, in a natural community at a particular place vary in a way that suggests a consistent underlying pattern, referred to as the species abundance distribution (SAD). Preston [1] conjectured that the numbers of species, plotted as a histogram of logarithmic abundance classes called octaves, seemed to fit a lognormal distribution; that is, the histograms look like normal distributions, although truncated on the left-hand, or low-species-abundance, end. Although other specific curves for the SAD have been proposed in the literature, Preston’s lognormal distribution is widely cited in textbooks and has stimulated attempts at explanation. An important aspect of Preston’s lognormal distribution is the ‘veil line’, a vertical line drawn exactly at the point of the left-hand truncation in the distribution, to the left of which would be species missing from the sample. Dewdney rejects the lognormal conjecture. Instead, starting with the long-recognized fact that the number of species sampled from a community, when plotted as histograms against population abundance, resembles an inverted J, he presents a mathematical description of an alternative that he calls the ‘J distribution’, a hyperbolic density function truncated at both ends. When multiplied by species richness, R, it becomes the SAD of the sample.

  17. Distributions of energy losses of electrons and pions in the CBM TRD

    International Nuclear Information System (INIS)

    Akishina, E.P.; Akishina, T.P.; Ivanov, V.V.; Denisova, O.Yu.

    2007-01-01

    The distributions of energy losses of electrons and pions in the TRD detector of the CBM experiment are considered. We analyze the measurements of the energy deposits in a one-layer TRD prototype obtained during the test beam (GSI, Darmstadt, February 2006) and Monte Carlo simulations for the n-layered TRD realized with GEANT within the CBM ROOT framework. We show that 1) energy losses both for real measurements and GEANT simulations are approximated with high accuracy by a log-normal distribution for π and a weighted sum of two log-normal distributions for e; 2) GEANT simulations noticeably differ from the real measurements and, as a result, we have a significant loss in the efficiency of the e/π identification. A procedure to control and correct the simulated energy deposition of electrons in the TRD is developed.

  18. A revisited Johnson-Mehl-Avrami-Kolmogorov model and the evolution of grain-size distributions in steel

    OpenAIRE

    Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studi...

  19. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    Science.gov (United States)

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
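
    Sampling from the Poisson log-normal model makes the data-generating assumption concrete; the mean vector and covariance below are arbitrary stand-ins for three genes:

        import numpy as np

        rng = np.random.default_rng(6)
        mu = np.array([1.0, 2.0, 0.5])          # assumed log-scale mean expression
        cov = np.array([[0.5, 0.3, 0.0],
                        [0.3, 0.5, 0.2],
                        [0.0, 0.2, 0.5]])       # assumed latent gene covariation

        z = rng.multivariate_normal(mu, cov, size=1000)   # latent log-rates per sample
        counts = rng.poisson(np.exp(z))                   # observed RNA-seq-like counts
        print(counts[:3])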

  1. Change of particle size distribution during Brownian coagulation

    International Nuclear Information System (INIS)

    Lee, K.W.

    1984-01-01

    Change in particle size distribution due to Brownian coagulation in the continuum regime has been studied analytically. A simple analytic solution for the size distribution of an initially lognormal distribution is obtained, based on the assumption that the size distribution during the coagulation process attains, or can at least be represented by, a time-dependent lognormal function. The results are found to be in a form that corrects Smoluchowski's solution for both polydispersity and a size-dependent kernel. It is further shown that, regardless of whether the initial distribution is narrow or broad, the spread of the distribution is characterized by approaching a fixed value of the geometric standard deviation. This result has been compared with the self-preserving distribution obtained by similarity theory. (Author)

  2. Maximum likelihood estimation of the parameters of a bivariate Gaussian-Weibull distribution from machine stress-rated data

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; James W. Evans

    2016-01-01

    Two important wood properties are stiffness (modulus of elasticity, MOE) and bending strength (modulus of rupture, MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two- or three-parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of wood
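
    One simple way to generate a correlated Gaussian-Weibull pair is via a Gaussian copula, sketched below with assumed parameters; this is illustrative and not necessarily the authors' exact bivariate family:

        import numpy as np
        from scipy.stats import norm, weibull_min

        rng = np.random.default_rng(7)
        rho = 0.7                                  # assumed MOE-MOR correlation
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)

        moe = 1.8e6 + 3.0e5 * z[:, 0]              # Gaussian MOE margin (assumed psi)
        mor = weibull_min.ppf(norm.cdf(z[:, 1]), c=3.0, scale=9_000)  # Weibull MOR margin
        print(np.corrcoef(moe, mor)[0, 1])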

  3. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    Science.gov (United States)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.

  4. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information about the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
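
    A hedged sketch of such a hybrid strategy with scipy (differential evolution standing in for the GA, followed by a truncated-Newton polish); the two-parameter forward model is a toy stand-in for the groundwater model:

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def misfit(params, heads_obs):
            """Sum of squared head residuals for a toy two-parameter 'aquifer'."""
            t1, t2 = params                                  # hypothetical transmissivities
            heads_sim = np.array([10 - 2 / t1, 10 - 5 / t2]) # stand-in forward model
            return ((heads_sim - heads_obs) ** 2).sum()

        heads_obs = np.array([8.0, 7.5])
        bounds = [(0.1, 10.0)] * 2
        rough = differential_evolution(misfit, bounds, args=(heads_obs,), seed=8)
        fine = minimize(misfit, rough.x, args=(heads_obs,), method="TNC", bounds=bounds)
        print(fine.x)    # near (1.0, 2.0) for this toy model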

  5. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    Science.gov (United States)

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  6. Measurements of local two-phase flow parameters in a boiling flow channel

    International Nuclear Information System (INIS)

    Yun, Byong Jo; Park, Goon-CherI; Chung, Moon Ki; Song, Chul Hwa

    1998-01-01

    Local two-phase flow parameters were measured to investigate the internal flow structures of steam-water boiling flow in an annulus channel. Two kinds of measuring methods for local two-phase flow parameters were investigated: a two-sensor conductivity probe for local vapor parameters and a Pitot tube for local liquid parameters. Using these probes, the local distributions of phasic velocities, interfacial area concentration (IAC) and void fraction were measured. In this study, the maximum local void fraction under subcooled boiling conditions is observed around the heating rod, and the local void fraction decreases smoothly from the surface of the heating rod to the channel center without any wall void peaking, which has been observed in air-water experiments. The distributions of local IAC and bubble frequency coincide with those of local void fraction for a given area-averaged void fraction. (author)

  7. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (Kd) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The Kd values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of Kd ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The Kd best estimates were defined as the calculated geometric mean (GM) values, assuming that Kd values are always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the Kd parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best describe the distribution of the Kd values of a dataset is the cumulative distribution function (CDF). To our knowledge, appropriate CDFs have not been proposed for radionuclide Kd in soils yet. Therefore, the aim of this work is to create CDFs for
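
    As a small illustration of a GM best estimate and a log-normal CDF for Kd (the values below are hypothetical, not from the TRS database):

        import numpy as np
        from scipy.stats import lognorm

        kd = np.array([120.0, 300.0, 80.0, 950.0, 210.0, 400.0, 150.0])  # hypothetical L/kg
        logs = np.log(kd)
        gm, gsd = np.exp(logs.mean()), np.exp(logs.std(ddof=1))  # geometric mean / std

        dist = lognorm(s=np.log(gsd), scale=gm)   # equivalent scipy parameterization
        print(gm, dist.cdf(1000.0))               # best estimate and P(Kd <= 1000)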

  8. Rationalisation of distribution functions for models of nanoparticle magnetism

    International Nuclear Information System (INIS)

    El-Hilo, M.; Chantrell, R.W.

    2012-01-01

    A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution, a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from the misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
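
    A quick numeric check of a transformation of this type (our reading of the Hatch-Choate-style relation for moment-weighted lognormals, with assumed parameters): weighting a lognormal number distribution by volume (d³) keeps it lognormal with the same σ and a median shifted by exp(3σ²):

        import numpy as np

        rng = np.random.default_rng(9)
        sigma, d_med = 0.4, 10.0                       # assumed number distribution
        d = rng.lognormal(np.log(d_med), sigma, size=1_000_000)

        w = d**3 / (d**3).sum()                        # volume-fraction weights
        weighted_log_median = (w * np.log(d)).sum()    # weighted mean = weighted median
        print(np.exp(weighted_log_median), d_med * np.exp(3 * sigma**2))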

  9. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    Science.gov (United States)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both, the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.

  10. Forms and genesis of species abundance distributions

    Directory of Open Access Journals (Sweden)

    Evans O. Ochiaga

    2015-12-01

    Species abundance distribution (SAD) is one of the most important metrics in community ecology. SAD curves take a hollow or hyperbolic shape in a histogram plot with many rare species and only a few common species. In general, the shape of SAD is largely log-normally distributed, although the mechanism behind this particular SAD shape still remains elusive. Here, we aim to review four major parametric forms of SAD and three contending mechanisms that could potentially explain this highly skewed form of SAD. The parametric forms reviewed here include log series, negative binomial, lognormal and geometric distributions. The mechanisms reviewed here include the maximum entropy theory of ecology, neutral theory and the theory of proportionate effect.

  11. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    The paper considers the Heston stochastic volatility model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  12. Statistical distributions applications and parameter estimates

    CERN Document Server

    Thomopoulos, Nick T

    2017-01-01

    This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

  13. An investigation on effect of geometrical parameters on spray cone angle and droplet size distribution of a two-fluid atomizer

    Energy Technology Data Exchange (ETDEWEB)

    Shafaee, Maziar; Banitabaei, Sayed Abdolhossein; Esfahanian, Vahid; Ashjaee, Mehdi [Tehran University, Tehran (Iran, Islamic Republic of)

    2011-12-15

    A visual study is conducted to determine the effect of geometrical parameters of a two-fluid atomizer on its spray cone angle. The liquid (water) jets exit from six peripheral inclined orifices and are introduced to a high speed gas (air) stream in the gravitational direction. Using a high speed imaging system, the spray cone angle has been determined in constant operational conditions, i.e., Reynolds and Weber numbers for different nozzle geometries. Also, the droplet sizes (Sauter mean diameter) and their distributions have been determined using Malvern Master Sizer x. The investigated geometrical parameters are the liquid jet diameter, liquid port angle and the length of the gas-liquid mixing chamber. The results show that among these parameters, the liquid jet diameter has a significant effect on spray cone angle. In addition, an empirical correlation has been obtained to predict the spray cone angle of the present two-fluid atomizer in terms of nozzle geometries.

  14. Two new bivariate zero-inflated generalized Poisson distributions with a flexible correlation structure

    Directory of Open Access Journals (Sweden)

    Chi Zhang

    2015-05-01

    To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.

  15. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
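
    A toy Monte Carlo sketch of the procedure with stand-in inputs (not evaluated nuclear data): sample each input from its probability density function, form the rate, then report the 0.16/0.50/0.84 quantiles and the lognormal approximation parameters μ and σ:

        import numpy as np

        rng = np.random.default_rng(10)
        n = 100_000
        strength = rng.lognormal(np.log(3e-5), 0.3, size=n)   # lognormal input (assumed)
        factor = np.abs(rng.normal(1.0, 0.1, size=n))         # Gaussian input (assumed)
        rate = strength * factor

        low, med, high = np.quantile(rate, [0.16, 0.50, 0.84])
        mu, sigma = np.log(rate).mean(), np.log(rate).std()   # lognormal approximation
        print(low, med, high, mu, sigma)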

  16. Parameter estimation of the zero inflated negative binomial beta exponential distribution

    Science.gov (United States)

    Sirichantra, Chutima; Bodhisuwan, Winai

    2017-11-01

    The zero-inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excessive zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions: the Bernoulli and the negative binomial-beta exponential distributions. In this work, some characteristics of the proposed distribution, such as the mean and variance, are presented. Maximum likelihood estimation is applied to parameter estimation of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimators are highly efficient when the sample size is large.

  17. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types of errors (measurement error and error of regulation of reactor core distributed parameters), often met during high-power-density reactor operation, are analyzed. Consideration is given to errors in determination of the irregularity factor for the radial power distribution of a hot channel, both under conditions of its minimization and when regulation of the relative power distribution is absent. The first regime is investigated by the method of statistical experiment, using a neutron-physical calculation optimization program, taking as an example a large-channel water-cooled graphite-moderated reactor. It is concluded that it is necessary to take into account the complex interaction of measurement error with the error of parameter profiling over the core, both under conditions of continuous manual or automatic parameter regulation (optimization) and in the absence of regulation, namely for an a priori equalized distribution. When evaluating the error of distributed parameter control

  18. Five and four-parameter lifetime distributions for bathtub-shaped failure rate using Perks mortality equation

    International Nuclear Information System (INIS)

    Zeng, Hongtao; Lan, Tian; Chen, Qiming

    2016-01-01

    Two lifetime distributions derived from Perks' mortality rate function, one with 4 parameters and the other with 5 parameters, are proposed in this paper for the modeling of bathtub-shaped failure rates. Perks' mortality/failure rate functions have historically been used for human life modeling in the life insurance industry. Although the distribution is no longer used in the insurance industry, its many nice and some unique features make it worth revisiting and introducing to the reliability community. The parameters of the distributions control the scale, shape, and location of the PDF. The 4-parameter distribution can be used to model bathtub-shaped failure rates. The model is applied to three previously published groups of lifetime data, and the study shows that the fits are very good. The 5-parameter version can additionally model constant hazard rates in the later life of some devices, while retaining the good features of the 4-parameter version. Both the 4- and 5-parameter versions have closed-form PDFs and CDFs. The truncated distributions of both versions stay within the original distribution family under a simple parameter transformation, a feature normally considered to be possessed only by the simple exponential distribution. - Highlights: • Two new distributions are proposed to model bathtub-shaped hazard rates. • The closed-form PDF and CDF and the properties of scalability and truncatability are derived. • Perks4 is verified to model common bathtub shapes well through comparison. • Perks5 has the potential to model the stabilization of the hazard rate at later life.

  19. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian framework involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of the marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
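
    A minimal Gibbs sampler for this model (matrix-normal and inverse-Wishart full conditionals under the Jeffreys prior) might look as follows; the simulated data and dimensions are assumptions:

        import numpy as np
        from scipy.stats import invwishart

        rng = np.random.default_rng(11)
        n, k, q = 200, 2, 2
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        B_true = np.array([[1.0, 0.5], [2.0, -1.0]])
        Y = X @ B_true + rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n)

        XtX_inv = np.linalg.inv(X.T @ X)
        B_hat = XtX_inv @ X.T @ Y
        L = np.linalg.cholesky(XtX_inv)

        Sigma, draws = np.eye(q), []
        for it in range(2000):
            # beta | Sigma, Y : matrix normal centred at the least-squares estimate
            B = B_hat + L @ rng.normal(size=(k, q)) @ np.linalg.cholesky(Sigma).T
            # Sigma | beta, Y : inverse Wishart under the Jeffreys prior
            R = Y - X @ B
            Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
            if it >= 500:
                draws.append(B)
        print(np.mean(draws, axis=0))    # posterior mean of beta, close to B_true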

  20. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    According to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian distribution.

  1. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate suitable probability distributions to model extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
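
    The study estimates parameters by L-moments; as a rough sketch with synthetic returns, the weekly block maxima can instead be fitted by maximum likelihood with scipy:

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(12)
        daily = 0.01 * rng.standard_t(df=4, size=5 * 650)   # synthetic daily returns
        weekly_max = daily.reshape(-1, 5).max(axis=1)       # weekly maxima (5 trading days)

        c, loc, scale = genextreme.fit(weekly_max)          # GEV fit by maximum likelihood
        print(c, loc, scale)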

  2. Power laws in citation distributions: evidence from Scopus.

    Science.gov (United States)

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account for only a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
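
    Comparisons of this kind can be reproduced with the powerlaw Python package (Alstott, Bullmore and Plenz); the synthetic "citation" counts below are stand-ins, not Scopus data:

        import numpy as np
        import powerlaw

        rng = np.random.default_rng(13)
        citations = rng.lognormal(mean=1.5, sigma=1.2, size=10_000).astype(int) + 1

        fit = powerlaw.Fit(citations, discrete=True)
        R, p = fit.distribution_compare('power_law', 'lognormal')
        print(fit.alpha, fit.xmin, R, p)   # R < 0 favours the lognormal alternative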

  3. Energy-harvesting in cooperative AF relaying networks over log-normal fading channels

    KAUST Repository

    Rabie, Khaled M.; Salem, Abdelhamid; Alsusa, Emad; Alouini, Mohamed-Slim

    2016-01-01

    Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and the power splitting factor in the PTS protocol, is the key to achieve the best system performance. © 2016 IEEE.
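
    As a rough single-hop sketch (the paper handles the full two-hop AF energy-harvesting system analytically), ergodic capacity and outage under log-normal fading can be estimated by Monte Carlo; the dB spread and average SNR are assumed:

        import numpy as np

        rng = np.random.default_rng(14)
        sigma_db, snr_avg_db = 4.0, 20.0      # assumed shadowing spread and mean SNR

        n = 1_000_000
        snr = 10 ** ((snr_avg_db + rng.normal(0.0, sigma_db, size=n)) / 10)
        capacity = np.log2(1 + snr)
        print(capacity.mean())                # ergodic capacity (bit/s/Hz)
        print(np.mean(capacity < 2.0))        # outage probability at 2 bit/s/Hz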

  4. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  5. Two-parameter fracture mechanics: Theory and applications

    International Nuclear Information System (INIS)

    O'Dowd, N.P.; Shih, C.F.

    1993-02-01

    A family of self-similar fields provides the two parameters required to characterize the full range of high- and low-triaxiality crack tip states. The two parameters, J and Q, have distinct roles: J sets the size scale of the process zone over which large stresses and strains develop, while Q scales the near-tip stress distribution relative to a high triaxiality reference stress state. An immediate consequence of the theory is this: it is the toughness values over a range of crack tip constraint that fully characterize the material's fracture resistance. It is shown that Q provides a common scale for interpreting cleavage fracture and ductile tearing data thus allowing both failure modes to be incorporated in a single toughness locus. The evolution of Q, as plasticity progresses from small scale yielding to fully yielded conditions, has been quantified for several crack geometries and for a wide range of material strain hardening properties. An indicator of the robustness of the J-Q fields is introduced; Q as a field parameter and as a pointwise measure of stress level is discussed

  6. Effects of statistical distribution of joint trace length on the stability of tunnel excavated in jointed rock mass

    Directory of Open Access Journals (Sweden)

    Kayvan Ghorbani

    2015-12-01

    The rock masses at the construction site of an underground cavern are generally not continuous, due to the presence of discontinuities such as bedding, joints, faults, and fractures. The performance of an underground cavern is principally governed by the mechanical behavior of the discontinuities in its vicinity. During underground excavation, many failures of the surrounding rock are closely related to joints. The study of tunnel stability in jointed rock masses is of importance to rock engineering, especially tunnelling and underground space development. In this study, using the negative exponential, log-normal and normal probability density functions for joint trace length, we investigated the effect of joint trace length on stability parameters such as stress and displacement of a tunnel constructed in a rock mass, using UDEC (Universal Distinct Element Code). The results show that a normal distribution of joint trace length is the most critical for tunnel stability, while the negative exponential distribution has the least effect on tunnel stability compared with the other two distribution functions.

  7. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    Science.gov (United States)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of the two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model the thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement and stress.

  8. An efficient method for evaluating the effect of input parameters on the integrity of safety systems

    International Nuclear Information System (INIS)

    Tang, Zhang-Chun; Zuo, Ming J.; Xiao, Ningcong

    2016-01-01

    Safety systems are significant in reducing or preventing risk from potentially dangerous activities in industry. The probability of failure to perform its functions on demand (PFD) of a safety system usually exhibits variation due to the epistemic uncertainty associated with various input parameters. This paper uses the complementary cumulative distribution function of the PFD to define the exceedance probability (EP) that the PFD of the system is larger than the designed value. Sensitivity analysis of the safety system is further investigated, focusing on the effect of the variance of an individual input parameter on the EP resulting from the epistemic uncertainty associated with the input parameters. An available numerical technique, the finite difference method, is first employed to evaluate this effect, but it requires extensive computational cost and the selection of a step size. To address these difficulties, this paper proposes an efficient simulation method that needs only a single evaluation to estimate the effects corresponding to all input parameters. Two examples are used to demonstrate that the proposed method can obtain more accurate results with less computation time compared to reported methods. - Highlights: • We define a sensitivity index to measure the effect of a parameter on a safety system. • We analyze the physical meaning of the sensitivity index. • We propose an efficient simulation method to assess the sensitivity index. • We derive the formulations of this index for lognormal and beta distributions. • Results identify important parameters for the exceedance probability of a safety system.
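
    As a hedged illustration of the EP idea (not the paper's proposed estimator), a naive Monte Carlo sketch for a hypothetical 1oo2 safety function with log-normally distributed inputs:

        import numpy as np

        rng = np.random.default_rng(15)
        n = 200_000
        # Hypothetical 1oo2 architecture: average PFD ~ (lambda_d * T)**2 / 3
        lam = rng.lognormal(np.log(2e-6), 0.4, size=n)   # dangerous failure rate (1/h)
        T = rng.lognormal(np.log(8760.0), 0.1, size=n)   # proof-test interval (h)
        pfd = (lam * T) ** 2 / 3.0

        target = 1e-3
        print(np.mean(pfd > target))   # exceedance probability from the empirical CCDF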

  9. Reappraisal of the reference dose distribution in the UNSCEAR 1977 report

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    2008-01-01

    This paper provides an update of the reference dose distribution proposed by G.A.M. Webb and D. Beninson in Annex E to the UNSCEAR 1977 Report. To demonstrate compliance with regulatory obligations regarding doses to individuals, they defined it with the following properties: 1) the distribution of annual doses is log-normal; 2) the mean of the annual dose distribution is 5 mGy (10% of the ICRP 1977 dose limit); 3) the proportion of workers exceeding 50 mGy is 0.1%. The concept of the reference dose distribution is still important for understanding the inherent variation of individual doses to workers controlled by source-related and individual-related efforts of best dose reduction. In commercial nuclear power plants, the dose distribution departs further from the log-normal due to stronger ALARA efforts and the revised dose limits: the monitored workers show about 1 mSv annual mean and far less than 0.1% of workers above 20 mSv. The updated models of dose distribution consist of the log-normal (no feedback on dose X), ln(X) ∼ N(μ, σ²); the hybrid log-normal (feedback by ρ on higher X), hyb(ρX) = ρX + ln(ρX) ∼ N(μ, σ²); the hybrid S_B (feedback by ρ on the higher dose quotient X/(D−X), for doses not close to the limit D), hyb[ρX/(D−X)] ∼ N(μ, σ²); and Johnson's S_B (limited to D), ln[X/(D−X)] ∼ N(μ, σ²). These models allow the degree of dose control, including dose constraints/limits, to be interpreted relative to the reference distribution. Some of the distributions are examined to characterize the variation of doses to members of the public with uncertainty. (author)
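
    As a minimal numeric sketch (not from the paper), the hybrid log-normal can be sampled by inverting hyb(ρX) = ρX + ln(ρX) = z with the Lambert W function, since y + ln(y) = z implies y = W(e^z); the parameter values are assumed:

        import numpy as np
        from scipy.special import lambertw

        rng = np.random.default_rng(16)
        mu, sigma, rho = 0.0, 1.0, 0.5        # assumed: hyb(rho*X) ~ N(mu, sigma**2)

        z = rng.normal(mu, sigma, size=100_000)
        x = np.real(lambertw(np.exp(z))) / rho     # solve rho*x + ln(rho*x) = z
        # Small doses behave log-normally (hyb(y) ~ ln y), large doses normally
        # (hyb(y) ~ y), reflecting the feedback on higher doses described above.
        print(np.median(x), np.quantile(x, 0.999))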

  11. [A study on the distribution of the consumption of tobacco and alcohol].

    Science.gov (United States)

    Damiani, P; Masse, H; Aubenque, M

    1983-01-01

    An analysis of the distribution of tobacco consumption and alcohol-related mortality in France by sex and department is presented for the population aged 45 to 64. It is shown that the "population can be decomposed into two sets such that, for each of them, tobacco and alcohol consumption distributions are log-normal. [It is suggested] that consumption is normal for one set and an endogenous predisposition for the other." (summary in ENG) excerpt

  12. Evaluation of microstructural parameters of oxide dispersion strengthened steels from X-ray diffraction profiles

    International Nuclear Information System (INIS)

    Vlasenko, Svetlana; Benediktovitch, Andrei; Ulyanenkova, Tatjana; Uglov, Vladimir; Skuratov, Vladimir; O'Connell, Jacques; Neethling, Johannes

    2016-01-01

    The microstructural parameters of oxide dispersion strengthened (ODS) steels were evaluated from measured diffraction profiles using an approach in which the complex oxide nanoparticles (Y₂Ti₂O₇ and Y₄Al₂O₉) are modeled as spherical inclusions in the steel matrix with coherent or incoherent boundaries. The proposed method enables processing of diffraction data from materials containing spherical inclusions in addition to straight dislocations, while taking into account broadening due to crystallite size and instrumental effects. The parameters of the crystallite size distribution modeled by a lognormal distribution function (the parameters m and σ), the strain anisotropy parameter q, the dislocation density ρ, the dislocation arrangement parameter M, the density of oxide nanoparticles ρ_np and the nanoparticle radius r₀ were determined for the ODS steel samples. The results obtained are in good agreement with the results of transmission electron microscopy (TEM). - Highlights: • The microstructural parameters of oxide dispersion strengthened steels were obtained. • The microstructure of irradiated and unirradiated samples was investigated. • Oxide nanoparticles are modeled as spherical inclusions. • We considered the influence of dislocations, inclusions and size effects.

  13. Evaluation of microstructural parameters of oxide dispersion strengthened steels from X-ray diffraction profiles

    Energy Technology Data Exchange (ETDEWEB)

    Vlasenko, Svetlana, E-mail: svetlana.vlasenko.bsu@gmail.com [Belarusian State University, Nezavisimosti Avenue 4, Minsk (Belarus); Benediktovitch, Andrei [Belarusian State University, Nezavisimosti Avenue 4, Minsk (Belarus); Ulyanenkova, Tatjana [Rigaku Europe SE, Am Hardtwald 11, Ettlingen (Germany); Uglov, Vladimir [Belarusian State University, Nezavisimosti Avenue 4, Minsk (Belarus); Tomsk Polytechnic University, Lenina Avenue 2a, Tomsk (Russian Federation); Skuratov, Vladimir [Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna (Russian Federation); O'Connell, Jacques; Neethling, Johannes [Centre for High Resolution Transmission Electron Microscopy, Nelson Mandela Metropolitan University, Port Elizabeth (South Africa)

    2016-03-15

    The microstructural parameters of oxide dispersion strengthened (ODS) steels were evaluated from measured diffraction profiles using an approach in which the complex oxide nanoparticles (Y₂Ti₂O₇ and Y₄Al₂O₉) are modeled as spherical inclusions in the steel matrix with coherent or incoherent boundaries. The proposed method enables processing of diffraction data from materials containing spherical inclusions in addition to straight dislocations, while taking into account broadening due to crystallite size and instrumental effects. The parameters of the crystallite size distribution modeled by a lognormal distribution function (the parameters m and σ), the strain anisotropy parameter q, the dislocation density ρ, the dislocation arrangement parameter M, the density of oxide nanoparticles ρ_np and the nanoparticle radius r₀ were determined for the ODS steel samples. The results obtained are in good agreement with the results of transmission electron microscopy (TEM). - Highlights: • The microstructural parameters of oxide dispersion strengthened steels were obtained. • The microstructure of irradiated and unirradiated samples was investigated. • Oxide nanoparticles are modeled as spherical inclusions. • We considered the influence of dislocations, inclusions and size effects.

  14. Evaluation of bacterial run and tumble motility parameters through trajectory analysis

    Science.gov (United States)

    Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash

    2018-04-01

    In this paper, a method for extracting the behavior parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images of the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and then the distribution of parameters for each mode was extracted by fitting the mathematical distributions that best represent the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, the Gamma distribution was found to fit the marginal velocity best, and the Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition rate distributions, the log-logistic and log-normal distributions, respectively, were found to perform better than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
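
    The model-selection step described above can be reproduced with standard tools; in SciPy the log-logistic distribution is available as `fisk`. A sketch on synthetic transition rates (stand-ins for the segmented trajectory data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    rates = rng.lognormal(mean=0.0, sigma=0.6, size=400)  # synthetic rates [1/s]

    candidates = {
        "exponential": stats.expon,
        "log-logistic": stats.fisk,    # SciPy's name for the log-logistic
        "lognormal": stats.lognorm,
        "gamma": stats.gamma,
    }
    for name, dist in candidates.items():
        params = dist.fit(rates, floc=0)           # fix location at zero
        ll = np.sum(dist.logpdf(rates, *params))
        n_free = len(params) - 1                   # location was fixed
        aic = 2 * n_free - 2 * ll
        print(f"{name:12s} AIC = {aic:8.1f}")
    ```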

  15. Asymptotic Expansions of the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Cyril Grunspan

    2011-01-01

    We invert the Black-Scholes formula. We consider the cases low strike, large strike, short maturity and large maturity. We give explicitly the first 5 terms of the expansions. A method to compute all the terms by induction is also given. At the money, we have a closed form formula for implied lognormal volatility in terms of a power series in call price.
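
    The leading term of the at-the-money expansion is the classical approximation σ√T ≈ √(2π)·C/S, which can be checked against a full numerical inversion of Black-Scholes. A sketch (illustrative, not the paper's series):

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def bs_call(s, k, t, sigma, r=0.0):
        d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
        d2 = d1 - sigma * np.sqrt(t)
        return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

    s = k = 100.0   # at the money
    t = 1.0
    price = bs_call(s, k, t, sigma=0.2)

    # First term of the ATM expansion: sigma * sqrt(T) ~ sqrt(2*pi) * C / S.
    approx = np.sqrt(2.0 * np.pi) * price / s / np.sqrt(t)
    exact = brentq(lambda sig: bs_call(s, k, t, sig) - price, 1e-6, 5.0)
    print(f"approx = {approx:.4f}, exact = {exact:.4f}")
    ```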

  16. Micromachined two dimensional resistor arrays for determination of gas parameters

    NARCIS (Netherlands)

    van Baar, J.J.J.; Verwey, Willem B.; Dijkstra, Mindert; Dijkstra, Marcel; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; Krijnen, Gijsbertus J.M.; Elwenspoek, Michael Curt

    A resistive sensor array is presented for two dimensional temperature distribution measurements in a micromachined flow channel. This allows simultaneous measurement of flow velocity and fluid parameters, like thermal conductivity, diffusion coefficient and viscosity. More general advantages of

  17. Statistical study on the strength of structural materials and elements

    International Nuclear Information System (INIS)

    Blume, J.A.; Dalal, J.S.; Honda, K.K.

    1975-07-01

    Strength data for structural materials and elements, including concrete, reinforcing steel, structural steel, plywood elements, reinforced concrete beams, reinforced concrete columns, brick masonry elements, and concrete masonry walls, were statistically analyzed. Sample statistics were computed for these data, and distribution parameters were derived for the normal, lognormal, and Weibull distributions. Goodness-of-fit tests were performed on these distributions. Most data, except those for masonry elements, displayed fairly small dispersion. Dispersion in data for structural materials was generally found to be smaller than for structural elements. The lognormal and Weibull distributions displayed better overall fits to the data than the normal distribution, although either the Weibull or the lognormal distribution can be used to represent the data analyzed. (auth)
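
    A comparison of this kind (normal vs. lognormal vs. Weibull with goodness-of-fit testing) is easy to reproduce. A sketch on synthetic strength data, not the report's; note the p-values are optimistic because the parameters are fitted to the same sample:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    strength = rng.lognormal(np.log(30.0), 0.12, 200)  # synthetic strengths [MPa]

    for name, dist in [("normal", stats.norm),
                       ("lognormal", stats.lognorm),
                       ("Weibull", stats.weibull_min)]:
        params = dist.fit(strength)
        # KS test against the fitted CDF (p-values are only indicative here).
        ks = stats.kstest(strength, dist.cdf, args=params)
        print(f"{name:10s} KS stat = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```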

  18. Oxide particle size distribution from shearing irradiated and unirradiated LWR fuels in Zircaloy and stainless steel cladding: significance for risk assessment

    International Nuclear Information System (INIS)

    Davis, W. Jr.; West, G.A.; Stacy, R.G.

    1979-01-01

    Sieve fractionation was performed with oxide particles dislodged during shearing of unirradiated or irradiated fuel bundles or single rods of UO₂ or 96 to 97% ThO₂--3 to 4% UO₂. Analyses of these data by nonlinear least-squares techniques demonstrated that the particle size distribution is lognormal. Variables involved in the numerical analyses include the lognormal median size, the lognormal standard deviation, and the shear cut length. Sieve-fractionation data are presented for unirradiated bundles of stainless-steel-clad or Zircaloy-2-clad UO₂ or ThO₂--UO₂ sheared into lengths from 0.5 to 2.0 in. Data are also presented for irradiated single rods (sheared into lengths of 0.25 to 2.0 in.) of Zircaloy-2-clad UO₂ from BWRs and of Zircaloy-4-clad UO₂ from PWRs. Median particle sizes of UO₂ from shearing irradiated stainless-steel-clad fuel ranged from 103 to 182 μm; particle sizes of ThO₂--UO₂, under these same conditions, ranged from 137 to 202 μm. Similarly, median particle sizes of UO₂ from shearing unirradiated Zircaloy-2-clad fuel ranged from 230 to 957 μm. Irradiation levels of fuels from reactors ranged from 9,000 to 28,000 MWd/MTU. In general, particle sizes from shearing these irradiated fuels are larger than those from the unirradiated fuels. In addition, variations in particle size parameters pertaining to samples of a single vendor varied as much as those between different vendors. The fraction of fuel dislodged from the cladding is nearly proportional to the reciprocal of the shear cut length, until the cut length attains some minimum value below which all fuel is dislodged. Particles of fuel are generally elongated, with a long-to-short axis ratio usually less than 3. Using parameters of the lognormal distribution deduced from experimental data, realistic estimates can be made of the fractions of dislodged fuel having dimensions less than specified values.
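
    The closing estimate is a one-line computation once the lognormal median and geometric standard deviation are known. A sketch with illustrative numbers (not values taken from the report):

    ```python
    from math import log
    from scipy.stats import lognorm

    median = 150.0   # lognormal median particle size [um] (illustrative)
    gsd = 2.0        # geometric standard deviation (illustrative)

    # In SciPy's parameterization, s = ln(GSD) and scale = median.
    dist = lognorm(s=log(gsd), scale=median)

    # Fraction of dislodged fuel finer than 10 um, e.g. for release estimates:
    print(dist.cdf(10.0))
    ```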

  19. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique

    2015-05-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  20. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim

    2015-01-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  1. THE DENSITY DISTRIBUTION IN TURBULENT BISTABLE FLOWS

    International Nuclear Information System (INIS)

    Gazol, Adriana; Kim, Jongsoo

    2013-01-01

    We numerically study the volume density probability distribution function (n-PDF) and the column density probability distribution function (Σ-PDF) resulting from thermally bistable turbulent flows. We analyze three-dimensional hydrodynamic models in periodic boxes of 100 pc per side, where turbulence is driven in Fourier space at a wavenumber corresponding to 50 pc. At low densities (n ≲ … cm⁻³), the n-PDF is well described by a lognormal distribution for an average local Mach number ranging from ∼0.2 to ∼5.5. As a consequence of the nonlinear development of thermal instability (TI), the logarithmic variance of the distribution of the diffuse gas increases with M faster than in the well-known isothermal case. The average local Mach number for the dense gas (n ≳ 7.1 cm⁻³) goes from ∼1.1 to ∼16.9, and the shape of the high-density zone of the n-PDF changes from a power law at low Mach numbers to a lognormal at high M values. In the latter case, the width of the distribution is smaller than in the isothermal case and grows more slowly with M. At high column densities, the Σ-PDF is well described by a lognormal for all of the Mach numbers we consider and, due to the presence of TI, the width of the distribution is systematically larger than in the isothermal case but follows a qualitatively similar behavior as M increases. Although a relationship between the width of the distribution and M can be found for each of the cases mentioned above, these relations differ from those of the isothermal case.

  2. Logarithmic distributions prove that intrinsic learning is Hebbian [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    2017-10-01

    Full Text Available In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)), or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
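
    The connection between Hebbian-style multiplicative updates and lognormal stationary distributions can be shown in a few lines: if each update scales a weight by a random factor, the log-weights perform a random walk and the weights approach a lognormal. A toy sketch, not the authors' model:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    w = np.full(10_000, 0.1)                 # initial synaptic weights

    for _ in range(500):
        # Multiplicative (Hebbian-like) update: each weight is scaled by a
        # small random factor; additive noise would yield a normal instead.
        w *= np.exp(rng.normal(0.0, 0.05, w.size))
        w *= 0.1 / w.mean()                  # homeostatic renormalization

    # log(w) should be close to normal, i.e. w close to lognormal:
    print(stats.skew(np.log(w)), stats.kurtosis(np.log(w)))
    ```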

  3. Changes of firm size distribution: The case of Korea

    Science.gov (United States)

    Kang, Sang Hoon; Jiang, Zhuhua; Cheong, Chongcheul; Yoon, Seong-Min

    2011-01-01

    In this paper, the distribution and inequality of firm sizes are evaluated for Korean firms listed on the stock markets. Using the amount of sales, total assets, capital, and the number of employees, respectively, as proxies for firm size, we find that the upper tail of the Korean firm size distribution can be described by power-law distributions rather than lognormal distributions. Then, we estimate the Zipf parameters of the firm sizes and assess the changes in the magnitude of the exponents. The results show that the calculated Zipf exponents increased over time prior to the financial crisis, but decreased after the crisis. This pattern implies that the degree of inequality in Korean firm sizes had severely deepened prior to the crisis, but lessened after the crisis. Overall, the distribution of Korean firm sizes changes over time, and Zipf’s law is not universal but does hold as a special case.
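
    The Zipf exponent of the upper tail can be estimated from a rank-size regression on a log-log scale. A minimal sketch on synthetic firm sizes (the paper's estimation details may differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    sales = rng.pareto(a=1.2, size=5000) + 1.0   # synthetic firm sizes

    # Rank-size plot for the largest 500 firms: log(rank) vs log(size);
    # for a Pareto tail with exponent a, rank is proportional to size**(-a).
    tail = np.sort(sales)[::-1][:500]
    rank = np.arange(1, tail.size + 1)
    slope, intercept = np.polyfit(np.log(tail), np.log(rank), 1)
    print(f"Zipf exponent estimate: {-slope:.2f}")
    ```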

  4. Effect of particle size distribution on sintering of tungsten

    International Nuclear Information System (INIS)

    Patterson, B.R.; Griffin, J.A.

    1984-01-01

    To date, very little is known about the effect of the nature of the particle size distribution on sintering. It is reasonable that there should be an effect of size distribution, and theory and prior experimental work examining the effects of variations in bimodal and continuous distributions have shown marked effects on sintering. Most importantly, even with constant mean particle size, variations in distribution width, or standard deviation, have been shown to produce marked variations in microstructure and sintering rate. In the latter work, in which spherical copper powders were blended to produce lognormal distributions of constant geometric mean particle size by weight frequency, blends with larger values of the geometric standard deviation, ln σ, sintered more rapidly. The goals of the present study were to examine in more detail the effects of variations in the width of lognormal particle size distributions of tungsten powder and to determine the effects of ln σ on the microstructural evolution during sintering.

  5. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    Science.gov (United States)

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
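
    The contrast between first- and second-order Monte Carlo is essentially a nested loop: the outer loop samples the epistemic hyperparameters (here the uncertain moments of a lognormal parameter), the inner loop samples the aleatory variability. A schematic sketch with made-up numbers and a placeholder dose model:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def biosphere_dcf(k):
        # Placeholder for the biosphere model: dose conversion factor as a
        # function of a single radioecological parameter k (hypothetical).
        return 1e-9 * k

    n_outer, n_inner = 200, 1000
    p95 = []
    for _ in range(n_outer):                       # epistemic (2nd order) loop
        mu = rng.normal(np.log(5.0), 0.3)          # uncertain log-mean
        sigma = abs(rng.normal(0.8, 0.1))          # uncertain log-sd
        k = rng.lognormal(mu, sigma, n_inner)      # aleatory (1st order) draws
        p95.append(np.percentile(biosphere_dcf(k), 95))

    # Spread of the 95th percentile across outer draws reflects epistemic
    # uncertainty; a first-order analysis would produce a single value.
    print(np.percentile(p95, [5, 50, 95]))
    ```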

  6. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
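
    For complete (uncensored) samples, the maximum likelihood step is available directly in SciPy, which implements the Lomax distribution as `scipy.stats.lomax`; the hybrid censoring itself requires the custom likelihood developed in the paper. A complete-data sketch:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true_c, true_scale = 2.5, 10.0
    lifetimes = stats.lomax.rvs(true_c, scale=true_scale, size=500, random_state=rng)

    # MLE with the location fixed at zero (the standard two-parameter Lomax).
    c_hat, loc, scale_hat = stats.lomax.fit(lifetimes, floc=0)
    print(f"shape = {c_hat:.2f}, scale = {scale_hat:.2f}")
    ```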

  7. Experimental study on two-phase flow parameters of subcooled boiling in inclined annulus

    International Nuclear Information System (INIS)

    Lee, Tae Ho; Kim, Moon Oh; Park, Goon Cherl

    1999-01-01

    Local two-phase flow parameters of subcooled flow boiling in an inclined annulus were measured to investigate the effect of inclination on the internal flow structure. A two-conductivity probe technique was applied to measure local gas-phase parameters, including void fraction, vapor bubble frequency, chord length, vapor bubble velocity and interfacial area concentration. Local liquid velocity was measured by Pitot tube. Experiments were conducted for three angles of inclination: 0° (vertical), 30° and 60°. The system pressure was maintained at atmospheric pressure. The range of average void fraction was up to 10 percent and the average liquid superficial velocities were less than 1.3 m/sec. The results of the experiments showed that the distributions of the two-phase flow parameters were influenced by the angle of channel inclination. In particular, the void fraction and chord length distributions were strongly affected by the increase of inclination angle, and flow pattern transition to slug flow was observed depending on the flow conditions. The profiles of vapor velocity, liquid velocity and interfacial area concentration were found to be affected by the non-symmetric bubble size distribution in the inclined channel. Using the measured distributions of the local phasic parameters, an analysis for predicting the average void fraction was performed based on the drift flux model and the flowing volumetric concentration. It was demonstrated that the average void fraction can be more appropriately presented in terms of the flowing volumetric concentration. (Author). 18 refs., 2 tabs., 18 figs

  8. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa; Alouini, Mohamed-Slim

    2015-01-01

    Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of such a metric at high

  9. Probabilistic distributions of wind velocity for the evaluation of the wind power potential; Distribuicoes probabilisticas de velocidades do vento para avaliacao do potencial energetico eolico

    Energy Technology Data Exchange (ETDEWEB)

    Vendramini, Elisa Zanuncio

    1986-10-01

    Theoretical models of wind speed distributions provide valuable information about the probability of events relative to the variable in study, eliminating the necessity of a new experiment. The most used distributions have been the Weibull and the Rayleigh. These distributions are examined in the present investigation, as well as the exponential, gamma, chi-square and lognormal distributions. Three years of hourly average wind data recorded by an anemometer installed at the city of Ataliba Leonel, Sao Paulo State, Brazil, were used. The theoretical relative frequencies were calculated from each of the distributions examined and compared with the observed wind speed distribution. Results from the Kolmogorov-Smirnov test allow the conclusion that the lognormal distribution fits the wind speed data best, followed by the gamma and Rayleigh distributions. Using the lognormal probability density function, the yearly energy output from a wind generator installed at the site was calculated. 30 refs, 4 figs, 14 tabs
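
    The distribution-selection step translates directly into code; a sketch on synthetic wind speeds in which the Kolmogorov-Smirnov statistic ranks the fits, followed by the mean wind power density:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    v = rng.lognormal(np.log(4.0), 0.5, 24 * 365)   # synthetic hourly winds [m/s]

    fits = {}
    for name, dist in [("weibull", stats.weibull_min), ("rayleigh", stats.rayleigh),
                       ("gamma", stats.gamma), ("lognormal", stats.lognorm)]:
        params = dist.fit(v, floc=0) if name != "rayleigh" else dist.fit(v)
        fits[name] = stats.kstest(v, dist.cdf, args=params).statistic

    best = min(fits, key=fits.get)   # smallest KS statistic wins
    print("best fit:", best, fits)

    # Mean wind power density per unit rotor area, P = 0.5 * rho * E[v^3]:
    print("power density [W/m^2]:", 0.5 * 1.225 * np.mean(v ** 3))
    ```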

  10. Application of extreme value distribution function in the determination of standard meteorological parameters for nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Haimei; Liu Xinjian; Qiu Lin; Li Fengju

    2014-01-01

    Based on meteorological data from weather stations around several domestic nuclear power plants, statistical estimates of extreme minimum temperatures, minimum central pressures of tropical cyclones and some other parameters are calculated using the extreme value type I distribution function (EV-I), the generalized extreme value distribution function (GEV) and the generalized Pareto distribution function (GP), respectively. The influence of different distribution functions and parameter estimation methods on the statistical results for the extreme values is investigated. Results indicate that the generalized extreme value function has better applicability than the other two distribution functions in the determination of standard meteorological parameters for nuclear power plants. (authors)
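
    SciPy's `genextreme` implements the GEV family used here. A sketch fitting annual minima (negated, since block minima of X are block maxima of -X) and reading off a 100-year value, with synthetic data standing in for station records:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    annual_min_temp = rng.normal(-12.0, 3.0, 50)     # synthetic annual minima [degC]

    # For minima, fit the GEV to the negated sample, then negate back.
    c, loc, scale = stats.genextreme.fit(-annual_min_temp)
    t100 = -stats.genextreme.ppf(1 - 1.0 / 100.0, c, loc=loc, scale=scale)
    print(f"100-year extreme minimum temperature: {t100:.1f} degC")
    ```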

  11. Soliton-like order parameter distributions in the critical region

    Directory of Open Access Journals (Sweden)

    A.V.Babich

    2006-01-01

    Full Text Available Some exact one-component order parameter distributions for the Michelson thermodynamic potential are obtained. A second-order phase transition in a Ginzburg-Landau type model is investigated. The exact partial distribution of the order parameter in the form of a Jacobi elliptic function is obtained. The energy of this distribution is lower over some temperature interval than for the best known models.

  12. The Czech Wage Distribution and the Minimum Wage Impacts: the Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Kateřina Duspivová

    2013-06-01

    Full Text Available A well-fitting wage distribution is a crucial precondition for economic modeling of labour market processes. In the first part, this paper provides evidence that, as for wages in the Czech Republic, the most often used log-normal distribution failed and the best-fitting one is the Dagum distribution. Then we investigate the role of the wage distribution in the process of economic modeling. By way of an example of the minimum wage impacts on the Czech labour market, we examine the response of Meyer and Wise's (1983) model to the Dagum and log-normal distributions. The results suggest that the wage distribution has important implications for the effects of the minimum wage on the shape of the lower tail of the measured wage distribution and is thus an important feature for interpreting the effects of minimum wages.

  13. Ensemble distribution for immiscible two-phase flow in porous media.

    Science.gov (United States)

    Savani, Isha; Bedeaux, Dick; Kjelstrup, Signe; Vassvik, Morten; Sinha, Santanu; Hansen, Alex

    2017-02-01

    We construct an ensemble distribution to describe steady immiscible two-phase flow of two incompressible fluids in a porous medium. The system is found to be ergodic. The distribution is used to compute macroscopic flow parameters. In particular, we find an expression for the overall mobility of the system from the ensemble distribution. The entropy production at the scale of the porous medium is shown to give the expected product of the average flow and its driving force, obtained from a black-box description. We test numerically some of the central theoretical results.

  14. Modeling wind speed and wind power distributions in Rwanda

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Bonfils [Department of Physics, National University of Rwanda, P.O. Box 117, Huye District, South Province (Rwanda)

    2011-02-15

    Utilization of wind energy as an alternative energy source may offer many environmental and economical advantages compared to fossil-fuel-based energy sources polluting the lower layer atmosphere. Wind energy, like other forms of alternative energy, may offer the promise of meeting energy demand in the direct, grid-connected mode as well as in stand-alone and remote applications. Wind speed is the most significant parameter of wind energy. Hence, an accurate determination of the probability distribution of wind speed values is very important in estimating the wind speed energy potential over a region. In the present study, parameters of five probability density distribution functions, namely Weibull, Rayleigh, lognormal, normal and gamma, were calculated in the light of long-term hourly observed data at four meteorological stations in Rwanda for the period of the year with fairly useful wind energy potential (monthly hourly mean wind speed v̄ ≥ 2 m s⁻¹). In order to select well-fitting probability density distribution functions, graphical comparisons to the empirical distributions were made. In addition, RMSE and MBE were computed for each distribution and the magnitudes of the errors were compared. Residuals of the theoretical distributions were analyzed graphically. Finally, a selection of the three distributions best fitting the empirical distribution of the measured wind speed data was performed with the aid of a χ² goodness-of-fit test for each station. (author)

  15. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    Science.gov (United States)

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodruon urundeuva, better fitting was obtained with the log-normal function.

  16. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
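
    SciPy ships the four-parameter kappa family as `kappa4`, so the fitting step can be sketched directly. The flows below are synthetic, and a fit to real records may need careful starting values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    q = rng.lognormal(np.log(20.0), 1.0, 3650)       # synthetic daily flows [m^3/s]

    # Four-parameter kappa (shapes h and k) fitted by maximum likelihood.
    h, k, loc, scale = stats.kappa4.fit(q)

    # Flow duration curve: flow equaled or exceeded p% of the time.
    for p in (5, 50, 95):
        print(p, "%:", stats.kappa4.ppf(1 - p / 100.0, h, k, loc=loc, scale=scale))
    ```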

  17. Notes on representing grain size distributions obtained by electron backscatter diffraction

    International Nuclear Information System (INIS)

    Toth, Laszlo S.; Biswas, Somjeet; Gu, Chengfan; Beausir, Benoit

    2013-01-01

    Grain size distributions measured by electron backscatter diffraction are commonly represented by histograms using either number or area fraction definitions. It is shown here that they should be presented in the form of density distribution functions for direct quantitative comparisons between different measurements. We give an interpretation of the frequently observed parabolic tails of the area distributions of bimodal grain structures, and a transformation formula between the two distributions is given in this paper. - Highlights: • Grain size distributions are represented by density functions. • The parabolic tails correspond to an equal number of grains in each bin of the histogram. • A simple transformation formula between number- and area-weighted distributions is given. • The particularities of uniform and lognormal distributions are examined

  18. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

    In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method with real data provided by nuclear power plant operation feedback analysis has been carried out. (authors). 8 refs., 2 figs., 2 tabs

  19. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    Science.gov (United States)

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  20. MICROSTRUCTURAL PARAMETERS IN 8 MeV ELECTRON-IRRADIATED BOMBYX MORI SILK FIBERS BY WIDE-ANGLE X-RAY SCATTERING STUDIES (WAXS)

    Energy Technology Data Exchange (ETDEWEB)

    Sangappa; Asha, S. [Department of Studies in Physics, Mangalore University, Mangalagangotri-574 199 (India)]; Sanjeev, Ganesh [Microtron Center, Mangalore University, Mangalagangotri-574 199 (India)]; Subramanya, G. [Department of Studies in Sericulture, University of Mysore, Manasagangotri, Mysore-570 006 (India)]; Parameswara, P.; Somashekar, R. [Department of Studies in Physics, University of Mysore, Manasagangotri, Mysore-570 006 (India)], E-mail: sangappa@mangaloreuniversity.ac.in

    2010-01-05

    The present work looks into the microstructural modification in electron irradiated Bombyx mori P31 silk fibers. The irradiation process was performed in air at room temperature using 8 MeV electron accelerator at different doses: 0, 25, 50 and 100 kGy. Irradiation of polymer is used to cross‐link or degrade the desired component or to fix the polymer morphology. The changes in microstructural parameters in these natural polymer fibers have been computed using wide angle X‐ray scattering (WAXS) data and employing line profile analysis (LPA) using Fourier transform technique of Warren. Exponential, Lognormal and Reinhold functions for the column length distributions have been used for the determination of crystal size, lattice strain and enthalpy parameters.

  1. Microstructural Parameters in 8 MeV Electron-Irradiated Bombyx mori Silk Fibers by Wide-Angle X-Ray Scattering Studies (WAXS)

    Science.gov (United States)

    Sangappa; Asha, S.; Sanjeev, Ganesh; Subramanya, G.; Parameswara, P.; Somashekar, R.

    2010-01-01

    The present work looks into the microstructural modification in electron irradiated Bombyx mori P31 silk fibers. The irradiation process was performed in air at room temperature using 8 MeV electron accelerator at different doses: 0, 25, 50 and 100 kGy. Irradiation of polymer is used to cross-link or degrade the desired component or to fix the polymer morphology. The changes in microstructural parameters in these natural polymer fibers have been computed using wide angle X-ray scattering (WAXS) data and employing line profile analysis (LPA) using Fourier transform technique of Warren. Exponential, Lognormal and Reinhold functions for the column length distributions have been used for the determination of crystal size, lattice strain and enthalpy parameters.

  2. MICROSTRUCTURAL PARAMETERS IN 8 MeV ELECTRON-IRRADIATED BOMBYX MORI SILK FIBERS BY WIDE-ANGLE X-RAY SCATTERING STUDIES (WAXS)

    International Nuclear Information System (INIS)

    Sangappa; Asha, S.; Sanjeev, Ganesh; Subramanya, G.; Parameswara, P.; Somashekar, R.

    2010-01-01

    The present work looks into the microstructural modification in electron irradiated Bombyx mori P31 silk fibers. The irradiation process was performed in air at room temperature using 8 MeV electron accelerator at different doses: 0, 25, 50 and 100 kGy. Irradiation of polymer is used to cross‐link or degrade the desired component or to fix the polymer morphology. The changes in microstructural parameters in these natural polymer fibers have been computed using wide angle X‐ray scattering (WAXS) data and employing line profile analysis (LPA) using Fourier transform technique of Warren. Exponential, Lognormal and Reinhold functions for the column length distributions have been used for the determination of crystal size, lattice strain and enthalpy parameters.

  3. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim; Rached, Nadhir B.; Kammoun, Abla; Tempone, Raul

    2018-01-01

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However

  4. A Study of Transmission Control Method for Distributed Parameters Measurement in Large Factories and Storehouses

    Directory of Open Access Journals (Sweden)

    Shujing Su

    2015-01-01

    Full Text Available Given the dispersed nature of parameters in large factories, storehouses, and other applications, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master and slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and two kinds of distributed transmission control methods are proposed. Finally, the reliability, extendibility, and control characteristics of these two methods are tested through a series of experiments, and the measurement results are compared and discussed.

  5. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, like the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.

  6. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Science.gov (United States)

    Ultsch, Alfred; Lötsch, Jörn

    2017-01-01

    The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity via lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33% corresponding to the Gini index of the uniform distribution and, secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.

  7. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Directory of Open Access Journals (Sweden)

    Alfred Ultsch

    Full Text Available The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity via lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of incomes of the World's countries over several years indicated, firstly, that the Gini indices are centered on a value of 33.33% corresponding to the Gini index of the uniform distribution and, secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.
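
    For a log-normal distribution the Gini index has the closed form Gini = 2Φ(σ/√2) − 1, which is what makes the log-normal approximation of Lorenz curves convenient. A short numerical check (illustrative, not the authors' code):

    ```python
    import numpy as np
    from scipy.stats import norm

    def gini_lognormal(sigma):
        # Closed form for the Gini index of a lognormal with log-sd sigma.
        return 2.0 * norm.cdf(sigma / np.sqrt(2.0)) - 1.0

    def gini_empirical(x):
        # Gini from the definition via the mean absolute difference.
        x = np.sort(x)
        n = x.size
        return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * n * x.mean())

    sigma = 0.8
    sample = np.random.default_rng(11).lognormal(0.0, sigma, 200_000)
    print(gini_lognormal(sigma), gini_empirical(sample))
    ```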

  8. Oxide particle size distribution from shearing irradiated and unirradiated LWR fuels in Zircaloy and stainless steel cladding: significance for risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Davis, W. Jr.; West, G.A.; Stacy, R.G.

    1979-03-22

    Sieve fractionation was performed with oxide particles dislodged during shearing of unirradiated or irradiated fuel bundles or single rods of UO₂ or 96 to 97% ThO₂--3 to 4% UO₂. Analyses of these data by nonlinear least-squares techniques demonstrated that the particle size distribution is lognormal. Variables involved in the numerical analyses include the lognormal median size, the lognormal standard deviation, and the shear cut length. Sieve-fractionation data are presented for unirradiated bundles of stainless-steel-clad or Zircaloy-2-clad UO₂ or ThO₂--UO₂ sheared into lengths from 0.5 to 2.0 in. Data are also presented for irradiated single rods (sheared into lengths of 0.25 to 2.0 in.) of Zircaloy-2-clad UO₂ from BWRs and of Zircaloy-4-clad UO₂ from PWRs. Median particle sizes of UO₂ from shearing irradiated stainless-steel-clad fuel ranged from 103 to 182 μm; particle sizes of ThO₂--UO₂, under these same conditions, ranged from 137 to 202 μm. Similarly, median particle sizes of UO₂ from shearing unirradiated Zircaloy-2-clad fuel ranged from 230 to 957 μm. Irradiation levels of fuels from reactors ranged from 9,000 to 28,000 MWd/MTU. In general, particle sizes from shearing these irradiated fuels are larger than those from the unirradiated fuels; however, unirradiated fuel from vendors was not available for performing comparative shearing experiments. In addition, variations in particle size parameters pertaining to samples of a single vendor varied as much as those between different vendors. The fraction of fuel dislodged from the cladding is nearly proportional to the reciprocal of the shear cut length, until the cut length attains some minimum value below which all fuel is dislodged. Particles of fuel are generally elongated, with a long-to-short axis ratio usually less than 3. Using parameters of the lognormal distribution estimates can be made of fractions of dislodged fuel having

  9. Development of probabilistic fatigue curve for asphalt concrete based on viscoelastic continuum damage mechanics

    Directory of Open Access Journals (Sweden)

    Himanshu Sharma

    2016-07-01

    Full Text Available Due to its roots in a fundamental thermodynamic framework, the continuum damage approach is popular for modeling asphalt concrete behavior. Currently used continuum damage models use mixture-averaged values for model parameters and assume a deterministic damage process. On the other hand, significant scatter is found in fatigue data generated even under extremely controlled laboratory testing conditions. Thus, currently used continuum damage models fail to account for the scatter observed in fatigue data. This paper illustrates a novel approach for probabilistic fatigue life prediction based on the viscoelastic continuum damage approach. Several specimens were tested for their viscoelastic properties and damage properties under uniaxial loading. The data thus generated were analyzed using viscoelastic continuum damage mechanics principles to predict fatigue life. Weibull (2-parameter and 3-parameter) and lognormal distributions were fit to the fatigue lives predicted using the viscoelastic continuum damage approach. It was observed that fatigue damage could be best described using the Weibull distribution when compared to the lognormal distribution. Due to its flexibility, the 3-parameter Weibull distribution was found to fit better than the 2-parameter Weibull distribution. Further, significant differences were found between the probabilistic fatigue curves developed in this research and the traditional deterministic fatigue curve. The proposed methodology combines the advantages of continuum damage mechanics as well as probabilistic approaches. These probabilistic fatigue curves can be conveniently used for reliability-based pavement design. Keywords: Probabilistic fatigue curve, Continuum damage mechanics, Weibull distribution, Lognormal distribution
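
    In SciPy, `weibull_min` already carries a location parameter, so the 2- and 3-parameter Weibull fits differ only in whether the location is fixed at zero. A sketch on synthetic fatigue lives:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    nf = 2e4 + stats.weibull_min.rvs(1.8, scale=1e5, size=60, random_state=rng)

    # 2-parameter Weibull: location fixed at zero.
    c2, loc2, sc2 = stats.weibull_min.fit(nf, floc=0)
    # 3-parameter Weibull: location (minimum fatigue life) estimated as well.
    c3, loc3, sc3 = stats.weibull_min.fit(nf)

    for label, (c, loc, sc) in [("2P", (c2, loc2, sc2)), ("3P", (c3, loc3, sc3))]:
        ll = np.sum(stats.weibull_min.logpdf(nf, c, loc=loc, scale=sc))
        print(label, f"shape={c:.2f} loc={loc:.0f} scale={sc:.0f} loglik={ll:.1f}")
    ```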

  10. Random cyclic stress-strain responses of a stainless steel pipe-weld metal. I. A statistical investigation

    International Nuclear Information System (INIS)

    Zhao, Y.X.; Wang, J.N.

    2000-01-01

    For pt. II see ibid., vol. 199, p. 315-26, 2000. This paper pays special attention to the significant scatter in the stress-strain responses of a nuclear engineering material, 1Cr18Ni9Ti stainless steel pipe-weld metal. A statistical investigation is made of the cyclic stress amplitudes of this material. Three considerations are applied: the overall fit, the consistency with fatigue physics, and the safety in practice of the seven commonly used statistical distributions, namely Weibull (two- and three-parameter), normal, lognormal, extreme minimum value, extreme maximum value and exponential. Results reveal that the data can be described by all seven distributions, but the local behavior of the distributions differs significantly. Any of the normal, lognormal, extreme minimum value and extreme maximum value distributions might be an appropriate assumed distribution for characterizing the data. The normal and extreme minimum value models are excellent. The other distributions do not fit the data as well, as they violate two or three of the mentioned considerations. (orig.)

  11. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo ePark

    2016-02-01

    Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to evaluate IPD using a mixture IRT framework to understand its effect on item parameters and examinee ability.

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  13. Particles size distribution effect on 3D packing of nanoparticles in to a bounded region

    International Nuclear Information System (INIS)

    Farzalipour Tabriz, M.; Salehpoor, P.; Esmaielzadeh Kandjani, A.; Vaezi, M. R.; Sadrnezhaad, S. K.

    2007-01-01

    In this paper, the effects of two different Particle Size Distributions on the packing behavior of ideal rigid spherical nanoparticles, using a novel packing model based on parallel algorithms, are reported. A Mersenne Twister algorithm was used to generate pseudo-random numbers for the particles' initial coordinates. A nano-sized tetragonal confined container with a square floor (300 * 300 nm) was used in this work. The Andreasen and the Lognormal Particle Size Distributions were chosen to investigate the packing behavior in a 3D bounded region. The effects of particle numbers on the packing behavior of these two Particle Size Distributions have been investigated. The reproducibility and the distribution of the packing factor of these Particle Size Distributions were also compared
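
    Generating the initial particle population for such a packing model is straightforward; a sketch of the lognormal case using NumPy's Mersenne Twister generator (`RandomState`), with illustrative parameters:

    ```python
    import numpy as np

    # Mersenne Twister, as used in the paper for pseudo-random coordinates.
    rs = np.random.RandomState(seed=12345)

    n = 2000
    radii = rs.lognormal(mean=np.log(5.0), sigma=0.3, size=n)  # radii [nm], illustrative
    x = rs.uniform(0.0, 300.0, n)   # initial coordinates on the 300 x 300 nm floor
    y = rs.uniform(0.0, 300.0, n)

    # Upper bound on the packing factor ignoring overlaps, box height 300 nm:
    v_box = 300.0 ** 3
    print("sphere volume fraction:", (4 / 3 * np.pi * radii ** 3).sum() / v_box)
    ```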

  14. Crystallite size distribution of clay minerals from selected Serbian clay deposits

    Directory of Open Access Journals (Sweden)

    Simić Vladimir

    2006-01-01

    Full Text Available The BWA (Bertaut-Warren-Averbach) technique for the measurement of the mean crystallite thickness and thickness distributions of phyllosilicates was applied to a set of kaolin and bentonite minerals. Six samples of kaolinitic clays, one sample of halloysite, and five bentonite samples from selected Serbian deposits were analyzed. These clays are of sedimentary, volcano-sedimentary (diagenetic), and hydrothermal origin. Two different shapes of thickness distribution were found: lognormal, typical for bentonite and halloysite, and polymodal, typical for kaolinite. The mean crystallite thickness (T_BWA) seems to be influenced by the genetic type of the clay sample.

  15. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric; Haakon, Hoel; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

  16. Reservoir theory, groundwater transit time distributions, and lumped parameter models

    International Nuclear Information System (INIS)

    Etcheverry, D.; Perrochet, P.

    1999-01-01

    The relation between groundwater residence times and transit times is given by the reservoir theory. It allows theoretical transit time distributions to be calculated in a deterministic way, either analytically or on numerical models. Two analytical solutions validate the piston flow and the exponential model for simple conceptual flow systems. A numerical solution of a hypothetical regional groundwater flow shows that lumped parameter models could be applied in some cases to large-scale, heterogeneous aquifers. (author)
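
    The two lumped-parameter models named above have simple closed-form transit time distributions; the sketch below evaluates them on a time grid and convolves each with a hypothetical tracer input. The mean transit time and input history are arbitrary assumptions for illustration.

```python
import numpy as np

tau = 10.0                                   # assumed mean transit time (years)
t = np.linspace(0.0, 60.0, 601)
dt = t[1] - t[0]

g_exp = np.exp(-t / tau) / tau                       # exponential (well-mixed) model
g_piston = np.isclose(t, tau).astype(float) / dt     # piston flow: discrete delta at t = tau

# Output concentration as the convolution of an input history with g(t).
c_in = np.exp(-0.05 * t)                     # hypothetical decaying tracer input
c_out_exp = np.convolve(c_in, g_exp)[: t.size] * dt
c_out_piston = np.convolve(c_in, g_piston)[: t.size] * dt
```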

  17. Control of complex dynamics and chaos in distributed parameter systems

    Energy Technology Data Exchange (ETDEWEB)

    Chakravarti, S.; Marek, M.; Ray, W.H. [Univ. of Wisconsin, Madison, WI (United States)

    1995-12-31

    This paper discusses a methodology for controlling complex dynamics and chaos in distributed parameter systems. The reaction-diffusion system with Brusselator kinetics, where the torus-doubling or quasi-periodic (two characteristic incommensurate frequencies) route to chaos exists in a defined range of parameter values, is used as an example. Poincare maps are used for the characterization of quasi-periodic and chaotic attractors. The dominant modes or topos, which are inherent properties of the system, are identified by means of the singular value decomposition. Tested modal feedback control schemes based on the identified dominant spatial modes confirm the possibility of stabilizing simple quasi-periodic trajectories within complex quasi-periodic or chaotic spatiotemporal patterns.

  18. Modified distribution parameter for churn-turbulent flows in large diameter channels

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, J.P., E-mail: jschlege@purdue.edu; Macke, C.J.; Hibiki, T.; Ishii, M.

    2013-10-15

    Highlights: • Void fraction data collected in pipe sizes up to 0.304 m using impedance void meters. • Flow conditions extend to transition between churn-turbulent and annular flow. • Flow regime identification results agree with previous studies. • A new model for the distribution parameter in churn-turbulent flow is proposed. -- Abstract: Two phase flows in large diameter channels are important in a wide range of industrial applications, but especially in analysis of nuclear reactor safety for the prediction of BWR behavior and safety analysis in PWRs. To remedy an inability of current drift-flux models to accurately predict the void fraction in churn-turbulent flows in large diameter pipes, extensive experiments have been performed in pipes with diameters of 0.152 m, 0.203 m and 0.304 m to collect area-averaged void fraction data using electrical impedance void meters. The standard deviation and skewness of the impedance meter signal have been used to characterize the flow regime and confirm previous flow regime transition results. By treating churn-turbulent flow as a transition between cap-bubbly dispersed flow and annular separated flow and using a linear ramp, the distribution parameter has been modified for churn-turbulent flow. The modified distribution parameter has been evaluated through comparison of the void fraction predicted by the drift-flux model and the measured void fraction.

  20. Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean

    Science.gov (United States)

    Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim

    2006-07-01

    A massive earthquake of magnitude 9.3 occurred on December 26, 2004 off northern Sumatra and generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys were performed after this tsunami event; in particular, several surveys on the south/east coast of India, the Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand were organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. Theoretical interpretation of this distribution, based on random coastal bathymetry and coastline geometry, leads to log-normal functions. The observed data are also in very good agreement with the log-normal distribution, confirming the important role of variable ocean bathymetry in the formation of the irregular wave height distribution along the coasts.

  1. Outage and Capacity Performance Evaluation of Distributed MIMO Systems over a Composite Fading Channel

    Directory of Open Access Journals (Sweden)

    Wenjie Peng

    2014-01-01

    The exact closed-form expressions for the outage probability and capacity of distributed MIMO (DMIMO) systems over a composite fading channel are derived. This is achieved first by using a lognormal approximation to the gamma-lognormal distribution that arises when a mobile station (MS) in the cell is at a fixed position, with the so-called maximum ratio transmission/selected combining (MRT-SC) and selected transmission/maximum ratio combining (ST-MRC) schemes adopted in the uplink and downlink, respectively. Then, based on a newly proposed nonuniform MS cell distribution model, which is more consistent with MS hotspot distributions in actual communication environments, the average outage probability and capacity formulas are further derived. Finally, the accuracy of the approximation method and the validity of the corresponding theoretical analysis of the system performance are demonstrated and illustrated by computer simulations.
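
    The first analytical step, approximating a gamma-lognormal composite by a single lognormal, can be illustrated by moment matching in the log domain. The fading and shadowing parameters below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Composite gamma-lognormal channel gain: gamma-distributed fading power
# multiplied by lognormal shadowing. All parameter values are illustrative.
m = 2.0                                   # gamma shape (fading severity)
sigma_db = 6.0                            # shadowing standard deviation in dB
sigma = sigma_db * np.log(10.0) / 10.0    # convert dB to natural-log units

g = rng.gamma(shape=m, scale=1.0 / m, size=200_000)     # unit-mean gamma fading
s = rng.lognormal(mean=0.0, sigma=sigma, size=g.size)   # lognormal shadowing
x = g * s

# Moment matching in the log domain yields the approximating lognormal.
mu_hat, sigma_hat = np.log(x).mean(), np.log(x).std()
print(f"approximating lognormal: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```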

  2. Full Two-Body Problem Mass Parameter Observability Explored Through Doubly Synchronous Systems

    Science.gov (United States)

    Davis, Alex Benjamin; Scheeres, Daniel

    2018-04-01

    The full two-body problem (F2BP) is often used to model binary asteroid systems, representing the bodies as two finite mass distributions whose dynamics are influenced by their mutual gravity potential. The emergent behavior of the F2BP is highly coupled translational and rotational mutual motion of the mass distributions. For these systems the doubly synchronous equilibrium occurs when both bodies are tidally locked and in a circular co-orbit. Stable oscillations about this equilibrium can be shown, for the nonplanar system, to be combinations of seven fundamental frequencies of the system and the mutual orbit rate. The fundamental frequencies arise as the linear periods of center manifolds identified about the equilibrium, which are heavily influenced by each body's mass parameters. We leverage these eight dynamical constraints to investigate the observability of binary asteroid mass parameters via dynamical observations. This is accomplished by proving the nonsingularity of the relationship between the frequencies and mass parameters for doubly synchronous systems. Thus we can invert the relationship to show that, given observations of the frequencies, we can solve for the mass parameters of a target system. In so doing we are able to predict the estimation covariance of the mass parameters based on observation quality and define the observation accuracies necessary for desired mass parameter certainties. We apply these tools to 617 Patroclus, a doubly synchronous Trojan binary and flyby target of the Lucy mission, as well as to the Pluto and Charon system, in order to predict mutual behaviors of these doubly synchronous systems and to provide observational requirements for these systems' mass parameters.

  3. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distributions types have been investigated: Normal, Lognormal, 2 parameter Weibull and 3-parameter Weibull...

  4. Distribution of 226Ra, 232Th, and 40K in soils of Rio Grande do Norte (Brazil)

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1996-01-01

    A survey programme aimed at studying the environmental radioactivity in the Brazilian state of Rio Grande do Norte was undertaken. Fifty-two soil samples, together with two rock and two uraniferous ore samples, were collected from the eastern and central regions of this state. Concentrations of radioelements in the samples were determined by γ-ray spectrometry. The average concentrations of 226Ra, 232Th, and 40K in the surveyed soils were 29.2 ± 19.5 (SD), 47.8 ± 37.3, and 704 ± 437 Bq kg-1, respectively. Higher values were found in the rock samples. The distributions of 226Ra and 232Th were fitted by log-normal curves. Radiological measurements carried out with a portable scintillometer at the sampled sites revealed an average absorbed dose rate of 55 ± 27 (SD) nGy h-1. Computed dose rates obtained through the Beck formula ranged from 15-179 nGy h-1, with a mean value of 72.6 ± 38.7 (SD) nGy h-1, and their distribution fitted a log-normal curve. An annual average effective dose equivalent of 552 μSv (range: 117-1361 μSv) was estimated for 51 sites in Rio Grande do Norte. (author)

  5. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computation. A comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  6. A two-population sporadic meteoroid bulk density distribution and its implications for environment models

    Science.gov (United States)

    Moorhead, Althea V.; Blaauw, Rhiannon C.; Moser, Danielle E.; Campbell-Brown, Margaret D.; Brown, Peter G.; Cooke, William J.

    2017-12-01

    The bulk density of a meteoroid affects its dynamics in space, its ablation in the atmosphere, and the damage it does to spacecraft and lunar or planetary surfaces. Meteoroid bulk densities are also notoriously difficult to measure, and we are typically forced to assume a density or attempt to measure it via a proxy. In this paper, we construct a density distribution for sporadic meteoroids based on existing density measurements. We considered two possible proxies for density: the KB parameter introduced by Ceplecha, and the Tisserand parameter, TJ. Although KB is frequently cited as a proxy for meteoroid material properties, we find that it is poorly correlated with ablation-model-derived densities. We therefore follow the example of Kikwaya et al. in associating density with the Tisserand parameter. We fit two density distributions to meteoroids originating from Halley-type comets (TJ < 2) and to all other meteoroids (TJ > 2); the resulting two-population density distribution is the most detailed sporadic meteoroid density distribution justified by the available data. Finally, we discuss the implications for meteoroid environment models and spacecraft risk assessments. We find that correcting for density increases the fraction of meteoroid-induced spacecraft damage produced by the helion/antihelion source.

  7. Probability distributions of placental morphological measurements and origins of variability of placental shapes.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O

    2013-06-01

    While the mean shape of the human placenta is round with a centrally inserted umbilical cord, significant deviations from this ideal are fairly common and may be clinically meaningful. Traditionally, they are explained by trophotropism. We have proposed a hypothesis explaining typical variations in placental shape by randomly determined fluctuations in the growth process of the vascular tree. It has recently been reported that umbilical cord displacement in a birth cohort has a log-normal probability distribution, which indicates that the displacement between an initial point of origin and the centroid of the mature shape is a result of the accumulation of random fluctuations of the dynamic growth of the placenta. To confirm this, we investigate the statistical distributions of other features of placental morphology. In a cohort of 1023 births at term, digital photographs of placentas were recorded at delivery. Excluding cases with velamentous cord insertion or missing clinical data left 1001 (97.8%) placentas for which surface morphology features were measured. Best-fit statistical distributions for them were obtained using EasyFit. The best-fit distributions of umbilical cord displacement, placental disk diameter, area, perimeter, and maximal radius calculated from the cord insertion point are of heavy-tailed type, similar in shape to log-normal distributions. This is consistent with a stochastic origin of deviations of placental shape from normal. Deviations of placental shape descriptors from average have heavy-tailed distributions similar in shape to log-normal. This evidence points away from trophotropism, and towards a spontaneous stochastic evolution of the variants of placental surface shape features.

  8. Boundary feedback stabilization of distributed parameter systems

    DEFF Research Database (Denmark)

    Pedersen, Michael

    1988-01-01

    The author introduces the method of pseudo-differential stabilization. He notes that the theory of pseudo-differential boundary operators is a fruitful approach to problems arising in control and stabilization theory of distributed-parameter systems. The basic pseudo-differential calculus can...

  9. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10{sup 21} Mx (10{sup 22} Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other to the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).

  10. Log-concavity property for some well-known distributions

    Directory of Open Access Journals (Sweden)

    G. R. Mohtashami Borzadaran

    2011-12-01

    Interesting properties and propositions in many branches of science, such as economics, have been obtained based on the concavity of the cumulative distribution function of a random variable. Caplin and Nalebuff (1988, 1989), Bagnoli and Khanna (1989) and Bagnoli and Bergstrom (1989, 2005) have discussed the log-concavity property of probability distributions and their applications, especially in economics. Log-concavity concerns a twice-differentiable real-valued function g whose domain is an interval on the extended real line. A function g is said to be log-concave on the interval (a,b) if ln(g) is a concave function on (a,b). Log-concavity of g on (a,b) is equivalent to g'/g being monotone decreasing on (a,b), or to (ln(g))'' being non-positive there. Previous authors have obtained log-concavity for distributions such as the normal, logistic, extreme-value, exponential, Laplace, Weibull, power function, uniform, gamma, beta, Pareto, log-normal, Student's t, Cauchy and F distributions. We have discussed and introduced the continuous versions of the Pearson family, established log-concavity for this family in general cases, and then obtained the log-concavity property for each distribution that is a member of the Pearson family. The same cases have been worked out for the Burr family and for each distribution belonging to it. Also, log-concavity results for distributions such as generalized gamma distributions, Feller-Pareto distributions, generalized inverse Gaussian distributions and generalized log-normal distributions have been obtained.
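
    The curvature criterion stated above is easy to check symbolically. The sketch below computes (ln(g))'' for two of the densities listed, under the stated smoothness assumption; the distribution choices are illustrative.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def log_pdf_curvature(pdf):
    """(ln g)'': the density g is log-concave wherever this is <= 0."""
    return sp.simplify(sp.diff(sp.log(pdf), x, 2))

normal_pdf = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)
logistic_pdf = sp.exp(-x) / (1 + sp.exp(-x))**2

print(log_pdf_curvature(normal_pdf))    # -1, so log-concave everywhere
print(log_pdf_curvature(logistic_pdf))  # strictly negative for all real x
```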

  11. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  12. Calculations of grade and tonnage for two co-products from a projected South African gold mine

    International Nuclear Information System (INIS)

    Magri, E.J.

    1982-01-01

    Consideration is given to the problem of the estimation, from limited data, of the likely grade and tonnage for a new mining property that is to be exploited by the selective mining of two metals that have substantial contributions to make to the total revenue. In particular, the case of a new gold and uranium mine in South Africa is analysed as follows. (1) The applicability of the underlying lognormal bivariate model for different support (i.e. ore unit) sizes is examined. (2) The necessary parameters for the bivariate lognormal models for different block sizes are estimated from abundant chip-sampling data from a section of the Hartebeestfontein Gold Mine using alternative approaches, and the results are compared. (3) A method is given for obtaining the necessary parameters for a tonnage-grade relationship relative to a joint pay limit from the very limited information likely to be available at the end of the exploration stage of a gold mine, and the results are compared with those obtained from the large volume of data available from a mined-out area

  13. Mathematical Model to estimate the wind power using four-parameter Burr distribution

    Science.gov (United States)

    Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu

    2018-03-01

    When the actual probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is often more suitable than other distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and an expression for the probability distribution of the output power of a wind turbine is derived.
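
    As a sketch of how such a fit might be used, the code below fits SciPy's Burr type XII family (two shape parameters plus location and scale, i.e. four parameters) to synthetic wind speed data and evaluates the mean wind power density from the fitted model. The data, the air density, and the identification of the paper's distribution with Burr XII are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic hourly wind speeds (m/s) standing in for site measurements.
v = rng.weibull(2.0, 8760) * 8.0

# Four-parameter Burr XII fit: shapes c and d, location (fixed at 0), scale.
c, d, loc, scale = stats.burr12.fit(v, floc=0.0)
dist = stats.burr12(c, d, loc=loc, scale=scale)

# Mean wind power density (W/m^2): 0.5 * rho * E[v^3] under the fitted model
# (the third moment is finite only if c*d > 3).
rho = 1.225   # air density, kg/m^3
ev3 = dist.expect(lambda s: s**3)
print(f"Burr XII fit: c={c:.2f}, d={d:.2f}, scale={scale:.2f}; "
      f"power density ~ {0.5 * rho * ev3:.1f} W/m^2")
```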

  14. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    Science.gov (United States)

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating the data required to address microorganism variability in risk modeling, given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and the use of a Bioscreen instrument, and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were therefore selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or the lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested that a time-to-detection (ANOVA) approach using turbidity measurements be applied for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.

  15. Estimation of the shape parameter of a generalized Pareto distribution based on a transformation to Pareto distributed variables

    OpenAIRE

    van Zyl, J. Martin

    2012-01-01

    Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of generalized Pareto distributed variables using transformed observations, based on the probability-weighted method, is tested. The transformation was found to improve the performance of the probability-weighted estimator and performs well wit...
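
    The transformation itself is exact when the shape parameter is positive: if X follows a generalized Pareto distribution with shape xi > 0 and scale sigma, then Y = 1 + xi*X/sigma is standard Pareto with tail index 1/xi. The sketch below verifies this with known parameters (the paper estimates them; here they are assumed for illustration) and applies the explicit Pareto MLE.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generalized Pareto with positive shape xi and scale sigma (assumed values).
xi, sigma, n = 0.4, 2.0, 5000
u = rng.random(n)
x = sigma / xi * ((1.0 - u) ** (-xi) - 1.0)   # inverse-CDF sampling of the GPD

# Y = 1 + xi*X/sigma has survival function y**(-1/xi), i.e. standard Pareto
# with tail index alpha = 1/xi, for which the MLE is explicit.
y = 1.0 + xi * x / sigma
alpha_hat = y.size / np.log(y).sum()           # Pareto MLE of alpha
print(f"true 1/xi = {1/xi:.3f}, estimated alpha = {alpha_hat:.3f}")
```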

  16. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution, alternative distributions are at hand and well understood, many of these belonging... and statistical inference, is the multivariate normal distribution. Unfortunately, only little is known about the general class of multivariate phase type distributions. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...

  17. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and to develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification of the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein systems...

  18. Concentration of Tritium and Members of the Uranium and Thorium Decay Chains in Groundwaters in Slovenia and their Implication for Managing Groundwater Resources

    Energy Technology Data Exchange (ETDEWEB)

    Korun, M.; Kovacic, K.; Kozar-Logar, J. [Jozef Stefan Institute, Ljubljana (Slovenia)

    2013-07-15

    Samples of groundwater were measured in terms of their activity concentrations of gamma-ray emitters, members of the uranium and thorium decay chains, and tritium. The distributions of the number of samples over the measured activity concentrations are presented for {sup 238}U, {sup 226}Ra, {sup 210}Pb, {sup 228}Ra, {sup 228}Th, {sup 40}K and {sup 3}H. The distributions have three distinct shapes: log-normal distributions ({sup 238}U, {sup 226}Ra, {sup 228}Ra, {sup 228}Th), bimodal distributions ({sup 210}Pb, {sup 40}K), and a normal distribution ({sup 3}H). It appears that the log-normal distributions reflect the dilution of the radionuclides dissolved in the water. The bimodal distributions, being the sum of a log-normal distribution and a distribution having its maximum at the activity concentration of the higher mode, indicate influences from the soil surface, e.g. washout from the atmosphere and fertilization. The normal distribution indicates mixing with rainwater under circumstances characterized by several independently varying parameters. (author)

  19. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    Science.gov (United States)

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. Estimation of the chirp rate (CR) and quadratic chirp rate (QCR) of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals at a reasonable computational cost.

  20. Firm Size Distribution in Fortune Global 500

    Science.gov (United States)

    Chen, Qinghua; Chen, Liujun; Liu, Kai

    By analyzing data on Fortune Global 500 firms from 1996 to 2008, we found that their ranks and revenues always obey the same distribution, which implies that the worldwide firm structure has been stable for a long time. The fitting results show that the simple Zipf distribution is not an ideal model for global firms; the SCL and FSS distributions fit better, and the lognormal fit is the best. We then propose a simple explanation.

  1. Effects of network topology on wealth distributions

    International Nuclear Information System (INIS)

    Garlaschelli, Diego; Loffredo, Maria I

    2008-01-01

    We focus on the problem of how the wealth is distributed among the units of a networked economic system. We first review the empirical results documenting that in many economies the wealth distribution is described by a combination of the log-normal and power-law behaviours. We then focus on the Bouchaud-Mezard model of wealth exchange, describing an economy of interacting agents connected through an exchange network. We report analytical and numerical results showing that the system self-organizes towards a stationary state whose associated wealth distribution depends crucially on the underlying interaction network. In particular, we show that if the network displays a homogeneous density of links, the wealth distribution displays either the log-normal or the power-law form. This means that the first-order topological properties alone (such as the scale-free property) are not enough to explain the emergence of the empirically observed mixed form of the wealth distribution. In order to reproduce this nontrivial pattern, the network has to be heterogeneously divided into regions with a variable density of links. We show new results detailing how this effect is related to the higher-order correlation properties of the underlying network. In particular, we analyse assortativity by degree and the pairwise wealth correlations, and discuss the effects that these properties have on each other
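
    A minimal Euler-Maruyama simulation of the Bouchaud-Mezard exchange model on a homogeneous random graph, as described above, is sketched below; the network size, exchange rate, and noise amplitude are assumed values. The Hill estimator at the end probes the Pareto tail of the stationary relative-wealth distribution.

```python
import numpy as np

rng = np.random.default_rng(9)

# Bouchaud-Mezard model on a random graph, discretized with Euler-Maruyama:
# dW_i = sigma * W_i * dB_i + J * sum_j A_ij * (W_j - W_i) * dt.
n, steps, dt = 500, 5000, 0.01
J, sigma = 0.1, 0.25

A = (rng.random((n, n)) < 8.0 / n).astype(float)   # Erdos-Renyi, mean degree ~8
A = np.triu(A, 1)
A = A + A.T                                        # symmetric, no self-loops
deg = A.sum(axis=1)

w = np.ones(n)
for _ in range(steps):
    dw = J * dt * (A @ w - deg * w) + sigma * np.sqrt(dt) * w * rng.normal(size=n)
    w = np.maximum(w + dw, 1e-12)
    w *= n / w.sum()            # track relative wealth w_i / <w>

# Probe the Pareto tail of the stationary distribution with a Hill estimator.
k = 50
tail = np.sort(w)[-(k + 1):]
alpha_hat = k / np.log(tail[1:] / tail[0]).sum()
print(f"estimated Pareto tail exponent: {alpha_hat:.2f}")
```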

  2. With timing options and heterogeneous costs, the lognormal diffusion is hardly an equilibrium price process for exhaustible resources

    International Nuclear Information System (INIS)

    Lund, D.

    1992-01-01

    The report analyses the possibility that the lognormal diffusion process could be an equilibrium spot price process for an exhaustible resource. A partial equilibrium model is used under the assumption that the resource deposits have different extraction costs. Two separate problems are pointed out. Under full certainty, when the process reduces to an exponentially growing price, equilibrium places a very strong restriction on the relationship between the demand function and the cost density function. Under uncertainty there is the additional problem that, during periods in which the price is lower than its previously recorded high, no new deposits will start extraction. 30 refs., 1 fig

  3. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is assessed by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.

  4. The stochastic distribution of available coefficient of friction on quarry tiles for human locomotion.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    The available coefficient of friction (ACOF) for human locomotion is the maximum coefficient of friction that can be supported without a slip at the shoe and floor interface. A statistical model was introduced to estimate the probability of slip by comparing the ACOF with the required coefficient of friction, assuming that both coefficients have stochastic distributions. This paper presents an investigation of the stochastic distributions of the ACOF of quarry tiles under dry, water and glycerol conditions. One hundred friction measurements were performed on a walkway under the surface conditions of dry, water and 45% glycerol concentration. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF appears to fit the normal and log-normal distributions better than the Weibull distribution for the water and glycerol conditions. However, no match was found between the distribution of ACOF under the dry condition and any of the three continuous distributions evaluated. Based on limited data, a normal distribution might be more appropriate due to its simplicity, practicality and familiarity among the three distributions evaluated.
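
    The goodness-of-fit step described above can be reproduced with standard tools. The sketch below fits the three candidate distributions by maximum likelihood and applies the Kolmogorov-Smirnov test; the data are synthetic placeholders, and the p-values are optimistic because the parameters are estimated from the same sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Placeholder for 100 measured ACOF values; real data would replace this.
acof = rng.normal(0.45, 0.06, 100)

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(acof)                             # maximum likelihood fit
    d_stat, p_val = stats.kstest(acof, dist.name, args=params)
    print(f"{name:10s} D = {d_stat:.3f}  p = {p_val:.3f}")
```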

  5. Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.; Adebisi, Bamidele; Alouini, Mohamed-Slim

    2017-01-01

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.

  7. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    In this study, land-surface-related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150,000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte Carlo sampling and evaluation on satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture-stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought-resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of

  8. A study of two estimation approaches for parameters of Weibull distribution based on WPP

    International Nuclear Information System (INIS)

    Zhang, L.F.; Xie, M.; Tang, L.C.

    2007-01-01

    Least-squares estimation (LSE) based on the Weibull probability plot (WPP) is the most basic method for estimating the Weibull parameters. The common procedure of this method is to use the least-squares regression of Y on X, i.e. minimizing the sum of squares of the vertical residuals, to fit a straight line to the data points on the WPP and then calculate the LS estimators. This method is known to be biased. In the existing literature, the least-squares regression of X on Y, i.e. minimizing the sum of squares of the horizontal residuals, has also been used by Weibull researchers. This motivated us to carry out a comparison between the estimators of the two LS regression methods using intensive Monte Carlo simulations. Both complete and censored data are examined. Surprisingly, the results show that LS Y on X performs better for small, complete samples, while LS X on Y performs better in other cases in terms of the bias of the estimators. The two methods are also compared in terms of other model statistics. In general, when the shape parameter is less than one, LS Y on X provides a better model; otherwise, LS X on Y tends to be better.
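
    Both regressions are easy to set up once the sample is placed on WPP coordinates. The sketch below uses median ranks (Bernard's approximation, an assumed choice) and compares the two fitting directions on a synthetic complete sample.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.weibull(1.5, 30) * 100.0)     # complete sample: shape 1.5, scale 100

# Weibull probability plot coordinates using median ranks.
n = x.size
f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's approximation
X = np.log(x)
Y = np.log(-np.log(1.0 - f))                  # linearization: Y = beta*X - beta*ln(eta)

# LS regression of Y on X (vertical residuals).
beta_yx, inter_yx = np.polyfit(X, Y, 1)
eta_yx = np.exp(-inter_yx / beta_yx)

# LS regression of X on Y (horizontal residuals): X = (1/beta)*Y + ln(eta).
slope_xy, inter_xy = np.polyfit(Y, X, 1)
beta_xy, eta_xy = 1.0 / slope_xy, np.exp(inter_xy)

print(f"Y on X: shape = {beta_yx:.2f}, scale = {eta_yx:.2f}")
print(f"X on Y: shape = {beta_xy:.2f}, scale = {eta_xy:.2f}")
```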

  9. Measurement of void fraction and bubble size distribution in two-phase flow system

    International Nuclear Information System (INIS)

    Huahun, G.

    1987-01-01

    The importance of studying two-phase flow parameters and microstructure has grown with the development of the two-phase flow discipline. In this paper, methods for measuring several important microstructure parameters in a vertical two-phase flow channel are studied. Using conductance probes, the two-phase flow pattern and the average void fraction had been measured previously by the authors. This paper concerns the microstructure of the bubble size distribution and the local void fraction. The authors studied methods of measuring bubble velocity, size distribution and local void fraction using double conductance probes and a dedicated apparatus. Based on these experiments and Yoshihiro's work, a formula for the local void fraction has been deduced using the statistical characteristics of bubbles in two-phase flow, and the relation between calculated bubble size and voltage has been determined. Finally, the authors checked the results using photography and a fast valve, a classical but reliable method. The results are consistent with what has been found before.

  10. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters, such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution types of the parameters affecting ET0 can affect the distribution of reference evapotranspiration.
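
    A PPCC test of the kind used in the study can be sketched as the correlation between the ordered observations and the theoretical quantiles of a candidate distribution; correlation is unaffected by location and scale, so only shape parameters matter. The data, plotting positions, and skew value below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic annual ET0 values (mm) standing in for station records.
et0 = stats.pearson3.rvs(skew=0.8, loc=1100.0, scale=150.0,
                         size=55, random_state=rng)

def ppcc(data, dist, shape_params=()):
    """Probability-plot correlation coefficient for a candidate distribution."""
    data = np.sort(data)
    n = data.size
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
    q = dist.ppf(pp, *shape_params)                   # theoretical quantiles
    return np.corrcoef(q, data)[0, 1]

print("Pearson III:", ppcc(et0, stats.pearson3, (0.8,)))
print("Normal     :", ppcc(et0, stats.norm))
```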

  11. The social architecture of capitalism

    Science.gov (United States)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.

  12. Vertical random variability of the distribution coefficient in the soil and its effect on the migration of fallout radionuclides

    International Nuclear Information System (INIS)

    Bunzl, K.

    2002-01-01

    In the field, the distribution coefficient, Kd, for the sorption of a radionuclide by the soil cannot be expected to be constant. Even in a well defined soil horizon, Kd will vary stochastically in the horizontal as well as the vertical direction around a mean value. While the horizontal random variability of Kd produces a pronounced tailing effect in the concentration depth profile of a fallout radionuclide, much less is known about the corresponding effect of the vertical random variability. To analyze this effect theoretically, the classical convection-dispersion model in combination with the random-walk particle method was applied. The concentration depth profile of a radionuclide was calculated one year after deposition, assuming constant values of the pore water velocity and the diffusion/dispersion coefficient, and either a constant distribution coefficient (Kd = 100 cm3/g) or a Kd exhibiting vertical variability according to a log-normal distribution with a geometric mean of 100 cm3/g and a coefficient of variation of CV = 0.53. The results show that these two concentration depth profiles are only slightly different: the location of the peak is shifted somewhat upwards, and the dispersion of the concentration depth profile is slightly larger. A substantial tailing effect of the concentration depth profile is not perceptible. Especially with respect to the location of the peak, a very good approximation of the concentration depth profile is obtained if the arithmetic mean of the Kd values (Kd = 113 cm3/g) and a slightly increased dispersion coefficient are used in the analytical solution of the classical convection-dispersion equation with constant Kd. The evaluation of the observed concentration depth profile with the analytical solution of the classical convection-dispersion equation with constant parameters will, within the usual experimental limits, hardly reveal the presence of a log-normal random distribution of Kd in the vertical direction in
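
    The random-walk particle method referred to above can be sketched in one dimension with layered, lognormally distributed Kd values entering through a retardation factor. All transport parameters below are assumed round numbers, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1D random-walk particle tracking for a sorbing tracer, with Kd varying
# randomly with depth (lognormal, geometric mean 100 cm3/g, CV = 0.53).
n_part, n_steps, dt = 20_000, 365, 1.0   # particles, daily steps (d)
v, D = 0.3, 0.15                         # pore-water velocity (cm/d), dispersion (cm2/d)
rho_b, theta = 1.4, 0.3                  # bulk density (g/cm3), water content

n_layers, dz = 200, 0.5                  # 0.5 cm soil layers
cv = 0.53
sig = np.sqrt(np.log(1.0 + cv**2))       # lognormal sigma from the CV
kd = rng.lognormal(np.log(100.0), sig, n_layers)
R = 1.0 + rho_b / theta * kd             # layer retardation factors

z = np.zeros(n_part)
for _ in range(n_steps):
    layer = np.clip((z / dz).astype(int), 0, n_layers - 1)
    r = R[layer]
    z += v / r * dt + rng.normal(0.0, np.sqrt(2.0 * D / r * dt))
    z = np.maximum(z, 0.0)               # reflect particles at the soil surface

print(f"peak depth after one year ~ {np.median(z):.2f} cm")
```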

  13. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  14. Reliability parameters of distribution networks components

    Energy Technology Data Exchange (ETDEWEB)

    Gono, R.; Kratky, M.; Rusek, S.; Kral, V. [Technical Univ. of Ostrava (Czech Republic)

    2009-03-11

    This paper presented a framework for the retrieval of parameters from various heterogeneous power system databases. The framework was designed to transform the heterogeneous outage data into a common relational scheme. The framework was used to retrieve outage data parameters from the Czech and Slovak Republics in order to demonstrate its scalability. The reliability computation of the system was performed in two phases, representing the retrieval of component reliability parameters and the reliability computation itself. Reliability rates were determined using component reliability and global reliability indices. Input data for the reliability analysis were retrieved from data on equipment operating under similar conditions, while the probability of failure-free operation was evaluated by determining component status. Anomalies in distribution outage data were described as scheme, attribute, and term differences. Input types consisted of input relations, transformation programs, codebooks, and translation tables. The system was used to successfully retrieve data from 7 distributors in the Czech Republic and Slovak Republic between 2000-2007. The database included 301,555 records. Data were queried using the SQL language. 29 refs., 2 tabs., 2 figs.

  15. Off-line tracking of series parameters in distribution systems using AMI data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Tess L.; Sun, Yannan; Schneider, Kevin

    2016-05-01

    Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.

  16. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed character of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.

  17. Angular anisotropy parameters for sequential two-photon double ionization of helium

    International Nuclear Information System (INIS)

    Ivanov, I A; Kheifets, A S

    2009-01-01

    We evaluate photoelectron angular anisotropy β-parameters for the process of sequential two-photon double electron ionization of helium within the time-independent lowest-order perturbation theory (LOPT). Our results indicate that for photoelectron energies outside the interval (E_slow, E_fast), where E_slow = ω - IP(He+) and E_fast = ω - IP(He), there is a considerable deviation from the dipole angular distribution, thus indicating the effect of electron correlation.

  18. Fitting Statistical Distributions Functions on Ozone Concentration Data at Coastal Areas

    International Nuclear Information System (INIS)

    Muhammad Yazid Nasir; Nurul Adyani Ghazali; Muhammad Izwan Zariq Mokhtar; Norhazlina Suhaimi

    2016-01-01

    Ozone is known as one of the pollutants that contribute to the air pollution problem. Therefore, it is important to carry out studies on ozone. The objective of this study is to find the best statistical distribution for ozone concentration. Three distributions, namely the inverse Gaussian, Weibull and lognormal, were chosen to fit one year of hourly average ozone concentration data recorded in 2010 at Port Dickson and Port Klang. The maximum likelihood estimation (MLE) method was used to estimate the parameters and to develop the probability density function (PDF) and cumulative density function (CDF) graphs. Three performance indicators (PI), namely the normalized absolute error (NAE), prediction accuracy (PA), and coefficient of determination (R2), were used to determine the goodness-of-fit of the distributions. Results show that the Weibull distribution is the best, with the smallest error measure (NAE) values of 0.08 at Port Klang and 0.31 at Port Dickson. It also achieved the highest adequacy measure (PA: 0.99), with R2 values of 0.98 (Port Klang) and 0.99 (Port Dickson). These results provide useful information to local authorities for prediction purposes. (author)
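
    A comparison of this kind can be sketched with SciPy: fit each candidate distribution by maximum likelihood and score it with quantile-based error measures. The data are synthetic stand-ins, and the NAE below is one plausible reading of that indicator, not necessarily the study's exact definition.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Stand-in for one year of hourly ozone concentrations (ppb).
o3 = rng.weibull(2.2, 8760) * 30.0 + 1e-3

fits = {
    "lognormal":     (stats.lognorm,     stats.lognorm.fit(o3, floc=0)),
    "weibull":       (stats.weibull_min, stats.weibull_min.fit(o3, floc=0)),
    "inv. gaussian": (stats.invgauss,    stats.invgauss.fit(o3, floc=0)),
}

obs = np.sort(o3)
pp = (np.arange(1, obs.size + 1) - 0.5) / obs.size
for name, (dist, params) in fits.items():
    pred = dist.ppf(pp, *params)                  # predicted quantiles
    nae = np.abs(pred - obs).sum() / obs.sum()    # normalized absolute error
    r2 = np.corrcoef(pred, obs)[0, 1] ** 2
    print(f"{name:14s} NAE = {nae:.3f}  R2 = {r2:.3f}")
```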

  19. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures

  20. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of this paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table units, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting with the computed confidence interval for a specified method, such as the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and produces quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All the implementations described above were used to compute confidence intervals and to estimate their performance for binomial distribution sample sizes and variables.
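
    As a concrete illustration of confidence intervals that avoid the normal-approximation problems discussed above, here is an exact (Clopper-Pearson) interval for a binomial proportion; Python stands in for the paper's PHP implementation, and the counts are made up.

    ```python
    # Exact (Clopper-Pearson) confidence interval for a binomial proportion,
    # a standard alternative to the normal approximation discussed above.
    from scipy import stats

    def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
        lo = stats.beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
        hi = stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
        return lo, hi

    print(clopper_pearson(9, 34))   # e.g. 9 "positive" cells out of 34 subjects
    ```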

  1. The complete information for phenomenal distributed parameter control of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    A constitutive mathematical model with distributed parameters for multicomponent chemical processes in the gas, fluid and solid phases is utilized for the realization of phenomenal distributed parameter control of these processes. Original systems of partial differential constitutive state equations, in the derivative forms /I/, /II/ and /III/, are solved in this paper from the point of view of the information required for phenomenal distributed parameter control of the considered processes. Obtained in this way for multicomponent chemical processes in the gas, fluid and solid phases are: dynamical working space-time characteristics (analytical solutions in the working space-time of chemical reactors), dynamical phenomenal Green functions as working space-time transfer functions, statical working space characteristics (analytical solutions in the working space of chemical reactors), and statical phenomenal Green functions as working space transfer functions. These are applied as information for the realization of constitutive distributed parameter control of the mass, energy and momentum aspects of the above processes. Two cases are considered, according to the existence of: A° - initial conditions, B° - initial and boundary conditions, for multicomponent chemical processes in the gas, fluid and solid phases.

  2. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    Science.gov (United States)

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…

  3. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    Science.gov (United States)

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured the BRDF of three kinds of samples and fitted the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  4. Distribution of two-phase flow thermal and hydraulic parameters over the cross-section of channels with a rod bundle

    International Nuclear Information System (INIS)

    Mironov, Yu.V.; Shpanskij, S.V.

    1975-01-01

    The paper describes PUCHOK-2, a program for the thermohydraulic calculation of a channel with a bundle of smooth fuel elements. The program takes into consideration the non-uniformity of flow parameter distributions over the channel cross-section. The channel cross-section was divided into elementary cells, within which changes in flow parameters (mass velocity, heat and steam content) were disregarded. The bundle was considered to be a system of parallel interconnected channels. By requiring equal pressure drops in all the cells, the above model led to a system of non-linear algebraic equations, which was solved by the method of successive approximations. Theoretical results were compared with experimental data

  5. Determination of probability density functions for parameters in the Munson-Dawson model for creep behavior of salt

    International Nuclear Information System (INIS)

    Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.

    1992-10-01

    The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed
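
    A minimal sketch of the final step described above, fitting candidate probability distribution functions to parameter estimates and comparing their goodness of fit with the Kolmogorov-Smirnov statistic; the estimates below are synthetic stand-ins for the fitted M-D parameters, and scipy replaces the BMDP and STATGRAPHICS packages used in the paper.

    ```python
    # Fit candidate distributions to a set of parameter estimates and rank
    # them with the Kolmogorov-Smirnov statistic. The estimates are synthetic
    # stand-ins for the fitted M-D parameters. Note that KS p-values are
    # optimistic when the parameters were fitted from the same data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    estimates = rng.lognormal(mean=2.0, sigma=0.4, size=40)

    for name, dist in [("lognormal", stats.lognorm),
                       ("Weibull", stats.weibull_min),
                       ("normal", stats.norm)]:
        params = dist.fit(estimates)
        ks = stats.kstest(estimates, dist.cdf, args=params)
        print(f"{name:9s} KS = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")
    ```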

  6. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    Science.gov (United States)

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of performing the back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed; even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
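
    The core numerical idea can be sketched as follows, assuming the simplest case of a log transformation (the paper treats Box-Cox transformations): the back-transformed mean is an expectation over the normal within-person variation, evaluated with Gauss-Hermite quadrature.

    ```python
    # Gauss-Hermite quadrature for the back-transformation step, shown for
    # a plain log transform (the paper handles Box-Cox transforms): the
    # original-scale mean is E[exp(mu + sigma*Z)] with Z ~ N(0, 1).
    import numpy as np

    def backtransform_mean(mu, sigma, n_nodes=20):
        x, w = np.polynomial.hermite.hermgauss(n_nodes)
        return np.sum(w * np.exp(mu + sigma * np.sqrt(2.0) * x)) / np.sqrt(np.pi)

    mu, sigma = 1.2, 0.5
    print(backtransform_mean(mu, sigma))   # quadrature estimate
    print(np.exp(mu + sigma**2 / 2))       # exact lognormal mean, for comparison
    ```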

  7. Statistical Methods and Software for the Analysis of Occupational Exposure Data with Non-detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, EL

    2005-09-20

    Environmental exposure measurements are, in general, positive and may be subject to left censoring; i.e., the measured value is less than a "detection limit". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. Parametric methods used to determine acceptable levels of exposure are often based on a two-parameter lognormal distribution. The mean exposure level, an upper percentile, and the exceedance fraction are used to characterize exposure levels, and confidence limits are used to describe the uncertainty in these estimates. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on an upper percentile (i.e., the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known, but computational complexity has limited their use in routine data analysis with left censored data. The recent development of the R environment for statistical data analysis and graphics has greatly enhanced the availability of high-quality nonproprietary (open source) software that serves as the basis for implementing the methods in this paper.
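
    A minimal sketch of the maximum likelihood method for randomly left-censored lognormal data: detected values contribute log-density terms, and non-detects contribute log-CDF terms evaluated at their detection limits. The data are invented, and this is not the report's R code.

    ```python
    # Maximum likelihood for left-censored lognormal data: detected values
    # contribute log-density terms, non-detects contribute log-CDF terms at
    # their detection limits. Data are invented for illustration.
    import numpy as np
    from scipy import stats, optimize

    detects = np.array([0.6, 1.1, 2.3, 0.9, 4.2, 1.7])   # measured values
    limits  = np.array([0.5, 0.5, 1.0])                  # detection limits of non-detects

    def neg_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)                        # keeps sigma > 0
        ll = (stats.norm.logpdf(np.log(detects), mu, sigma) - np.log(detects)).sum()
        ll += stats.norm.logcdf(np.log(limits), mu, sigma).sum()
        return -ll

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print(mu_hat, sigma_hat)                   # parameters of log(X)
    print(np.exp(mu_hat + sigma_hat**2 / 2))   # ML estimate of the mean exposure
    ```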

  8. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
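
    The conversion can be sketched in its simplest symmetric, normal-sampling form, in which each replica displaces the central prediction along every Hessian eigenvector direction by a standard normal draw; the paper's actual procedure additionally preserves the asymmetry of the CT14 uncertainties and the positivity of the distributions, which this sketch omits.

    ```python
    # Symmetric, normal-sampling form of the Hessian-to-replica conversion:
    # each replica shifts the central prediction along every eigenvector
    # direction by a standard normal draw. The paper's procedure additionally
    # treats asymmetric errors and positivity, which this sketch omits.
    import numpy as np

    rng = np.random.default_rng(7)

    def make_replicas(f0, f_plus, f_minus, n_replicas):
        # f0: central prediction; f_plus/f_minus: predictions of the +/-
        # eigenvector members, shape (n_eigenvectors, len(f0)).
        half_diff = 0.5 * (np.asarray(f_plus) - np.asarray(f_minus))
        r = rng.standard_normal((n_replicas, half_diff.shape[0]))
        return np.asarray(f0) + r @ half_diff

    f0      = np.array([1.00, 0.50])                   # e.g. a PDF at two x values
    f_plus  = np.array([[1.05, 0.52], [0.98, 0.55]])   # two eigenvector directions
    f_minus = np.array([[0.95, 0.48], [1.03, 0.46]])
    reps = make_replicas(f0, f_plus, f_minus, 1000)
    print(reps.mean(axis=0), reps.std(axis=0))
    ```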

  9. Estimation of modal parameters using bilinear joint time frequency distributions

    Science.gov (United States)

    Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

    2007-07-01

    In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The Smoothed Pseudo Wigner-Ville distribution, a member of the Cohen's class of distributions, is used to decouple vibration modes completely so that each mode can be studied separately. This distribution reduces the cross-terms which are troublesome in the Wigner-Ville distribution, while retaining the resolution. The method was applied to highly damped systems, and the results were superior to those obtained via other conventional methods.

  10. Geometric effects of 90-degree vertical elbows on local two-phase flow parameters

    International Nuclear Information System (INIS)

    Yadav, M.; Worosz, T.; Kim, S.

    2011-01-01

    This study presents the geometric effects of 90-degree vertical elbows on the development of the local two-phase flow parameters. A multi-sensor conductivity probe is used to measure local two-phase flow parameters. It is found that immediately downstream of the vertical-upward elbow, the bubbles have a bimodal distribution along the horizontal radius of the pipe cross-section causing a dual-peak in the profiles of local void fraction and local interfacial area concentration. Immediately downstream of the vertical-downward elbow it is observed that the bubbles tend to migrate towards the inside of the elbow's curvature. The axial transport of void fraction and interfacial area concentration indicates that the elbows promote bubble disintegration. Preliminary predictions are obtained from group-one interfacial area transport equation (IATE) model for vertical-upward and vertical-downward two-phase flow. (author)

  11. Statistical Analysis of Data with Non-Detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.

    2004-08-26

    Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e., the measured value is less than a "limit of detection". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine whether the mean concentration of an analyte is less than a prescribed action level. Parametric methods used to determine acceptable levels of exposure are often based on a two-parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g., the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or "missed") value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e., the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known, but computational complexity has limited their use in routine data analysis with left censored data. The recent development of the R environment for statistical data analysis and graphics has greatly enhanced the availability of high-quality nonproprietary (open source) software that serves as the basis for implementing these methods.

  12. DNN-state identification of 2D distributed parameter systems

    Science.gov (United States)

    Chairez, I.; Fuentes, R.; Poznyak, A.; Poznyak, T.; Escudero, M.; Viana, L.

    2012-02-01

    There are many examples in science and engineering which are reduced to a set of partial differential equations (PDEs) through a process of mathematical modelling. Nevertheless there exist many sources of uncertainties around the aforementioned mathematical representation. Moreover, to find exact solutions of those PDEs is not a trivial task especially if the PDE is described in two or more dimensions. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set to an arbitrary accuracy. In this article, a strategy based on the differential neural network (DNN) for the non-parametric identification of a mathematical model described by a class of two-dimensional (2D) PDEs is proposed. The adaptive laws for weights ensure the 'practical stability' of the DNN-trajectories to the parabolic 2D-PDE states. To verify the qualitative behaviour of the suggested methodology, here a non-parametric modelling problem for a distributed parameter plant is analysed.

  13. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
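
    A quick Monte Carlo sketch of this model class, assuming illustrative drift and noise forms rather than the letter's exact equations: when the additive noise amplitude grows in proportion to cluster size, the log-sizes stay approximately normal, i.e. the size distribution is log-normal.

    ```python
    # Euler-Maruyama simulation of cluster growth with a size-dependent
    # additive noise term. The drift and noise amplitudes are illustrative,
    # not the letter's exact model: with noise proportional to size, the
    # log-sizes remain approximately normal, i.e. sizes become lognormal.
    import numpy as np

    rng = np.random.default_rng(3)
    n, steps, dt = 20000, 2000, 1e-3
    x = np.full(n, 1.0)                      # identical initial cluster sizes
    for _ in range(steps):
        drift = 0.5 * x                      # growth proportional to size
        noise = 0.8 * x                      # size-dependent noise amplitude
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
        x = np.maximum(x, 1e-12)             # sizes stay positive

    logx = np.log(x)
    print(logx.mean(), logx.std())           # log-sizes ~ normal => sizes ~ lognormal
    ```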

  14. Experimental investigation on local parameter measurement using optical probes in two-phase flow under rolling condition

    International Nuclear Information System (INIS)

    Tian Daogui; Sun Licheng; Yan Changqi; Liu Guoqiang

    2013-01-01

    In order to obtain more local interfacial information and to better understand the intrinsic mechanisms of two-phase flow under rolling conditions, a method is proposed in this paper for measuring the local parameters with optical probes under rolling conditions. An experimental investigation of two-phase flow under rolling conditions was conducted using probes fabricated by the authors, and it is verified that the probe method is feasible for measuring the local parameters of two-phase flow under rolling conditions. The results show that the distribution of the interfacial parameters in the near-wall region has a distinct periodicity due to the rolling motion. The average deviation of the void fraction measured by the probe from that obtained from the measured pressure drop is about 8%. (authors)

  15. Identification of systems with distributed parameters

    International Nuclear Information System (INIS)

    Moret, J.M.

    1990-10-01

    The problem of finding a model for the dynamical response of a system with distributed parameters from measured data is addressed. First, a mathematical formalism is developed to obtain the specific properties of such a system. Then a linear iterative identification algorithm is proposed that incorporates these properties and produces better results than the usual nonlinear minimisation techniques. This algorithm is further improved by an original data decimation that allows the sampling period to be artificially increased without losing between-sample information. These algorithms are tested with real laboratory data

  16. On the low SNR capacity of log-normal turbulence channels with full CSI

    KAUST Repository

    Benkhelifa, Fatma; Tall, Abdoulaye; Rezki, Zouheir; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression for the capacity and show that it scales essentially as λ·SNR, where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression for λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.

  17. Income Distribution and Consumption Deprivation: An Analytical Link

    OpenAIRE

    Sushanta K. Mallick

    2008-01-01

    This article conceives poverty in terms of the consumption of essential food, makes use of a new deprivation (or poverty) function, and examines the effects of changes in the mean and the variance of the income distribution on poverty, assuming a log-normal income distribution. The presence of a saturation level of consumption can be treated as a poverty-line threshold as opposed to an exogenous income-based poverty line. Within such a consumption deprivation approach, the article proves anal...

  18. Fully Burdened Cost of Energy Analysis: A Model for Marine Corps Systems

    Science.gov (United States)

    2013-03-01

    and the lognormal parameters are not used in the creation of the output distribution since they are not required values for a triangular distribution...

  19. Ocular biometric parameters among 3-year-old Chinese children: testability, distribution and association with anthropometric parameters

    Science.gov (United States)

    Huang, Dan; Chen, Xuejuan; Gong, Qi; Yuan, Chaoqun; Ding, Hui; Bai, Jing; Zhu, Hui; Fu, Zhujun; Yu, Rongbin; Liu, Hu

    2016-01-01

    This survey was conducted to determine the testability, distribution and associations of ocular biometric parameters in Chinese preschool children. Ocular biometric examinations, including the axial length (AL) and corneal radius of curvature (CR), were conducted on 1,688 3-year-old subjects by using an IOLMaster in August 2015. Anthropometric parameters, including height and weight, were measured according to a standardized protocol, and body mass index (BMI) was calculated. The testability was 93.7% for the AL and 78.6% for the CR overall, and both measures improved with age. Girls performed slightly better in AL measurements (P = 0.08), and the difference in CR was statistically significant (P < 0.05). The AL distribution was normal in girls (P = 0.12), whereas it was not in boys (P < 0.05). For CR1, all subgroups presented normal distributions (P = 0.16 for boys; P = 0.20 for girls), but the distribution varied when the subgroups were combined (P < 0.05). CR2 presented a normal distribution (P = 0.11), whereas the distribution of the AL/CR ratio was not normal (P < 0.001). Boys exhibited a significantly longer AL, a greater CR and a greater AL/CR ratio than girls (all P < 0.001). PMID:27384307

  1. STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
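
    The closing claim is easy to illustrate: for Geometric Brownian Motion dλ = λ(μ dt + σ dW), log λ(t) is exactly normal, so λ(t) is log-normal. The parameters below are illustrative, not fitted to the simulations.

    ```python
    # Geometric Brownian Motion demo: if d(lam) = lam * (mu dt + sigma dW),
    # then log(lam_t) is exactly normal, so lam_t is log-normally distributed.
    # mu, sigma, and the time span are illustrative, not fitted values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    mu, sigma, t, lam0 = 0.0, 0.6, 1.0, 0.035
    z = rng.standard_normal(100_000)
    lam = lam0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)

    print(stats.normaltest(np.log(lam)).pvalue)   # log-spins pass a normality test
    ```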

  2. Charge distributions in transverse coordinate space and in impact parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Sung [Department of Physics, Sejong University, Seoul 143-747 (Korea, Republic of)], E-mail: dshwang@slac.stanford.edu; Kim, Dong Soo [Department of Physics, Kangnung National University, Kangnung 210-702 (Korea, Republic of); Kim, Jonghyun [Department of Physics, Sejong University, Seoul 143-747 (Korea, Republic of)

    2008-11-27

    We study the charge distributions of the valence quarks inside nucleon in the transverse coordinate space, which is conjugate to the transverse momentum space. We compare the results with the charge distributions in the impact parameter space.

  3. Charge distributions in transverse coordinate space and in impact parameter space

    OpenAIRE

    Hwang, Dae Sung; Kim, Dong Soo; Kim, Jonghyun

    2008-01-01

    We study the charge distributions of the valence quarks inside nucleon in the transverse coordinate space, which is conjugate to the transverse momentum space. We compare the results with the charge distributions in the impact parameter space.

  4. Stability of the laws for the distribution of the cumulative failures in railway transport

    OpenAIRE

    Kirill VOYNOV

    2008-01-01

    There are many different laws of distribution (for example, the bell-shaped (Gaussian), lognormal, Weibull, exponential, uniform, Poisson and Student distributions) which help to describe the real picture of failures of elements in various mechanical systems, including locomotives and carriages. To diminish the possibility of a gross error in the output of mathematical data treatment, a new method is demonstrated in this article. The task is solved both for discrete and for continuous distributions.

  5. Increasing parameter certainty and data utility through multi-objective calibration of a spatially distributed temperature and solute model

    Directory of Open Access Journals (Sweden)

    C. Bandaragoda

    2011-05-01

    Full Text Available To support the goal of distributed hydrologic and instream model predictions based on physical processes, we explore multi-dimensional parameterization determined by a broad set of observations. We present a systematic approach to using various data types at spatially distributed locations to decrease parameter bounds sampled within calibration algorithms that ultimately provide information regarding the extent of individual processes represented within the model structure. Through the use of a simulation matrix, parameter sets are first locally optimized by fitting the respective data at one or two locations and then the best results are selected to resolve which parameter sets perform best at all locations, or globally. This approach is illustrated using the Two-Zone Temperature and Solute (TZTS model for a case study in the Virgin River, Utah, USA, where temperature and solute tracer data were collected at multiple locations and zones within the river that represent the fate and transport of both heat and solute through the study reach. The result was a narrowed parameter space and increased parameter certainty which, based on our results, would not have been as successful if only single objective algorithms were used. We also found that the global optimum is best defined by multiple spatially distributed local optima, which supports the hypothesis that there is a discrete and narrowly bounded parameter range that represents the processes controlling the dominant hydrologic responses. Further, we illustrate that the optimization process itself can be used to determine which observed responses and locations are most useful for estimating the parameters that result in a global fit to guide future data collection efforts.

  6. Parameters of the Two-Phase Sand-Air Stream in the Blowing Process

    Directory of Open Access Journals (Sweden)

    Danko J.

    2012-12-01

    Full Text Available Theoretical problems concerning the determination of the work parameters of the two-phase sand-air stream in the core-making process by blowing methods, as well as experimental methods for determining the main and auxiliary parameters of this process that decide the quality of the cores, assessed by the value and distribution of their apparent density, are presented in the paper. In addition, the results of visualisations of the core-box filling with the sand-air stream from the blowing chamber, obtained by filming the process with a high-speed camera, are presented and compared with the results of simulation calculations performed with the ProCast software.

  7. Discrete Lognormal Model as an Unbiased Quantitative Measure of Scientific Performance Based on Empirical Citation Data

    Science.gov (United States)

    Moreira, Joao; Zeng, Xiaohan; Amaral, Luis

    2013-03-01

    Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators like the h-index are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most widely used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily 'gamed'. The authors acknowledge support from FCT Portugal, and NSF grants

  8. Excitation functions of parameters in Erlang distribution, Schwinger mechanism, and Tsallis statistics in RHIC BES program

    International Nuclear Information System (INIS)

    Gao, Li-Na; Liu, Fu-Hu; Lacey, Roy A.

    2016-01-01

    Experimental results of the transverse-momentum distributions of φ mesons and Ω hyperons produced in gold-gold (Au-Au) collisions with different centrality intervals, measured by the STAR Collaboration at different energies (7.7, 11.5, 19.6, 27, and 39 GeV) in the beam energy scan (BES) program at the relativistic heavy-ion collider (RHIC), are approximately described by the single Erlang distribution and the two-component Schwinger mechanism. Moreover, the STAR experimental transverse-momentum distributions of negatively charged particles, produced in Au-Au collisions at RHIC BES energies, are approximately described by the two-component Erlang distribution and the single Tsallis statistics. The excitation functions of free parameters are obtained from the fit to the experimental data. A weak softest point in the string tension in Ω hyperon spectra is observed at 7.7 GeV. (orig.)

  9. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN, which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.

  10. Environmental Transmission Electron Microscopy Study of the Origins of Anomalous Particle Size Distributions in Supported Metal Catalysts

    DEFF Research Database (Denmark)

    Benavidez, Angelica D.; Kovarik, Libor; Genc, Arda

    2012-01-01

    of the particle size distribution (PSD). The abundance of the larger particles did not fit the log-normal distribution. We can rule out sample nonuniformity as a cause for the growth of these large particles, since images were recorded prior to heat treatments. The anomalous growth of these particles may help...

  11. Radon parameters in outdoor air

    International Nuclear Information System (INIS)

    Porstendoerfer, J.; Zock, Ch.; Wendt, J.; Reineking, A.

    2002-01-01

    For dose estimation by inhalation of the short-lived radon progeny in outdoor air, the equilibrium factor (F), the unattached fraction (f_p), and the activity size distribution of the radon progeny were measured. Besides the radon parameters, meteorological parameters such as temperature, wind speed, and rainfall intensity were registered. The measurements were carried out continuously for several weeks to determine the variation with time (day/night) and with weather conditions. The radon gas and the unattached and aerosol-attached radon progeny were measured with a monitor developed for continuous measurements in outdoor air with low activity concentrations. For the determination of the activity size distribution, a low-pressure online alpha cascade impactor was used. The measured values of the equilibrium factor varied between 0.5-0.8, depending on weather conditions and time of day. For high-pressure weather conditions a diurnal variation of the F-factor was obtained. A lower average value (F=0.25) was registered during rainy days. The obtained f_p-values varied between 0.04 and 0.12; they were higher than expected. The measured activity size distribution of the radon progeny, averaged over a measurement period of three weeks, can be approximated by a sum of three log-normal distributions. The greatest activity fraction is adsorbed on aerosol particles in the accumulation size range (100-1000 nm), with activity median diameters and geometric standard deviations between 250-450 nm and 1.5-3.0, respectively. The activity median diameter of this accumulation mode in outdoor air was significantly greater than in indoor air (150-250 nm). An influence of the weather conditions on the activity of the accumulation particles was not significant. In contrast to the results of measurements in houses, a small but significant fraction of the radon progeny (average value: 2%) is attached to coarse particles (>1000 nm). This fraction varied between 0-10%.

  12. Positive random variables with a discrete probability mass at the origin: Parameter estimation for left-censored samples with application to air quality monitoring data

    International Nuclear Information System (INIS)

    Gogolak, C.V.

    1986-11-01

    The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored 85 Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
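
    The procedure in question is an EM-type iteration; a minimal two-component version, with synthetic data standing in for any real sample, looks like this.

    ```python
    # Minimal EM iteration for a two-component normal mixture, the kind of
    # iterative maximum-likelihood procedure the report analyzes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    data = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of each component for each point.
        dens = w * stats.norm.pdf(data[:, None], mu, sd)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(data)
        mu = (resp * data[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

    print(w, mu, sd)   # should recover roughly (0.6, 0.4), (0, 4), (1, 1.5)
    ```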

  14. Multiplicity distributions in impact parameter space

    International Nuclear Information System (INIS)

    Wakano, Masami

    1976-01-01

    A definition of the average multiplicity of pions as a function of momentum transfer and total energy in high energy proton-proton collisions is proposed, using the n-pion production differential cross section with the given momentum transfer from a proton to the other final products and the given energy of the latter. Contributions from nondiffractive and diffractive processes are formulated in a multi-Regge model. We define a relationship between impact parameter and momentum transfer, in the sense of classical theory, for inelastic processes, and we obtain the average multiplicity of pions as a function of impact parameter and total energy from the corresponding quantity mentioned above. By comparing this quantity with the square root of the opaqueness at a given impact parameter, we conclude that the overlap of localized constituents is important in determining the opaqueness at a given impact parameter in a collision of two hadrons. (auth.)

  15. From conservation laws to port-Hamiltonian representations of distributed-parameter systems

    NARCIS (Netherlands)

    Maschke, B.M.; van der Schaft, Arjan; Piztek, P.

    Abstract: In this paper it is shown how the port-Hamiltonian formulation of distributed-parameter systems is closely related to the general thermodynamic framework of systems of conservation laws and closure equations. The situation turns out to be similar to the lumped-parameter case where the

  16. Effect of two-temperature electrons distribution on an electrostatic plasma sheath

    International Nuclear Information System (INIS)

    Ou, Jing; Xiang, Nong; Gan, Chunyun; Yang, Jinhong

    2013-01-01

    A magnetized collisionless plasma sheath containing two-temperature electrons is studied using a one-dimensional model in which the low-temperature electrons are described by a Maxwellian distribution (MD) and the high-temperature electrons by a truncated Maxwellian distribution (TMD). Based on the ion wave approach, a modified sheath criterion is established theoretically that includes the effect of the TMD arising when high-temperature electron energies exceed the sheath potential energy. The model is also used to investigate numerically the sheath structure and the energy flux to the wall for plasma parameters typical of an open-divertor tokamak. Our results show that the profiles of the sheath potential, the densities of the two electron populations and the ions, the velocities of the high-temperature electrons and ions, and the energy flux to the wall depend on the high-temperature electron concentration, temperature, and velocity distribution function associated with the sheath potential. In addition, the results obtained for sheaths with high-temperature electrons described by the TMD and by the MD are compared for different sheath potentials.

  17. Global distribution of urban parameters derived from high-resolution global datasets for weather modelling

    Science.gov (United States)

    Kawano, N.; Varquez, A. C. G.; Dong, Y.; Kanda, M.

    2016-12-01

    Numerical models such as the Weather Research and Forecasting model coupled with a single-layer Urban Canopy Model (WRF-UCM) are powerful tools for investigating the urban heat island. Urban parameters such as average building height (Have), plan area index (λp) and frontal area index (λf) are necessary inputs for the model. In general, these parameters are assumed uniform in WRF-UCM, but this leads to an unrealistic urban representation. Spatially distributed urban parameters can also be incorporated into WRF-UCM to capture detailed urban effects. The problem is that distributed building information is not readily available for most megacities, especially in developing countries. Furthermore, acquiring real building parameters often requires a huge amount of time and money. In this study, we investigated the potential of using globally available satellite-captured datasets for the estimation of the parameters Have, λp, and λf. The global datasets comprised a high spatial resolution population dataset (LandScan by Oak Ridge National Laboratory), nighttime lights (NOAA), and vegetation fraction (NASA). True samples of Have, λp, and λf were acquired from actual building footprints from satellite images and from 3D building databases of Tokyo, New York, Paris, Melbourne, Istanbul, Jakarta and so on. Regression equations were then derived from the block-averaging of spatial pairs of real parameters and global datasets. Results show that two regression curves are necessary to estimate Have and λf from the combination of population and nightlight, depending on the city's level of development. An index which can be used to decide which equation to use for a city is the Gross Domestic Product (GDP). On the other hand, λp has less dependence on GDP but shows a negative relationship to vegetation fraction. Finally, a simplified but precise approximation of urban parameters through readily available, high-resolution global datasets and our derived regressions can be utilized to estimate a

  18. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    Science.gov (United States)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
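
    A small simulation of the monkey model, with letter probabilities taken as spacings from a random division of the unit interval and a finite word-length cutoff (alphabet size and cutoff kept small here for speed): the ranked word probabilities in the body of the distribution give a log-log slope near -1.

    ```python
    # Monkey-typing simulation: letter probabilities are the spacings of a
    # random division of [0, 1]; a word ends when the space symbol is typed.
    # Alphabet size and length cutoff are kept small for speed. The ranked
    # word probabilities should show a log-log slope near -1 in the body.
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(9)
    n_letters, p_space, max_len = 8, 0.2, 5

    cuts = np.sort(rng.uniform(0, 1, n_letters - 1))
    letter_p = np.diff(np.concatenate([[0.0], cuts, [1.0]])) * (1 - p_space)

    probs = []
    for length in range(1, max_len + 1):
        for word in product(letter_p, repeat=length):
            probs.append(np.prod(word) * p_space)

    probs = np.sort(probs)[::-1]
    rank = np.arange(1, len(probs) + 1)
    mid = slice(len(probs) // 100, len(probs) // 2)   # body of the distribution
    slope = np.polyfit(np.log(rank[mid]), np.log(probs[mid]), 1)[0]
    print(slope)
    ```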

  19. Temporal distribution of earthquakes using renewal process in the Dasht-e-Bayaz region

    Science.gov (United States)

    Mousavi, Mehdi; Salehi, Masoud

    2018-01-01

    The temporal distribution of earthquakes with Mw > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models. In these types of models, it is assumed that the times between consecutive large earthquakes follow a certain statistical distribution. For this purpose, four time-dependent inter-event distributions, namely the Weibull, Gamma, Lognormal, and Brownian Passage Time (BPT) distributions, are used in this study, and the associated parameters are estimated by the method of maximum likelihood. The suitable distribution is selected based on the log-likelihood function and the Bayesian Information Criterion. The probability of the occurrence of the next large earthquake during a specified interval of time was calculated for each model. Then, the concept of conditional probability was applied to forecast the next major (Mw > 6) earthquake at the site of interest. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the obtained results, the probability of occurrence of an earthquake with Mw > 6 in the near future is significantly high.
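
    The conditional-probability step can be sketched as follows: given a fitted inter-event distribution F and an elapsed time t since the last event, the probability of an event within the next Δt is (F(t+Δt) - F(t))/(1 - F(t)). The inter-event times below are synthetic, not the Dasht-e-Bayaz catalog.

    ```python
    # Conditional probability of the next large event: with fitted inter-event
    # CDF F and elapsed quiet time t, P(event within dt more years) is
    # (F(t + dt) - F(t)) / (1 - F(t)). Intervals below are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    intervals = rng.weibull(1.4, 20) * 12.0          # synthetic inter-event times (yr)

    shape, loc, scale = stats.weibull_min.fit(intervals, floc=0)
    F = lambda t: stats.weibull_min.cdf(t, shape, loc, scale)

    t_elapsed, dt = 8.0, 5.0
    p = (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))
    print(f"P(event in next {dt:.0f} yr | {t_elapsed:.0f} yr quiet) = {p:.2f}")
    ```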

  1. Distribution and Parameter's Calculations of Television Cameras Inside a Nuclear Facility

    International Nuclear Information System (INIS)

    El-kafas, A.A.

    2009-01-01

    In this work, the distribution of television cameras and the calculation of their parameters inside and outside a nuclear facility are presented. Both the exterior and interior camera systems are described and explained. The work shows the overall closed-circuit television system. Fixed and moving cameras with various lens formats and different angles of view are used. Calculations of the width of the image sensitive area and of the lens focal length for the cameras are introduced. The work shows the camera locations and distributions inside and outside the nuclear facility. The technical specifications and parameters for camera selection are tabulated

  2. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Laskaris, Nikolaos, E-mail: nick.laskaris@gmail.com [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece); Liritzis, Ioannis, E-mail: liritzis@rhodes.aegean.gr [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece)

    2011-09-15

    The attenuation of sunlight through different rock surfaces, and the resetting of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residual clock by sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. The modeling of the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and the exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined per type of rock or marble quarry. The new model is applied to available data and age determination tests. - Highlights: • Study of attenuation of sunlight through different rock surfaces. • Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. • A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. • The new model (cumulative log-normal fitting) is applied to available data and age determination tests.
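
    A sketch of the fitting step, using invented depth/residual-luminescence pairs: the cumulative log-normal curve is fitted by least squares, giving a shape parameter and a median attenuation depth per rock type.

    ```python
    # Least-squares fit of a cumulative log-normal curve to residual
    # luminescence versus sunlight path length (depth). The depth/residual
    # pairs below are invented for illustration.
    import numpy as np
    from scipy import stats, optimize

    depth    = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0])        # mm
    residual = np.array([0.02, 0.06, 0.18, 0.46, 0.68, 0.88, 0.96])  # L/L0

    def cum_lognorm(x, s, scale):
        return stats.lognorm.cdf(x, s, scale=scale)

    (s_hat, scale_hat), _ = optimize.curve_fit(cum_lognorm, depth, residual, p0=[0.8, 3.0])
    print(f"shape = {s_hat:.2f}, median attenuation depth = {scale_hat:.2f} mm")
    ```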

  3. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    International Nuclear Information System (INIS)

    Laskaris, Nikolaos; Liritzis, Ioannis

    2011-01-01

    The attenuation of sunlight through different rock surfaces, and the resetting of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residual clock by sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. The modeling of the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and the exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined per type of rock or marble quarry. The new model is applied to available data and age determination tests. - Highlights: → Study of attenuation of sunlight through different rock surfaces. → Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. → A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. → The new model (cumulative log-normal fitting) is applied to available data and age determination tests.

  4. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  5. Stability of the laws for the distribution of the cumulative failures in railway transport

    Directory of Open Access Journals (Sweden)

    Kirill VOYNOV

    2008-01-01

    Full Text Available There are many different distribution laws (for example, the bell-shaped Gaussian distribution and the lognormal, Weibull, exponential, uniform, Poisson and Student distributions) which help to describe the real picture of failures of elements in various mechanical systems, including locomotives and carriages. To reduce the possibility of gross errors in the output of mathematical data treatment, a new method is demonstrated in this article. The task is solved for both discrete and continuous distributions.

  6. Eliciting hyperparameters of prior distributions for the parameters of paired comparison models

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2013-02-01

    Full Text Available In the study of paired comparisons (PC), items may be ranked or issues may be prioritized through the subjective assessment of certain judges. PC models are developed and then used to serve the purpose of ranking. PC models may be studied through a classical or a Bayesian approach. Bayesian inference is a modern statistical technique used to draw conclusions about population parameters. Its beauty lies in incorporating prior information about the parameters into the analysis in addition to current information (i.e. data). The prior and current information are formally combined to yield a posterior distribution of the population parameters, which is the workbench of Bayesian statisticians. However, the problems Bayesians face concern the selection and formal utilization of the prior distribution. Once the type of prior distribution has been decided, the problem of estimating the parameters of the prior distribution (i.e. elicitation) still persists. Different methods have been devised to serve this purpose. In this study an attempt is made to use Minimum Chi-square (henceforth MCS) for the elicitation purpose. Though it is a classical estimation technique, it is used here for elicitation. The entire elicitation procedure is illustrated through a numerical data set.

  7. Improving control and estimation for distributed parameter systems utilizing mobile actuator-sensor network.

    Science.gov (United States)

    Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian

    2014-07-01

    This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized for improving performance in distributed parameter systems. Using the Lyapunov stability theorem, the velocity of each moving actuator/sensor agent is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms which penalize the disagreement of the estimates are considered. Both filters result in well-posedness of the collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

    The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for situations in which the observed value of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
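
    As a rough illustration of this kind of Bayesian-type calculation (not the report's exact numerical procedure), a conjugate gamma prior on a Poisson rate gives the interval [a,b] directly from posterior quantiles. The hyperparameters alpha0, beta0 and the observed count c below are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical prior belief about the Poisson rate lambda: Gamma(alpha0, beta0).
    alpha0, beta0 = 2.0, 1.0
    c = 7  # observed count in one unit observation period

    # Conjugacy: the posterior is Gamma(alpha0 + c, beta0 + 1).
    posterior = stats.gamma(a=alpha0 + c, scale=1.0 / (beta0 + 1.0))

    # Central 95% interval [a, b] with Pr(a <= lambda <= b | c, prior) = 0.95.
    a, b = posterior.ppf([0.025, 0.975])
    print(f"95% interval for lambda: [{a:.2f}, {b:.2f}]")
    ```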

  9. Distributed parameter modeling and simulation for the evaporation system of a controlled circulation boiler based on 3-D combustion monitoring

    International Nuclear Information System (INIS)

    Chu Yuntao; Lou Chun; Cheng Qiang; Zhou Huaichun

    2008-01-01

    In this paper, a dynamic, distributed parameter model for the evaporation system of a controlled circulation boiler was developed. As an essential basis, the 3-D temperature distribution and the average emissivity of the particle phase inside the furnace can be obtained by a flame image processing technique from multiple visible flame image detectors in a real-time combustion monitoring system. The transient, 2-D radiation flux can then be obtained by solving a set of energy balance equations for the water wall elements, which serves as a distributed boundary condition for the dynamic, distributed parameter model proposed for the evaporation system. For large changes in boiler load, two important parameters, the correction factor of the equivalent flame emissivity and the coefficient of the steam mass flow rate at the outlet of the drum, were determined using operation data from a 300 MW boiler. The model was validated by comparing the simulation results for the main steam parameters of the system with measurements. In addition, the transient distributions of parameters such as steam quality and mass velocity were predicted by the model. This model can be used for on-line calculation or off-line prediction of local abnormal phenomena occurring on the water walls, forming an important basis for effectively evaluating the security and reliability of a power plant boiler.

  10. Box-Cox transformation of firm size data in statistical analysis

    Science.gov (United States)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm-size measures: the number of employees and sales. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm-size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm-size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.

  11. Box-Cox transformation of firm size data in statistical analysis

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-01-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm-size measures: the number of employees and sales. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm-size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm-size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
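
    A minimal sketch of the selection criterion described above: grid-search the Box-Cox parameter λ so that the transformed sample's excess kurtosis is closest to zero, the value for a normal distribution. The sample below is hypothetical; for lognormal-like data the search should return λ near 0 (i.e., essentially a log-transformation).

    ```python
    import numpy as np
    from scipy import stats

    def boxcox_by_kurtosis(x, lambdas=np.linspace(-1.0, 2.0, 301)):
        """Pick the Box-Cox lambda whose transform has excess kurtosis
        closest to 0 (the value for a normal distribution). Requires x > 0."""
        best_lam, best_err = None, np.inf
        for lam in lambdas:
            y = stats.boxcox(x, lmbda=lam)
            err = abs(stats.kurtosis(y))  # Fisher (excess) kurtosis
            if err < best_err:
                best_lam, best_err = lam, err
        return best_lam

    # Hypothetical right-skewed "firm size" sample.
    sizes = np.random.default_rng(0).lognormal(mean=4.0, sigma=1.2, size=5000)
    print(boxcox_by_kurtosis(sizes))
    ```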

  12. Adjustments of probability distribution functions to global solar radiation in Rio Grande do Sul State

    Directory of Open Access Journals (Sweden)

    Alberto Cargnelutti Filho

    2004-12-01

    Full Text Available The objective of this work was to verify the fit of the series of ten-day mean global solar radiation data, from 22 municipalities of Rio Grande do Sul State, Brazil, to the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions. The Kolmogorov-Smirnov goodness-of-fit test was applied to the 792 data series (22 municipalities x 36 ten-day periods) of ten-day mean global solar radiation to verify the fit of the data to the normal, log-normal, gamma, Gumbel and Weibull distributions, totaling 3,960 tests. The ten-day mean global solar radiation data fit the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions, and are best fitted by the normal probability distribution function.
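
    A sketch of this kind of goodness-of-fit screening: fit each candidate distribution by maximum likelihood, then apply the Kolmogorov-Smirnov test. The candidate set mirrors the one used above; the sample data are hypothetical, and note that estimating parameters from the same sample makes the nominal KS p-values optimistic (a Lilliefors-type correction would be stricter).

    ```python
    import numpy as np
    from scipy import stats

    candidates = {
        "normal": stats.norm,
        "log-normal": stats.lognorm,
        "gamma": stats.gamma,
        "gumbel": stats.gumbel_r,
        "weibull": stats.weibull_min,
    }

    def ks_screen(series):
        """Fit each candidate by maximum likelihood, then KS-test the fitted cdf."""
        out = {}
        for label, dist in candidates.items():
            params = dist.fit(series)
            stat, p = stats.kstest(series, dist.name, args=params)
            out[label] = (stat, p)
        return out

    # Hypothetical ten-day mean radiation series (MJ m^-2 day^-1).
    x = np.random.default_rng(1).normal(18.0, 4.0, 36)
    for label, (stat, p) in ks_screen(x).items():
        print(f"{label:10s}  D = {stat:.3f}  p = {p:.3f}")
    ```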

  13. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles. In this article, we study the speed distribution of mixed bicycle traffic, which has been ignored in the past. Field speed data were collected in Hangzhou, China, at different survey sites and under different traffic conditions and percentages of electric bicycles. The statistical results of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used to test the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data very well. The relationships between mean speed and electric bicycle proportions were then described using linear regression models, from which the mean speed for purely electric or purely regular bicycle traffic can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
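
    A small sketch of the regression step: fit mean mixed-flow speed against e-bike share p and extrapolate to p = 0 and p = 1. The observations below are hypothetical, constructed only to be consistent with the reported means (about 13.5 and 17.1 km/h).

    ```python
    import numpy as np

    # Hypothetical site observations: e-bike share p and mean mixed-flow speed (km/h).
    p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    v = np.array([13.8, 14.6, 15.3, 16.0, 16.7])

    slope, intercept = np.polyfit(p, v, 1)
    v_regular = intercept            # extrapolated mean speed of pure regular-bicycle flow
    v_electric = intercept + slope   # extrapolated mean speed of pure e-bike flow
    print(f"v = {intercept:.2f} + {slope:.2f} p;  p=0: {v_regular:.2f}, p=1: {v_electric:.2f}")
    ```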

  14. Dielectric and electro-optical parameters of two ferroelectric liquid crystals: a comparative study

    International Nuclear Information System (INIS)

    Kumar Misra, Abhishek; Kumar Srivastava, Abhishek; Shukla, J P; Manohar, Rajiv

    2008-01-01

    Dielectric relaxation and an electro-optical study of two ferroelectric liquid crystals having different spontaneous polarizations (Felix 16/100 and Felix 17/000), showing SmC* and SmA phases, have been performed in the temperature range 30-80 °C. The experimental data have been used to determine different relaxation parameters, viz. the distribution parameter, relaxation frequency, dielectric strength and rotational viscosity. The Goldstone mode of dielectric permittivity has been well observed for both samples under investigation. The activation energy of both samples has also been determined by the best theoretical fitting of the Arrhenius plot. We have also evaluated the optical response time and anchoring energy coefficients from electro-optical measurement techniques for these samples.

  15. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Full Text Available Composite models have received much attention in the recent actuarial literature to describe heavy-tailed insurance loss data. One of the models that presents a good performance in describing this kind of data is the composite Weibull-Pareto (CWL) distribution. In this note, the distribution is revisited to carry out parameter estimation via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  16. Distributions asymptotically homogeneous along the trajectories determined by one-parameter groups

    International Nuclear Information System (INIS)

    Drozhzhinov, Yurii N; Zav'yalov, Boris I

    2012-01-01

    We give a complete description of distributions that are asymptotically homogeneous (including the case of critical index of the asymptotic scale) along the trajectories determined by continuous multiplicative one-parameter transformation groups such that the real parts of all eigenvalues of the infinitesimal matrix are positive. To do this, we introduce and study special spaces of distributions. As an application of our results, we describe distributions that are homogeneous along such groups.

  17. Football goal distributions and extremal statistics

    Science.gov (United States)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  18. Statistical substantiation of introduction of the distributions containing lifetime as thermodynamic parameter

    OpenAIRE

    Ryazanov, V. V.

    2007-01-01

    By means of an information inequality and a parametrization of a family of probability distributions admitting an efficient estimator, the introduction of distributions containing the time of first passage to a level as an internal thermodynamic parameter is substantiated.

  19. Two-parameter asymptotics in magnetic Weyl calculus

    International Nuclear Information System (INIS)

    Lein, Max

    2010-01-01

    This paper is concerned with small parameter asymptotics of magnetic quantum systems. In addition to a semiclassical parameter ε, the case of small coupling λ to the magnetic vector potential naturally occurs in this context. Magnetic Weyl calculus is adapted to incorporate both parameters, at least one of which needs to be small. Of particular interest is the expansion of the Weyl product, which can be used to expand the product of operators in a small parameter, a technique prominent in obtaining perturbation expansions. Three asymptotic expansions for the magnetic Weyl product of two Hörmander class symbols are proven: (i) ε ≪ 1 and λ ≪ 1, (ii) ε ≪ 1 and λ = 1, and (iii) ε = 1 and λ ≪ 1. Expansions (i) and (iii) are impossible to obtain with ordinary Weyl calculus. Furthermore, I relate the results derived by ordinary Weyl calculus with those obtained with magnetic Weyl calculus by one- and two-parameter expansions. To show the power and versatility of magnetic Weyl calculus, I derive the semirelativistic Pauli equation as a scaling limit from the Dirac equation up to errors of fourth order in 1/c.

  20. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    Science.gov (United States)

    Blumberg, Leonid M

    2017-03-31

    If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling where ΔS(T) and ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_ref) - the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
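
    For reference, the classic two-parameter van't Hoff form and the standard Clarke-Glew extension with a constant heat-capacity term ΔC_p are (these are the general textbook forms, not necessarily the exact parameterization used in the paper):

    $$\ln K(T) = \frac{\Delta S(T)}{R} - \frac{\Delta H(T)}{R\,T}, \qquad \Delta H(T) = \Delta H_{\mathrm{ref}} + \Delta C_p\,(T - T_{\mathrm{ref}}), \qquad \Delta S(T) = \Delta S_{\mathrm{ref}} + \Delta C_p \ln\frac{T}{T_{\mathrm{ref}}}$$

    The K-centric idea described above replaces the shared reference temperature T_ref with a shared reference distribution coefficient, so the reference state stays chromatographically meaningful for every analyte.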

  1. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.

  2. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Full Text Available Abstract Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods: Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Results: Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions: Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  3. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
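
    A minimal sketch of the recommended bootstrap approach, assuming a 2-parameter Weibull time-to-event model (the distribution family and data are placeholders, not the study's case data): each bootstrap refit yields a correlated (shape, scale) pair, and each probabilistic sensitivity analysis iteration can draw one such pair.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def bootstrap_weibull_draws(times, n_boot=1000):
        """Non-parametric bootstrap: refit a 2-parameter Weibull to resampled
        patients, giving draws of (shape, scale) that reflect parameter
        uncertainty, including the correlation between the two parameters."""
        draws = np.empty((n_boot, 2))
        for b in range(n_boot):
            resample = rng.choice(times, size=len(times), replace=True)
            shape, _, scale = stats.weibull_min.fit(resample, floc=0.0)
            draws[b] = (shape, scale)
        return draws

    # Hypothetical time-to-event data (months) for n = 25 patients.
    times = rng.weibull(1.4, 25) * 20.0
    draws = bootstrap_weibull_draws(times, n_boot=500)
    print(draws.mean(axis=0), draws.std(axis=0))
    ```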

  4. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio; Sawlan, Zaid A; Scavino, Marco; Tempone, Raul

    2016-01-01

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. This function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.

  5. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2015-01-07

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. This function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.

  6. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2016-01-06

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. This function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.
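
    A much-simplified sketch of this workflow (known boundary conditions rather than the paper's analytically marginalized noisy ones, and all numbers illustrative): an explicit finite-difference forward solver for the 1-D heat equation, a lognormal prior on the diffusivity κ, and a Laplace approximation of the posterior in θ = log κ.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def solve_heat(kappa, nx=51, nt=400, t_end=0.1):
        """Explicit FTCS solver for u_t = kappa u_xx on [0,1], u = 0 at both ends."""
        dx = 1.0 / (nx - 1)
        dt = t_end / nt
        u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))  # initial condition
        r = kappa * dt / dx**2                          # stability needs r <= 0.5
        for _ in range(nt):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return u

    kappa_true, sigma = 0.5, 0.01
    data = solve_heat(kappa_true) + sigma * np.random.default_rng(0).normal(size=51)

    def neg_log_post(theta, mu0=0.0, s0=1.0):
        """Gaussian likelihood plus lognormal prior on kappa (normal prior on theta)."""
        resid = data - solve_heat(np.exp(theta))
        return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * ((theta - mu0) / s0) ** 2

    # Bounds keep exp(theta) inside the solver's stability region.
    opt = minimize_scalar(neg_log_post, bounds=(-4.0, -0.3), method="bounded")
    theta_map = opt.x
    h = 1e-4  # finite-difference curvature for the Laplace approximation
    curv = (neg_log_post(theta_map + h) - 2.0 * opt.fun + neg_log_post(theta_map - h)) / h**2
    post_sd = 1.0 / np.sqrt(curv)  # Gaussian posterior sd of log(kappa)
    print(np.exp(theta_map), post_sd)
    ```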

  7. Study of the behaviour of radon in soil and the interpretation of radon anomalies in the exploration for uranium

    International Nuclear Information System (INIS)

    Bhatnagar, A.S.

    1975-04-01

    The report presents detailed tables of data on radon distribution patterns to enable an interpretation of the anomalies to be carried out in the process of exploration for uranium. The distribution of radon in soils fits a lognormal pattern. In places where uranium mineralization exists, the distribution pattern is a superposition of two lognormal populations. This method can be used to classify areas and delineate them according to the distribution pattern found over them. The field work was carried out in the Delhi area, in Turumdih and in Udaisagar.

  8. Probabilistic analysis of glass elements with three-parameter Weibull distribution

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, A.; Muniz-Calvente, M.; Fernandez, P.; Fernandez Cantel, A.; Lamela, M. J.

    2015-10-01

    Glass and ceramics present brittle behaviour, so a large scatter in test results is obtained. This dispersion is mainly due to the inevitable presence of micro-cracks on the surface, edge defects or internal defects, which must be taken into account using an appropriate failure criterion that is not deterministic but probabilistic. Among the existing probability distributions, the two- or three-parameter Weibull distribution is generally used in fitting material resistance results, although the method of its use is not always correct. First, in this work, a large experimental programme using annealed glass specimens of different dimensions, based on four-point bending and coaxial double ring tests, was performed. Then, the finite element models made for each type of test, the fitting of the parameters of the three-parameter Weibull cumulative distribution function (λ: location, β: shape, δ: scale) for a certain failure criterion, and the calculation of the effective areas from the cumulative distribution function are presented. In summary, this work aims to generalize the use of the three-parameter Weibull function for structural glass elements with stress distributions that are not analytically described, allowing the proposed probabilistic model to be applied to general loading distributions. (Author)
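
    A minimal fitting sketch under hypothetical data: scipy's weibull_min has exactly the three parameters discussed above (shape, location, scale), so a maximum-likelihood fit and a failure probability at a design stress take only a few lines.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical fracture strengths (MPa) from a four-point bending series.
    strength = np.array([38.1, 41.7, 44.9, 47.2, 49.8,
                         52.5, 55.0, 58.3, 61.9, 66.4])

    # Three-parameter Weibull MLE: shape (beta), location (lambda), scale (delta).
    beta, lam, delta = stats.weibull_min.fit(strength)

    # Failure probability at a 45 MPa design stress under the fitted cdf.
    print(stats.weibull_min.cdf(45.0, beta, loc=lam, scale=delta))
    ```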

  9. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    Science.gov (United States)

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing itself to be an important step to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.

  10. SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment

    International Nuclear Information System (INIS)

    Imae, T; Haga, A; Saotome, N; Kida, S; Nakano, M; Takeuchi, Y; Shiraki, T; Yano, K; Yamashita, H; Nakagawa, K; Ohtomo, K

    2014-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated with planning computed tomography (CT) images of the exhale phase only. The purpose of this study is to evaluate the dose distribution by reconstructing it solely from data acquired during treatment, such as respiratory signals and machine parameters. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including the positions of the multi-leaf collimators, dose rates and integrated monitor units. The respiratory signals were divided into 4 and 10 phases, and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions for each respiratory phase were calculated from plans reconstructed from the respiratory signals and machine parameters acquired during treatment. The doses at the isocenter, the maximum-dose point and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from the projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and the dose differences at the evaluated points between the 4- and 10-phase reconstructions were not significant. Conclusion: The present method successfully evaluated the dose distribution using respiratory signals and machine parameters acquired during treatment. This method is feasible for verifying the actual dose to a moving target.

  11. Mathematical modeling and comparison of protein size distribution in different plant, animal, fungal and microbial species reveals a negative correlation between protein size and protein number, thus providing insight into the evolution of proteomes

    Directory of Open Access Journals (Sweden)

    Tiessen Axel

    2012-02-01

    Full Text Available Abstract Background: The sizes of proteins are relevant to their biochemical structure and biological function. The statistical distribution of protein lengths across a diverse set of taxa can provide hints about the evolution of proteomes. Results: Using the full genomic sequences of over 1,302 prokaryotic and 140 eukaryotic species, two datasets containing 1.2 and 6.1 million proteins were generated and analyzed statistically. The lengthwise distribution of proteins can be roughly described with a gamma-type or log-normal model, depending on the species. However, the shape parameter of the gamma model does not have a fixed value of 2, as previously suggested, but varies between 1.5 and 3 in different species. A gamma model with unrestricted shape parameter best described the distributions in ~48% of the species, whereas the log-normal distribution better described the observed protein sizes in 42% of the species. The restricted gamma function and the sum-of-exponentials distribution fitted better in only ~5% of the species. Eukaryotic proteins have an average size of 472 aa, whereas bacterial (320 aa) and archaeal (283 aa) proteins are significantly smaller (33-40% on average). Average protein sizes in different phylogenetic groups were: Alveolata (628 aa), Amoebozoa (533 aa), Fornicata (543 aa), Placozoa (453 aa), Eumetazoa (486 aa), Fungi (487 aa), Stramenopila (486 aa), Viridiplantae (392 aa). Amino acid composition is biased according to protein size. Protein length correlated negatively with %C, %M, %K, %F, %R, %W, %Y and positively with %D, %E, %Q, %S and %T. Prokaryotic proteins had a different protein-size bias for %E, %G, %K and %M compared to eukaryotes. Conclusions: Mathematical modeling of protein length empirical distributions can be used to assess the quality of small-ORF annotation in genomic releases (detection of too many false positive small ORFs). There is a negative correlation between average protein size and total number of

  12. Particle size distributions of radioactive aerosols measured in workplaces

    International Nuclear Information System (INIS)

    Dorrian, M.-D.; Bailey, M.R.

    1995-01-01

    A survey of published values of Activity Median Aerodynamic Diameter (AMAD) measured in working environments was conducted to assist in the selection of a realistic default AMAD for occupational exposures. Results were compiled from 52 publications covering a wide variety of industries and workplaces. Reported values of AMAD from all studies ranged from 0.12 μm to 25 μm, and most were well fitted by a log-normal distribution with a median value of 4.4 μm. This supports the choice of a 5 μm default AMAD, as a realistic rounded value for occupational exposures, by the ICRP Task Group on Human Respiratory Tract Models for Radiological Protection and its acceptance by ICRP. Both the nuclear power and nuclear fuel handling industries gave median values of approximately 4 μm. Uranium mills gave a median value of 6.8 μm, with AMADs frequently greater than 10 μm. High-temperature and arc saw cutting operations generated submicron particles and, occasionally, bimodal log-normal particle size distributions. It is concluded that, in view of the wide range of AMADs found in the surveyed literature, greater emphasis should be placed on air sampling to characterise aerosol particle size distributions for individual work practices, especially as doses estimated with the new 5 μm default AMAD will not always be conservative. (author)

  13. Calculation uncertainty of distribution-like parameters in NPP of PAKS

    International Nuclear Information System (INIS)

    Szecsenyi, Zsolt; Korpas, Layos

    2000-01-01

    From the reactor-physics point of view, there were two important events at the Paks Nuclear Power Plant this year: Russian-type profiled assemblies were loaded into Unit 3, and a new limitation system was introduced on the same unit. Both events required a number of problems to be solved. One of these problems was the determination of the uncertainty of the quantities in the new limitation system, taking into account the fabrication uncertainties of the profiled assembly. Determining this uncertainty is important in order to guarantee, at the 99.9% level, the avoidance of fuel failure. In this paper the principles of the determination of calculation accuracy, the applied methods and the obtained results are presented for distribution-like parameters. A few elements of the method have been presented at earlier symposia, so in this paper the whole method is just outlined. For example, the GPT method was presented in the following paper: Uncertainty analysis of pin wise power distribution of WWER-440 assembly considering fabrication uncertainties. Finally, in the summary of this paper, additional intrinsic opportunities in the method are presented. (Authors)

  14. Aerosol formation from high-velocity uranium drops: Comparison of number and mass distributions. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rader, D.J.; Benson, D.A.

    1995-05-01

    This report presents the results of an experimental study of the aerosol produced by the combustion of high-velocity molten-uranium droplets produced by the simultaneous heating and electromagnetic launch of uranium wires. These tests are intended to simulate the reduction of high-velocity fragments into aerosol in high-explosive detonations or reactor accidents involving nuclear materials. As reported earlier, the resulting aerosol consists mainly of web-like chain agglomerates. A condensation nucleus counter was used to investigate the decay of the total particle concentration due to coagulation and losses. Number size distributions based on mobility equivalent diameter obtained soon after launch with a Differential Mobility Particle Sizer showed lognormal distributions with an initial count median diameter (CMD) of 0.3 μm and a geometric standard deviation, σ_g, of about 2; the CMD was found to increase and σ_g to decrease with time due to coagulation. Mass size distributions based on aerodynamic diameter were obtained for the first time with a Microorifice Uniform Deposit Impactor, which showed lognormal distributions with mass median aerodynamic diameters of about 0.5 μm and an aerodynamic geometric standard deviation of about 2. Approximate methods for converting between number and mass distributions and between mobility and aerodynamic equivalent diameters are presented.

  15. Aerosol formation from high-velocity uranium drops: Comparison of number and mass distributions. Final report

    International Nuclear Information System (INIS)

    Rader, D.J.; Benson, D.A.

    1995-05-01

    This report presents the results of an experimental study of the aerosol produced by the combustion of high-velocity molten-uranium droplets produced by the simultaneous heating and electromagnetic launch of uranium wires. These tests are intended to simulate the reduction of high-velocity fragments into aerosol in high-explosive detonations or reactor accidents involving nuclear materials. As reported earlier, the resulting aerosol consists mainly of web-like chain agglomerates. A condensation nucleus counter was used to investigate the decay of the total particle concentration due to coagulation and losses. Number size distributions based on mobility equivalent diameter obtained soon after launch with a Differential Mobility Particle Sizer showed lognormal distributions with an initial count median diameter (CMD) of 0.3 μm and a geometric standard deviation, σ_g, of about 2; the CMD was found to increase and σ_g to decrease with time due to coagulation. Mass size distributions based on aerodynamic diameter were obtained for the first time with a Microorifice Uniform Deposit Impactor, which showed lognormal distributions with mass median aerodynamic diameters of about 0.5 μm and an aerodynamic geometric standard deviation of about 2. Approximate methods for converting between number and mass distributions and between mobility and aerodynamic equivalent diameters are presented.
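
    For a lognormal aerosol, the count-to-mass conversion mentioned above is commonly done with the Hatch-Choate relation; with the reported CMD ≈ 0.3 μm and σ_g ≈ 2 it gives a mass median diameter of the same (mobility) diameter type:

    $$\mathrm{MMD} = \mathrm{CMD}\,\exp\!\bigl(3\ln^{2}\sigma_g\bigr) = 0.3\ \mu\mathrm{m}\times e^{3(\ln 2)^2} \approx 1.3\ \mu\mathrm{m}$$

    The measured mass median aerodynamic diameter (~0.5 μm) is smaller, which is plausible given the conversion between mobility and aerodynamic diameters for low-density chain agglomerates; the relation above is exact only for an ideal lognormal aerosol of spheres.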

  16. Probability Distributions for Cyclone Key Parameters and Cyclonic Wind Speed for the East Coast of Indian Region

    Directory of Open Access Journals (Sweden)

    Pradeep K. Goyal

    2011-09-01

    Full Text Available This paper presents a study of the probabilistic distribution of key cyclone parameters and the cyclonic wind speed, conducted by analyzing the cyclone track records obtained from the India Meteorological Department for the east coast region of India. The dataset of historical landfalling storm tracks in India from 1975-2007, with latitude/longitude and landfall locations, is used to map the cyclone tracks in the region of study. Statistical tests were performed to find the best-fit distribution to the track data for each cyclone parameter. These parameters include the central pressure difference, the radius of maximum wind speed, the translation velocity and the track angle with the site, and are used to generate digitally simulated cyclones using wind field simulation techniques. For this, different sets of values for all the cyclone key parameters are generated randomly from their probability distributions. Using these simulated values of the cyclone key parameters, the distribution of wind velocity at a particular site is obtained. The same distribution of wind velocity at the site is also obtained from actual track records and using the distributions of the cyclone key parameters as published in the literature. The simulated distribution is compared with the wind speed distributions obtained from actual track records. The findings are useful in cyclone disaster mitigation.
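
    A sketch of the Monte Carlo step only: sample each key parameter from a fitted marginal and push the draws through a wind-field relation. Both the marginals and the wind-field scaling below are stand-ins, not the distributions or model estimated in the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 10_000

    # Hypothetical fitted marginals for the cyclone key parameters.
    dp = stats.lognorm(s=0.5, scale=20.0).rvs(n, random_state=rng)   # pressure deficit (hPa)
    rmw = stats.gamma(a=2.0, scale=15.0).rvs(n, random_state=rng)    # radius of max wind (km)
    vt = stats.norm(loc=5.0, scale=2.0).rvs(n, random_state=rng)     # translation speed (m/s)

    # Placeholder wind-field relation: gradient-wind-style scaling with
    # the pressure deficit plus a translation-speed contribution.
    v_site = 3.5 * np.sqrt(dp) + 0.5 * vt
    print(np.percentile(v_site, [50, 90, 99]))  # simulated site wind-speed quantiles
    ```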

  17. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and of estimating and predicting those edge parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find the parameters that produce the global minimum error. Then, using the estimated blur parameters, the actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model in root-mean-squared-error experiments fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model is superior to the one-blur-parameter edge model in most cases where edges have varying brightness combinations.

  18. A Dual Power Law Distribution for the Stellar Initial Mass Function

    Science.gov (United States)

    Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett

    2018-05-01

    We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top-down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high-mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion in any time interval. Unlike the MLP, a power-law decay also appears at the low-mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with the physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times, followed by outflow-driven accretion stopping at later times, with the transition happening at a critical time (and therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.

  19. Extracting Galaxy Cluster Gas Inhomogeneity from X-Ray Surface Brightness: A Statistical Approach and Application to Abell 3667

    Science.gov (United States)

    Kawahara, Hajime; Reese, Erik D.; Kitayama, Tetsu; Sasaki, Shin; Suto, Yasushi

    2008-11-01

    Our previous analysis indicates that small-scale fluctuations in the intracluster medium (ICM) from cosmological hydrodynamic simulations follow the lognormal probability density function. In order to test the lognormal nature of the ICM directly against X-ray observations of galaxy clusters, we develop a method of extracting statistical information about the three-dimensional properties of the fluctuations from the two-dimensional X-ray surface brightness. We first create a set of synthetic clusters with lognormal fluctuations around their mean profile given by spherical isothermal β-models, later considering polytropic temperature profiles as well. Performing mock observations of these synthetic clusters, we find that the resulting X-ray surface brightness fluctuations also follow the lognormal distribution fairly well. Systematic analysis of the synthetic clusters provides an empirical relation between the three-dimensional density fluctuations and the two-dimensional X-ray surface brightness. We analyze Chandra observations of the galaxy cluster Abell 3667, and find that its X-ray surface brightness fluctuations follow the lognormal distribution. While the lognormal model was originally motivated by cosmological hydrodynamic simulations, this is the first observational confirmation of the lognormal signature in a real cluster. Finally we check the synthetic cluster results against clusters from cosmological hydrodynamic simulations. As a result of the complex structure exhibited by simulated clusters, the empirical relation between the two- and three-dimensional fluctuation properties calibrated with synthetic clusters when applied to simulated clusters shows large scatter. Nevertheless we are able to reproduce the true value of the fluctuation amplitude of simulated clusters within a factor of 2 from their two-dimensional X-ray surface brightness alone. Our current methodology combined with existing observational data is useful in describing and inferring the

  20. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility: A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of the implied normal volatility in terms of the time-value of a European call option. Next, we formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
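
    For orientation, the well-known at-the-money relation between the two volatilities (a standard expansion stated here independently of the paper's derivation, with F the forward and T the maturity) is

    $$\sigma_{\mathrm{N}} \;\approx\; \sigma_{\mathrm{LN}}\, F \left(1 - \frac{\sigma_{\mathrm{LN}}^{2}\, T}{24}\right)$$

    so to leading order the normal (Bachelier) volatility is simply the lognormal (Black) volatility scaled by the forward; the paper's contribution is to extend such equivalences to any strike and any model.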

  1. Influence of experimental parameters inherent to optical fibers on Quantum Key Distribution, the protocol BB84

    Directory of Open Access Journals (Sweden)

    L. Bouchoucha

    2018-03-01

    Full Text Available In this work, we present the principle of quantum cryptography (QC), which is based on fundamental laws of quantum physics. QC, or Quantum Key Distribution (QKD), uses various protocols to exchange a secret key between two communicating parties. This research paper focuses on and examines quantum key distribution using the BB84 protocol, in the case of encoding on single-photon polarization, and shows the influence of the parameters of the optical components on the quantum key distribution. We also introduce the Quantum Bit Error Rate (QBER) to better interpret our results and show its relationship to the intrusion on the optical channel of the eavesdropper, called Eve, who seeks to exploit these vulnerabilities.
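
    A self-contained sketch of how QBER exposes an eavesdropper in BB84 (an idealized lossless simulation, not the paper's fiber model): under a full intercept-resend attack, Eve's random basis choices corrupt about 25% of the sifted key.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    alice_bits = rng.integers(0, 2, n)
    alice_bases = rng.integers(0, 2, n)      # 0 = rectilinear, 1 = diagonal

    # Eve intercepts every photon: correct bit if her basis matches, else random.
    eve_bases = rng.integers(0, 2, n)
    eve_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))

    # Bob measures the photon Eve resends.
    bob_bases = rng.integers(0, 2, n)
    bob_bits = np.where(bob_bases == eve_bases, eve_bits, rng.integers(0, 2, n))

    sift = alice_bases == bob_bases          # keep only matching-basis rounds
    qber = np.mean(alice_bits[sift] != bob_bits[sift])
    print(f"QBER under full intercept-resend ≈ {qber:.3f}")  # theory: 0.25
    ```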

  2. A "total parameter estimation" method in the varification of distributed hydrological models

    Science.gov (United States)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlet. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater-level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water-cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate the physical processes within a catchment, we can get a more realistic representation of the actual water cycle in the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is heavily concentrated in the rainy season from June to August each year. During the other months, many of the perennial rivers within the basin dry up. Thus, relying on runoff simulation alone does not fully utilize a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in

  3. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how the tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment, and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã_TCP=0.99) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At was affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus, and to a lesser extent when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã_TCP=0.99 increased as the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã_TCP=0.99 was not affected for 103mRh and 211At as the activity distribution became more heterogeneous. Ã_TCP=0.99 for 90Y and 177Lu was not affected by different activity distributions, either macroscopic or subcellular.
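
    A minimal sketch of why heterogeneity hurts TCP, assuming a Poisson TCP model with simple exponential cell survival SF(D) = exp(-D/D0) and lognormally distributed cell doses (the survival model and all numbers are illustrative, not the study's dosimetry):

    ```python
    import numpy as np

    def tcp_lognormal(n_cells, mean_dose, cv, d0=1.0):
        """Poisson TCP with lognormal cell doses: TCP = exp(-N * E[exp(-D/D0)]).
        The expectation is computed by quadrature over a standard-normal grid."""
        sigma2 = np.log(1.0 + cv**2)
        mu = np.log(mean_dose) - 0.5 * sigma2
        z = np.linspace(-8.0, 8.0, 4001)              # standard-normal grid
        w = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        d = np.exp(mu + np.sqrt(sigma2) * z)          # lognormal dose at each node
        mean_sf = np.sum(w * np.exp(-d / d0)) * (z[1] - z[0])
        return np.exp(-n_cells * mean_sf)

    # At a fixed mean dose, a wider dose distribution (larger CV) leaves more
    # under-dosed cells alive and drives TCP down.
    for cv in (0.1, 0.5, 1.0):
        print(cv, tcp_lognormal(n_cells=1e5, mean_dose=25.0, cv=cv))
    ```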

  4. Measurement of local two-phase flow parameters of nanofluids using conductivity double-sensor probe.

    Science.gov (United States)

    Park, Yu Sun; Chang, Soon Heung

    2011-04-04

    A two-phase flow experiment using air and a water-based γ-Al2O3 nanofluid was conducted to observe the basic hydraulic phenomena of nanofluids. The local two-phase flow parameters were measured with a conductivity double-sensor two-phase void meter. The void fraction, interfacial velocity, interfacial area concentration, and mean bubble diameter were evaluated, and all of the results for the nanofluid were compared with the corresponding results for pure water. The void fraction distribution was more flattened in the nanofluid case than in the pure water case. A higher interfacial area concentration resulted in a smaller mean bubble diameter in the case of the nanofluid. This was the first attempt to measure the local two-phase flow parameters of nanofluids using a conductivity double-sensor two-phase void meter. Throughout this experimental study, the differences in the internal two-phase flow structure of the nanofluid were identified. In addition, the heat transfer enhancement of the nanofluid may result from the increase in the interfacial area concentration, which represents the area available for heat and mass transfer.

  5. Identification of Synchronous Generator Electric Parameters Connected to the Distribution Grid

    Directory of Open Access Journals (Sweden)

    Frolov M. Yu.

    2017-04-01

    Full Text Available According to modern trends, power grids with distributed generation will have an open system architecture. It means that active consumers, owners of distributed power units, including mobile units, must have free access to the grid, much as when using the internet, so plug-and-play technologies are necessary. Thanks to them, the system will be able to identify the unit type and the unit parameters. Therefore, the main aim of the research described in the paper was to develop and study a new method of identifying the electric parameters of a synchronous generator. The main feature of the proposed method is that parameter identification is performed while the generator is being connected to the grid, so it fits into the normal operating procedure of the machine and does not affect its connection time. Implementing the method requires neither creating operation modes that are dangerous for the machine nor additional expensive equipment, and it can be used for salient-pole machines and round-rotor machines. The identification accuracy can be improved by accounting more precisely for the electromechanical transient process and by forming an overdetermined system with a much larger number of equations. Parameter identification is performed at each connection of the generator to the grid; by comparing the data obtained from each connection, mean values can be found numerically, so each subsequent identification refines the machine parameters.

  6. Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-08-15

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyzing the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and the ideal relaying receiver, which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel is shown to deteriorate the performance in all the relaying and EH protocols considered.
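
    The ergodic outage probability studied here is straightforward to estimate by Monte Carlo; below is a minimal sketch for the half-duplex DF case with time-switching EH, assuming unit-mean log-normal power gains and illustrative values for the threshold rate, harvesting efficiency and noise power (none of these numbers come from the paper).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def outage_dh_df_ts(sigma_db=4.0, alpha=0.3, eta=0.7, P=1.0,
                        N0=1e-2, R=1.0, n=200_000):
        """Monte Carlo ergodic outage of a dual-hop HD DF relay with
        time-switching energy harvesting over log-normal fading.
        All parameter values are illustrative placeholders."""
        sigma = sigma_db * np.log(10) / 10        # dB spread -> natural log
        mu = -sigma**2 / 2                        # unit-mean log-normal gain
        h1 = rng.lognormal(mu, sigma, n)          # source-relay power gain
        h2 = rng.lognormal(mu, sigma, n)          # relay-destination power gain
        Pr = eta * P * h1 * alpha / (1 - alpha)   # harvested relay power (sketch)
        snr1 = P * h1 / N0
        snr2 = Pr * h2 / N0
        rate = (1 - alpha) / 2 * np.log2(1 + np.minimum(snr1, snr2))
        return np.mean(rate < R)                  # ergodic outage probability

    print(outage_dh_df_ts())
    ```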

  7. Extraction of market expectations from risk-neutral density

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2015-12-01

    Full Text Available The purpose of this paper is to investigate which of the proposed parametric models for extracting the risk-neutral density (Black-Scholes-Merton, a mixture of two log-normals, and the generalized beta) gives the best fit. The model that fits the sample data better is used to describe different characteristics (moments) of the ex ante probability distribution. The empirical findings indicate that, no matter which parametric model is used, the best fit is always obtained for short maturity horizons; when comparing models in the short run, the mixture of two log-normals gives a statistically significantly smaller MSE. According to the pair-wise comparison results, the basic conclusion is that the mixture of two log-normals is superior to the other parametric models and has proven to be very flexible in capturing commonly observed characteristics of the underlying financial assets, such as asymmetries and "fat tails" in the implied probability distribution.

  8. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    Full Text Available The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter at a low computational cost, outperforming the classic method based on Maximum Likelihood Estimation (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses the knowledge of clutter statistics to improve the stability of detection, among other applications.
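
    For context, the classical MLE that the neural-network estimator is benchmarked against has a closed form for the Pareto shape parameter when the scale is known; a minimal sketch:

    ```python
    import numpy as np

    def pareto_shape_mle(x, xm):
        """Closed-form MLE of the Pareto shape parameter alpha,
        assuming the scale (minimum) parameter xm is known."""
        x = np.asarray(x)
        return len(x) / np.sum(np.log(x / xm))

    # Illustrative check on synthetic clutter-like samples.
    rng = np.random.default_rng(1)
    alpha_true, xm = 2.5, 1.0
    x = xm * (1.0 - rng.random(10_000)) ** (-1.0 / alpha_true)  # inverse CDF
    print(pareto_shape_mle(x, xm))   # close to 2.5
    ```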

  9. Econophysical anchoring of unimodal power-law distributions

    International Nuclear Information System (INIS)

    Eliazar, Iddo I; Cohen, Morrel H

    2013-01-01

    The sciences are abundant with size distributions whose densities have a unimodal shape and power-law tails both at zero and at infinity. The quintessential examples of such unimodal and power-law (UPL) distributions are the sizes of income and wealth in human societies. While the tails of UPL distributions are precisely quantified by their corresponding power-law exponents, their bulks are only qualitatively characterized as unimodal. Consequently, different statistical models of UPL distributions exist, the most popular considering lognormal bulks. In this paper we present a general econophysical framework for UPL distributions termed ‘the anchoring method’. This method: (i) universally approximates UPL distributions via three ‘anchors’ set at zero, at infinity, and at an intermediate point between zero and infinity (e.g. the mode); (ii) is highly versatile and broadly applicable; (iii) encompasses the existing statistical models of UPL distributions as special cases; (iv) facilitates the introduction of new statistical models of UPL distributions and (v) yields a socioeconophysical analysis of UPL distributions. (paper)

  10. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  11. Statistical study of spatio-temporal distribution of precursor solar flares associated with major flares

    Science.gov (United States)

    Gyenge, N.; Ballai, I.; Baranyi, T.

    2016-07-01

    The aim of the present investigation is to study the spatio-temporal distribution of precursor flares during the 24 h interval preceding M- and X-class major flares, and the evolution of follower flares. Information on the associated (precursor and follower) flares is provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) flare list, while the major flares are observed by the Geostationary Operational Environmental Satellite (GOES) system satellites between 2002 and 2014. There are distinct evolutionary differences between the spatio-temporal distributions of associated flares over the roughly one-day period, depending on the type of the main flare. The spatial distribution was characterized by the normalized frequency distribution of the quantity δ (the distance between the major flare and its precursor flare, normalized by the sunspot group diameter) in four 6 h time intervals before the major event. The precursors of X-class flares have a double-peaked spatial distribution for more than half a day prior to the major flare, but it changes to a lognormal-like distribution roughly 6 h prior to the event. The precursors of M-class flares show a lognormal-like distribution in each 6 h subinterval. The most frequent sites of the precursors in the active region are within a distance of about 0.1 sunspot-group diameters from the site of the major flare in each case. Our investigation shows that the build-up of energy is more effective than its release by precursors.

  12. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data while the width predicted by HERWIG is in significant disagreement.
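
    As a toy illustration of the log-normal parametrization used for the full rapidity range, one can bin a synthetic multiplicity sample and fit the log-normal form to it; a sketch with simulated data, not ALEPH data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lognormal_pdf(n, mu, sigma):
        """Continuous log-normal form evaluated at (positive) multiplicity n."""
        return np.exp(-(np.log(n) - mu) ** 2 / (2 * sigma ** 2)) / (
            n * sigma * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(2)
    sample = rng.lognormal(mean=3.0, sigma=0.35, size=50_000)  # synthetic multiplicities
    counts, edges = np.histogram(sample, bins=60, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    popt, _ = curve_fit(lognormal_pdf, centers, counts, p0=(3.0, 0.3))
    print(popt)  # recovers (mu, sigma) close to (3.0, 0.35)
    ```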

  13. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-05-01

    Closed-form expressions of the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of such a metric at high signal-to-noise ratio (SNR). In this work, we propose simple closed-form asymptotic expressions of the ergodic capacity of dual-branch correlated Log-Normal fading corresponding to selection combining and switch-and-stay combining. Furthermore, we capitalize on these new results to find new asymptotic ergodic capacity expressions for a correlated dual-branch free-space optical communication system under the impact of pointing errors, with both heterodyne and intensity modulation/direct detection. © 2015 IEEE.

  14. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    International Nuclear Information System (INIS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-01-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass, applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distribution representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with a Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and with Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  15. Levenberg-Marquardt application to two-phase nonlinear parameter estimation for finned-tube coil evaporators

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available A procedure for calculating the refrigerant mass flow rate is implemented in a distributed numerical model to simulate the flow in finned-tube coil dry-expansion evaporators, usually found in refrigeration and air-conditioning systems. Two-phase refrigerant flow inside the tubes is assumed to be one-dimensional, unsteady, and homogeneous. The model accounts for the effects of refrigerant pressure drop and of moisture condensation from the air flowing over the external surface of the tubes. The results obtained are the distributions of refrigerant velocity, temperature and void fraction, tube-wall temperature, air temperature, and absolute humidity. The finite volume method is used to discretize the governing equations. Additionally, given the operating conditions and the geometric parameters, the model allows the calculation of the refrigerant mass flow rate. The value of the mass flow rate is computed by parameter estimation using the Levenberg-Marquardt minimization method. In order to validate the developed model, the results obtained using HFC-134a as a refrigerant are compared with available data from the literature.
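
    A minimal sketch of this estimation step, with a hypothetical stand-in for the evaporator model (the real residual would come from the distributed simulation described above):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulated_superheat(m_dot, t_air):
        """Hypothetical stand-in for the distributed evaporator model:
        outlet superheat as a function of refrigerant mass flow rate."""
        return 12.0 * np.exp(-8.0 * m_dot) + 0.15 * t_air

    t_air = np.linspace(24.0, 32.0, 9)      # air temperatures [C]
    m_true = 0.035                          # "true" flow rate [kg/s]
    rng = np.random.default_rng(3)
    measured = simulated_superheat(m_true, t_air) + rng.normal(0, 0.05, t_air.size)

    # Levenberg-Marquardt fit of the mass flow rate to the measurements.
    res = least_squares(lambda m: simulated_superheat(m[0], t_air) - measured,
                        x0=[0.02], method="lm")
    print(res.x)                            # close to 0.035
    ```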

  16. The thermal pressure distribution of a simulated cold neutral medium

    Energy Technology Data Exchange (ETDEWEB)

    Gazol, Adriana, E-mail: a.gazol@crya.unam.mx [Centro de Radioastronomía y Astrofísica, UNAM, A. P. 3-72, c.p. 58089 Morelia, Michoacán (Mexico)

    2014-07-01

    We numerically study the thermal pressure distribution in a gas with thermal properties similar to those of the cold neutral interstellar gas by analyzing three-dimensional hydrodynamic models in boxes with sides of 100 pc with turbulent compressible forcing at 50 pc and different Mach numbers. We find that at high pressures and for large Mach numbers, both the volume-weighted and the density-weighted distributions can be appropriately described by a log-normal distribution, whereas for small Mach numbers they are better described by a power law. Thermal pressure distributions resulting from similar simulations but with self-gravity differ only for low Mach numbers; in this case, they develop a high pressure tail.

  17. Global patterns of city size distributions and their fundamental drivers.

    Directory of Open Access Journals (Sweden)

    Ethan H Decker

    2007-09-01

    Full Text Available Urban areas and their voracious appetites are increasingly dominating the flows of energy and materials around the globe. Understanding the size distribution and dynamics of urban areas is vital if we are to manage their growth and mitigate their negative impacts on global ecosystems. For over 50 years, city size distributions have been assumed to universally follow a power function, and many theories have been put forth to explain what has become known as Zipf's law (the instance where the exponent of the power function equals unity). Most previous studies, however, only include the largest cities that comprise the tail of the distribution. Here we show that national, regional and continental city size distributions, whether based on census data or inferred from cluster areas of remotely sensed nighttime lights, are in fact lognormally distributed through the majority of cities and only approach power functions for the largest cities in the distribution tails. To explore generating processes, we use a simple model incorporating only two basic human dynamics, migration and reproduction, that nonetheless generates distributions very similar to those found empirically. Our results suggest that macroscopic patterns of human settlements may be far more constrained by fundamental ecological principles than by more fine-scale socioeconomic factors.
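
    The two dynamics named above (reproduction and migration) can be sketched as a toy multiplicative-growth simulation; all rates below are illustrative assumptions, and the model in the paper may differ in detail:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_cities(n_cities=5_000, steps=200, r=0.02, m=0.05):
        """Toy settlement-growth model: multiplicative reproduction noise
        around rate r plus random migration of a fraction m between cities.
        Rates are illustrative assumptions, not fitted values."""
        pop = np.full(n_cities, 1_000.0)
        for _ in range(steps):
            pop *= 1.0 + rng.normal(r, 0.05, n_cities)   # noisy reproduction
            moved = m * pop                               # emigrants
            pop += moved.sum() / n_cities - moved         # uniform redistribution
            pop = np.maximum(pop, 1.0)
        return pop

    sizes = simulate_cities()
    log_sizes = np.log(sizes)
    print(log_sizes.mean(), log_sizes.std())   # roughly normal in log-space
    ```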

  18. The constitutive distributed parameter model of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    The class of multicomponent chemical processes in gas, fluid and solid phase has not been considered in the literature on distributed parameter modelling of real processes. The aim of this paper is a constitutive distributed parameter physicochemical model, constructed from the kinetics and phenomenological analysis of multicomponent chemical processes in gas, fluid and solid phase. The mass, energy and momentum aspects of these multicomponent chemical reactions and the associated phenomena are used in the balance operations, under the conditions of: constitutive invariance for continuous media with space and time memories; the reciprocity principle for isotropic and anisotropic nonhomogeneous media with space and time memories; and the application of the definitions of the material derivative and the equation of continuity, to construct systems of partial differential constitutive state equations in material derivative form for the gas, fluid and solid phases. Expressed in this way, the physicochemical conditions of multicomponent chemical processes in gas, fluid and solid phase constitute a new form of constitutive distributed parameter model for automatic control, and its systems of equations are a new form of systems of partial differential constitutive state equations in the sense of phenomenological distributed parameter control.

  19. Effect of thermohydraulic parameter on the flux distribution and the effective multiplication factor

    International Nuclear Information System (INIS)

    Mello, J.C.; Valladares, G.L.

    1990-01-01

    The influence of two thermohydraulic parameters, the coolant flow velocity along the reactor channels and the increase of the average water temperature through the core, on the thermal flux distribution and on the effective multiplication factor was studied in a radioisotope production reactor. The results show that, for fixed values of the thermohydraulic parameters referred to above, there are limits to the reduction of the reactor core volume for each value of the V_mod/V_comb ratio. These thermohydraulic conditions determine the highest thermal flux value in the flux trap and the lowest value of the reactor effective multiplication factor. It is also shown that there is a V_mod/V_comb ratio value that corresponds to the highest value of the lowest effective multiplication factor. These results are interpreted and commented on using fundamental concepts and relations of reactor physics. (author)

  20. Achieving reasonable conservatism in nuclear safety analyses

    International Nuclear Information System (INIS)

    Jamali, Kamiar

    2015-01-01

    In the absence of methods that explicitly account for uncertainties, seeking reasonable conservatism in nuclear safety analyses can quickly lead to extreme conservatism. The rate of divergence to extreme conservatism is often beyond the expert analysts’ intuitive feeling, but can be demonstrated mathematically. Too much conservatism in addressing the safety of nuclear facilities is not beneficial to society. Using certain properties of lognormal distributions for representation of input parameter uncertainties, example calculations for the risk and consequence of a fictitious facility accident scenario are presented. Results show that there are large differences between the calculated 95th percentiles and the extreme bounding values derived from using all input variables at their upper-bound estimates. Showing the relationship of the mean values to the key parameters of the output distributions, the paper concludes that the mean is the ideal candidate for representation of the value of an uncertain parameter. The mean value is proposed as the metric that is consistent with the concept of reasonable conservatism in nuclear safety analysis, because its value increases towards higher percentiles of the underlying positively skewed distribution with increasing levels of uncertainty. Insensitivity of the results to the actual underlying distributions is briefly demonstrated. - Highlights: • Multiple conservative assumptions can quickly diverge into extreme conservatism. • Mathematics and attractive properties provide basis for wide use of lognormal distribution. • Mean values are ideal candidates for representation of parameter uncertainties. • Mean values are proposed as reasonably conservative estimates of parameter uncertainties
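
    The divergence of stacked conservatisms is easy to reproduce numerically; a sketch with a made-up product of lognormal input parameters (the number of inputs and their spread are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Product of k independent lognormal input parameters, each with
    # median 1 and a geometric standard deviation of about 2 (illustrative).
    k, sigma = 5, np.log(2.0)
    samples = np.prod(rng.lognormal(0.0, sigma, size=(1_000_000, k)), axis=1)

    mean = samples.mean()                     # analytic: exp(k * sigma**2 / 2)
    p95 = np.quantile(samples, 0.95)
    bounding = np.exp(k * 1.645 * sigma)      # every input at its own 95th pctile

    print(f"mean={mean:.2f}  95th={p95:.2f}  all-inputs-bounding={bounding:.2f}")
    # The bounding value far exceeds the 95th percentile of the product,
    # illustrating how stacked conservatisms diverge toward extremes.
    ```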

  1. Statistical universalities in fragmentation under scaling symmetry with a constant frequency of fragmentation

    International Nuclear Information System (INIS)

    Gorokhovski, M A; Saveliev, V L

    2008-01-01

    This paper analyses statistical universalities that arise over time during constant frequency fragmentation under scaling symmetry. The explicit expression of particle-size distribution obtained from the evolution kinetic equation shows that, with increasing time, the initial distribution tends to the ultimate steady-state delta function through at least two intermediate universal asymptotics. The earlier asymptotic is the well-known log-normal distribution of Kolmogorov (1941 Dokl. Akad. Nauk. SSSR 31 99-101). This distribution is the first universality and has two parameters: the first and the second logarithmic moments of the fragmentation intensity spectrum. The later asymptotic is a power function (stronger universality) with a single parameter that is given by the ratio of the first two logarithmic moments. At large times, the first universality implies that the evolution equation can be reduced exactly to the Fokker-Planck equation instead of making the widely used but inconsistent assumption about the smallness of higher than second order moments. At even larger times, the second universality shows evolution towards a fractal state with dimension identified as a measure of the fracture resistance of the medium

  2. Decoy-state quantum key distribution with two-way classical postprocessing

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, C.-H.F.; Chen Kai; Lo, H.-K.; Dupuis, Frederic; Tamaki, Kiyoshi

    2006-01-01

    Decoy states have recently been proposed as a useful method for substantially improving the performance of quantum key distribution (QKD) protocols when a coherent-state source is used. Previously, data postprocessing schemes based on one-way classical communications were considered for use with decoy states. In this paper, we develop two data postprocessing schemes for the decoy-state method using two-way classical communications. Our numerical simulations, using parameters from a specific QKD experiment as an example, show that our first scheme is able to extend the maximal secure distance from 142 km (using only one-way classical communications with decoy states) to 181 km. The second scheme is able to achieve a 10% greater key generation rate over the whole range of distances. We conclude that decoy-state QKD with two-way classical postprocessing is of practical interest.

  3. Platoon Dispersion Analysis Based on Diffusion Theory

    Directory of Open Access Journals (Sweden)

    Badhrudeen Mohamed

    2017-01-01

    Full Text Available Urbanization and growing demand for travel cause the traffic system to work ineffectively in most urban areas, leading to traffic congestion. Many approaches have been adopted to address this problem, one among them being signal coordination. This can be achieved if the platoon of vehicles that gets discharged at one signal gets green at consecutive signals with minimal delay. However, platoons tend to get dispersed as they travel, and this dispersion phenomenon should be taken into account for effective signal coordination. Reported studies in this area are from homogeneous and lane-disciplined traffic conditions. This paper analyses the platoon dispersion characteristics under heterogeneous and lane-less traffic conditions. Out of the various modelling techniques reported, the approach based on diffusion theory is used in this study. Diffusion-theory-based models have so far assumed the data to follow a normal distribution. However, in the present study, the data were found to follow a lognormal distribution, and hence the implementation was carried out using the lognormal distribution, whose parameters were calibrated for the study condition. For comparison, the normal distribution was also calibrated and the results were evaluated. It was found that the model with the lognormal distribution performed better in all cases than the one with the normal distribution.
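
    The core distributional comparison can be sketched as maximum-likelihood fits of both candidates to platoon travel times, compared by log-likelihood; synthetic data stand in for the field measurements:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    travel_times = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=500)  # seconds

    # Maximum-likelihood fits of both candidate distributions.
    shape, loc, scale = stats.lognorm.fit(travel_times, floc=0)
    mu, sd = stats.norm.fit(travel_times)

    ll_lognorm = stats.lognorm.logpdf(travel_times, shape, loc, scale).sum()
    ll_norm = stats.norm.logpdf(travel_times, mu, sd).sum()
    print(f"log-likelihood lognormal={ll_lognorm:.1f}  normal={ll_norm:.1f}")
    # The lognormal fit should win on right-skewed travel-time data.
    ```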

  4. Multiplicity distributions in small phase-space domains in central nucleus-nucleus collisions

    International Nuclear Information System (INIS)

    Baechler, J.; Hoffmann, M.; Runge, K.; Schmoetten, E.; Bartke, J.; Gladysz, E.; Kowalski, M.; Stefanski, P.; Bialkowska, H.; Bock, R.; Brockmann, R.; Sandoval, A.; Buncic, P.; Ferenc, D.; Kadija, K.; Ljubicic, A. Jr.; Vranic, D.; Chase, S.I.; Harris, J.W.; Odyniec, G.; Pugh, H.G.; Rai, G.; Teitelbaum, L.; Tonse, S.; Derado, I.; Eckardt, V.; Gebauer, H.J.; Rauch, W.; Schmitz, N.; Seyboth, P.; Seyerlein, J.; Vesztergombi, G.; Eschke, J.; Heck, W.; Kabana, S.; Kuehmichel, A.; Lahanas, M.; Lee, Y.; Le Vine, M.; Margetis, S.; Renfordt, R.; Roehrich, D.; Rothard, H.; Schmidt, E.; Schneider, I.; Stock, R.; Stroebele, H.; Wenig, S.; Fleischmann, B.; Fuchs, M.; Gazdzicki, M.; Kosiec, J.; Skrzypczak, E.; Keidel, R.; Piper, A.; Puehlhofer, F.; Nappi, E.; Posa, F.; Paic, G.; Panagiotou, A.D.; Petridis, A.; Vassileiadis, G.; Pfenning, J.; Wosiek, B.

    1992-10-01

    Multiplicity distributions of negatively charged particles have been studied in restricted phase space intervals for central S + S, O + Au and S + Au collisions at 200 GeV/nucleon. It is shown that the multiplicity distributions are well described by a negative binomial form irrespective of the size and dimensionality of the phase space domain. A clan structure analysis reveals interesting similarities between complex nuclear collisions and a simple partonic shower. The lognormal distribution agrees reasonably well with the multiplicity data in large domains, but fails in the case of small intervals. No universal scaling function was found to describe the shape of the multiplicity distributions in phase space intervals of varying size. (orig.)

  5. WIPP Compliance Certification Application calculations parameters. Part 1: Parameter development

    International Nuclear Information System (INIS)

    Howarth, S.M.

    1997-01-01

    The Waste Isolation Pilot Plant (WIPP) in southeast New Mexico has been studied as a transuranic waste repository for the past 23 years. During this time, an extensive site characterization, design, construction, and experimental program was completed, which provided in-depth understanding of the dominant processes that are most likely to influence the containment of radionuclides for 10,000 years. Nearly 1,500 parameters were developed using information gathered from this program; the parameters were input to numerical models for WIPP Compliance Certification Application (CCA) Performance Assessment (PA) calculations. The CCA probabilistic codes frequently require input values that define a statistical distribution for each parameter. Developing parameter distributions begins with the assignment of an appropriate distribution type, which is dependent on the type, magnitude, and volume of data or information available. The development of the parameter distribution values may require interpretation or statistical analysis of raw data, combining raw data with literature values, scaling of lab or field data to fit code grid mesh sizes, or other transformation. Parameter development and documentation of the development process were very complicated, especially for those parameters based on empirical data; they required the integration of information from Sandia National Laboratories (SNL) code sponsors, parameter task leaders (PTLs), performance assessment analysts (PAAs), and experimental principal investigators (PIs). This paper, Part 1 of two parts, contains a discussion of the parameter development process, roles and responsibilities, and lessons learned. Part 2 will discuss parameter documentation, traceability and retrievability, and lessons learned from related audits and reviews

  6. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.

    Directory of Open Access Journals (Sweden)

    Xiliang Zheng

    2015-04-01

    Full Text Available We uncovered universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring different ligands binding with a particular receptor. The results of the analytical studies are confirmed by microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of binding follow a log-normal distribution around the mean and a power-law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding; its optimization amounts to maximizing the ratio of the free energy gap between the native state and the average of the non-native states to the roughness, measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power-law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function for different ligands binding with a specific receptor or, equivalently, a specific ligand binding with different receptors. The elucidation of the distributions of the kinetics and free energy has a guiding role in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.

  7. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  8. Bayesian Estimation of the Scale Parameter of Inverse Weibull Distribution under the Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    Farhad Yahgmaei

    2013-01-01

    Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter of the IWD is introduced. We then derive the Bayes estimators of the scale parameter of the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean squared errors and risk functions.
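
    As a quick numerical counterpart to the maximum likelihood estimator of the scale parameter, SciPy's inverted Weibull distribution can be fitted with the shape and location held fixed; a sketch on synthetic data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    c_true, scale_true = 2.0, 3.0          # shape, scale (illustrative)
    data = stats.invweibull.rvs(c_true, scale=scale_true, size=2_000,
                                random_state=rng)

    # MLE of the scale parameter with shape and location held fixed.
    c_hat, loc_hat, scale_hat = stats.invweibull.fit(data, fc=c_true, floc=0)
    print(scale_hat)   # close to 3.0
    ```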

  9. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and make clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol's variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various orders of variance contributions into the form of an expectation via a kernel function, the proposed main and total sensitivity indices can be seen as a "by-product" of Sobol's variance-based sensitivity analysis without any additional output evaluations. Since Sobol's variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.

  10. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment-scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  11. Bending analysis of agglomerated carbon nanotube-reinforced beam resting on two parameters modified Vlasov model foundation

    Science.gov (United States)

    Ghorbanpour Arani, A.; Zamani, M. H.

    2018-06-01

    The present work deals with the bending behavior of a nanocomposite beam resting on a two-parameter modified Vlasov model foundation (MVMF), with consideration of the agglomeration and distribution of carbon nanotubes (CNTs) in the beam matrix. An equivalent fiber based on the Eshelby-Mori-Tanaka approach is employed to determine the influence of CNT aggregation on the elastic properties of the CNT-reinforced beam. The governing equations are deduced using the principle of minimum potential energy under the assumptions of Euler-Bernoulli beam theory. The MVMF requires the estimation of the γ parameter; for this purpose, a unique iterative technique based on variational principles is utilized to compute the value of γ, and subsequently the fourth-order differential equation is solved analytically. Eventually, the transverse displacements and bending stresses are obtained and compared for different agglomeration parameters and various boundary conditions simultaneously, on a variant elastic foundation, without the need to assign values to the foundation parameters.

  12. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart's performance. It is therefore important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X̄ chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X̄ charts with estimated process parameters is presented. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X̄ chart with estimated process parameters, based on their specific purpose.
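
    For the simplest case of a chart with known parameters and per-sample signal probability p, the run length is geometric and its percentiles follow in closed form; a sketch of the idea (the DS X̄ chart with estimated parameters requires a more involved computation):

    ```python
    import numpy as np

    def run_length_percentile(p, q):
        """Smallest n with P(RL <= n) >= q when the run length is
        geometric with per-sample signal probability p."""
        return int(np.ceil(np.log1p(-q) / np.log1p(-p)))

    p = 0.0027                      # in-control signal probability (3-sigma limits)
    for q in (0.05, 0.5, 0.95):
        print(q, run_length_percentile(p, q))
    # The median run length (q=0.5) is ~257, well below the ARL of ~370,
    # illustrating the skewness of the run length distribution.
    ```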

  13. Study on a resource allocation scheme in multi-hop MIMO-OFDM systems over lognormal-rayleigh compound channels

    Directory of Open Access Journals (Sweden)

    LIU Jun

    2015-10-01

    Full Text Available For new-generation wireless communication networks, this paper studies the optimization of the capacity and end-to-end throughput of MIMO-OFDM based multi-hop relay systems. A water-filling power allocation method is proposed to improve the channel capacity and throughput of the MIMO-OFDM based multi-hop relay system over Lognormal-Rayleigh shadowing compound channels. Simulations of the capacity and throughput show that the water-filling algorithm can effectively improve the system throughput in the MIMO-OFDM multi-hop relay system.
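
    The classic water-filling rule itself is compact enough to sketch; the subchannel gains and power budget below are arbitrary illustrative values:

    ```python
    import numpy as np

    def water_filling(gains, p_total):
        """Allocate p_total across parallel subchannels with power gains
        `gains` (noise normalized to 1) by the classic water-filling rule."""
        g = np.sort(np.asarray(gains, dtype=float))[::-1]   # strongest first
        for k in range(len(g), 0, -1):
            mu = (p_total + np.sum(1.0 / g[:k])) / k        # candidate water level
            if mu - 1.0 / g[k - 1] >= 0:                    # weakest active channel ok?
                break
        return np.maximum(mu - 1.0 / np.asarray(gains, dtype=float), 0.0)

    gains = [2.0, 1.0, 0.5, 0.1]         # illustrative subchannel power gains
    p = water_filling(gains, p_total=4.0)
    print(p, p.sum())                     # powers sum to the budget
    ```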

  14. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...... fits to the data available, especially if tail fits are used, whereas the Log Normal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...... for timber are investigated....

  15. Improvement of two-way continuous-variable quantum key distribution using optical amplifiers

    International Nuclear Information System (INIS)

    Zhang, Yi-Chen; Yu, Song; Gu, Wanyi; Li, Zhengyu; Sun, Maozhu; Peng, Xiang; Guo, Hong; Weedbrook, Christian

    2014-01-01

    The imperfections of a receiver's detector affect the performance of two-way continuous-variable (CV) quantum key distribution (QKD) protocols and are difficult to adjust in practical situations. We propose a method to improve the performance of two-way CV-QKD by adding a parameter-adjustable optical amplifier at the receiver. A security analysis is derived against a two-mode collective entangling cloner attack. Our simulations show that the proposed method can improve the performance of protocols as long as the inherent noise of the amplifier is lower than a critical value, defined as the tolerable amplifier noise. Furthermore, the optimal performance can approach the scenario where a perfect detector is used. (paper)

  16. A Bayesian setting for an inverse problem in heat transfer

    KAUST Repository

    Ruggeri, Fabrizio; Sawlan, Zaid A; Scavino, Marco; Tempone, Raul

    2014-01-06

    In this work a Bayesian setting is developed to infer the thermal conductivity, an unknown parameter that appears in the heat equation. Temperature data are available on the basis of cooling experiments. The realistic assumption that the boundary data are noisy is introduced, for a given prescribed initial condition. We show how to derive the global likelihood function for the forward boundary-initial condition problem, given the values of the temperature field plus Gaussian noise. We assume that the thermal conductivity parameter can be modelled a priori through a lognormally distributed random variable or by means of a space-dependent stationary lognormal random field. In both cases, given Gaussian priors for the time-dependent Dirichlet boundary values, we marginalize out analytically the joint posterior distribution of the thermal conductivity and the random boundary conditions, TL and TR, using the linearity of the heat equation. Synthetic data are used to carry out the inference. We exploit the concentration of the posterior distribution of the thermal conductivity, using the Laplace approximation and thereby avoiding costly MCMC computations.
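
    The lognormal-prior-plus-Laplace workflow can be sketched for a scalar conductivity in a toy forward model; everything here (the model, noise level and prior spread) is an illustrative assumption rather than the paper's actual boundary-value setup:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)

    # Toy forward model: temperature drop proportional to 1/kappa (illustrative).
    def forward(kappa):
        return 50.0 / kappa

    kappa_true, noise_sd = 2.0, 0.5
    data = forward(kappa_true) + rng.normal(0.0, noise_sd, size=20)

    # Lognormal prior on kappa <=> Gaussian prior on theta = log(kappa).
    mu0, sd0 = np.log(1.5), 0.5

    def neg_log_post(theta):
        kappa = np.exp(theta[0])
        misfit = np.sum((data - forward(kappa)) ** 2) / (2 * noise_sd ** 2)
        prior = (theta[0] - mu0) ** 2 / (2 * sd0 ** 2)
        return misfit + prior

    opt = minimize(neg_log_post, x0=[0.0])
    # Laplace approximation: Gaussian centred at the MAP with covariance
    # given by the inverse Hessian (available from the BFGS result).
    theta_map = opt.x[0]
    theta_sd = np.sqrt(opt.hess_inv[0, 0])
    print(np.exp(theta_map), theta_sd)   # MAP conductivity and log-space spread
    ```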
  18. Dynamics of a neuron model in different two-dimensional parameter-spaces

    Science.gov (United States)

    Rech, Paulo C.

    2011-03-01

    We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that, regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.

  19. Estimation of blocking temperatures from ZFC/FC curves

    DEFF Research Database (Denmark)

    Hansen, Mikkel Fougt; Mørup, Steen

    1999-01-01

    We present a new method to extract the parameters of a log-normal distribution of energy barriers in an assembly of ultrafine magnetic particles from simple features of the zero-field-cooled and field-cooled magnetisation curves. The method is established using numerical simulations and is tested

  20. THE INTRINSIC EDDINGTON RATIO DISTRIBUTION OF ACTIVE GALACTIC NUCLEI IN STAR-FORMING GALAXIES FROM THE SLOAN DIGITAL SKY SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Mackenzie L.; Hickox, Ryan C.; Black, Christine S.; Hainline, Kevin N.; DiPompeo, Michael A. [Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755 (United States); Goulding, Andy D. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2016-07-20

    An important question in extragalactic astronomy concerns the distribution of black hole accretion rates of active galactic nuclei (AGNs). Based on observations at X-ray wavelengths, the observed Eddington ratio distribution appears as a power law, while optical studies have often yielded a lognormal distribution. There is increasing evidence that these observed discrepancies may be due to contamination by star formation and other selection effects. Using a sample of galaxies from the Sloan Digital Sky Survey Data Release 7, we test whether or not an intrinsic Eddington ratio distribution that takes the form of a Schechter function is consistent with previous work suggesting that young galaxies in optical surveys have an observed lognormal Eddington ratio distribution. We simulate the optical emission line properties of a population of galaxies and AGNs using a broad, instantaneous luminosity distribution described by a Schechter function near the Eddington limit. This simulated AGN population is then compared to observed galaxies via their positions on an emission line excitation diagram and Eddington ratio distributions. We present an improved method for extracting the AGN distribution using BPT diagnostics that allows us to probe over one order of magnitude lower in Eddington ratio, counteracting the effects of dilution by star formation. We conclude that for optically selected AGNs in young galaxies, the intrinsic Eddington ratio distribution is consistent with a possibly universal, broad power law with an exponential cutoff, as this distribution is observed in old, optically selected galaxies and X-rays.
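
    A power law with an exponential cutoff of this kind is a Gamma density in disguise (for slope α > −1), so drawing Eddington ratios from it is immediate; a sketch with assumed parameter values, not the paper's fit:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def sample_schechter(alpha, lam_star, size):
        """Draw Eddington ratios from p(lam) ~ lam**alpha * exp(-lam/lam_star),
        which is a Gamma(alpha + 1, lam_star) law for alpha > -1.
        alpha and lam_star are illustrative assumptions."""
        return rng.gamma(alpha + 1.0, lam_star, size)

    lam = sample_schechter(alpha=-0.65, lam_star=0.1, size=100_000)
    log_lam = np.log10(lam)
    print(np.percentile(log_lam, [16, 50, 84]))   # broad distribution in log-space
    ```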

  1. Statistical investigation of the crack initiation lives of piping structural welded joint in low cycle fatigue test of 240 degree C

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Gao Qing; Cai Lixun

    1999-01-01

    A statistical investigation into the fitting of four candidate fatigue distributions (three-parameter Weibull, two-parameter Weibull, lognormal and extreme maximum value distributions) to the crack initiation lives of a piping structural welded joint in low cycle fatigue tests at 240 degree C is performed by linear regression and least squares methods. The results reveal that the three-parameter Weibull distribution may give misleading results in fatigue reliability analysis because the shape parameter is often less than 1. This means that the failure rate decreases with fatigue cycling, which is contrary to the general understanding of the behaviour of welded joints. Reliability analyses may also be affected by the slightly nonconservative evaluations in the tail regions of this distribution. The other three distributions give a slightly poorer overall fit, but they can safely be assumed in reliability analyses, owing mostly to their conservative evaluations in the tail regions and their consistency with the fatigue physics of the structural behaviour of welded joints in the range of engineering practice. In addition, the extreme maximum value distribution is in good agreement with the general physical understanding of the structural behaviour of welded joints.

  2. BLOOD DONOR HAEMATOLOGY PARAMETERS IN TWO ...

    African Journals Online (AJOL)

    2005-03-03

    Mar 3, 2005 ... determine distribution frequencies (including mean and median) and 95% percentile ranges (defined as the mean ± 2SD). These were determined individually for Kisumu and Nairobi, and comparisons between the two donor groups were made using the independent samples t-test. RESULTS. Red blood ...

  3. Temporal evolution of electron energy distribution function and plasma parameters in the afterglow of drifting magnetron plasma

    International Nuclear Information System (INIS)

    Seo, Sang-Hun; In, Jung-Hwan; Chang, Hong-Young

    2005-01-01

    The temporal behaviour of the electron energy distribution function (EEDF) and the plasma parameters, such as electron density, electron temperature, and plasma and floating potentials, in a mid-frequency pulsed dc magnetron plasma is investigated using time-resolved probe measurements. A negative-voltage dc pulse with an average power of 160 W during the pulse-on period, a repetition frequency of 20 kHz and a duty cycle of 50% is applied to the cathode of a planar unbalanced magnetron discharge with a grounded substrate. The measured electron energy distribution is found to exhibit a bi-Maxwellian distribution, which can be resolved into a low-energy electron group and a high-energy tail during the pulse-on period, and a Maxwellian distribution with only low-energy electrons, as a consequence of the initially rapid decay of the high-energy tail, during the pulse-off period. This characteristic evolution of the EEDF is reflected in the decay characteristics of the electron density and temperature in the afterglow. When approximated with a bi-exponential function, these parameters exhibit a twofold decay represented by two characteristic decay times: an initial fast decay time τ1 and a subsequent slower decay time τ2. While the initial fast decay times are of the order of 1 μs (τT1 ∼ 0.99 μs and τN1 ∼ 1.5 μs), the slower decay times are of the order of a few tens of microseconds (τT2 ∼ 7 μs and τN2 ∼ 40 μs). The temporal evolution of the plasma parameters is qualitatively explained by considering the formation mechanism of the bi-Maxwellian electron distribution function and the transport of these electron groups in the bulk plasma.

  4. Thermodynamics of two-parameter quantum group Bose and Fermi gases

    International Nuclear Information System (INIS)

    Algin, A.

    2005-01-01

    The high- and low-temperature thermodynamic properties of the two-parameter deformed quantum group Bose and Fermi gases with SU_p/q(2) symmetry are studied. Starting with an SU_p/q(2)-invariant bosonic as well as fermionic Hamiltonian, several thermodynamic functions of the system, such as the average number of particles, internal energy and equation of state, are derived. The effects of the two real independent deformation parameters p and q on the properties of the systems are discussed. Particular emphasis is given to a discussion of the Bose-Einstein condensation phenomenon for the two-parameter deformed quantum group Bose gas. The results are also compared with earlier undeformed and one-parameter deformed versions of Bose and Fermi gas models. (author)

  5. Modification of the natural radionuclide distribution by some human activities in Canada

    International Nuclear Information System (INIS)

    Knight, G.B.; Makepeace, C.E.

    1980-01-01

    Examples are presented of three types of human activity that have resulted in elevated natural radiation levels. Investigations carried out by a Federal-Provincial Task Force are described. The distributions of grab sample measurements of radon and radon daughter concentrations are compared for the Bancroft area, Cobourg, Deloro, Elliot Lake, and Port Hope in Ontario, and Uranium City in Saskatchewan; it is concluded that the major point of difference between the communities that were investigated and the reference community of Cobourg is the departure from a symmetrical lognormal distribution at the higher concentrations

  6. Estimating the parameters of a generalized lambda distribution

    International Nuclear Information System (INIS)

    Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Rupin, N.; Bigerelle, M.; Wilcox, R.; Fournier, B.

    2007-01-01

    The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution time with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs), and it reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all estimators are biased, and here it is found that the bias becomes negligible with sample sizes n ≥ 2×10^3. The .025 and .975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease in proportion to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed it is shown that, up to n = 10^6, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
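    A hedged sketch of an automatic, quantile-based GLD fit in the spirit of the abstract (not the authors' exact estimator): it matches the Ramberg-Schmeiser GLD quantile function to the empirical quantiles of a normal sample by nonlinear least squares. The starting values and bounds are assumptions chosen near the known GLD approximation of the standard normal:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def gld_quantile(u, lam):
        # Ramberg-Schmeiser parameterisation of the GLD quantile function.
        l1, l2, l3, l4 = lam
        return l1 + (u**l3 - (1.0 - u)**l4) / l2

    # Fit a GLD to the empirical quantiles of a normal sample (no tables needed).
    rng = np.random.default_rng(2)
    sample = rng.normal(size=5000)
    u = np.linspace(0.01, 0.99, 99)
    q_emp = np.quantile(sample, u)

    res = least_squares(lambda lam: gld_quantile(u, lam) - q_emp,
                        x0=[0.0, 0.2, 0.13, 0.13],
                        bounds=([-5.0, 1e-3, -1.0, -1.0], [5.0, 5.0, 1.0, 1.0]))
    print("lambda =", np.round(res.x, 4))
    ```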

  7. DISTRIBUTION OF TWO-PHASE FLOW IN A DISTRIBUTOR

    Directory of Open Access Journals (Sweden)

    AZRIDJAL AZIZ

    2012-02-01

    The flow configuration and distribution behavior of two-phase flow in a distributor made of acrylic resin have been investigated experimentally. In this study, air and water were used as the two-phase working fluids. The distributor consists of one inlet and two outlets, set as upper and lower, respectively. The flow at the distributor was visualized using a high-speed camera, and the flow rates of air and water leaving the upper and lower outlet branches were measured. The effects of the inclination angle of the distributor were investigated: as the inclination angle was changed from vertical towards horizontal, uneven distributions were observed. The two-phase flow through the distributor tends towards an even distribution in the vertical position and towards an uneven distribution in inclined and horizontal positions. It is shown that an even distribution could be achieved at high superficial velocities of both air and water.

  8. Distributed Nonstationary Heat Model of Two-Channel Solar Air Heater

    International Nuclear Information System (INIS)

    Klychev, Sh. I.; Bakhramov, S. A.; Ismanzhanov, A. I.; Tashiev, N.N.

    2011-01-01

    An algorithm for a distributed nonstationary heat model of a solar air heater (SAH) with two operating channels is presented. The model makes it possible to determine how the coolant temperature changes with time along the solar air heater channel by considering its main thermal and ambient parameters, as well as variations in efficiency. Examples of calculations are presented. It is shown that the time within which the mean-day efficiency of the solar air heater becomes stable is significantly higher than the time within which the coolant temperature reaches stable values. The model can be used for investigation of the performances of solar water-heating collectors. (authors)

  9. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    Science.gov (United States)

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Virtual walks in spin space: A study in a family of two-parameter models

    Science.gov (United States)

    Mullick, Pratik; Sen, Parongama

    2018-05-01

    We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x, t), the probability that the walker is at position x at time t, is studied in detail. In general S(x, t) ~ t^{-α} f(x/t^α) with α ≃ 1 or 0.5 at large times, depending on the parameters. In particular, S(x, t) for the point y = 1, z = 0.5, corresponding to the Voter model, shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L^2 log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x, t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.

  11. Method for determining appropriate statistical models of the random cyclic stress amplitudes of a stainless pipe weld metal

    International Nuclear Information System (INIS)

    Wang Jinnuo; Zhao Yongxiang; Wang Shaohua

    2001-01-01

    It is revealed by a strain-controlled fatigue test that there is a significant scatter in the cyclic stress-strain responses of a nuclear engineering material, 1Cr18Ni9Ti stainless steel pipe-weld metal. This implies that the existing deterministic analysis might be non-conservative. Taking this scatter into account, a method for determining appropriate statistical models of material cyclic stress amplitudes is presented, which considers the total fit, the consistency with fatigue physics, and the safety of design of seven commonly used distributions fitted to the test data. The seven distributions are Weibull (two- and three-parameter), normal, lognormal, extreme minimum value, extreme maximum value, and exponential. In the method, the statistical parameters of the distributions are evaluated by a linear regression technique. Statistical tests are made by a transformation from the t-distribution function to the Pearson statistical parameter, i.e. the linear relationship coefficient. The total fit is assessed by a so-called fitted relationship coefficient of the empirical and theoretical failure probabilities. The consistency with fatigue physics is analyzed through the hazard rate curves of the distributions. The safety of design is measured by examining the change of predicted errors in the tail regions of the distributions
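    The paper ranks candidates by a fitted relationship coefficient; as a stand-in, the sketch below fits the same seven families with scipy and compares them by the Kolmogorov-Smirnov statistic instead (a plainly different, simpler criterion), on invented stress-amplitude data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical cyclic stress-amplitude sample (MPa); values are made up.
    rng = np.random.default_rng(3)
    data = rng.lognormal(mean=5.5, sigma=0.08, size=60)

    candidates = [
        ("weibull_min", stats.weibull_min.fit(data)),          # 3-parameter Weibull
        ("weibull_min", stats.weibull_min.fit(data, floc=0)),  # 2-parameter Weibull
        ("norm", stats.norm.fit(data)),
        ("lognorm", stats.lognorm.fit(data, floc=0)),
        ("gumbel_l", stats.gumbel_l.fit(data)),                # extreme minimum value
        ("gumbel_r", stats.gumbel_r.fit(data)),                # extreme maximum value
        ("expon", stats.expon.fit(data)),
    ]
    for name, params in candidates:
        d = stats.kstest(data, name, args=params).statistic
        print(f"{name:12s} KS D = {d:.3f}")
    ```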

  12. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity.

    Directory of Open Access Journals (Sweden)

    James D Englehardt

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by the independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study.

  13. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
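    The best-performing method above (fitting the cumulative distribution of interval-censored data) can be sketched as a least-squares fit of a parametric CDF to the empirical cumulative probabilities at the interval bounds. The retention-time counts below are invented:

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    # Invented retention-time counts per sampling interval (hours).
    bin_edges = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0, 48.0])
    counts = np.array([5, 14, 22, 18, 9, 2])

    # Empirical cumulative probability at the upper bound of each interval.
    t_upper = bin_edges[1:]
    cum_p = np.cumsum(counts) / counts.sum()

    def lognorm_cdf(t, s, scale):
        # Candidate parametric CDF (lognormal; gamma or Weibull work the same way).
        return stats.lognorm.cdf(t, s, scale=scale)

    (s, scale), _ = curve_fit(lognorm_cdf, t_upper, cum_p, p0=[1.0, 6.0])
    print(f"fitted lognormal: sigma = {s:.2f}, median = {scale:.1f} h")
    ```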

  14. Two Photon Distribution Amplitudes

    International Nuclear Information System (INIS)

    El Beiyad, M.; Pire, B.; Szymanowski, L.; Wallon, S.

    2008-01-01

    The factorization of the amplitude of the process γ*γ→γγ in the low energy and high photon virtuality region is demonstrated at the Born order and in the leading logarithmic approximation. The leading order two photon (generalized) distribution amplitudes exhibit a characteristic ln Q² behaviour and obey new inhomogeneous evolution equations

  15. Urban particle size distributions during two contrasting dust events originating from Taklimakan and Gobi Deserts

    International Nuclear Information System (INIS)

    Zhao, Suping; Yu, Ye; Xia, Dunsheng; Yin, Daiying; He, Jianjun; Liu, Na; Li, Fang

    2015-01-01

    The dust origins of two events were identified using the HYSPLIT trajectory model together with MODIS and CALIPSO satellite data, in order to understand the particle size distribution during two contrasting dust events originating from the Taklimakan and Gobi deserts. The supermicron particles increased significantly during the dust events. The dust event from the Gobi desert significantly affected particles larger than 2.5 μm, while that from the Taklimakan desert mainly affected particles of 1.0–2.5 μm. It is found that the particle size distributions and their modal parameters, such as VMD (volume median diameter), differ significantly for the two dust origins. The dust from the Taklimakan desert was finer than that from the Gobi desert, probably also due to other influencing factors such as mixing between dust and urban emissions. Our findings illustrate the capacity of combining in situ data, satellite data and trajectory modelling to characterize large-scale dust plumes with a variety of aerosol parameters. - Highlights: • Dust particle size distributions had large differences for varying origins. • Dust originating from the Taklimakan Desert was finer than that from the Gobi Desert. • The effect of dust on the supermicron particles was obvious. • PM10 concentrations increased by a factor of 3.4–25.6 during the dust event. - Dust particle size distributions had large differences for varying origins, which may also be related to other factors such as mixing between dust and urban emissions.

  16. Contact parameters in two dimensions for general three-body systems

    DEFF Research Database (Denmark)

    F. Bellotti, F.; Frederico, T.; T. Yamashita, M.

    2014-01-01

    We study the two-dimensional three-body problem in the general case of three distinguishable particles interacting through zero-range potentials. The Faddeev decomposition is used to write the momentum-space wave function. We show that the large-momentum asymptotic spectator function has the same ... to obtain two- and three-body contact parameters. We specialize from the general cases to examples of two identical, interacting or non-interacting, particles. We find that the two-body contact parameter is not a universal constant in the general case and show that the universality is recovered when a subsystem is composed of two identical non-interacting particles. We also show that the three-body contact parameter is negligible in the case of one non-interacting subsystem compared to the situation where all subsystems are bound. As an example, we present results for mixtures of Lithium with two Cesium...

  17. Effects of abdominal fat distribution parameters on severity of acute pancreatitis.

    LENUS (Irish Health Repository)

    O'Leary, D P

    2012-07-01

    Obesity is a well-established risk factor for acute pancreatitis. Increased visceral fat has been shown to exacerbate the pro-inflammatory milieu experienced by patients. This study aimed to investigate the relationship between the severity of acute pancreatitis and abdominal fat distribution parameters measured on computed tomography (CT) scan.

  18. Consideration of time-evolving capacity distributions and improved degradation models for seismic fragility assessment of aging highway bridges

    International Nuclear Information System (INIS)

    Ghosh, Jayadipta; Sood, Piyush

    2016-01-01

    This paper presents a methodology to develop seismic fragility curves for deteriorating highway bridges by uniquely accounting for realistic pitting corrosion deterioration and time-dependent capacity distributions for reinforced concrete columns under chloride attacks. The proposed framework offers distinct improvements over state-of-the-art procedures for fragility assessment of degrading bridges which typically assume simplified uniform corrosion deterioration model and pristine limit state capacities. Depending on the time in service life and deterioration mechanism, this study finds that capacity limit states for deteriorating bridge columns follow either lognormal distribution or generalized extreme value distributions (particularly for pitting corrosion). Impact of column degradation mechanism on seismic response and fragility of bridge components and system is assessed using nonlinear time history analysis of three-dimensional finite element bridge models reflecting the uncertainties across structural modeling parameters, deterioration parameters and ground motion. Comparisons are drawn between the proposed methodology and traditional approaches to develop aging bridge fragility curves. Results indicate considerable underestimations of system level fragility across different damage states using the traditional approach compared to the proposed realistic pitting model for chloride induced corrosion. Time-dependent predictive functions are provided to interpolate logistic regression coefficients for continuous seismic reliability evaluation along the service life with reasonable accuracy. - Highlights: • Realistic modeling of chloride induced corrosion deterioration in the form of pitting. • Time-evolving capacity distribution for aging bridge columns under chloride attacks. • Time-dependent seismic fragility estimation of highway bridges at component and system level. • Mathematical functions for continuous tracking of seismic fragility along service

  19. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide area electric power system with predictive capability. The dynamic distributed state estimation results are also stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based

  20. Reliability-based sensitivity of mechanical components with arbitrary distribution parameters

    International Nuclear Information System (INIS)

    Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling

    2010-01-01

    This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, reliability-based design theory, and the sensitivity analysis approach were employed directly to calculate the reliability-based sensitivity of mechanical components on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of the mechanical components can be accurately and quickly obtained using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretical basis for the reliability-based design of mechanical components

  1. Nash equilibria in quantum games with generalized two-parameter strategies

    International Nuclear Information System (INIS)

    Flitney, Adrian P.; Hollenberg, Lloyd C.L.

    2007-01-01

    In the Eisert protocol for 2x2 quantum games [J. Eisert, et al., Phys. Rev. Lett. 83 (1999) 3077], a number of authors have investigated the features arising from making the strategic space a two-parameter subset of single qubit unitary operators. We argue that the new Nash equilibria and the classical-quantum transitions that occur are simply an artifact of the particular strategy space chosen. By choosing a different, but equally plausible, two-parameter strategic space we show that different Nash equilibria with different classical-quantum transitions can arise. We generalize the two-parameter strategies and also consider these strategies in a multiplayer setting

  2. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  3. Improved control of distributed parameter systems using wireless sensor and actuator networks: An observer-based method

    International Nuclear Information System (INIS)

    Jiang Zheng-Xian; Cui Bao-Tong; Lou Xu-Yang; Zhuang Bo

    2017-01-01

    In this paper, the control problem of distributed parameter systems is investigated by using wireless sensor and actuator networks with the observer-based method. Firstly, a centralized observer which makes use of the measurement information provided by the fixed sensors is designed to estimate the distributed parameter systems. The mobile agents, each of which is affixed with a controller and an actuator, can provide the observer-based control for the target systems. By using Lyapunov stability arguments, the stability for the estimation error system and distributed parameter control system is proved, meanwhile a guidance scheme for each mobile actuator is provided to improve the control performance. A numerical example is finally used to demonstrate the effectiveness and the advantages of the proposed approaches. (paper)

  4. Ion acoustic solitons in a plasma with two-temperature kappa-distributed electrons

    International Nuclear Information System (INIS)

    Baluku, T. K.; Hellberg, M. A.

    2012-01-01

    Existence domains and characteristics of ion acoustic solitons are studied in a two-temperature electron plasma with both electron components being kappa-distributed, as found in Saturn's magnetosphere. As is the case for double-Boltzmann electrons, solitons of both polarities can exist over restricted ranges of fractional hot electron density ratio for this plasma model. Low κ values, which indicate increased suprathermal particles in the tail of the distribution, yield a smaller domain in the parameter space of hot density fraction and normalized soliton velocity (f, M), over which both soliton polarities are supported for a given plasma composition (the coexistence region). For some density ratios that support coexistence, solitons occur even at the lowest (critical) Mach number (i.e., at the acoustic speed), as found recently for a number of other plasma models. Like Maxwellians, low-κ distributions also support positive potential double layers over a narrow range of low fractional cool electron density (<10%).

  5. Ion acoustic solitons in a plasma with two-temperature kappa-distributed electrons

    Energy Technology Data Exchange (ETDEWEB)

    Baluku, T. K.; Hellberg, M. A. [School of Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa)

    2012-01-15

    Existence domains and characteristics of ion acoustic solitons are studied in a two-temperature electron plasma with both electron components being kappa-distributed, as found in Saturn's magnetosphere. As is the case for double-Boltzmann electrons, solitons of both polarities can exist over restricted ranges of fractional hot electron density ratio for this plasma model. Low κ values, which indicate increased suprathermal particles in the tail of the distribution, yield a smaller domain in the parameter space of hot density fraction and normalized soliton velocity (f, M), over which both soliton polarities are supported for a given plasma composition (the coexistence region). For some density ratios that support coexistence, solitons occur even at the lowest (critical) Mach number (i.e., at the acoustic speed), as found recently for a number of other plasma models. Like Maxwellians, low-κ distributions also support positive potential double layers over a narrow range of low fractional cool electron density (<10%).

  6. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    Science.gov (United States)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.

  7. Lognormal distribution of natural radionuclides in freshwater ecosystems and coal-ash repositories

    International Nuclear Information System (INIS)

    Drndarski, N.; Lavi, N.

    1997-01-01

    This study summarizes and analyses data for the natural radionuclides 40K, 226Ra and 232Th, measured by gamma spectrometry in water samples, sediments and coal-ash samples collected from regional freshwater ecosystems and nearby coal-ash repositories during the last decade, 1986-1996. The frequency plots of the natural radionuclide data, for which the hypothesis of regional-scale lognormality was accepted, exhibited single population groups, with the exception of the 226Ra and 232Th data for waters. The presence of break points in the frequency distribution plots indicated that the 226Ra and 232Th data for waters do not come from a single statistical population. Thereafter the hypothesis of lognormality was accepted for the separate population groups of 226Ra and 232Th in waters. (authors)
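    A simple way to screen for the break points mentioned above is to test the log-activities for normality, since a single lognormal population should pass while a mixture of populations should not. A sketch on synthetic data (not the study's measurements):

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative activity concentrations (Bq/kg); all values are made up.
    rng = np.random.default_rng(4)
    single = rng.lognormal(3.0, 0.5, 80)
    mixed = np.concatenate([rng.lognormal(3.0, 0.3, 60),
                            rng.lognormal(5.0, 0.3, 20)])

    for label, x in [("single population", single), ("mixed populations", mixed)]:
        # Shapiro-Wilk on log-activities: rejection hints at a break point,
        # i.e. more than one statistical population.
        w, p = stats.shapiro(np.log(x))
        print(f"{label}: Shapiro-Wilk p = {p:.3f}")
    ```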

  8. Unification of the Two-Parameter Equation of State and the Principle of Corresponding States

    DEFF Research Database (Denmark)

    Mollerup, Jørgen

    1998-01-01

    A two-parameter equation of state is a two-parameter corresponding states model. A two-parameter corresponding states model is composed of two scale factor correlations and a reference fluid equation of state. In a two-parameter equation of state the reference equation of state is the two-paramet...

  9. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  10. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available
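    Two of the distribution-fitting approaches surveyed here, maximum likelihood and the method of moments, can be sketched for a lognormal input parameter as follows; the data are synthetic, and matching moments on the log-data is an assumed convention:

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic stand-in for a performance assessment input parameter.
    rng = np.random.default_rng(5)
    data = rng.lognormal(mean=1.0, sigma=0.6, size=200)

    # (a) Maximum likelihood: fit a lognormal with the location pinned at zero.
    s_mle, _, scale_mle = stats.lognorm.fit(data, floc=0)

    # (b) Moment-style estimates: match the mean and std of the log-data.
    mu_mom = np.log(data).mean()
    sigma_mom = np.log(data).std(ddof=1)

    print(f"MLE:     mu = {np.log(scale_mle):.3f}, sigma = {s_mle:.3f}")
    print(f"Moments: mu = {mu_mom:.3f}, sigma = {sigma_mom:.3f}")
    ```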

  11. A two-compartment model of VEGF distribution in the mouse.

    Directory of Open Access Journals (Sweden)

    Phillip Yen

    Vascular endothelial growth factor (VEGF) is a key regulator of angiogenesis--the growth of new microvessels from existing microvasculature. Angiogenesis is a complex process involving numerous molecular species, and to better understand it, a systems biology approach is necessary. In vivo preclinical experiments in the area of angiogenesis are typically performed in mouse models; this includes drug development targeting VEGF. Thus, to quantitatively interpret such experimental results, a computational model of VEGF distribution in the mouse can be beneficial. In this paper, we present an in silico model of VEGF distribution in mice, determine model parameters from existing experimental data, conduct sensitivity analysis, and test the validity of the model. The multiscale model is comprised of two compartments: blood and tissue. The model accounts for interactions between two major VEGF isoforms (VEGF120 and VEGF164) and their endothelial cell receptors VEGFR-1, VEGFR-2, and co-receptor neuropilin-1. Neuropilin-1 is also expressed on the surface of parenchymal cells. The model includes transcapillary macromolecular permeability, lymphatic transport, and macromolecular plasma clearance. Simulations predict that the concentration of unbound VEGF in the tissue is approximately 50-fold greater than in the blood. These concentrations are highly dependent on the VEGF secretion rate. Parameter estimation was performed to fit the simulation results to available experimental data, and permitted the estimation of the VEGF secretion rate in healthy tissue, which is difficult to measure experimentally. The model can provide quantitative interpretation of preclinical animal data and may be used in conjunction with experimental studies in the development of pro- and anti-angiogenic agents. The model approximates the normal tissue as skeletal muscle and includes endothelial cells to represent the vasculature. As the VEGF system becomes better characterized in

  12. Dynamics of a neuron model in different two-dimensional parameter-spaces

    International Nuclear Information System (INIS)

    Rech, Paulo C.

    2011-01-01

    We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to this chaotic region, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades. - Research highlights: → We report parameter spaces obtained for the Hindmarsh-Rose neuron model. → Regardless of the combination of parameters, a typical scenario is preserved. → The scenario presents a comb-shaped chaotic region immersed in a periodic region. → Periodic regions near the chaotic region are in period-adding bifurcation cascades.

  13. Computational Fluid Dynamics-Population Balance Model Simulation of Effects of Cell Design and Operating Parameters on Gas-Liquid Two-Phase Flows and Bubble Distribution Characteristics in Aluminum Electrolysis Cells

    Science.gov (United States)

    Zhan, Shuiqing; Wang, Junfeng; Wang, Zhentao; Yang, Jianhong

    2018-02-01

    The effects of different cell design and operating parameters on the gas-liquid two-phase flows and bubble distribution characteristics under the anode bottom regions in aluminum electrolysis cells were analyzed using a three-dimensional computational fluid dynamics-population balance model. These parameters include the inter-anode channel width, anode-cathode distance (ACD), anode width and length, current density, and electrolyte depth. The simulation results show that the inter-anode channel width has no significant effect on the gas volume fraction, electrolyte velocity, and bubble size. With increasing ACD, these values decrease and more uniform bubbles are obtained. The anode width and length have different effects in different cell regions. With increasing current density, the gas volume fraction and electrolyte velocity increase, but the bubble size remains nearly the same. Increasing the electrolyte depth decreased the gas volume fraction and bubble size in particular areas, while the electrolyte velocity increased.

  14. Stochastic distribution of the required coefficient of friction for level walking--an in-depth study.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    This study investigated the stochastic distribution of the required coefficient of friction (RCOF) which is a critical element for estimating slip probability. Fifty participants walked under four walking conditions. The results of the Kolmogorov-Smirnov two-sample test indicate that 76% of the RCOF data showed a difference in distribution between both feet for the same participant under each walking condition; the data from both feet were kept separate. The results of the Kolmogorov-Smirnov goodness-of-fit test indicate that most of the distribution of the RCOF appears to have a good match with the normal (85.5%), log-normal (84.5%) and Weibull distributions (81.5%). However, approximately 7.75% of the cases did not have a match with any of these distributions. It is reasonable to use the normal distribution for representation of the RCOF distribution due to its simplicity and familiarity, but each foot had a different distribution from the other foot in 76% of cases. The stochastic distribution of the required coefficient of friction (RCOF) was investigated for use in a statistical model to improve the estimate of slip probability in risk assessment. The results indicate that 85.5% of the distribution of the RCOF appears to have a good match with the normal distribution.
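    A sketch of the two statistical tests named above, on invented RCOF samples: a two-sample Kolmogorov-Smirnov test between feet, then a goodness-of-fit test of each foot against a fitted normal (note that using estimated parameters in kstest is only approximate; the study's exact procedure is not reproduced here):

    ```python
    import numpy as np
    from scipy import stats

    # Invented RCOF samples for the two feet of one participant.
    rng = np.random.default_rng(6)
    left = rng.normal(0.18, 0.03, 120)
    right = rng.normal(0.20, 0.03, 120)

    # Two-sample K-S: do the two feet share one RCOF distribution?
    d, p = stats.ks_2samp(left, right)
    print(f"two-sample KS: D = {d:.3f}, p = {p:.4f}")

    # Goodness of fit of each foot to a normal with fitted parameters.
    for label, x in [("left", left), ("right", right)]:
        d, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
        print(f"{label}: KS D = {d:.3f}, p = {p:.4f}")
    ```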

  15. Generalised extreme value distributions provide a natural hypothesis for the shape of seed mass distributions.

    Directory of Open Access Journals (Sweden)

    Will Edwards

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and are generally positively skewed. Such data are usually log-transformed "for normality", but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well as or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise the disturbance events experienced in a location, to which individual species' life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm.
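    A minimal sketch of the comparison described above: fit both a GEV and a normal to (synthetic, skewed) log seed masses and compare goodness of fit. The data-generating choices here are assumptions for illustration only:

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic, positively skewed log10 seed masses (assumed generating model).
    rng = np.random.default_rng(7)
    log_mass = stats.skewnorm.rvs(4, loc=-1.0, scale=1.2, size=300,
                                  random_state=rng)

    gev_params = stats.genextreme.fit(log_mass)
    norm_params = stats.norm.fit(log_mass)

    for name, params in [("genextreme", gev_params), ("norm", norm_params)]:
        d, p = stats.kstest(log_mass, name, args=params)
        print(f"{name:10s} KS D = {d:.3f}, p = {p:.3f}")
    ```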

  16. Measurement of void fraction distribution in two-phase flow by impedance CT with neural network

    International Nuclear Information System (INIS)

    Hayashi, Hideaki; Sumida, Isao; Sakai, Sinji; Wakai, Kazunori

    1996-01-01

    This paper describes a new method for the measurement of void distribution using impedance CT with a hierarchical neural network. The present method consists of four processes. First, output electric currents are calculated by simulation for various distributions of void fraction; the relationship between the distribution of void fraction and the electric currents is called the 'teaching data'. Second, the neural network learns the teaching data by the back-propagation method. Third, output electric currents are measured for an actual two-phase flow. Finally, the distribution of void fraction is calculated by the trained neural network from the measured electric currents. In this paper, measurement and learning parameters are adjusted, and experimental results obtained using the impedance CT method are compared with data obtained by the impedance probe method. The results show that our method is effective for the measurement of void fraction distribution. (author)

  17. An analytical method for evaluating the uncertainty in personal air sampler determinations of plutonium intakes

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1985-10-01

    The parameters defining aerosol particle size and activity distributions are reviewed. When considering statistical variation in sampled activity, it is convenient to express the activity in terms of the number of median units of particle activity. The aerosol size distribution is characterised by the activity median aerodynamic diameter and geometric standard deviation. Numerical values are given for the median and arithmetic mean of these activity distributions, for the range of plutonium aerosols encountered in air at the workplace. The methods used by Meggitt (1979) to evaluate (i) the probability density distribution of activity p(m/w) dm, sampled from a fixed concentration w, and (ii) the posterior probability density of concentration p(w/m) dw following a single measurement m, are also reviewed. These methods involve approximating the k-sum distribution, formed by summing the activity of k random particles from a log-normal population, by a shifted log-normal function. Meggitt's approximation of the k-sum distribution was found to be inadequate. An improved approximation is given, based on a transformed normal distribution. (author)
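    The k-sum distribution discussed above (total activity of k particles drawn from a lognormal population, expressed in median units of particle activity) is easy to explore by Monte Carlo; the geometric standard deviation and k below are assumed values:

    ```python
    import numpy as np

    # Monte Carlo k-sum: total activity of k particles, each drawn from a
    # lognormal population with median activity 1 (i.e. in median units).
    rng = np.random.default_rng(8)
    gsd = 2.5                 # assumed geometric standard deviation of the aerosol
    k, trials = 10, 100_000
    sigma = np.log(gsd)

    activity = rng.lognormal(mean=0.0, sigma=sigma, size=(trials, k)).sum(axis=1)

    print(f"k-sum median = {np.median(activity):.2f} median units")
    print(f"k-sum mean   = {activity.mean():.2f} "
          f"(theory: k*exp(sigma^2/2) = {k * np.exp(sigma**2 / 2):.2f})")
    ```

    The skew of the resulting k-sum histogram is what the shifted-lognormal (or transformed-normal) approximations in the text are designed to capture.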

  18. Experimental study on characteristics of interfacial parameter distribution for upward bubbly flow in inclined tube

    International Nuclear Information System (INIS)

    Xing Dianchuan; Yan Changqi; Sun Licheng; Liu Jingyu

    2013-01-01

    An experimental study on the characteristics of the interfacial parameter distribution for air-water bubbly flow in an inclined circular tube was performed using the double-sensor probe method. Parameters including the radial distributions of local void fraction, bubble passing frequency, interfacial area concentration and bubble equivalent diameter were measured using the probe. The inner diameter of the test section is 50 mm, the liquid superficial velocity is 0.144 m/s, and the gas superficial velocity ranges from 0 to 0.054 m/s. The results show that bubbles clearly move toward the upper wall and congregate there. The local interfacial area concentration, bubble passing frequency and void fraction have similar radial distribution profiles. Unlike in the vertical condition, over a cross-sectional area of the test section the peak value increases near the upper side, while it decreases or even disappears near the underside. The local parameters increase as the radial position changes from the lower to the upper location, and the slope becomes steeper as the inclination angle increases. The equivalent bubble diameter does not vary with radial position, superficial gas velocity or inclination angle, and bubble aggregation and break-up hardly occur. The mechanism of the effects of inclination on the local parameter distribution for bubbly flow is explained by analyzing the transverse forces governing the bubble motion. (authors)

  19. Measurement of two-particle semi-inclusive rapidity distributions at the CERN ISR

    CERN Document Server

    Amendolia, S R; Bosisio, L; Braccini, Pier Luigi; Bradaschia, C; Castaldi, R; Cavasinni, V; Cerri, C; Del Prete, T; Finocchiaro, G; Foà, L; Giromini, P; Grannis, P; Green, D; Jöstlein, H; Kephart, R; Laurelli, P; Menzione, A; Ristori, L; Sanguinetti, G; Thun, R; Valdata, M

    1976-01-01

    Data are presented on the semi-inclusive distributions of rapidities of secondary particles produced in pp collisions at very high energies. The experiment was performed at the CERN Intersecting Storage Rings (ISR). The data given, at centre-of-mass energies of √s = 23 and 62 GeV, include the single-particle distributions and two-particle correlations. The semi-inclusive correlations show pronounced short-range correlation effects which have a width considerably narrower than in the case of inclusive correlations. It is shown that these short-range effects can be understood empirically in terms of three parameters whose energy and multiplicity dependence are studied. The data support the picture of multiparticle production in which clusters of small multiplicity and small dispersion are emitted with subsequent decay into hadrons. (32 refs).

  20. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps and the modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution; the Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most computationally efficient. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 for full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must...

  1. Evaluation of two typical distributed energy systems

    Science.gov (United States)

    Han, Miaomiao; Tan, Xiu

    2018-03-01

    For two typical natural gas distributed energy systems, one driven by a gas engine and one by a gas turbine, the first and second laws of thermodynamics are used in this paper to evaluate the systems in terms of both "quantity" and "quality". The calculation results show that the internal combustion engine driven energy station has a higher primary energy utilization rate but a lower exergy efficiency, while the gas turbine driven energy station has a higher exergy efficiency but a relatively low primary energy utilization rate. When configuring a system, the applicable natural gas distributed energy technology plan and unit configuration plan should be determined according to the actual load factors of the project and practical factors such as its location, background and environmental requirements. As a "quality" measure, a waste-heat-utilization energy efficiency index is proposed.

  2. Estimating the hydraulic conductivity of two-dimensional fracture networks

    Science.gov (United States)

    Leung, C. T.; Zimmerman, R. W.

    2010-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures will provide the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centres and orientations of the fractures are uniformly distributed, whereas their lengths follow either a lognormal distribution or a power law distribution. We have considered the case where the fractures in the network each have the same aperture, as well as the case where the aperture of each fracture is directly proportional to the fracture length. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this conductivity using a simple estimation method that does not require extensive computation. For our calculations, fracture networks are represented as networks composed of conducting segments (bonds) between nodes. Each bond represents the region of a single fracture between two adjacent intersections with other fractures. We assume that the bonds are arranged on a kagome lattice, with some fraction of the bonds randomly missing. The conductance of each bond is then replaced with some effective conductance, Ceff, which we take to be the arithmetic mean of the individual conductances, averaged over each bond, rather than over each fracture. This is in contrast to the usual approximation used in effective medium theories, wherein the geometric mean is used. Our
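    The estimation step described above reduces to replacing each bond conductance by an effective value; the sketch below contrasts the arithmetic mean used here with the geometric mean of effective-medium theories, for an assumed lognormal spread of bond conductances:

    ```python
    import numpy as np

    # Assumed lognormal spread of individual bond conductances (illustrative).
    rng = np.random.default_rng(9)
    c = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

    c_arith = c.mean()                    # effective conductance used in this work
    c_geom = np.exp(np.log(c).mean())     # usual effective-medium choice
    print(f"arithmetic mean = {c_arith:.3f}, geometric mean = {c_geom:.3f}")
    ```

    For a lognormal spread the arithmetic mean exceeds the geometric mean by a factor exp(sigma^2/2), which is why the choice of averaging rule matters for the estimated network conductivity.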

  3. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    Science.gov (United States)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implications for labor market outcomes are considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely
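    The exponential + log-normal mixture claim can be illustrated with a generic maximum-likelihood mixture fit (not the dissertation's log-linear or calibration procedure); all incomes below are synthetic and the two component forms are the assumption being tested:

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    # Synthetic "earned income" sample with exponential and lognormal components;
    # all values are made up for illustration.
    rng = np.random.default_rng(10)
    income = np.concatenate([rng.exponential(30_000, 7000),
                             rng.lognormal(11.5, 0.5, 3000)])

    def neg_loglik(theta):
        # theta = (mixture weight, exponential scale, lognormal mu, sigma)
        w, scale, mu, sigma = theta
        pdf = (w * stats.expon.pdf(income, scale=scale)
               + (1.0 - w) * stats.lognorm.pdf(income, sigma, scale=np.exp(mu)))
        return -np.sum(np.log(pdf + 1e-300))

    res = minimize(neg_loglik, x0=[0.5, 25_000.0, 11.0, 0.7],
                   bounds=[(0.01, 0.99), (1e3, 1e6), (8.0, 14.0), (0.05, 3.0)])
    w, scale, mu, sigma = res.x
    print(f"exponential weight = {w:.2f}, scale = {scale:,.0f}")
    print(f"lognormal mu = {mu:.2f}, sigma = {sigma:.2f}")
    ```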

  4. Modelling the Skinner Thesis : Consequences of a Lognormal or a Bimodal Resource Base Distribution

    NARCIS (Netherlands)

    Auping, W.L.

    2014-01-01

    The copper case is often used as an example in resource depletion studies. Despite these studies, several profound uncertainties remain in the system. One of these uncertainties is the distribution of copper grades in the lithosphere. The Skinner thesis promotes the idea that copper grades may be

  5. GROWTH RATE DISTRIBUTION OF BORAX SINGLE CRYSTALS ON THE (001) FACE UNDER VARIOUS FLOW RATES

    Directory of Open Access Journals (Sweden)

    Suharso Suharso

    2010-06-01

    Full Text Available The growth rates of borax single crystals from aqueous solutions at various flow rates in the (001) direction were measured using the in situ cell method. From the growth rate data obtained, the growth rate distribution of borax crystals was investigated using Minitab and SPSS software at a relative supersaturation of 0.807 and a temperature of 25 °C. The results show that the normal, gamma, and log-normal distributions each give a reasonably good fit to the growth rate distribution. However, there is no correlation between the growth rate distribution and the flow rate of the solution. Keywords: growth rate dispersion (GRD), borax, flow rate

  6. Charged-particle thermonuclear reaction rates: II. Tables and graphs of reaction rates and probability density functions

    International Nuclear Information System (INIS)

    Iliadis, C.; Longland, R.; Champagne, A.E.; Coc, A.; Fitzgerald, R.

    2010-01-01

    Numerical values of charged-particle thermonuclear reaction rates for nuclei in the A=14 to 40 region are tabulated. The results are obtained using a method, based on Monte Carlo techniques, that has been described in the preceding paper of this issue (Paper I). We present a low rate, median rate and high rate which correspond to the 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution. The meaning of these quantities is in general different from the commonly reported, but statistically meaningless expressions, 'lower limit', 'nominal value' and 'upper limit' of the total reaction rate. In addition, we approximate the Monte Carlo probability density function of the total reaction rate by a lognormal distribution and tabulate the lognormal parameters μ and σ at each temperature. We also provide a quantitative measure (Anderson-Darling test statistic) for the reliability of the lognormal approximation. The user can implement the approximate lognormal reaction rate probability density functions directly in a stellar model code for studies of stellar energy generation and nucleosynthesis. For each reaction, the Monte Carlo reaction rate probability density functions, together with their lognormal approximations, are displayed graphically for selected temperatures in order to provide a visual impression. Our new reaction rates are appropriate for bare nuclei in the laboratory. The nuclear physics input used to derive our reaction rates is presented in the subsequent paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
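
    A minimal sketch of how a user might recover the low, median and high rates from the tabulated lognormal parameters; for a lognormal distribution the 0.16, 0.50 and 0.84 quantiles are, to the precision relevant here, exp(mu - sigma), exp(mu) and exp(mu + sigma). The (mu, sigma) values below are illustrative, not taken from the tables.

```python
import numpy as np
from scipy import stats

def rate_quantiles(mu, sigma):
    """Low, median and high reaction rates from lognormal parameters:
    the 0.16, 0.50 and 0.84 quantiles of the rate distribution."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return dist.ppf([0.16, 0.50, 0.84])

# Illustrative (not tabulated) parameter values:
print(rate_quantiles(mu=-35.2, sigma=0.25))
```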

  7. A deterministic inventory model for deteriorating items with selling price dependent demand and three-parameter Weibull distributed deterioration

    Directory of Open Access Journals (Sweden)

    Asoke Kumar Bhunia

    2014-06-01

    Full Text Available In this paper, an attempt is made to develop two inventory models for deteriorating items with variable demand dependent on the selling price and the frequency of advertisement of the items. In the first model, shortages are not allowed, whereas in the second they are allowed and partially backlogged with a variable rate dependent on the duration of waiting time up to the arrival of the next lot. In both models, the deterioration rate follows a three-parameter Weibull distribution, and the transportation cost is considered explicitly for replenishing the order quantity. This cost depends on the lot size as well as on the distance from the source to the destination. The corresponding models have been formulated and solved. Two numerical examples are considered to illustrate the results, and the significant features of the results are discussed. Finally, based on these examples, the effects of different parameters on the initial stock level, the shortage level (in the second model only) and the cycle length, along with the optimal profit, have been studied by sensitivity analysis, varying one parameter at a time while keeping the others fixed.
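
    For reference, the deterioration rate implied by a three-parameter Weibull lifetime is its hazard function, theta(t) = alpha * beta * (t - gamma)**(beta - 1) for t > gamma, where gamma is the onset time before which no deterioration occurs. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def deterioration_rate(t, alpha, beta, gamma):
    """Hazard of a three-parameter Weibull: zero before the onset time gamma,
    alpha * beta * (t - gamma)**(beta - 1) afterwards."""
    t = np.asarray(t, dtype=float)
    rate = np.zeros_like(t)
    mask = t > gamma
    rate[mask] = alpha * beta * (t[mask] - gamma) ** (beta - 1.0)
    return rate

# Illustrative parameters: deterioration starts after gamma = 2 time units.
print(deterioration_rate([1.0, 3.0, 6.0], alpha=0.01, beta=1.5, gamma=2.0))
```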

  8. Chaos-assisted tunneling in the presence of Anderson localization.

    Science.gov (United States)

    Doggen, Elmer V H; Georgeot, Bertrand; Lemarié, Gabriel

    2017-10-01

    Tunneling between two classically disconnected regular regions can be strongly affected by the presence of a chaotic sea in between. This phenomenon, known as chaos-assisted tunneling, gives rise to large fluctuations of the tunneling rate. Here we study chaos-assisted tunneling in the presence of Anderson localization effects in the chaotic sea. Our results show that the standard tunneling rate distribution is strongly modified by localization, going from the Cauchy distribution in the ergodic regime to a log-normal distribution in the strongly localized case, for both a deterministic and a disordered model. We develop a single-parameter scaling description which accurately describes the numerical data. Several possible experimental implementations using cold atoms, photonic lattices, or microwave billiards are discussed.

  9. New two- and three-parameter solutions of the MPST equation

    International Nuclear Information System (INIS)

    Krori, K.D.; Chaudhury, T.; Bhattacharjee, R.

    1981-01-01

    Some new two- and three-parameter solutions of the MPST (Misra et al. Phys. Rev.; D7:1587 (1973)) equation are presented. All the three-parameter solutions are physical in the sense of asymptotic flatness. The simplest member of the three-parameter series of solutions is identical with a three-parameter solution of the static Einstein-Maxwell equations recently discovered by Bonnor (J. Phys. A.; 12:853 (1979)). (author)

  10. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution, not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size, with few smaller and larger areas, and that can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit the area distribution of landslide sources along the Atlantic continental margin well, if we assume that the slope has been subjected to earthquakes of magnitude ≥ 6.3. Regions of submarine landslides whose area distributions obey inverse power laws may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area

  11. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe-floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip can be estimated with a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine whether the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions matched the normal and log-normal distributions slightly better than the Weibull, in only three of the 15 cases with statistical significance. The results are far more complex than what had previously been published, and different scenarios could emerge. Since the ACOF is compared with the RCOF to estimate slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons, based on their skewness and kurtosis values, without statistical significance. No representation could be found in three of the 15 cases.
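
    A minimal sketch of the fitting-and-testing step with SciPy, using synthetic stand-in data for the 100 measurements; note that estimating parameters from the same sample biases the K-S p-values optimistically, and the paper's exact procedure may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in for 100 friction measurements on one surface/condition pair.
acof = rng.lognormal(mean=np.log(0.4), sigma=0.15, size=100)

# Fit each candidate distribution, then test the fit with Kolmogorov-Smirnov.
for name, dist in [("normal", stats.norm), ("lognormal", stats.lognorm),
                   ("weibull", stats.weibull_min)]:
    params = dist.fit(acof)
    d_stat, p_value = stats.kstest(acof, dist.name, args=params)
    print(f"{name:>9}: D = {d_stat:.3f}, p = {p_value:.3f}")
```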

  12. Pressure distribution over tube surfaces of tube bundle subjected to two phase cross flow

    International Nuclear Information System (INIS)

    Sim, Woo Gun

    2013-01-01

    Two phase vapor liquid flows exist in many shell and tube heat exchangers such as condensers, evaporators and nuclear steam generators. To understand the fluid dynamic forces acting on a structure subjected to a two phase flow, it is essential to obtain detailed information about the characteristics of the two phase flow. The characteristics of a two phase flow and the flow parameters are introduced, and then an experiment is described that evaluates the pressure loss in the tube bundles and the fluid dynamic force acting on the cylinder owing to the pressure distribution. The two phase flow was premixed at the entrance of the test section, and the experiments were undertaken using a normal triangular array of cylinders subjected to a two phase cross flow. The pressure loss along the flow direction in the tube bundles was measured to calculate the two phase friction multiplier, and the multiplier was compared with the analytical value. Furthermore, the circumferential distributions of the pressure on the cylinders were measured. Based on these distributions and the fundamental theory of two phase flow, the effects of the void fraction and mass flux per unit area on the pressure coefficient and the drag coefficient were evaluated. The drag coefficient was calculated by integrating the measured pressure on the tube by a numerical method. It was found that for low mass fluxes, the measured two phase friction multipliers agree well with the analytical results, and good qualitative agreement with existing experimental results is shown for the effect of the void fraction on the drag coefficients calculated from the measured pressure distributions.

  13. Retrieval of cloud droplet size distribution parameters from polarized reflectance measurements

    Directory of Open Access Journals (Sweden)

    M. Alexandrov

    2011-09-01

    Full Text Available We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which is due to be launched as part of the NASA Glory Project. This instrument measures both polarized and total reflectances in 9 spectral channels with center wavelengths ranging from 410 to 2250 nm. For cloud droplet size retrievals we utilize the polarized reflectances in the scattering-angle range between 140 and 170 degrees, where they exhibit the rainbow. The shape of the rainbow is determined mainly by the single-scattering properties of the cloud particles, which simplifies the inversions and reduces retrieval uncertainties. The retrieval algorithm was tested using realistically simulated cloud radiation fields. Our retrievals of cloud droplet sizes from actual RSP measurements made during two recent field campaigns were compared with the correlative in situ observations.

  14. Power Law Distributions in Two Community Currencies

    Science.gov (United States)

    Kichiji, N.; Nishibe, M.

    2007-07-01

    The purpose of this paper is to highlight certain newly discovered social phenomena that accord with Zipf's law, in addition to the famous natural and social phenomena (word frequencies, earthquake magnitudes, city sizes, incomes, etc.) that are already known to follow it. These phenomena have recently been discovered within the transaction amount (payments or receipts) distributions of two different Community Currencies (CC) that had been initiated as social experiments. One is a local CC circulating in a specific geographical area, such as a town. The other is a virtual CC used among members who belong to a certain community of interest (COI) on the Internet. We conducted two empirical studies to estimate the economic vitalization effects they had on their respective local economies. We found that the transaction amounts (payments and receipts) of the two CCs were distributed according to a power law with a rank exponent of unity. In addition, we found differences between the two CCs with regard to the shapes of their distributions over the low-transaction range. This may originate from differences in the methods of issuing the CCs or in the magnitudes of the minimum-value unit; however, it calls for further investigation.

  15. Practical extensions to NHPP application in repairable system reliability analysis

    International Nuclear Information System (INIS)

    Krivtsov, Vasiliy V.

    2007-01-01

    An overwhelming majority of publications on the Nonhomogeneous Poisson Process (NHPP) considers just two monotonic forms of the NHPP's rate of occurrence of failures (ROCOF): the log-linear model and the power law model. In this paper, we propose to capitalize on the fact that the NHPP's ROCOF formally coincides with the hazard function of the underlying lifetime distribution. Therefore, the variety of parametric forms for the hazard functions of traditional lifetime distributions (lognormal, Gumbel, etc.) can be used as models for the ROCOF of the respective NHPPs. Moreover, the hazard function of a mixture of underlying distributions can be used to model a non-monotonic ROCOF. Parameter estimation of such ROCOF models reduces to the estimation of the cumulative hazard function of the underlying lifetime distribution. We use real-world automotive data to illustrate the point.
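
    A minimal sketch of the proposed construction for the lognormal case: the ROCOF is taken as the lognormal hazard, and the expected number of failures on (0, T] is then the corresponding cumulative hazard H(T) = -log S(T). Parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def rocof_lognormal(t, mu, sigma):
    """ROCOF modelled as the hazard of a lognormal lifetime distribution:
    h(t) = f(t) / (1 - F(t))."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return dist.pdf(t) / dist.sf(t)

# Expected number of failures on (0, T] is the cumulative hazard -log S(T).
T, mu, sigma = 5.0, 1.0, 0.6
print(rocof_lognormal(np.array([1.0, 2.0, 5.0]), mu, sigma))
print(-stats.lognorm(s=sigma, scale=np.exp(mu)).logsf(T))
```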

  16. Two-group Current-equivalent Parameters for Control Rod Cells. Autocode Programme CRCC

    Energy Technology Data Exchange (ETDEWEB)

    Norinder, O; Nyman, K

    1962-06-15

    In two-group neutron diffusion calculations it is usually necessary to describe the influence of control rods by equivalent homogeneous two-group parameters in the regions around the control rods. The problem is solved for a control rod in a medium characterized by two-group parameters. The property of fast and thermal neutron current equivalence is selected to obtain equivalent two-group parameters for a homogeneous cell with the same radius as the control rod cell. For the parameters determined, one obtains the same fast and thermal neutron current into the rod cell and the equivalent cell, independent of the fast and thermal flux amplitudes on the cell boundaries. The equivalent parameters are obtained as the solution of a system of transcendental equations. A Ferranti Mercury Autocode programme performing the solution is described. Calculated equivalent parameters for control rods in a heavy water lattice are given for some representative cases.

  17. Distributed dual-parameter optical fiber sensor based on cascaded microfiber Fabry-Pérot interferometers

    Science.gov (United States)

    Xiang, Yang; Luo, Yiyang; Zhang, Wei; Liu, Deming; Sun, Qizhen

    2017-04-01

    We propose and demonstrate a distributed fiber sensor based on cascaded microfiber Fabry-Pérot interferometers (MFPI) for simultaneous measurement of the surrounding refractive index (SRI) and temperature. By employing an MFPI, which is fabricated by taper-drawing the center of a uniform fiber Bragg grating (FBG) on standard fiber into a section of microfiber, the dual parameters of SRI and temperature can be detected by demodulating the reflection spectrum of the MFPI. Further, wavelength-division multiplexing (WDM) is applied to realize a distributed dual-parameter fiber sensor using cascaded MFPIs with different Bragg wavelengths. A prototype sensor system with 5 cascaded MFPIs is constructed to experimentally demonstrate the sensing performance.

  18. The values of the parameters of some multilayer distributed RC null networks

    Science.gov (United States)

    Huelsman, L. P.; Raghunath, S.

    1974-01-01

    In this correspondence, the values of the parameters of some multilayer distributed RC notch networks are determined, and the usually accepted values are shown to be in error. The magnitude of the error is illustrated by graphs of the frequency response of the networks.

  19. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Full Text Available Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested and the Weibull distribution is found to be the best-fit distribution. To verify this result the behaviors of numerical slider-block and sand-pile models are investigated and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.
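
    The quoted aperiodicities follow directly from the Weibull exponent, since the coefficient of variation of a Weibull distribution depends only on beta, not on the scale. A short check:

```python
from math import gamma, sqrt

def weibull_cv(beta):
    """Coefficient of variation (aperiodicity) of a Weibull distribution."""
    g1 = gamma(1.0 + 1.0 / beta)
    g2 = gamma(1.0 + 2.0 / beta)
    return sqrt(g2 / g1**2 - 1.0)

for beta in (1.6, 2.2):
    print(f"beta = {beta}: CV = {weibull_cv(beta):.2f}")
# beta = 1.6 gives CV ~ 0.64 and beta = 2.2 gives CV ~ 0.48, consistent
# with the 0.47-0.64 range quoted above.
```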

  20. The aerosol distribution in Europe derived with the Community Multiscale Air Quality (CMAQ model: comparison to near surface in situ and sunphotometer measurements

    Directory of Open Access Journals (Sweden)

    V. Matthias

    2008-09-01

    Full Text Available The aerosol distribution in Europe was simulated with the Community Multiscale Air Quality (CMAQ) model system version 4.5 for the years 2000 and 2001. The results were compared with daily averages of PM10 measurements taken in the framework of EMEP and with aerosol optical depth (AOD) values measured within AERONET. The modelled total aerosol mass is typically about 30–60% lower than the corresponding measurements. However, a comparison of the chemical composition of the aerosol revealed considerably better agreement between the modelled and the measured aerosol components for ammonium, nitrate and sulfate, which are on average only 15–20% underestimated. Slightly worse agreement was found for sea salt, which was only available at two sites. The largest discrepancies result from the aerosol mass which was not chemically specified by the measurements. The agreement between measurements and model is better in winter than in summer. The modelled organic aerosol mass is higher in summer than in winter, but it is significantly underestimated by the model. This could be one of the main reasons for the discrepancies between measurements and model results. The other is that primary coarse particles are underestimated in the emissions. The probability distribution function of the PM10 measurements follows a log-normal distribution at most sites. The model is only able to reproduce this distribution function at non-coastal low-altitude stations. The AOD derived from the model results is 20–70% lower than the values observed within AERONET. This is mainly attributed to the missing aerosol mass in the model. The day-to-day variability of the AOD and the log-normal distribution functions are quite well reproduced by the model. The seasonality, on the other hand, is underestimated by the model, with better agreement achieved in winter.

  1. A two dimensional approach for temperature distribution in reactor lower head during severe accident

    International Nuclear Information System (INIS)

    Cao, Zhen; Liu, Xiaojing; Cheng, Xu

    2015-01-01

    Highlights: • A two-dimensional module is developed to analyze the integrity of the lower head. • A verification step has been performed to evaluate the feasibility of the new module. • The new module is applied to simulate a large-scale advanced PWR. • The importance of the 2-D approach is clearly quantified. • The major parameters affecting the vessel temperature distribution are identified. - Abstract: In order to evaluate the safety margin during a postulated severe accident, a module named ASAP-2D (Accident Simulation on Pressure vessel-2 Dimensional), which can be implemented into severe accident simulation codes (such as ATHLET-CD), has been developed at Shanghai Jiao Tong University. The transient heat conduction equation is solved implicitly in two-dimensional spherical coordinates. Together with the solid vessel thickness, the heat flux distribution and the heat transfer coefficient at the outer vessel surface are obtained. The heat transfer regime after the critical heat flux has been exceeded (POST-CHF regime) can be simulated by the code, and the transition behavior of the boiling crisis (from spatial and temporal points of view) can be predicted. The module is verified against a one-dimensional analytical solution with uniform heat flux distribution, and afterwards it is applied to the benchmark described in NUREG/CR-6849. The benchmark calculation indicates that the maximum heat flux at the outer surface of the RPV could be around 20% lower than that at the inner surface due to two-dimensional heat conduction. A preliminary analysis is then performed on the integrity of the reactor vessel, for which the geometric parameters and boundary conditions are derived from a large-scale advanced pressurized water reactor. Results indicate that the heat flux remains lower than the critical heat flux. Sensitivity analysis indicates that the outer heat flux distribution is more sensitive to the input heat flux distribution and the transition boiling correlation than to the mass flow rate in the external reactor vessel cooling (ERVC) channel.

  2. Data-Driven H∞ Control for Nonlinear Distributed Parameter Systems.

    Science.gov (United States)

    Luo, Biao; Huang, Tingwen; Wu, Huai-Ning; Yang, Xiong

    2015-11-01

    The data-driven H∞ control problem of nonlinear distributed parameter systems is considered in this paper. An off-policy learning method is developed to learn the H∞ control policy from real system data rather than from a mathematical model. First, Karhunen-Loève decomposition is used to compute the empirical eigenfunctions, which are then employed to derive a reduced-order model (ROM) of the slow subsystem based on singular perturbation theory. The H∞ control problem is reformulated on the basis of the ROM, and can in theory be transformed into solving the Hamilton-Jacobi-Isaacs (HJI) equation. To learn the solution of the HJI equation from real system data, a data-driven off-policy learning approach is proposed based on the simultaneous policy update algorithm, and its convergence is proved. For implementation purposes, a neural network (NN)-based action-critic structure is developed, where a critic NN and two action NNs are employed to approximate the value function, control policy, and disturbance policy, respectively. Subsequently, a least-squares NN weight-tuning rule is derived with the method of weighted residuals. Finally, the developed data-driven off-policy learning approach is applied to a nonlinear diffusion-reaction process, and the obtained results demonstrate its effectiveness.

  3. Where Gibrat meets Zipf: Scale and scope of French firms

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2017-09-01

    The proper characterization of the size distribution and growth of firms represents an important issue in economics and business. We use the Maximum Entropy approach to assess the plausibility of the assumption that firm size follows Lognormal or Pareto distributions, which underlies most recent works on the subject. A comprehensive dataset covering the universe of French firms allows us to draw two major conclusions. First, the Pareto hypothesis for the whole distribution should be rejected. Second, by discriminating across firms based on the number of products sold and markets served, we find that, within the class of multi-product companies active in multiple markets, the distribution converges to a Zipf's law. Conversely, the Lognormal distribution is a good benchmark for small single-product firms. The size distribution of firms largely depends on firms' diversification patterns.

  4. Influence of the Determination Methods of K and C Parameters on the Ability of Weibull Distribution to Suitably Estimate Wind Potential and Electric Energy

    Directory of Open Access Journals (Sweden)

    Ruben M. Mouangue

    2014-05-01

    Full Text Available The modeling of the wind speed distribution is of great importance for the assessment of wind energy potential and the performance of wind energy conversion systems. In this paper, the choice between two methods for determining the Weibull parameters shows their influence on the performance of the Weibull distribution. Because of the significant occurrence of calm winds at the site of Ngaoundere airport, we characterize the wind potential using the Weibull distribution with parameters determined by the modified maximum likelihood method. This approach is compared to the Weibull distribution with parameters determined by the standard maximum likelihood method and to the hybrid distribution, which is recommended for wind potential assessment of sites having a nonzero probability of calm. Using data provided by the ASECNA Weather Service (Agency for the Safety of Air Navigation in Africa and Madagascar), we evaluate the goodness of fit of the various fitted distributions to the wind speed data using Q-Q plots, the Pearson correlation coefficient, the mean wind speed, the mean square error, the energy density and its relative error. The results show that the accuracy of the Weibull distribution with parameters determined by the modified maximum likelihood method is higher than that of the others. This approach is then used to estimate the monthly and annual energy production of the Ngaoundere airport site. The largest energy contribution is made in March, with 255.7 MWh. The results also show that a wind turbine generator installed on this particular site could not work for at least half of the time because of the high frequency of calms. For this kind of site, the modified maximum likelihood method proposed by Seguro and Lambert in 2000 is one of the best methods for determining the Weibull parameters.
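
    For orientation, the sketch below implements the plain maximum likelihood fixed-point iteration for the Weibull shape k and scale c on a synthetic sample. The modified method of Seguro and Lambert applies essentially the same iteration to binned wind-frequency data restricted to nonzero speeds; that variant is not reproduced here.

```python
import numpy as np

def weibull_mle(v, tol=1e-6, max_iter=100):
    """Plain MLE of the Weibull shape k and scale c from wind speeds v,
    via the standard fixed-point iteration for k."""
    v = np.asarray(v, dtype=float)
    v = v[v > 0]          # discard calms for the plain MLE
    ln_v = np.log(v)
    k = 2.0               # common starting guess
    for _ in range(max_iter):
        vk = v ** k
        k_new = 1.0 / (np.sum(vk * ln_v) / np.sum(vk) - ln_v.mean())
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    c = np.mean(v ** k) ** (1.0 / k)
    return k, c

rng = np.random.default_rng(3)
sample = rng.weibull(1.8, size=2000) * 4.5  # synthetic site with k=1.8, c=4.5
print(weibull_mle(sample))                  # should recover roughly (1.8, 4.5)
```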

  5. Population dose due to natural radiation in Hong Kong

    International Nuclear Information System (INIS)

    Tso, M.Y.W.; Leung, J.K.C.

    2000-01-01

    In densely populated cities such as Hong Kong, where people live and work in high-rise buildings that are all built with concrete, the indoor gamma dose rate and indoor radon concentration are not wide ranging. Indoor gamma dose rates (including cosmic rays) follow a normal distribution with an arithmetic mean of 0.22 ± 0.04 µGy h⁻¹, whereas indoor radon concentrations follow a log-normal distribution with geometric means of 48 ± 1 Bq m⁻³ and 90 ± 2 Bq m⁻³ for the two main categories of buildings: residential and non-residential. Since different occupations result in different occupancies in different categories of buildings, the annual total dose [indoor and outdoor radon effective dose + indoor and outdoor gamma absorbed dose (including cosmic rays)] to the population in Hong Kong was estimated based on the number of people in each occupation; the occupancy of each occupation; the indoor radon concentration distribution and indoor gamma dose rate distribution for each category of buildings; the outdoor radon concentration and gamma dose rate; and the indoor and outdoor cosmic ray dose rates. The results show that the annual doses for every occupation follow a log-normal distribution. This is expected, since the total dose is dominated by the radon effective dose, which has a log-normal distribution. The annual dose to the population of Hong Kong is characterized by a log-normal distribution with a geometric mean of 2.4 mSv and a geometric standard deviation of 1.3.
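
    Given the quoted geometric mean and geometric standard deviation, percentiles of the annual dose follow directly from the log-normal parametrization, as in this short sketch:

```python
import numpy as np
from scipy import stats

# Annual dose characterized above: geometric mean 2.4 mSv, GSD 1.3.
gm, gsd = 2.4, 1.3
dose = stats.lognorm(s=np.log(gsd), scale=gm)

print(dose.median())   # equals the geometric mean, 2.4 mSv
print(dose.mean())     # arithmetic mean = gm * exp(0.5 * log(gsd)**2)
print(dose.ppf(0.95))  # 95th percentile of the annual dose, in mSv
```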

  6. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except at the lowest end of the symptom score. Furthermore, the individual distributions of the 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and the 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern on a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern, with different distributions on either side of a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern on a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.

  7. Methods for obtaining distributions of uranium occurrence from estimates of geologic features

    International Nuclear Information System (INIS)

    Ford, C.E.; McLaren, R.A.

    1980-04-01

    The problem addressed in this paper is the determination of a quantitative estimate of a resource from estimates of fundamental variables which describe the resource. Due to uncertainty about the estimates, these basic variables are stochastic. The evaluation of random equations involving these variables is the core of the analysis process. The basic variables are originally described in terms of a low and a high percentile (the 5th and 95th, for example) and a central value (the mode, mean or median). The variable thus described is then generally assumed to be represented by a three-parameter lognormal distribution. Expressions involving these variables are evaluated by computing the first four central moments of the random functions (which are usually products and sums of variables). Stochastic independence is discussed. From the final set of moments a Pearson distribution is obtained; the high values of skewness and kurtosis resulting from uranium data require obtaining Pearson curves beyond those described in published tables. A cubic spline solution to the Pearson differential equation accomplishes this task. A sample problem is used to illustrate the application of the process; sensitivity to the estimated values of the basic variables is discussed. Appendices contain details of the methods and descriptions of computer programs
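
    One standard way to pin down a three-parameter lognormal from a low percentile, a central value and a high percentile is to solve for the threshold that makes the percentiles equally spaced in log space; whether the report uses exactly this matching is not stated, and the numbers below are illustrative.

```python
import numpy as np

Z95 = 1.6448536269514722  # standard normal deviate for the 95th percentile

def lognormal3_from_percentiles(p5, p50, p95):
    """Threshold tau, log-mean mu and log-sd sigma of a three-parameter
    lognormal matching the 5th percentile, median and 95th percentile.

    Requires right-skewed input (p5 + p95 > 2 * p50), which gives
    tau < p5; symmetric specifications degenerate."""
    tau = (p5 * p95 - p50**2) / (p5 + p95 - 2.0 * p50)
    mu = np.log(p50 - tau)
    sigma = (np.log(p95 - tau) - np.log(p5 - tau)) / (2.0 * Z95)
    return tau, mu, sigma

# Illustrative skewed resource variable: low 10, central 40, high 250.
print(lognormal3_from_percentiles(10.0, 40.0, 250.0))
```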

  9. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    Full Text Available In this paper, we propose an improvement of the Minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The experimental data are 47 fire accident records from an insurance company in Thailand. The experiment has two stages: in the first, we minimize the K-S statistic using a grid search technique for nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, lognormal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the Minimum K-S estimator: the algorithm gives a better Minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, lognormal, and exponential), while the Minimum K-S estimators of the normal and logistic distributions are unchanged.
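
    As a sketch of the first stage, the grid search below minimizes the K-S statistic over the scale of an exponential model, one of the nine candidate families; the 47 losses are synthetic stand-ins, and the grid bounds are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def min_ks_exponential(data, scales):
    """Minimum K-S estimate of the exponential scale parameter: pick the
    scale whose fitted CDF minimizes the K-S statistic over the grid."""
    best_scale, best_d = None, np.inf
    for scale in scales:
        d = stats.kstest(data, "expon", args=(0.0, scale)).statistic
        if d < best_d:
            best_scale, best_d = scale, d
    return best_scale, best_d

rng = np.random.default_rng(4)
losses = rng.exponential(scale=2.0e5, size=47)  # stand-in for 47 fire losses
grid = np.linspace(0.5e5, 5.0e5, 400)
print(min_ks_exponential(losses, grid))
```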

  10. Lorentz covariant tempered distributions in two-dimensional space-time

    International Nuclear Information System (INIS)

    Zinov'ev, Yu.M.

    1989-01-01

    The problem of describing Lorentz covariant distributions without any spectral condition has hitherto remained unsolved, even for two-dimensional space-time. Attempts to solve this problem have already been made. Zharinov obtained an integral representation for the Laplace transform of Lorentz invariant distributions with support in the product of two-dimensional future light cones. However, this integral representation does not make it possible to obtain a complete description of the corresponding Lorentz invariant distributions. In this paper the author gives a complete description of Lorentz covariant distributions for two-dimensional space-time. No spectral condition is assumed.

  11. Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation

    Science.gov (United States)

    Zhan, Hanyu; Voelz, David G.

    2016-12-01

    The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.

  12. OPT-TWO: Calculation code for two-dimensional MOX fuel models in the optimum concentration distribution

    International Nuclear Information System (INIS)

    Sato, Shohei; Okuno, Hiroshi; Sakai, Tomohiro

    2007-08-01

    OPT-TWO is a calculation code which computes the optimum concentration distribution, i.e., the most conservative concentration distribution from the standpoint of nuclear criticality safety, of MOX (mixed uranium and plutonium oxide) fuels in a two-dimensional system. To achieve the optimum concentration distribution, we apply the principle of a flattened fuel importance distribution, with which the fuel system has the highest reactivity. Based on this principle, OPT-TWO iterates over the following three calculation steps until the fuel importance is flattened: (1) calculation of the forward and adjoint neutron fluxes and the neutron multiplication factor with the TWOTRAN code, a two-dimensional neutron transport code based on the SN method; (2) calculation of the fuel importance; and (3) determination of the quantity of fuel to transfer. In OPT-TWO, the components of the MOX fuel are MOX powder, uranium dioxide powder, and an additive. This report describes the content of the calculation, the computational method, and the installation of OPT-TWO, and also describes how to apply OPT-TWO to criticality calculations. (author)

  13. Characterization of 3-D particle distribution and effects on recrystallization studied by computer simulation

    International Nuclear Information System (INIS)

    Fridy, J.M.; Marthinsen, K.; Rouns, T.N.; Lippert, K.B.; Nes, E.; Richmond, O.

    1992-12-01

    Artificial particle distributions in three dimensions with different degrees of clustering have been generated and used as nucleation sites for the simulation of particle-stimulated recrystallization with site-saturation nucleation kinetics. The clustering has a strong effect on both the Avrami exponent and the resulting sectioned grain size distributions. The Avrami exponent decreases rapidly from the expected value of 3 with the degree of clustering; a value of less than 1.5 is obtained for a strongly clustered distribution of nucleation sites. The size distributions of sectioned grain areas are considerably broadened by clustering, but are still far from the log-normal distributions observed experimentally. A computer program has been developed to generate particle distributions whose pair correlation functions match experimentally measured functions. 15 refs., 6 figs

  14. Weibull Distribution for Estimating the Parameters and Application of Hilbert Transform in case of a Low Wind Speed at Kolaghat

    Directory of Open Access Journals (Sweden)

    P Bhattacharya

    2016-09-01

    Full Text Available The wind resource varies with the time of day and the season of the year, and even to some extent from year to year. Wind energy has inherent variability and hence has been described by distribution functions. In this paper, we present methods for estimating the Weibull parameters in the case of low wind speed characterization, namely the shape parameter (k) and the scale parameter (c), and we characterize the discrete wind data sample by the discrete Hilbert transform. The Weibull distribution is an important distribution, especially for reliability and maintainability analysis. Suitable values for both the shape and scale parameters of the Weibull distribution are important for selecting locations for installing wind turbine generators, and the scale parameter is also important for determining whether a wind farm site is good or not. The use of the discrete Hilbert transform (DHT) for wind speed characterization opens a new application area for the DHT beyond digital signal processing. In this paper, the DHT is applied to wind sample data measured at the College of Engineering and Management, Kolaghat, East Midnapore, India, in January 2011.
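
    A minimal sketch of applying a discrete Hilbert transform to a wind record with SciPy, which computes the analytic signal via the FFT; the synthetic diurnal record below is illustrative, and the paper's exact characterization may differ.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(5)
t = np.arange(24 * 31)  # hourly samples for one month (illustrative)
wind = 4.0 + 1.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)

# Analytic signal of the mean-removed record; scipy implements the
# discrete Hilbert transform internally via the FFT.
analytic = hilbert(wind - wind.mean())
envelope = np.abs(analytic)            # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))  # instantaneous phase

print(envelope[:5], phase[:5])
```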

  15. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    DEFF Research Database (Denmark)

    Bacco, Davide; Christensen, Jesper Bjerge; Usuga Castaneda, Mario A.

    2016-01-01

    Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability of extracting two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  16. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    Science.gov (United States)

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-12-01

    Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability of extracting two bits of information per detection event, enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  17. On the control of distributed parameter systems using a multidimensional systems setting

    Czech Academy of Sciences Publication Activity Database

    Cichy, B.; Augusta, Petr; Rogers, E.; Galkowski, K.; Hurák, Z.

    2008-01-01

    Vol. 22, No. 7 (2008), pp. 1566-1581 ISSN 0888-3270 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10750506 Keywords: Distributed parameter systems * Modelling * Control law design Subject RIV: BC - Control Systems Theory Impact factor: 1.984, year: 2008

  18. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    OpenAIRE

    Degeling, Koen; IJzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by ...

  19. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    Science.gov (United States)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems requiring exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications, directly involving the ratio between precipitation and some other quantity, the lack of complete information can result in a poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological pairing approach between rainfall stations and a probability-based approach, which accounts for the probability of exceedance of rain depths measured at two or more stations. Our analyses showed that it is not clear a priori which method delivers the best results; rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is on the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which exert an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions. The methods were

  20. MODELING OF WATER DISTRIBUTION SYSTEM PARAMETERS AND THEIR PARTICULAR IMPORTANCE IN ENVIRONMENT ENGINEERING PROCESSES

    Directory of Open Access Journals (Sweden)

    Agnieszka Trębicka

    2016-05-01

    Full Text Available The object of this study is to present a mathematical model of a water-supply network and the analysis of the basic parameters of a water distribution system with a digital model. The reference area is Kleosin village, municipality of Juchnowiec Kościelny in Podlaskie Province, located at the border with Białystok. The study focused on the significance of every change related to the quality and quantity of water delivered to the WDS through modeling the basic parameters of the water distribution system in different variants of operation, in order to specify new, more rational modes of exploitation (a decrease in pressure value) and to define conditions for the development and modernization of the water-supply network, with special analysis of the scheme and identification of the most vulnerable places in the network. The analyzed processes are based on reproducing and developing the existing state of the water distribution sub-system (the WDS) with the use of mathematical modeling that includes the newest accessible computer techniques.