WorldWideScience

Sample records for two-parameter lognormal distribution

  1. Neuronal variability during handwriting: lognormal distribution.

    Directory of Open Access Journals (Sweden)

    Valery I Rupasov

    We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model in which the signal is normally distributed because it is formed by the summation of many random sources. We found that the variability of temporal parameters of handwriting (handwriting duration and response time) is also well described by a lognormal distribution. Although the exact mechanism behind the lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.

  2. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
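
    As a rough illustration of the two sampling-based baselines compared above, the Python sketch below propagates lognormal basic-event uncertainties through a small fault tree by Monte Carlo and extracts a one-sided 95%/95% Wilks bound from order statistics. The tree structure (A AND (B OR C)), the medians and the error factors are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Hypothetical basic events given by a median p and an error factor EF
      # (95th/50th percentile ratio), a common lognormal parameterisation in PSA.
      def sample_lognormal(median, ef, size):
          sigma = np.log(ef) / 1.645          # EF = exp(1.645 * sigma)
          return median * np.exp(sigma * rng.standard_normal(size))

      a = sample_lognormal(1e-3, 3.0, n)
      b = sample_lognormal(5e-4, 5.0, n)
      c = sample_lognormal(2e-3, 3.0, n)

      # Top event = A AND (B OR C); exact for independent basic events.
      top = a * (b + c - b * c)

      print("MC mean:", top.mean())
      print("MC 95th percentile:", np.quantile(top, 0.95))

      # Wilks: the largest of m = 59 samples is a one-sided 95%/95%
      # tolerance bound, since 0.95**59 < 0.05.
      print("Wilks 95/95 bound:", top[:59].max())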

  3. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large, a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
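
    A minimal numerical sketch of the anomaly described above, with invented parameters: for a lognormal with large σ, the mean minus one standard deviation is negative even though the variable is strictly positive, whereas an equal-tails (symmetric in probability) interval remains meaningful.

      import numpy as np
      from scipy import stats

      mu, sigma = 0.0, 1.5                       # large uncertainty, illustrative
      dist = stats.lognorm(s=sigma, scale=np.exp(mu))

      mean, sd = dist.mean(), dist.std()
      median, mode = dist.median(), np.exp(mu - sigma**2)
      print(f"mean = {mean:.3f}, sd = {sd:.3f}")
      print(f"mean - sd = {mean - sd:.3f}  # negative: impossible for a positive variable")
      print(f"median = {median:.3f}, mode = {mode:.3f}")

      # A 68% equal-tails interval stays within the support of the variable.
      print("68% interval:", dist.ppf(0.16), dist.ppf(0.84))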

  4. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

    This study investigates statistical properties of the sizes of characters from animation, superhero series, and video games. Using online databases of Pokémon (a video game) and Power Rangers (a superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow the lognormal distribution. As a theoretical mechanism for this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.

  5. Estimation of expected value for lognormal and gamma distributions

    International Nuclear Information System (INIS)

    White, G.C.

    1978-01-01

    Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and the lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small-sample statistical properties of each of these estimators were compared for its own distribution, as well as for the other distribution, to check the robustness of the estimator. Results indicated that the arithmetic mean provides an unbiased estimator when the underlying density function of the sample is either lognormal or gamma, and that the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of the expected value, and the coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two.
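
    A small simulation in the spirit of the study, with an invented sample size and coefficient of variation: the arithmetic mean is checked for bias on lognormal data, and the achieved coverage of a nominal 95% t-interval is measured.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, reps = 10, 20_000
      true_mean, cv = 1.0, 1.5                  # CV < 2, as in the study

      # Lognormal parameters matching the requested mean and CV.
      sigma2 = np.log(1 + cv**2)
      mu = np.log(true_mean) - sigma2 / 2

      x = rng.lognormal(mu, np.sqrt(sigma2), size=(reps, n))
      xbar, s = x.mean(axis=1), x.std(axis=1, ddof=1)
      print("bias of arithmetic mean:", xbar.mean() - true_mean)

      # Coverage of the nominal 95% t-interval (falls short for skewed data).
      t = stats.t.ppf(0.975, n - 1)
      covered = np.abs(xbar - true_mean) <= t * s / np.sqrt(n)
      print("achieved coverage:", covered.mean())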

  6. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
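
    A quick illustration of case (1), with invented parameters: a percentile is estimated under a normality assumption (mean plus two standard deviations) when the data are really lognormal.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # truly lognormal data

      normal_est = x.mean() + 2 * x.std()        # normal-theory "97.7th percentile"
      true_pctl = np.quantile(x, 0.9772)         # the actual 97.72nd percentile
      print(f"mean + 2 sd: {normal_est:.2f}   true percentile: {true_pctl:.2f}")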

  7. On the Laplace transform of the Lognormal distribution

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    Integral transforms of the lognormal distribution are of great importance in statistics and probability, yet closed-form expressions do not exist. A wide variety of methods have been employed to provide approximations, both analytical and numerical. In this paper, we analyze a closed-form approximation L˜(θ) of the Laplace transform L(θ) which is obtained via a modified version of Laplace's method. This approximation, given in terms of the Lambert W(⋅) function, is tractable enough for applications. We prove that L˜(θ) is asymptotically equivalent to L(θ) as θ→∞. We apply this result...
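
    The sketch below compares this closed-form approximation against direct numerical quadrature, assuming it takes the form L˜(θ) = exp(−(W(h)² + 2W(h))/(2σ²))/√(1 + W(h)) with h = θσ²e^μ; that exact expression is our reading of this line of work and should be checked against the paper.

      import numpy as np
      from scipy.special import lambertw
      from scipy.integrate import quad

      mu, sigma = 0.0, 1.0                       # illustrative lognormal parameters

      def laplace_exact(theta):
          # L(theta) = E[exp(-theta X)], X = exp(mu + sigma Z), by quadrature.
          f = lambda z: np.exp(-theta * np.exp(mu + sigma * z) - z**2 / 2)
          val, _ = quad(f, -np.inf, np.inf)
          return val / np.sqrt(2 * np.pi)

      def laplace_approx(theta):
          # Closed form in terms of the Lambert W function (form assumed above).
          w = np.real(lambertw(theta * sigma**2 * np.exp(mu)))
          return np.exp(-(w**2 + 2 * w) / (2 * sigma**2)) / np.sqrt(1 + w)

      for theta in (0.5, 1.0, 10.0, 100.0):
          print(theta, laplace_exact(theta), laplace_approx(theta))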

  8. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
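
    A minimal sketch of the standard construction underlying such samplers: draw correlated normals via a Cholesky factor of the covariance and exponentiate to obtain correlated log-normals. The means and covariance are invented; note that the correlation of the log-normal pair comes out smaller than that of the underlying normals.

      import numpy as np

      rng = np.random.default_rng(7)

      mu = np.array([1.0, 2.0])                  # means of the underlying normals
      cov = np.array([[0.25, 0.15],
                      [0.15, 0.49]])             # their covariance matrix

      L = np.linalg.cholesky(cov)
      z = rng.standard_normal((100_000, 2))
      normal_samples = mu + z @ L.T              # correlated normal samples

      lognormal_samples = np.exp(normal_samples) # correlated log-normal samples

      print(np.corrcoef(normal_samples.T)[0, 1])     # ~0.15/(0.5*0.7) ~ 0.43
      print(np.corrcoef(lognormal_samples.T)[0, 1])  # smaller in magnitude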

  9. Dobinski-type relations and the log-normal distribution

    International Nuclear Information System (INIS)

    Blasiak, P; Penson, K A; Solomon, A I

    2003-01-01

    We consider sequences of generalized Bell numbers B(n), n = 1, 2, ..., which can be represented by Dobinski-type summation formulae, i.e. B(n) = (1/C) Σ_{k=0}^∞ [P(k)]^n / D(k), with P(k) a polynomial, D(k) a function of k and C = const. They include the standard Bell numbers (P(k) = k, D(k) = k!, C = e), their generalizations B_{r,r}(n), r = 2, 3, ..., appearing in the normal ordering of powers of boson monomials (P(k) = (k+r)!/k!, D(k) = k!, C = e), and variants of 'ordered' Bell numbers B_o^(p)(n) (P(k) = k, D(k) = ((p+1)/p)^k, C = 1 + p, p = 1, 2, ...), etc. We demonstrate that for α, β, γ, t positive integers (α, t ≠ 0), [B(αn² + βn + γ)]^t is the nth moment of a positive function on (0, ∞) which is a weighted infinite sum of log-normal distributions. (letter to the editor)
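
    For the standard Bell numbers the Dobinski formula is directly computable; a short check, truncating the series at a fixed number of terms:

      from math import exp, factorial

      def bell_dobinski(n, terms=100):
          # Dobinski's formula: B(n) = (1/e) * sum_{k>=0} k**n / k!
          return sum(k**n / factorial(k) for k in range(terms)) / exp(1)

      print([round(bell_dobinski(n)) for n in range(1, 8)])
      # -> [1, 2, 5, 15, 52, 203, 877]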

  10. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
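
    A quick simulation in the spirit of the abstract, with invented parameters: sums of a moderate number of positive, skewed summands are fitted better by a lognormal than by a Gaussian, as measured by the Kolmogorov-Smirnov distance.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n_summands, n_sums = 30, 5_000

      # Sums of positive, skewed summands (lognormal here purely for illustration).
      sums = rng.lognormal(0.0, 1.0, size=(n_sums, n_summands)).sum(axis=1)

      ks_norm = stats.kstest(sums, 'norm', args=(sums.mean(), sums.std())).statistic
      shape, loc, scale = stats.lognorm.fit(sums, floc=0)
      ks_logn = stats.kstest(sums, 'lognorm', args=(shape, loc, scale)).statistic
      print(f"KS distance: normal {ks_norm:.4f}, lognormal {ks_logn:.4f}")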

  11. The Truncated Lognormal Distribution as a Luminosity Function for SWIFT-BAT Gamma-Ray Bursts

    Directory of Open Access Journals (Sweden)

    Lorenzo Zaninetti

    2016-11-01

    The determination of the luminosity function (LF) in gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here, we analyze three cosmologies: the standard cosmology, the plasma cosmology and the pseudo-Euclidean universe. The LF of the GRBs is first modeled by the lognormal distribution and the four broken power law and, second, by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of the redshift.

  12. Handbook of tables for order statistics from lognormal distributions with applications

    CERN Document Server

    Balakrishnan, N

    1999-01-01

    Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

  13. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    International Nuclear Information System (INIS)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.

    2016-01-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg². We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3–10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Lastly, our methods are validated against maps from the MICE Grand Challenge N-body simulation.
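
    A toy version of the counts-in-cells model tested above, with invented numbers: a lognormal density contrast with zero mean, convolved with Poisson sampling noise.

      import numpy as np

      rng = np.random.default_rng(5)
      nbar, sigma_ln = 20.0, 0.5                 # illustrative mean count and width

      # Lognormal density contrast delta with E[1 + delta] = 1.
      g = rng.normal(0.0, sigma_ln, size=200_000)
      delta = np.exp(g - sigma_ln**2 / 2) - 1.0

      # Galaxy counts-in-cells: lognormal density convolved with Poisson noise.
      counts = rng.poisson(nbar * (1.0 + delta))
      print(counts.mean(), counts.var())         # variance exceeds Poisson's nbar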

  14. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation.

  15. Competition and fragmentation: a simple model generating lognormal-like distributions

    International Nuclear Information System (INIS)

    Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T

    2009-01-01

    The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data, we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power-law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power-law tails of the real distribution well and give better results for a poorly connected topology of interactions.

  16. Life prediction for white OLED based on LSM under lognormal distribution

    Science.gov (United States)

    Zhang, Jianping; Liu, Fang; Liu, Yu; Wu, Helen; Zhu, Wenqing; Wu, Wenli; Wu, Liang

    2012-09-01

    In order to acquire reliability information for white organic light-emitting displays (OLEDs), three groups of OLED constant stress accelerated life tests (CSALTs) were carried out to obtain failure data for samples. A lognormal distribution function was applied to describe the OLED life distribution, and the accelerated life equation was determined by the least squares method (LSM). The Kolmogorov-Smirnov test was performed to verify whether the white OLED life follows a lognormal distribution. Author-developed software was employed to predict the average life and the median life. The numerical results indicate that white OLED life follows a lognormal distribution, and that the accelerated life equation conforms closely to the inverse power law. The estimated life information for the white OLED provides manufacturers and customers with important guidelines.
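
    A minimal sketch of the lognormality check with a Kolmogorov-Smirnov test, on surrogate failure data (values invented; strictly, K-S critical values should be adjusted when the parameters are estimated from the same sample):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      lifetimes = rng.lognormal(mean=8.0, sigma=0.4, size=40)  # surrogate data

      # Fit a two-parameter lognormal (location fixed at zero), then test.
      shape, loc, scale = stats.lognorm.fit(lifetimes, floc=0)
      result = stats.kstest(lifetimes, 'lognorm', args=(shape, loc, scale))
      print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")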

  17. Species Abundance in a Forest Community in South China: A Case of Poisson Lognormal Distribution

    Institute of Scientific and Technical Information of China (English)

    Zuo-Yun YIN; Hai REN; Qian-Mei ZHANG; Shao-Lin PENG; Qin-Feng GUO; Guo-Yi ZHOU

    2005-01-01

    Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to species abundance data from an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m×20 m, 5 m×5 m, and 1 m×1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance, with a similarly small median and mode and a variance larger than the mean, was reverse J-shaped and followed the zero-truncated Poisson lognormal well; (ii) the coefficient of variation, skewness and kurtosis of abundance, and the two Poisson lognormal parameters (σ and μ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.
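
    The Poisson lognormal has no closed-form pmf, but it is straightforward to evaluate by numerical integration; a sketch with invented parameters, including the zero-truncated variant used for abundance data:

      import numpy as np
      from scipy import integrate, stats

      def pln_pmf(n, mu, sigma):
          # P(N = n) for a Poisson rate drawn from a lognormal(mu, sigma).
          f = lambda lam: (stats.poisson.pmf(n, lam)
                           * stats.lognorm.pdf(lam, sigma, scale=np.exp(mu)))
          val, _ = integrate.quad(f, 0, np.inf)
          return val

      def zt_pln_pmf(n, mu, sigma):
          # Zero-truncated version: condition on observing at least one individual.
          return pln_pmf(n, mu, sigma) / (1.0 - pln_pmf(0, mu, sigma))

      print([round(zt_pln_pmf(n, mu=1.0, sigma=1.5), 4) for n in range(1, 6)])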

  18. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

    This work evaluates the usage of the multimodal lognormal function to describe Particle Size Distributions (PSD) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function could describe distinctive features of different types of PSDs with accuracy and consistency.
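
    The paper estimates parameters with PSO; as a simpler stand-in, the sketch below fits a two-mode lognormal mixture to synthetic PSD data by ordinary least squares. All values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def lognormal_pdf(d, mu, sigma):
          return (np.exp(-(np.log(d) - mu)**2 / (2 * sigma**2))
                  / (d * sigma * np.sqrt(2 * np.pi)))

      def bimodal_psd(d, w, mu1, s1, mu2, s2):
          # Two-mode mixture; each extra mode adds three parameters.
          return w * lognormal_pdf(d, mu1, s1) + (1 - w) * lognormal_pdf(d, mu2, s2)

      # Synthetic data standing in for a measured particle size distribution.
      rng = np.random.default_rng(2)
      d = np.linspace(0.05, 10, 200)
      y = bimodal_psd(d, 0.3, np.log(0.2), 0.3, np.log(2.0), 0.4)
      y_noisy = y * (1 + 0.03 * rng.standard_normal(d.size))

      p0 = [0.5, np.log(0.1), 0.5, np.log(1.0), 0.5]
      bounds = ([0, -5, 0.05, -5, 0.05], [1, 3, 2, 3, 2])
      popt, _ = curve_fit(bimodal_psd, d, y_noisy, p0=p0, bounds=bounds)
      print(popt)   # recovers w, mu1, s1, mu2, s2 up to mode ordering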

  19. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

    A simplified biokinetic model for 137 Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with quite a lot of bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)

  20. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    Science.gov (United States)

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086

  1. Use of the lognormal distribution for the coefficients of friction and wear

    International Nuclear Information System (INIS)

    Steele, Clint

    2008-01-01

    To predict the reliability of a system, an engineer might allocate a distribution to each input. This raises a question: how does one select the correct distribution? Siddall put forward an evolutionary approach intended to utilise both the understanding of the engineer and the available data. However, this method requires a subjective initial distribution based on the engineer's understanding of the variable or parameter. If the engineer's understanding is limited, the initial distribution will misrepresent the actual distribution, and application of the method will likely fail. To provide some assistance, the coefficients of friction and wear are considered here. Basic tribology theory, dimensional issues and the central limit theorem are used to argue that the distribution for each of the coefficients will typically be like a lognormal distribution. Empirical evidence from other sources is cited to lend support to this argument. It is concluded that the distributions for the coefficients of friction and wear would typically be lognormal in nature. It is therefore recommended that the engineer, without data or evidence to suggest otherwise, should allocate a lognormal distribution to the coefficients of friction and wear.

  2. On Modelling Insurance Data by Using a Generalized Lognormal Distribution || Sobre la modelización de datos de seguros usando una distribución lognormal generalizada

    Directory of Open Access Journals (Sweden)

    García, Victoriano J.

    2014-12-01

    In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The distribution proposed is derived from the lognormal distribution by using the Marshall and Olkin procedure. Some basic properties of this new distribution are obtained, and we present situations where this new distribution correctly reflects the sample behaviour for the right tail probability. An application of the model to dental insurance data is presented and analysed in depth. We conclude that the generalized lognormal distribution proposed should be taken into account, among other possible distributions, for insurance data in which the properties of a heavy-tailed distribution are present. || We present a new heavy-tailed lognormal distribution that adapts well to many practical situations in the insurance field. We use the Marshall and Olkin procedure to generate this distribution and study its basic properties. An application to dental insurance data is presented and analysed in depth, concluding that this distribution should be included in the catalogue of distributions to consider when modelling insurance data in the presence of heavy tails.
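
    A sketch of the Marshall-Olkin construction applied to a lognormal baseline, assuming the usual survival-function form of the transform, Ḡ(x) = αF̄(x)/(1 − (1 − α)F̄(x)) with α > 0; the parameter values are invented.

      import numpy as np
      from scipy import stats

      def mo_lognormal_sf(x, alpha, s, scale):
          # Marshall-Olkin transform of the lognormal survival function F_bar.
          fbar = stats.lognorm.sf(x, s, scale=scale)
          return alpha * fbar / (1.0 - (1.0 - alpha) * fbar)

      x = np.array([1.0, 5.0, 20.0, 100.0])
      for alpha in (0.5, 1.0, 4.0):              # alpha = 1 recovers the lognormal
          print(alpha, mo_lognormal_sf(x, alpha, s=1.0, scale=np.exp(0.5)))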

  3. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate for the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations, between the skewness and the maximum tree depth and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition, we find that the percentage of voids with nonzero central density in the data sets is of critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
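
    A minimal example of fitting a three-parameter (shifted) lognormal with SciPy on surrogate "void radii"; the shape descriptors that the analysis relates to environment can then be read off the fitted distribution. All values are invented.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)

      # Surrogate void radii: shifted lognormal with location (threshold) r0 = 5.
      r0, width, scale = 5.0, 0.5, 10.0
      radii = r0 + rng.lognormal(np.log(scale), width, size=2_000)

      # Three-parameter fit: shape, location and scale.
      s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)
      print(f"shape = {s_hat:.3f}, location = {loc_hat:.3f}, scale = {scale_hat:.3f}")

      fitted = stats.lognorm(s_hat, loc_hat, scale_hat)
      print("variance:", fitted.var(), "skewness:", fitted.stats(moments='s'))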

  4. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  5. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
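
    A reduced sketch of the model-selection step: fit several candidate distributions and rank them by AIC = 2k − 2 ln L. SciPy has no built-in double Pareto-lognormal, so only the simpler candidates from the list are compared here, on surrogate data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      data = rng.lognormal(0.0, 0.8, size=1_000)     # surrogate variation data

      candidates = {
          "normal":      (stats.norm,    {}),
          "lognormal":   (stats.lognorm, {"floc": 0}),
          "exponential": (stats.expon,   {"floc": 0}),
          "pareto":      (stats.pareto,  {"floc": 0}),
      }
      for name, (dist, fixed) in candidates.items():
          params = dist.fit(data, **fixed)
          k = len(params) - len(fixed)               # number of free parameters
          aic = 2 * k - 2 * dist.logpdf(data, *params).sum()
          print(f"{name:12s} AIC = {aic:.1f}")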

  6. Aerosol Extinction Profile Mapping with Lognormal Distribution Based on MPL Data

    Science.gov (United States)

    Lin, T. H.; Lee, T. T.; Chang, K. E.; Lien, W. H.; Liu, G. R.; Liu, C. Y.

    2017-12-01

    This study addresses the mapping of the aerosol extinction profile by a mathematical function. Given the similarity in distribution pattern, the lognormal distribution is examined for mapping the aerosol extinction profile based on MPL (Micro Pulse LiDAR) in situ measurements. The variables of the lognormal distribution are the log mean (μ) and log standard deviation (σ), which are correlated with the aerosol optical depth (AOD) and the planetary boundary layer height (PBLH) associated with the altitude of the extinction peak (Mode) defined in this study. On the basis of 10 years of MPL data with a single peak, the mapping results showed that the mean errors of the Mode and σ retrievals are 16.1% and 25.3%, respectively. The mean error of the σ retrieval can be reduced to 16.5% for cases with a larger distance between the PBLH and the Mode. The proposed method is further applied to the MODIS AOD product in mapping the extinction profile for the retrieval of PM2.5 from satellite observations. The results indicated good agreement between retrievals and ground measurements when aerosols under 525 meters are well mixed. The feasibility of the proposed method for satellite remote sensing is also suggested by the case study. Keywords: Aerosol extinction profile, Lognormal distribution, MPL, Planetary boundary layer height (PBLH), Aerosol optical depth (AOD), Mode

  7. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, Gašper, E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, Andrej, E-mail: andrej.trkov@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Smith, Donald L., E-mail: donald.l.smith@anl.gov [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States); Capote, Roberto, E-mail: roberto.capotenoy@iaea.org [NAPC–Nuclear Data Section, International Atomic Energy Agency, PO Box 100, Vienna-A-1400 (Austria)

    2013-11-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.
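
    The moment-matching transformation behind this result is short to state and compute: for X_i = exp(Y_i) with normal Y_i of standard deviations s_i and correlation ρ_N, the lognormal correlation is (exp(ρ_N s_1 s_2) − 1)/√((exp(s_1²) − 1)(exp(s_2²) − 1)). A quick check of the forbidden anti-correlation region:

      import numpy as np

      def lognormal_corr(rho_n, s1, s2):
          # Correlation of exp(Y1), exp(Y2) for normals with correlation rho_n.
          num = np.exp(rho_n * s1 * s2) - 1.0
          den = np.sqrt((np.exp(s1**2) - 1.0) * (np.exp(s2**2) - 1.0))
          return num / den

      s1 = s2 = 1.5                                # large relative uncertainty
      print(lognormal_corr(-1.0, s1, s2))          # lower bound ~ -0.105, not -1
      print(lognormal_corr(+1.0, s1, s2))          # upper bound is exactly +1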

  8. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Smith, Donald L.; Capote, Roberto

    2013-01-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.

  9. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    Science.gov (United States)

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  10. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Problem statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and the main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are standard ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability function R(t) = P(X > t) and of P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, comparisons are made on the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
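
    A small sketch assuming one common parameterisation of the two-parameter exponentiated Rayleigh, F(x) = (1 − exp(−(λx)²))^α; inverse-transform sampling then gives quick Monte Carlo checks of R(t) and P(X > Y). All parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(6)

      def cdf(x, alpha, lam):
          return (1.0 - np.exp(-(lam * x)**2))**alpha

      def sample(alpha, lam, size):
          # Inverse-transform sampling from the closed-form quantile function.
          u = rng.uniform(size=size)
          return np.sqrt(-np.log(1.0 - u**(1.0 / alpha))) / lam

      alpha, lam, t = 2.0, 0.5, 1.2
      print("R(t) exact:", 1.0 - cdf(t, alpha, lam))
      print("R(t) MC   :", (sample(alpha, lam, 200_000) > t).mean())

      # Stress-strength reliability P = P(X > Y) by simulation.
      x, y = sample(2.0, 0.5, 200_000), sample(1.5, 0.6, 200_000)
      print("P(X > Y) MC:", (x > y).mean())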

  11. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    International Nuclear Information System (INIS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated by such PMFs. By comparing these spectra with WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≅ 10^(-2.5) Mpc^(-1), with the upper limit B ≲ 3 nG.

  12. The modelled raindrop size distribution of Skudai, Peninsular Malaysia, using exponential and lognormal distributions.

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

    This paper presents the modelled raindrop size parameters for the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the drop size distribution (DSD) in Malaysia, and this has an underpinning implication for wet-weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research challenges the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent appearance.

  13. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    Science.gov (United States)

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

    This paper presents the modelled raindrop size parameters for the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the drop size distribution (DSD) in Malaysia, and this has an underpinning implication for wet-weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research challenges the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent appearance. PMID:25126597

  14. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution for predicting clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with a single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP, calculated using the predicted parameters, to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
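
    A compact sketch of this model class (not the authors' code): a Poisson tumour control probability averaged over a lognormal interpatient distribution of the radiosensitivity α, with invented parameter values.

      import numpy as np
      from scipy import integrate

      def tcp(dose, alpha_mean, alpha_sd, clonogens):
          # Lognormal parameters matched to the given mean and sd of alpha.
          s2 = np.log(1 + (alpha_sd / alpha_mean)**2)
          mu = np.log(alpha_mean) - s2 / 2
          def integrand(a):
              g = (np.exp(-(np.log(a) - mu)**2 / (2 * s2))
                   / (a * np.sqrt(2 * np.pi * s2)))
              return g * np.exp(-clonogens * np.exp(-a * dose))  # Poisson TCP
          val, _ = integrate.quad(integrand, 1e-6, np.inf)
          return val

      for d in (40, 60, 80):                      # dose in Gy, illustrative
          print(d, tcp(d, alpha_mean=0.3, alpha_sd=0.1, clonogens=1e7))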

  15. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

    The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these

  16. Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    DEFF Research Database (Denmark)

    Petersen, Peter C.; Berg, Rune W.

    2016-01-01

    Within spinal motor networks, a fraction of the neuronal population operates within either a 'mean-driven' or a 'fluctuation-driven' regime. Fluctuation-driven neurons have a 'supralinear' input-output curve, which enhances sensitivity, whereas the mean-driven regime reduces sensitivity. We find a rich diversity of firing rates across the neuronal population, as reflected in a lognormal distribution, and demonstrate that half of the neurons spend at least 50% of the time in the 'fluctuation-driven' regime regardless of behavior. Because of the disparity in input-output properties for these two regimes, this fraction may reflect a fine trade-off between stability...

  17. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generating log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful in improving the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
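
    NumPy's Generator draws standard normals with a ziggurat-type algorithm, so a log-normal variate is obtained by simply exponentiating; a short sanity check against the analytic lognormal moments:

      import numpy as np

      rng = np.random.default_rng(12)
      mu, sigma = 0.0, 0.9

      z = rng.standard_normal(1_000_000)          # ziggurat-based normal variates
      x = np.exp(mu + sigma * z)                  # log-normal variates

      m = np.exp(mu + sigma**2 / 2)               # analytic mean
      s = m * np.sqrt(np.exp(sigma**2) - 1)       # analytic standard deviation
      print(x.mean(), m)
      print(x.std(), s)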

  18. A numerical study of the segregation phenomenon of lognormal particle size distributions in the rotating drum

    Science.gov (United States)

    Yang, Shiliang; Sun, Yuhao; Zhao, Ya; Chew, Jia Wei

    2018-05-01

    Granular materials are mostly polydisperse, which gives rise to phenomena such as segregation that has no monodisperse counterpart. The discrete element method is applied to simulate lognormal particle size distributions (PSDs) with the same arithmetic mean particle diameter but different PSD widths in a three-dimensional rotating drum operating in the rolling regime. Despite having the same mean particle diameter, as the PSD width of the lognormal PSDs increases, (i) the steady-state mixing index, the total kinetic energy, the ratio of the active region depth to the total bed depth, the mass fraction in the active region, the steady-state active-passive mass-based exchanging rate, and the mean solid residence time (SRT) of the particles in the active region increase, while (ii) the steady-state gyration radius, the streamwise velocity, and the SRT in the passive region decrease. Collectively, these highlight the need for more understanding of the effect of PSD width on the granular flow behavior in the rotating drum operating in the rolling flow regime.
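
    A short illustration of the setup described above: lognormal PSDs sharing one arithmetic mean diameter but differing in width, obtained by choosing μ = ln(m) − σ²/2 for each σ. The numbers are invented.

      import numpy as np

      rng = np.random.default_rng(8)
      mean_diameter = 2.0e-3                      # fixed arithmetic mean, in metres

      for sigma in (0.1, 0.3, 0.6):               # increasing PSD width
          mu = np.log(mean_diameter) - sigma**2 / 2   # keeps the mean unchanged
          d = rng.lognormal(mu, sigma, size=100_000)
          print(f"sigma={sigma}: mean={d.mean():.3e}, "
                f"d10={np.quantile(d, 0.1):.2e}, d90={np.quantile(d, 0.9):.2e}")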

  19. Geomagnetic storms, the Dst ring-current myth and lognormal distributions

    Science.gov (United States)

    Campbell, W.H.

    1996-01-01

    The definition of geomagnetic storms dates back to the turn of the century when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle-and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with

  20. Testing the Pareto against the lognormal distributions with the uniformly most powerful unbiased test applied to the distribution of cities.

    Science.gov (United States)

    Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier

    2011-03-01

    Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition, with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated by the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, and "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.

  1. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution enhances the tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
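
    A sketch of the modified Kapteyn process, assuming Borges' q-product x ⊗_q y = [x^(1−q) + y^(1−q) − 1]^(1/(1−q)); the factor range is chosen so the bracket stays positive, and all values are invented.

      import numpy as np

      rng = np.random.default_rng(14)

      def q_product(x, y, q):
          # Borges' q-product; reduces to the ordinary product as q -> 1.
          if abs(q - 1.0) < 1e-12:
              return x * y
          return (x**(1.0 - q) + y**(1.0 - q) - 1.0)**(1.0 / (1.0 - q))

      def kapteyn(q, n_factors=50, n_samples=100_000):
          # Multiplicative process with the ordinary product replaced by q-product.
          out = rng.uniform(0.9, 1.1, n_samples)
          for _ in range(n_factors):
              out = q_product(out, rng.uniform(0.9, 1.1, n_samples), q)
          return out

      for q in (1.0, 1.2):                        # q = 1: usual log-Normal limit
          x = kapteyn(q)
          print(q, x.mean(), np.quantile(x, [0.01, 0.99]))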

  2. Determination of mean rainfall from the Special Sensor Microwave/Imager (SSM/I) using a mixed lognormal distribution

    Science.gov (United States)

    Berg, Wesley; Chase, Robert

    1992-01-01

    Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values and a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution using a maximum-likelihood approach. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ rain-gage data are performed.
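
    A toy version of the mixed estimator, with invented numbers: a maximum-likelihood lognormal fit on the positive observations is combined with the estimated rain fraction to give the mean of the mixed distribution.

      import numpy as np

      rng = np.random.default_rng(10)

      # Surrogate observations in one bin: a spike at zero mixed with
      # lognormal positive rain rates.
      p_rain = 0.3
      obs = np.where(rng.uniform(size=5_000) < p_rain,
                     rng.lognormal(0.5, 1.0, size=5_000), 0.0)

      positive = obs[obs > 0]
      mu_hat = np.log(positive).mean()            # ML estimates on the positives
      s2_hat = np.log(positive).var()
      p_hat = positive.size / obs.size            # estimated rain fraction

      mixed_mean = p_hat * np.exp(mu_hat + s2_hat / 2)
      print(mixed_mean, obs.mean())               # the two should agree closely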

  3. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation with sampling of the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One of the problems in uncertainty analysis with the statistical approach is that sampling cross sections from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, the methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data is proposed by using the lognormal distribution. After that, criticality calculations with the sampled nuclear data are performed and the results are compared with those from the normal distribution which is conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was pursued with the normal and lognormal distributions. The uncertainties, which are caused by the covariance of (n,.) cross sections, were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
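
    A minimal sketch of the comparison: lognormal parameters are chosen to match a given mean and relative standard deviation exactly, which removes the negative samples that a normal model produces. The cross-section values are hypothetical.

      import numpy as np

      rng = np.random.default_rng(13)

      mean, rel_sd = 2.0, 0.5                     # hypothetical cross section (b)
      s2 = np.log(1.0 + rel_sd**2)                # moment matching
      mu = np.log(mean) - s2 / 2

      normal_samples = rng.normal(mean, rel_sd * mean, 100_000)
      lognormal_samples = rng.lognormal(mu, np.sqrt(s2), 100_000)

      print("negative fraction, normal:   ", (normal_samples < 0).mean())
      print("negative fraction, lognormal:", (lognormal_samples < 0).mean())
      print("lognormal mean, sd:", lognormal_samples.mean(), lognormal_samples.std())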

  4. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeating transport calculations with directly perturbed, sampled nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. A known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, those methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross sections were sampled from both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  5. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem: an analytical closed-form expression of the Log-normal sum distribution does not exist, and finding one remains an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive, especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
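
    The paper's hazard-rate-twisting estimator is not spelled out in the abstract, so no attempt is made to reproduce it here; the sketch below only illustrates the baseline it improves on, a crude Monte Carlo estimate of the CCDF P(S > x) for a sum of independent, non-identically distributed lognormals, whose relative error deteriorates as x grows (all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5, 1.0])          # non-identical lognormal parameters
sigma = np.array([1.0, 0.8, 1.2])

def ccdf_naive_mc(x, n=10**6):
    """Crude MC estimate of P(S > x), S = sum of independent lognormals."""
    s = rng.lognormal(mu, sigma, size=(n, mu.size)).sum(axis=1)
    p_hat = (s > x).mean()
    # Relative error of a Bernoulli estimator: sqrt((1 - p) / (p * n)).
    rel_err = np.sqrt((1.0 - p_hat) / (p_hat * n)) if p_hat > 0 else np.inf
    return p_hat, rel_err

for x in (10.0, 100.0, 500.0):
    p, re = ccdf_naive_mc(x)
    print(f"x={x:6.1f}  P~{p:.2e}  relative error~{re:.2f}")
```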

  6. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  7. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    Science.gov (United States)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distribution (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large-scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; and the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power-law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been
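
    The property that the logarithm of any moment is linear in the DSD parameters follows from the lognormal moment formula: with ln D ~ N(μ, σ²) and total number concentration N_T, the nth moment is M_n = N_T exp(nμ + n²σ²/2), so ln M_n = ln N_T + nμ + n²σ²/2. A small sketch of this relation (parameter values are illustrative only):

```python
import numpy as np

def log_dsd_moment(n, ln_NT, mu, sigma):
    """ln M_n for a lognormal DSD: linear in the parameters (ln N_T, mu, sigma^2)."""
    return ln_NT + n * mu + 0.5 * n**2 * sigma**2

# Radar reflectivity is ~ the 6th moment; rain rate is roughly ~ the 3.67th.
ln_NT, mu, sigma = np.log(8000.0), np.log(1.0), 0.4
ln_Z = log_dsd_moment(6.0, ln_NT, mu, sigma)
ln_R = log_dsd_moment(3.67, ln_NT, mu, sigma)
# Any power law Z = a R^b then corresponds to a linear relation between
# ln Z and ln R induced by the covariance of (ln N_T, mu, sigma^2).
print(ln_Z, ln_R)
```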

  8. Lifetime characterization via lognormal distribution of transformers in smart grids: Design optimization

    International Nuclear Information System (INIS)

    Chiodo, Elio; Lauria, Davide; Mottola, Fabio; Pisani, Cosimo

    2016-01-01

    The authors verified that the transformer's lifetime can be modeled as a lognormal stochastic process. Hence, a novel closed-form relationship was derived between the transformer's lifetime and the distributional properties of the stochastic load. The usefulness of the closed-form expression is discussed for design purposes, although a few considerations are also made with respect to operating conditions. The aim of the numerical application was to demonstrate the feasibility and easy applicability of the analytical methodology.

  9. Annual rainfall statistics for stations in the Top End of Australia: normal and log-normal distribution analysis

    International Nuclear Information System (INIS)

    Vardavas, I.M.

    1992-01-01

    A simple procedure is presented for the statistical analysis of measurement data where the primary concern is the determination of the value corresponding to a specified average exceedance probability. The analysis employs the normal and log-normal frequency distributions together with a χ²-test and an error analysis. The error analysis introduces the concept of a counting error criterion, or ζ-test, to test whether the data are sufficient to make the χ²-test reliable. The procedure is applied to the analysis of annual rainfall data recorded at stations in the tropical Top End of Australia where the Ranger uranium deposit is situated. 9 refs., 12 tabs., 9 figs

  10. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding deviations in the solutions that would be caused by an unrealistic distributional assumption. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The application results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
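
    The practical gain from the log-normal assumption is that an individual chance constraint with a log-normal right-hand side has a simple deterministic equivalent through the log-normal quantile: P(g(x) ≤ B) ≥ α becomes g(x) ≤ exp(μ + σ·Φ⁻¹(1 − α)). A hedged sketch of that conversion (the constraint form and numbers are illustrative, not the paper's model):

```python
import numpy as np
from scipy.stats import norm

def lognormal_rhs_bound(mu, sigma, alpha):
    """Deterministic equivalent of P(g(x) <= B) >= alpha with B ~ LogNormal(mu, sigma):
    the constraint holds iff g(x) <= exp(mu + sigma * Phi^{-1}(1 - alpha))."""
    return np.exp(mu + sigma * norm.ppf(1.0 - alpha))

# e.g. an allowable pollutant load B with ln B ~ N(2.0, 0.5^2):
for alpha in (0.6, 0.8, 0.9, 0.95):
    # higher required reliability -> smaller (more conservative) allowable load
    print(alpha, lognormal_rhs_bound(2.0, 0.5, alpha))
```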

  11. Drug binding affinities and potencies are best described by a log-normal distribution and use of geometric means

    International Nuclear Information System (INIS)

    Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.

    1986-01-01

    (-)-Norepinephrine (NE) is used as an internal standard in their in vitro adrenergic assays, and the concentration of NE which produces a half-maximal inhibition of specific radioligand binding (affinity; K_I), or a half-maximal contractile response (potency; ED₅₀), has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log₁₀-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of ³H-prazosin to rat liver (α₁-), ³H-rauwolscine to rat cortex (α₂-) and ³H-dihydroalprenolol to rat ventricle (β₁-) or rat lung (β₂-receptors) was inhibited by NE; the distributions of NE K_I's at all these sites were skewed to the right, with highly significant departures from normality. Similar results were obtained for the ED₅₀'s of NE in isolated rabbit aorta (α₁), phenoxybenzamine-treated dog saphenous vein (α₂) and guinea pig atrium (β₁). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normalcy is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses
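
    The practical recommendation, use geometric rather than arithmetic means for log-normal potency data, is easy to illustrate: the geometric mean exp(mean(ln x)) estimates the median of a log-normal, while the arithmetic mean is inflated by the right tail. A minimal sketch on synthetic K_I-like data (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
ki = rng.lognormal(mean=np.log(100.0), sigma=1.0, size=10_000)  # synthetic K_I values, nM

arith = ki.mean()                      # pulled upward by the right tail
geo = np.exp(np.log(ki).mean())        # estimates the median (~100 nM here)
print(f"arithmetic mean ~ {arith:.1f} nM, geometric mean ~ {geo:.1f} nM")
# For sigma = 1 the arithmetic mean exceeds the median by exp(sigma^2 / 2) ~ 1.65x.
```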

  12. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  13. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    International Nuclear Information System (INIS)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters of a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined, and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program
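
    For orientation, the three-parameter lognormal adds a location (threshold) parameter τ so that ln(x − τ) is normal. A quick hedged sketch of maximum-likelihood fitting with an off-the-shelf routine, where SciPy's `loc` plays the role of τ; this is generic code, not the report's weighted-order-statistic estimator:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
tau, mu, sigma = 5.0, 1.0, 0.6
x = tau + rng.lognormal(mu, sigma, size=500)     # synthetic geochemical-like data

# SciPy parameterization: s = sigma, loc = threshold tau, scale = exp(mu).
s_hat, loc_hat, scale_hat = stats.lognorm.fit(x)
print(f"sigma~{s_hat:.2f}, tau~{loc_hat:.2f}, mu~{np.log(scale_hat):.2f}")

# A Shapiro-Wilk statistic on ln(x - tau_hat) can serve as a goodness-of-fit check:
W, p = stats.shapiro(np.log(x - loc_hat))
print(W, p)
```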

  14. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Studies have been conducted vigorously in the literature to determine the best method for estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared using the mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β, for the given values of the extension of Jeffreys' prior.

  15. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  16. Statistical analysis of wind speed using two-parameter Weibull distribution in Alaçatı region

    International Nuclear Information System (INIS)

    Ozay, Can; Celiktas, Melih Soner

    2016-01-01

    Highlights: • Wind speed and direction data from September 2008 to March 2014 have been analyzed. • The mean wind speed for the whole data set has been found to be 8.11 m/s. • The highest wind speed is observed in July, with a monthly mean value of 9.10 m/s. • The wind speed carrying the most energy has been calculated as 12.77 m/s. • The observed data have been fitted to a Weibull distribution, and the k and c parameters have been calculated as 2.05 and 9.16. - Abstract: The Weibull statistical distribution is a common method for analyzing wind speed measurements and determining wind energy potential. The Weibull probability density function can be used to forecast wind speed, wind density and wind energy potential. In this study a two-parameter Weibull statistical distribution is used to analyze the wind characteristics of the Alaçatı region, located in Çeşme, İzmir. The data used in the density function were acquired from a wind measurement station in Alaçatı. Measurements were gathered at three different heights (70, 50 and 30 m) at 10 min intervals for five and a half years. As a result of this study, the wind speed frequency distribution, wind direction trends, mean wind speed, and the shape and scale (k and c) Weibull parameters have been calculated for the region. The mean wind speed for the entire data set is found to be 8.11 m/s, and the k and c parameters are found to be 2.05 and 9.16, respectively. A wind direction analysis along with a wind rose graph for the region is also provided. The analysis suggests that higher wind speeds, ranging from 6 to 12 m/s, are prevalent between the sectors 340–360°, while lower wind speeds, from 3 to 6 m/s, occur between the sectors 10–29°. The results of this study contribute to the general knowledge of the region's wind energy potential and can be used as a source for investors and academics.
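
    The quantities reported above follow from the fitted shape k and scale c through standard Weibull formulas: mean speed v̄ = c·Γ(1 + 1/k) and most-energetic speed v_E = c·((k + 2)/k)^(1/k); with k = 2.05 and c = 9.16 these reproduce the quoted 8.11 m/s and 12.77 m/s. A hedged sketch on synthetic data (the generated sample stands in for the station measurements):

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(3)
v = rng.weibull(2.05, size=50_000) * 9.16        # synthetic 10-min mean wind speeds [m/s]

# Fit a two-parameter Weibull (shape k, scale c); floc=0 pins the location at zero.
k, _, c = stats.weibull_min.fit(v, floc=0.0)

v_mean = c * gamma(1.0 + 1.0 / k)                # mean wind speed
v_energy = c * ((k + 2.0) / k) ** (1.0 / k)      # speed carrying the most energy
print(f"k~{k:.2f}, c~{c:.2f} m/s, mean~{v_mean:.2f} m/s, v_E~{v_energy:.2f} m/s")
```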

  17. Modelling the Skinner Thesis : Consequences of a Lognormal or a Bimodal Resource Base Distribution

    NARCIS (Netherlands)

    Auping, W.L.

    2014-01-01

    The copper case is often used as an example in resource depletion studies. Despite these studies, several profound uncertainties remain in the system. One of these uncertainties is the distribution of copper grades in the lithosphere. The Skinner thesis promotes the idea that copper grades may be

  18. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  19. Log-normal spray drop distribution...analyzed by two new computer programs

    Science.gov (United States)

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  20. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling-factor method. We also explain how it can be used when estimating other, more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in decision processes where the uncertainty on the amount of activity is considered important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
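
    The appeal of the log-normal here is closure under multiplication: if a characteristic is a product of factors each carrying log-normal uncertainty, the product is again log-normal, with the log-variances adding. A hedged sketch of combining such factors (the factor structure, e.g. a scaling factor applied to a measured key-nuclide activity, is illustrative, not the paper's worked example):

```python
import numpy as np

def combine_lognormal_factors(gsd_list):
    """Combine independent multiplicative log-normal uncertainties.

    Each factor is described by its geometric standard deviation (GSD).
    The product is log-normal, with the ln-variances summing in quadrature.
    """
    ln_var = sum(np.log(g) ** 2 for g in gsd_list)
    return np.exp(np.sqrt(ln_var))   # GSD of the product

# e.g. a scaling factor (GSD 2.0) applied to a measured key-nuclide activity (GSD 1.3):
gsd_total = combine_lognormal_factors([2.0, 1.3])
# A ~95% uncertainty interval spans a factor gsd_total**1.96 around the geometric mean.
print(gsd_total, gsd_total**1.96)
```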

  1. Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles

    Science.gov (United States)

    Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya

    2018-03-01

    A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.

  2. Lognormal distribution of natural radionuclides in freshwater ecosystems and coal-ash repositories

    International Nuclear Information System (INIS)

    Drndarski, N.; Lavi, N.

    1997-01-01

    This study summarizes and analyses data for the natural radionuclides ⁴⁰K, ²²⁶Ra and ²³²Th, measured by gamma spectrometry in water samples, sediments and coal-ash samples collected from regional freshwater ecosystems and nearby coal-ash repositories during the last decade, 1986-1996. The frequency plots of the natural radionuclide data, for which the hypothesis of regional-scale log-normality was accepted, exhibited single population groups, with the exception of the ²²⁶Ra and ²³²Th data for waters. The presence of break points in the frequency distribution plots indicated that the ²²⁶Ra and ²³²Th data for waters do not come from a single statistical population. Thereafter the hypothesis of log-normality was accepted for the separate population groups of ²²⁶Ra and ²³²Th in waters. (authors)

  3. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    Science.gov (United States)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and is also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
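
    The paper's modified-moments method is not reproduced in the abstract, but the classical starting point, matching the first two moments of the weighted sum to a single lognormal (the Fenton-Wilkinson idea), is a few lines and shows the flavor. A sketch assuming independent summands (weights and parameters arbitrary; this is the textbook baseline, not the paper's refinement):

```python
import numpy as np

def fenton_wilkinson(weights, mu, sigma):
    """Lognormal (mu_S, sigma_S) matching mean/variance of S = sum_i w_i X_i,
    with X_i ~ LogNormal(mu_i, sigma_i^2) independent."""
    w, mu, sigma = map(np.asarray, (weights, mu, sigma))
    m1 = np.sum(w * np.exp(mu + 0.5 * sigma**2))                          # E[S]
    var = np.sum(w**2 * np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1.0))
    sigma_s2 = np.log(1.0 + var / m1**2)          # invert the lognormal moment formulas
    mu_s = np.log(m1) - 0.5 * sigma_s2
    return mu_s, np.sqrt(sigma_s2)

# e.g. a three-asset portfolio with weights 0.5/0.3/0.2:
mu_s, sig_s = fenton_wilkinson([0.5, 0.3, 0.2], [0.0, 0.1, -0.2], [0.3, 0.5, 0.4])
print(mu_s, sig_s)
```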

  4. The reliability assessment of the electromagnetic valve of high-speed electric multiple units braking system based on two-parameter exponential distribution

    Directory of Open Access Journals (Sweden)

    Jianwei Yang

    2016-06-01

    Full Text Available In order to address the reliability assessment of a braking system component of high-speed electric multiple units, this article, based on the two-parameter exponential distribution, provides maximum likelihood estimation and Bayes estimation under a type-I life test. First, we evaluate the failure probability value according to the classical estimation method and then obtain the maximum likelihood estimates of the parameters of the two-parameter exponential distribution using the modified likelihood function. On the other hand, based on Bayesian theory, this article selects the beta and gamma distributions as the prior distributions, combines them with the modified maximum likelihood function, and applies a Markov chain Monte Carlo algorithm to parameter assessment based on the Bayes estimation method for the two-parameter exponential distribution, so that two reliability mathematical models of the electromagnetic valve are obtained. Finally, through the type-I life test, the failure rates according to the maximum likelihood estimation and the Bayes estimation method based on the Markov chain Monte Carlo algorithm are, respectively, 2.650 × 10⁻⁵ and 3.037 × 10⁻⁵. Compared with the observed failure rate of the electromagnetic valve, 3.005 × 10⁻⁵, this shows that the Bayes method can use a Markov chain Monte Carlo algorithm to estimate the reliability of the two-parameter exponential distribution, and that the Bayes estimate is closer to the observed value. Thus, by fully integrating multi-source information, the Bayes estimation method can better modify and more precisely estimate the parameters, which can provide a theoretical basis for the safe operation of high-speed electric multiple units.

  5. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  6. CELL AVERAGING CFAR DETECTOR WITH SCALE FACTOR CORRECTION THROUGH THE METHOD OF MOMENTS FOR THE LOG-NORMAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    José Raúl Machado Fernández

    2018-01-01

    Full Text Available The new LN-MoM-CA-CFAR detector is presented, which exhibits a reduced deviation of the operational false alarm probability from the intended design value. The solution corrects a fundamental problem of CFAR processors that has been ignored in many developments. Indeed, most previously proposed schemes deal with abrupt changes in the clutter level, whereas the present solution corrects slow statistical changes in the background signal. These have been shown to have a marked influence on the selection of the CFAR multiplicative adjustment factor, and consequently on the maintenance of the false alarm probability. The authors took advantage of the high precision achievable in estimating the Log-Normal shape parameter with the method of moments (MoM), and of the wide application of this distribution in clutter modeling, to create an architecture that offers accurate results at low computational cost. After intensive processing of 100 million Log-Normal samples, a scheme was created that, improving the performance of the classical CA-CFAR through continuous correction of its adjustment factor, operates with excellent stability, achieving a deviation of only 0.2884% for a false alarm probability of 0.01.
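
    A minimal sketch of the method-of-moments step the detector relies on, recovering the Log-Normal shape parameter from sample moments via Var[X]/E[X]² = exp(σ²) − 1 (illustrative only, not the LN-MoM-CA-CFAR implementation itself):

```python
import numpy as np

def lognormal_shape_mom(x):
    """Method-of-moments estimate of the Log-Normal shape parameter sigma:
    Var[X] / E[X]^2 = exp(sigma^2) - 1  =>  sigma = sqrt(ln(1 + v / m^2))."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    return np.sqrt(np.log1p(v / m**2))

rng = np.random.default_rng(4)
clutter = rng.lognormal(0.0, 0.7, size=100_000)   # synthetic Log-Normal clutter samples
print(lognormal_shape_mom(clutter))                # ~ 0.7
```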

  7. Beyond lognormal inequality: The Lorenz Flow Structure

    Science.gov (United States)

    Eliazar, Iddo

    2016-11-01

    Observed from a socioeconomic perspective, the intrinsic inequality of the lognormal law happens to manifest a flow generated by an underlying ordinary differential equation. In this paper we extend this feature of the lognormal law to a general 'Lorenz Flow Structure' of Lorenz curves, objects that quantify socioeconomic inequality. The Lorenz Flow Structure establishes a general framework of size distributions that span continuous spectra of socioeconomic states ranging from the pure-communism extreme to the absolute-monarchy extreme. This study introduces and explores the Lorenz Flow Structure, analyzes its statistical properties and its inequality properties, unveils the unique role of the lognormal law within this general structure, and presents various examples of this general structure. Beyond the lognormal law, the examples include the inverse-Pareto and Pareto laws, which often govern the tails of composite size distributions.

  8. Effects of Initial Values and Convergence Criterion in the Two-Parameter Logistic Model When Estimating the Latent Distribution in BILOG-MG 3.

    Directory of Open Access Journals (Sweden)

    Ingo W Nader

    Full Text Available Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT, but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.

  9. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    Science.gov (United States)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  10. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility of interpolating the measured concentrations of CO, NO, NO₂, O₃ and SO₂ during one year (1995) at the 13 stations of the air quality monitoring network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to choosing the method for determining the log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, Kolmogorov-Smirnov index and Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally an example of application is given: the effect of an emission reduction strategy in the Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.

  11. Methodology for lognormal modelling of malignant pleural mesothelioma survival time distributions: a study of 5580 case histories from Europe and USA

    Energy Technology Data Exchange (ETDEWEB)

    Mould, Richard F [41 Ewhurst Avenue, South Croydon, Surrey CR2 0DH (United Kingdom); Lahanas, Michael [Klinikum Offenbach, Strahlenklinik, 66 Starkenburgring, 63069 Offenbach am Main (Germany); Asselain, Bernard [Institut Curie, Biostatistiques, 26 rue d'Ulm, 75231 Paris Cedex 05 (France); Brewster, David [Director, Scottish Cancer Registry, Information Services (NHS National Services Scotland) Area 155, Gyle Square, 1 South Gyle Crescent, Edinburgh EH12 9EB (United Kingdom); Burgers, Sjaak A [Department of Thoracic Oncology, Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Damhuis, Ronald A M [Rotterdam Cancer Registry, Rochussenstraat 125, PO Box 289, 3000 AG Rotterdam (Netherlands); Rycke, Yann De [Institut Curie, Biostatistiques, 26 rue d'Ulm, 75231 Paris Cedex 05 (France); Gennaro, Valerio [Liguria Mesothelioma Cancer Registry, Etiology and Epidemiology Department, National Cancer Research Institute, Pad. Maragliano, Largo R Benzi, 10-16132 Genoa (Italy); Szeszenia-Dabrowska, Neonila [Department of Occupational and Environmental Epidemiology, National Institute of Occupational Medicine, PO Box 199, Swietej Teresy od Dzieciatka Jezus 8, 91-348 Lodz (Poland)

    2004-09-07

    A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0.
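
    The cure-mixture form referred to here writes overall survival as S(t) = C + (1 − C)·S_LN(t), with S_LN the lognormal survivor function; pleural mesothelioma corresponds to C = 0. A small sketch of baseline survival probabilities under that model (the μ and σ values are placeholders, not the paper's fitted parameters):

```python
import numpy as np
from scipy.stats import norm

def survival_mixture(t_weeks, C, mu, sigma):
    """S(t) = C + (1 - C) * (1 - Phi((ln t - mu) / sigma)); C = proportion cured."""
    t = np.asarray(t_weeks, dtype=float)
    return C + (1.0 - C) * norm.sf((np.log(t) - mu) / sigma)

# Pleural mesothelioma: C = 0; placeholder lognormal parameters on a weeks scale.
t = np.array([5, 26, 52, 104, 200])
print(survival_mixture(t, C=0.0, mu=np.log(40.0), sigma=1.0))
```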

  12. Methodology for lognormal modelling of malignant pleural mesothelioma survival time distributions: a study of 5580 case histories from Europe and USA

    International Nuclear Information System (INIS)

    Mould, Richard F; Lahanas, Michael; Asselain, Bernard; Brewster, David; Burgers, Sjaak A; Damhuis, Ronald A M; Rycke, Yann De; Gennaro, Valerio; Szeszenia-Dabrowska, Neonila

    2004-01-01

    A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0

  13. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  14. A physical explanation of the lognormality of pollutant concentrations

    International Nuclear Information System (INIS)

    Ott, W.R.

    1990-01-01

    Investigators in different environmental fields have reported that the concentrations of various measured substances have frequency distributions that are lognormal, or nearly so. That is, when the logarithms of the observed concentrations are plotted as a frequency distribution, the resulting distribution is approximately normal, or Gaussian, over much of the observed range. Examples include radionuclides in soil, pollutants in ambient air, indoor air quality, trace metals in streams, metals in biological tissue, calcium in human remains. The ubiquity of the lognormal distribution in environmental processes is surprising and has not been adequately explained, since common processes in nature (for example, computation of the mean and the analysis of error) usually give rise to distributions that are normal rather than lognormal. This paper takes the first step toward explaining why lognormal distributions can arise naturally from certain physical processes that are analogous to those found in the environment. In this paper, these processes are treated mathematically, and the results are illustrated in a laboratory beaker experiment that is simulated on the computer
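
    The mechanism at issue is the multiplicative analogue of the central limit theorem: a quantity formed as a product of many independent positive random factors has a logarithm that is a sum, hence approximately normal, so the quantity itself is approximately lognormal. A toy simulation in the spirit of the beaker experiment described (the dilution factors and step count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_parcels, n_steps = 100_000, 50

# Each 'parcel' of pollutant is repeatedly diluted by a random positive factor.
conc = np.ones(n_parcels)
for _ in range(n_steps):
    conc *= rng.uniform(0.5, 1.0, size=n_parcels)

# ln(conc) is a sum of i.i.d. terms -> ~normal, so conc is ~lognormal.
log_c = np.log(conc)
skew = ((log_c - log_c.mean()) ** 3).mean() / log_c.std() ** 3
print(f"skewness of ln(conc) ~ {skew:.3f}  (near 0 indicates log-normality)")
```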

  15. Asymptotics of sums of lognormal random variables with Gaussian copula

    DEFF Research Database (Denmark)

    Asmussen, Søren; Rojas-Nandayapa, Leonardo

    2008-01-01

    Let (Y1, ..., Yn) have a joint n-dimensional Gaussian distribution with a general mean vector and a general covariance matrix, and let Xi = eYi, Sn = X1 + ⋯ + Xn. The asymptotics of P (Sn > x) as n → ∞ are shown to be the same as for the independent case with the same lognormal marginals. In part...

  16. Evolution and mass extinctions as lognormal stochastic processes

    Science.gov (United States)

    Maccone, Claudio

    2014-10-01

    In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History in a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b > 0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the 'degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller 'chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the 'trend of progress' in human history. (4) All these results also match with SETI in that the statistical Drake equation (the generalization of the ordinary Drake equation to encompass statistics) leads to the lognormal distribution as the probability distribution for the number of extra

  17. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    Science.gov (United States)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ] of S with respect to a conditioning random variable Λ.
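
    The conditioning construction can be sketched directly: choose a Gaussian conditioning variable Λ, here taken (purely for illustration) as the sum of the Gaussian exponents, and replace each summand by its conditional expectation. The resulting S_l = E[S|Λ] has the same mean as S and smaller variance, making it a convex-order lower bound. A minimal sketch assuming independent exponents, a special case of the general setting:

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([0.0, 0.2, 0.4])
sigma = np.array([0.5, 0.6, 0.7])
n = 10**6

# Gaussian exponents Y_i (taken independent here for simplicity).
Y = mu + sigma * rng.standard_normal((n, 3))
S = np.exp(Y).sum(axis=1)

# Conditioning variable Lambda = sum_i Y_i; r_i = corr(Y_i, Lambda).
sd_lam = np.sqrt((sigma**2).sum())
r = sigma / sd_lam
z = (Y.sum(axis=1) - mu.sum()) / sd_lam          # standardized Lambda

# S_l = E[S | Lambda]: a sum of comonotonic lognormals in the single variable z.
S_l = np.exp(mu + r * sigma * z[:, None] + 0.5 * sigma**2 * (1.0 - r**2)).sum(axis=1)

print(S.mean(), S_l.mean())   # equal up to MC noise: E[E[S|Lambda]] = E[S]
print(S.var(), S_l.var())     # Var[S_l] <= Var[S]: convex-order lower bound
```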

  18. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    Science.gov (United States)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
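
    The log-normal mock recipe itself is compact enough to sketch without the package: draw a Gaussian random field with the target power spectrum in Fourier space, exponentiate, and renormalize to get a positive density field. A minimal 2D version (the power-spectrum shape and normalization are illustrative, and this is not powerbox's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
N, L = 256, 1.0                                  # grid size and box length

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
kk = np.hypot(kx, ky)
pk = np.where(kk > 0, 0.01 * kk**-2.0, 0.0)      # toy power spectrum P(k) ~ k^-2

# Gaussian field with that spectrum: white noise shaped in Fourier space.
white = np.fft.fft2(rng.standard_normal((N, N)))
delta_g = np.real(np.fft.ifft2(white * np.sqrt(pk)))

# Log-normal transform: a positive density field whose log is the Gaussian field;
# subtracting var/2 in the exponent keeps the mean overdensity near zero.
delta_ln = np.exp(delta_g - delta_g.var() / 2.0) - 1.0
```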

  19. The analysis of annual dose distributions for radiation workers

    International Nuclear Information System (INIS)

    Mill, A.J.

    1984-05-01

    The system of dose limitation recommended by the ICRP includes the requirement that no worker shall exceed the current dose limit of 50 mSv/a. Continuous exposure at this limit would correspond to an annual death rate comparable with that of 'high risk' industries if all workers were continuously exposed at the dose limit. In practice, there is a distribution of doses with an arithmetic mean lower than the dose limit. In its 1977 report UNSCEAR defined a reference dose distribution for the purposes of comparison. However, this two-parameter distribution does not show the departure from log-normality normally observed for actual distributions at doses which are a significant proportion of the annual limit. In this report an alternative model is suggested, based on a three-parameter log-normal distribution. The third parameter is an 'effective dose limit', and such a model fits very well the departure from log-normality observed in actual dose distributions. (author)

  20. Log-normality of indoor radon data in the Walloon region of Belgium

    International Nuclear Information System (INIS)

    Cinelli, Giorgia; Tondeur, François

    2015-01-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but this low-C trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data, when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail for which there is no interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a “fat tail”. With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups. - Highlights: • Deviations of the distribution of Belgian indoor Rn data from the log-normal trend. • 3-component structure of indoor Rn: subsoil, outdoor air and building materials. • Simulated data generated to provide a theoretical frame for understanding deviations. • Data organised in homogeneous geological groups; better agreement with the log-normal

  1. The lognormal handwriter: learning, performing and declining.

    Directory of Open Access Journals (Sweden)

    Réjean ePlamondon

    2013-12-01

    Full Text Available The generation of handwriting is a complex neuromotor skill requiring the interaction of many cognitive processes. It aims at producing a message to be imprinted as an ink trace left on a writing medium. The generated trajectory of the pen tip is made up of strokes superimposed over time. The Kinematic Theory of rapid human movements and its family of lognormal models provide analytical representations of these strokes, often considered the basic unit of handwriting. This paradigm has not only been experimentally confirmed in numerous predictive and physiologically significant tests but has also been shown to be the ideal mathematical description for the impulse response of a neuromuscular system. This latter demonstration suggests that the lognormality of the velocity patterns can be interpreted as reflecting the behaviour of subjects who are in perfect control of their movements. To illustrate this interpretation, we present a short overview of the main concepts behind the Kinematic Theory and briefly describe how its models can be exploited, using various software tools, to investigate these ideal lognormal behaviours. We emphasize that the parameters extracted during various tasks can be used to analyze some of the underlying processes associated with their realization. To investigate the operational convergence hypothesis, we report on two original studies. First, we focus on the early steps of the motor learning process, seen as a convergence toward the production of more precise lognormal patterns as young children practicing handwriting start to become more fluent writers. Second, we illustrate how aging affects handwriting by pointing out the increasing departure from the ideal lognormal behaviour as the control of fine motricity begins to decline. Overall, the paper highlights this developmental process of converging toward a lognormal behaviour with learning, mastering this behaviour to succeed in performing a given task

  2. On Riemann zeroes, lognormal multiplicative chaos, and Selberg integral

    International Nuclear Information System (INIS)

    Ostrovsky, Dmitry

    2016-01-01

    Rescaled Mellin-type transforms of the exponential functional of the Bourgade–Kuan–Rodgers statistic of Riemann zeroes are conjecturally related to the distribution of the total mass of the limit lognormal stochastic measure of Mandelbrot–Bacry–Muzy. The conjecture implies that a non-trivial, log-infinitely divisible probability distribution is associated with Riemann zeroes. For application, integral moments, covariance structure, multiscaling spectrum, and asymptotics associated with the exponential functional are computed in closed form using the known meromorphic extension of the Selberg integral. (paper)

  3. Multilevel quadrature of elliptic PDEs with log-normal diffusion

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We apply multilevel quadrature methods for the moment computation of the solution of elliptic PDEs with lognormally distributed diffusion coefficients. The computation of the moments is a difficult task since they appear as high dimensional Bochner integrals over an unbounded domain. Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number of quadrature points times the complexity for a single elliptic PDE solve. The multilevel idea is to reduce this complexity by combining quadrature methods with different accuracies with several spatial discretization levels in a sparse grid like fashion.
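
    As a toy illustration of the multilevel idea, the sketch below estimates E[u(1/2)] for a 1D surrogate problem, -(a u')' = 1 with a = exp(g) and g a small Karhunen-Loeve-style Gaussian field, using plain Monte Carlo as the quadrature on each level. Everything here (the 1D problem, the field, the level and sample counts) is an illustrative assumption, not the paper's method, which combines more general quadrature rules with finite element discretization levels:

```python
import numpy as np

rng = np.random.default_rng(8)

def solve_q(xi, m):
    """FD solve of -(a u')' = 1 on (0,1), u(0)=u(1)=0, with a = exp(g),
    g a toy KL-type Gaussian field driven by xi; returns u(1/2)."""
    x = np.linspace(0.0, 1.0, m + 1)
    k = np.arange(1, xi.size + 1)
    g = (np.sin(np.outer(x, k) * np.pi) * (np.sqrt(2.0) / (np.pi * k)) * xi).sum(axis=1)
    a = np.exp(g)
    h = 1.0 / m
    am = 0.5 * (a[:-1] + a[1:])                  # coefficient at cell midpoints
    A = (np.diag(am[:-1] + am[1:]) - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h**2
    u = np.linalg.solve(A, np.ones(m - 1))       # right-hand side f = 1
    return u[(m // 2) - 1]                       # interior node at x = 1/2

# Two-level telescoping: many cheap coarse solves plus a few low-variance
# fine-minus-coarse corrections, reusing the same random draw per correction.
levels = [(16, 2000), (64, 200)]                 # (mesh, samples) per level
est = 0.0
for lvl, (m, n) in enumerate(levels):
    diffs = []
    for _ in range(n):
        xi = rng.standard_normal(8)
        q_f = solve_q(xi, m)
        q_c = solve_q(xi, m // 4) if lvl > 0 else 0.0
        diffs.append(q_f - q_c)
    est += np.mean(diffs)
print("multilevel MC estimate of E[u(1/2)] ~", est)
```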

  4. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    Science.gov (United States)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are obtained experimentally from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.

  5. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    Science.gov (United States)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and the ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that larger correlation coefficients among sub-channels lead to system performance degradation. Moreover, receiver diversity is more effective in resisting the channel fading caused by spatial correlation.
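
    Wilkinson's method in the correlated case matches the first two moments of S = Σᵢ exp(Yᵢ), with (Yᵢ) jointly Gaussian, to a single log-normal; the cross terms use E[XᵢXⱼ] = exp(μᵢ + μⱼ + (σᵢ² + σⱼ²)/2 + cov(Yᵢ, Yⱼ)). A sketch under those standard formulas (parameters arbitrary; not the paper's system model):

```python
import numpy as np

def wilkinson_correlated(mu, cov):
    """Match S = sum_i exp(Y_i), Y ~ N(mu, cov), to LogNormal(mu_S, sigma_S^2)."""
    mu, cov = np.asarray(mu), np.asarray(cov)
    var_y = np.diag(cov)
    m1 = np.exp(mu + 0.5 * var_y).sum()                       # E[S]
    # E[X_i X_j] = exp(mu_i + mu_j + (var_i + var_j)/2 + cov_ij)
    mm = np.exp(np.add.outer(mu, mu) + 0.5 * np.add.outer(var_y, var_y) + cov)
    m2 = mm.sum()                                             # E[S^2]
    sigma_s2 = np.log(m2 / m1**2)
    return np.log(m1) - 0.5 * sigma_s2, np.sqrt(sigma_s2)

# e.g. three sub-channels with pairwise correlation 0.4:
cov = 0.25 * np.array([[1.0, 0.4, 0.4], [0.4, 1.0, 0.4], [0.4, 0.4, 1.0]])
print(wilkinson_correlated([0.0, 0.0, 0.0], cov))
```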

  6. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit follows the lognormal law. We verify the accordance of the empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint-Petersburg (Russia). To establish this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on “The Russian Federal Estimation Standard N2”, we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode, the most probable value, it follows that prices appraised by the mathematical expectation are systematically overstated.

  7. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    International Nuclear Information System (INIS)

    Agterberg, Frits

    2017-01-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
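
    The Pareto coefficients mentioned here come from straight-line fits in log rank-log size coordinates; a minimal sketch of that fit for the upper tail (the cutoff k is a user choice, not a value from the paper):

        import numpy as np

        def pareto_tail_coefficient(sizes, k):
            """For a Pareto upper tail, rank(x) is proportional to x**(-alpha),
            so the slope of log rank vs. log size over the k largest deposits
            estimates -alpha."""
            x = np.sort(np.asarray(sizes, float))[::-1][:k]   # k largest, descending
            rank = np.arange(1, k + 1)                        # rank 1 = largest deposit
            slope, _ = np.polyfit(np.log(x), np.log(rank), 1)
            return -slope

        # e.g. alpha_hat = pareto_tail_coefficient(copper_tonnages, k=50)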

  8. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    Energy Technology Data Exchange (ETDEWEB)

    Agterberg, Frits, E-mail: agterber@nrcan.gc.ca [Geological Survey of Canada (Canada)

    2017-07-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that

  9. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  10. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
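
    The DPLN law admits a direct sampling representation as a lognormal body with Pareto-type tails attached; a sketch assuming the Reed-Jorgensen representation ln X = mu + sigma*Z + E1/alpha - E2/beta (Z standard normal, E1, E2 standard exponentials); parameter values are illustrative only:

        import numpy as np

        def sample_dpln(mu, sigma, alpha, beta, size, seed=None):
            """Double-Pareto-lognormal draws: the exponential components create
            Pareto-type upper (index alpha) and lower (index beta) tails around
            a lognormal body."""
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(size)
            e1 = rng.standard_exponential(size)
            e2 = rng.standard_exponential(size)
            return np.exp(mu + sigma * z + e1 / alpha - e2 / beta)

        claims = sample_dpln(mu=7.0, sigma=0.8, alpha=2.5, beta=4.0, size=10_000)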

  11. Behaviour interpretation log-normal tenor of uranium in the context of intrusive rocks

    International Nuclear Information System (INIS)

    Valencia, Jacinto; Palacios, Andres; Maguina, Jose

    2015-01-01

    We discuss the analysis and processing of uranium tenor results obtained from an intrusive rock by the gamma spectrometry method, which yield a better correlation between uranium and thorium when the logarithms of the analyses are used; this is represented in a thorium/uranium diagram, obtaining a better response. This holds because the expression of the log-normal distribution provides a closer fit to the spatial distribution of uranium in a mineral deposit. The representation of a normal distribution and a log-normal distribution is shown. In the interpretative part, diagrams are used to explain the behavior of the thorium/uranium relation, and its relation to potassium, from direct measurements of tenors obtained in the field at sampling points along a section of the San Ramon granite (SR) and of the volcanic Mitu Group (GM), where the granite rock of this unit has been identified as a source of uranium. (author)

  12. Weibull and lognormal Taguchi analysis using multiple linear regression

    International Nuclear Information System (INIS)

    Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.

    2015-01-01

    The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation, and (3) to perform the ANOVA analysis on both the robust and the normal operational Weibull families. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper the issues of estimating a Weibull family by using a design of experiments (DOE) are first addressed by using an L9(3^4) orthogonal array (OA) in both the TM and in the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the chosen OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family for both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform their ANOVA analysis in TM and ALT analyses. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.

  13. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of the expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of the expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared to the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis were restricted to the univariate models.

  14. Two-parameter fracture mechanics: Theory and applications

    International Nuclear Information System (INIS)

    O'Dowd, N.P.; Shih, C.F.

    1993-02-01

    A family of self-similar fields provides the two parameters required to characterize the full range of high- and low-triaxiality crack tip states. The two parameters, J and Q, have distinct roles: J sets the size scale of the process zone over which large stresses and strains develop, while Q scales the near-tip stress distribution relative to a high triaxiality reference stress state. An immediate consequence of the theory is this: it is the toughness values over a range of crack tip constraint that fully characterize the material's fracture resistance. It is shown that Q provides a common scale for interpreting cleavage fracture and ductile tearing data thus allowing both failure modes to be incorporated in a single toughness locus. The evolution of Q, as plasticity progresses from small scale yielding to fully yielded conditions, has been quantified for several crack geometries and for a wide range of material strain hardening properties. An indicator of the robustness of the J-Q fields is introduced; Q as a field parameter and as a pointwise measure of stress level is discussed

  15. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.
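
    When only summarized responses (mean ± SD) are available, the log-scale parameters of an assumed lognormal follow from the coefficient of variation; a small sketch of that standard conversion:

        import numpy as np

        def lognormal_params_from_summary(mean, sd):
            """Recover (mu, sigma) of ln X from the arithmetic mean and SD,
            assuming X is log-normally distributed."""
            sigma2 = np.log(1.0 + (sd / mean) ** 2)   # Var(ln X)
            mu = np.log(mean) - 0.5 * sigma2          # E[ln X]
            return mu, np.sqrt(sigma2)

        # e.g. a dose group reported as 310 +/- 48 g body weight (illustrative):
        mu, sigma = lognormal_params_from_summary(310.0, 48.0)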

  16. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption.

  17. Neutron dosimetry and spectrometry with Bonner spheres. Working out a log-normal reference matrix

    International Nuclear Information System (INIS)

    Zaborowski, Henrick.

    1981-11-01

    From the experimental and theoretical studies made on the Bonner sphere system with a 6LiI(Eu) crystal and with a miniaturized 3He counter we get the normalized energy response functions R*sub(i)(E). This normalization is obtained by the mathematization of the resolution function R*(i,E), under the log-normal distribution hypothesis for monoenergetic neutrons, presented in April 1976 at the International Symposium on Californium-252. The fit of the log-normal hypothesis to the experimental and theoretical data is very satisfactory. The tabulated parameter values allow a precise interpolation, at all energies between 0.4 eV and 15 MeV and for all sphere diameters between 2 and 12 inches, of the discretized R*sub(ij) reference matrix for applications to neutron dosimetry and spectrometry [fr]

  18. Lognormal switching times for titanium dioxide bipolar memristors: origin and resolution

    International Nuclear Information System (INIS)

    Medeiros-Ribeiro, Gilberto; Perner, Frederick; Carter, Richard; Abdalla, Hisham; Pickett, Matthew D; Williams, R Stanley

    2011-01-01

    We measured the switching time statistics for a TiO2 memristor and found that they followed a lognormal distribution, which is a potentially serious problem for computer memory and data storage applications. We examined the underlying physical phenomena that determine the switching statistics and proposed a simple analytical model for the distribution based on the drift/diffusion equation and previously measured nonlinear drift behavior. We designed a closed-loop switching protocol that dramatically narrows the time distribution, which can significantly improve memory circuit performance and reliability.

  19. Schur Convexity of Generalized Heronian Means Involving Two Parameters

    Directory of Open Access Journals (Sweden)

    Bencze Mihály

    2008-01-01

    Full Text Available The Schur convexity and Schur-geometric convexity of generalized Heronian means involving two parameters are studied; the main result is then used to obtain several interesting and significant inequalities for generalized Heronian means.

  20. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient [...] optimize the scaling parameter of the covariance. The second estimator decomposes the probability of interest into two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling [...]

  1. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    [...] E[Xe^(−θX)]/L(θ) = x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of a lognormal sum Sn = X1 + ... + Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we [...]

  2. Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio

    Science.gov (United States)

    Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.

    2017-12-01

    Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian, and moreover a lognormal, distribution. Previously, statistical testing procedures which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test which incorporated the results of the former tests were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate that a data assimilation (DA) system must be able to properly use the lognormally distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
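
    A sliding-window version of one of the cited tests is straightforward; a sketch using the Jarque-Bera test to flag windows that look lognormal rather than Gaussian (the window length and significance level are illustrative choices, not values from the study):

        import numpy as np
        from scipy import stats

        def lognormal_flags(series, window, alpha=0.05):
            """True where normality of the raw values is rejected but normality
            of their logarithms is not (positive data required)."""
            flags = []
            for start in range(len(series) - window + 1):
                w = series[start:start + window]
                _, p_raw = stats.jarque_bera(w)           # test the raw values
                _, p_log = stats.jarque_bera(np.log(w))   # test the log values
                flags.append(p_raw < alpha <= p_log)
            return np.array(flags)

        # e.g. flags = lognormal_flags(mixing_ratio_series, window=120)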

  3. Optimal Two Parameter Bounds for the Seiffert Mean

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2013-01-01

    Full Text Available We obtain sharp bounds for the Seiffert mean in terms of a two-parameter family of means. Our results generalize and extend the recent bounds presented in the Journal of Inequalities and Applications (2012) and Abstract and Applied Analysis (2012).

  4. Bubbling and bistability in two parameter discrete systems

    OpenAIRE

    Ambika, G.; Sujatha, N. V.

    2000-01-01

    We present a graphical analysis of the mechanisms underlying the occurrence of bubbling sequences and bistability regions in the bifurcation scenario of a special class of one-dimensional two-parameter maps. The main result of the analysis is that whether bubbling or bistability occurs is decided by the sign of the third derivative at the inflection point of the map function.

  5. The size distributions of all Indian cities

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen; Danforth, Diana

    2017-05-01

    We apply five distributions (lognormal, double-Pareto lognormal, lognormal-upper tail Pareto, Pareto tails-lognormal, and Pareto tails-lognormal with differentiability restrictions) to estimate the size distribution of all Indian cities. Since India contains numerous small cities, it is important to explicitly model the lower-tail behavior for studying the distribution of all Indian cities. Our results rigorously confirm, using both graphical and formal statistical tests, that among these five distributions, Pareto tails-lognormal is a better-suited parametrization of the Indian city size data, verifying that the Indian city size distribution exhibits a strong reverse Pareto in the lower tail, lognormal in the mid-range body, and Pareto in the upper tail.

  6. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. ?? 1988 International Association for Mathematical Geology.

  7. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique

    2015-05-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  8. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim

    2015-01-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  9. Unification of the Two-Parameter Equation of State and the Principle of Corresponding States

    DEFF Research Database (Denmark)

    Mollerup, Jørgen

    1998-01-01

    A two-parameter equation of state is a two-parameter corresponding states model. A two-parameter corresponding states model is composed of two scale factor correlations and a reference fluid equation of state. In a two-parameter equation of state, the reference equation of state is the two-parameter equation of state itself.

  10. ORILAM, a three-moment lognormal aerosol scheme for mesoscale atmospheric model: Online coupling into the Meso-NH-C model and validation on the Escompte campaign

    Science.gov (United States)

    Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert

    2005-09-01

    Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe any kind of distribution, whereas the lognormal one makes an assumption about the distribution form with a smaller number of explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled into three-dimensional mesoscale or CTM models. This paper presents the concepts and hypotheses for a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, mean radius, and standard deviation of the distribution are explicit) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of the lognormal scheme is lower than that of a two-bin sectional scheme.
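
    For a lognormal mode, every moment obeys M_k = N r^k exp(k^2 w / 2) with w = (ln sigma_g)^2, so the predicted triplet (m0, m3, m6) determines the number concentration, median radius and geometric standard deviation. A sketch of that inversion, assuming this common moment convention (not necessarily ORILAM's internal one):

        import numpy as np

        def mode_params_from_moments(m0, m3, m6):
            """Invert M_k = N * r**k * exp(k**2 * w / 2), w = ln(sigma_g)**2:
            M0*M6/M3**2 = exp(9w) isolates w, then r follows from M3/M0."""
            N = m0
            w = np.log(m0 * m6 / m3**2) / 9.0
            r = np.exp((np.log(m3 / m0) - 4.5 * w) / 3.0)
            return N, r, np.exp(np.sqrt(w))   # (number, median radius, sigma_g)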

  11. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.

  12. SYVAC3 parameter distribution package

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T; Skeet, A

    1995-01-01

    SYVAC3 (Systems Variability Analysis Code, generation 3) is a computer program that implements a method called systems variability analysis to analyze the behaviour of a system in the presence of uncertainty. This method is based on simulating the system many times to determine the variation in behaviour it can exhibit. SYVAC3 specializes in systems representing the transport of contaminants, and has several features to simplify the modelling of such systems. It provides a general tool for estimating environmental impacts from the dispersal of contaminants. This report describes a software object type (a generalization of a data type) called Parameter Distribution. This object type is used in SYVAC3, and can also be used independently. Parameter Distribution has the following subtypes: beta distribution; binomial distribution; constant distribution; lognormal distribution; loguniform distribution; normal distribution; piecewise uniform distribution; triangular distribution; and uniform distribution. Some of these distributions can be altered by correlating two parameter distribution objects. This report provides complete specifications for parameter distributions, and also explains how to use them. It should meet the needs of casual users, reviewers, and programmers who wish to add their own subtypes. (author). 30 refs., 75 tabs., 56 figs.

  13. Two-parameter asymptotics in magnetic Weyl calculus

    International Nuclear Information System (INIS)

    Lein, Max

    2010-01-01

    This paper is concerned with small parameter asymptotics of magnetic quantum systems. In addition to a semiclassical parameter ε, the case of small coupling λ to the magnetic vector potential naturally occurs in this context. Magnetic Weyl calculus is adapted to incorporate both parameters, at least one of which needs to be small. Of particular interest is the expansion of the Weyl product, which can be used to expand the product of operators in a small parameter, a technique commonly used to obtain perturbation expansions. Three asymptotic expansions for the magnetic Weyl product of two Hörmander class symbols are proven for (i) ε << 1 and λ << 1, (ii) ε << 1 and λ = 1, as well as (iii) ε = 1 and λ << 1. Expansions (i) and (iii) are impossible to obtain with ordinary Weyl calculus. Furthermore, I relate the results derived by ordinary Weyl calculus with those obtained with magnetic Weyl calculus by one- and two-parameter expansions. To show the power and versatility of magnetic Weyl calculus, I derive the semirelativistic Pauli equation as a scaling limit from the Dirac equation up to errors of fourth order in 1/c.

  14. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  15. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    Science.gov (United States)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered if the lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study in this paper.
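
    The bias and MSE ratios studied here can be reproduced by simulation: generate lognormal samples, compute the correctly specified MLE of the mean and the QMLE implied by a mis-specified Weibull fit, and compare. A sketch with illustrative parameter values (not the paper's settings):

        import numpy as np
        from scipy import stats
        from scipy.special import gamma as gamma_fn

        rng = np.random.default_rng(0)
        true_mean = np.exp(0.5 * 0.6**2)       # mean of lognormal(mu=0, sigma=0.6)

        mle, qmle = [], []
        for _ in range(500):
            x = rng.lognormal(0.0, 0.6, size=50)            # data truly lognormal
            m, s = np.log(x).mean(), np.log(x).std()        # lognormal MLE
            mle.append(np.exp(m + 0.5 * s**2))
            k, _, lam = stats.weibull_min.fit(x, floc=0.0)  # mis-specified fit
            qmle.append(lam * gamma_fn(1.0 + 1.0 / k))      # implied Weibull mean

        mle, qmle = np.array(mle), np.array(qmle)
        bias_ratio = (mle.mean() - true_mean) / (qmle.mean() - true_mean)
        mse_ratio = ((mle - true_mean)**2).mean() / ((qmle - true_mean)**2).mean()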

  16. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui; Yang, Fan; Cheng, Julian; Alouini, Mohamed-Slim

    2015-01-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact BER and approximate BER expressions are presented.

  17. High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels

    KAUST Repository

    Song, Xuegui; Cheng, Julian; Alouini, Mohamed-Slim

    2014-01-01

    Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.

  18. High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels

    KAUST Repository

    Song, Xuegui

    2014-09-01

    Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.

  19. Discrete Lognormal Model as an Unbiased Quantitative Measure of Scientific Performance Based on Empirical Citation Data

    Science.gov (United States)

    Moreira, Joao; Zeng, Xiaohan; Amaral, Luis

    2013-03-01

    Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators, like the h-index, are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily "gamed". The authors acknowledge support from FCT Portugal, and NSF grants

  20. Localized massive halo properties in BAHAMAS and MACSIS simulations: scalings, log-normality, and covariance

    Science.gov (United States)

    Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.

    2018-05-01

    Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas | Mhalo, z), is accurately described by a multivariate log-normal distribution; and 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
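
    Local linear regression of a property on halo mass amounts to a kernel-weighted straight-line fit at each evaluation mass; a generic sketch (kernel choice, width and names are illustrative, not the paper's code):

        import numpy as np

        def llr(log_mhalo, log_prop, x0, width):
            """Gaussian-kernel local linear fit of log property vs. log halo mass.
            Returns the local intercept (normalization at x0) and local slope."""
            w = np.exp(-0.5 * ((log_mhalo - x0) / width) ** 2)
            X = np.stack([np.ones_like(log_mhalo), log_mhalo - x0], axis=1)
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], log_prop * sw, rcond=None)
            return beta   # [intercept, slope] at x0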

  1. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    Science.gov (United States)

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
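
    The generative side of the model is easy to state in code: counts are Poisson draws whose log-rates are multivariate Gaussian, so gene-gene dependence lives entirely in the latent layer. A sketch of the sampler (not the authors' estimation code):

        import numpy as np

        def sample_poisson_lognormal(mu, cov, n, seed=None):
            """n count vectors with Poisson(exp(Z)) entries, Z ~ N(mu, cov)."""
            rng = np.random.default_rng(seed)
            z = rng.multivariate_normal(mu, cov, size=n)   # latent log-rates
            return rng.poisson(np.exp(z))                  # observed counts

        # two positively covarying genes and one independent gene (illustrative):
        cov = np.array([[1.0, 0.6, 0.0],
                        [0.6, 1.0, 0.0],
                        [0.0, 0.0, 1.0]])
        counts = sample_poisson_lognormal(np.full(3, 2.0), cov, n=1000)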

  2. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    HIROSE,Hideo

    1998-01-01

    Types of the distribution: normal distribution (2-parameter); uniform distribution (2-parameter); exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); generalized extreme-value distribution (3-parameter); gamma distribution (3-parameter); extended gamma distribution (3-parameter); log-normal distribution (3-parameter); extended log-normal distribution (3-parameter); generalized ...

  3. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there are no observed bladder cancer cases in an area.
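
    The SMR itself is a one-line computation, which also makes its instability visible; a sketch with made-up counts:

        import numpy as np

        def smr(observed, expected):
            """Standardized Morbidity Ratio per area: observed / expected cases.
            Unstable when 'expected' is small (rare disease or small area)."""
            return np.asarray(observed, float) / np.asarray(expected, float)

        obs = np.array([4, 0, 12])        # observed cases by area (illustrative)
        exp_ = np.array([2.5, 1.1, 9.8])  # expected cases from reference rates
        risk = smr(obs, exp_)             # area 2 gets SMR = 0 despite sparse data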

  4. Outage Performance Analysis of Cooperative Diversity with MRC and SC in Correlated Lognormal Channels

    Directory of Open Access Journals (Sweden)

    Skraparlis D

    2009-01-01

    Full Text Available The study of relaying systems has found renewed interest in the context of cooperative diversity for communication channels suffering from fading. This paper provides analytical expressions for the end-to-end SNR and outage probability of cooperative diversity in correlated lognormal channels, typically found in indoor and specific outdoor environments. The system under consideration utilizes decode-and-forward relaying and Selection Combining or Maximum Ratio Combining at the destination node. The provided expressions are used to evaluate the gains of cooperative diversity compared to noncooperation in correlated lognormal channels, taking into account the spectral and energy efficiency of the protocols and the half-duplex or full-duplex capability of the relay. Our analysis demonstrates that correlation and lognormal variances play a significant role in the performance gain of cooperative diversity over noncooperation.

  5. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui

    2015-03-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact BER and approximate BER expressions are presented. We demonstrate that the approximate BER, which is obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through our asymptotic analysis, we derive a closed-form asymptotic BER performance loss expression for MDPSK with respect to MPSK in lognormal turbulence channels. © 2015 IEEE.

  6. Virtual walks in spin space: A study in a family of two-parameter models

    Science.gov (United States)

    Mullick, Pratik; Sen, Parongama

    2018-05-01

    We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x,t), the probability that the walker is at position x at time t, is studied in detail. In general S(x,t) ~ t^(-α) f(x/t^α) with α ≃ 1 or 0.5 at large times, depending on the parameters. In particular, S(x,t) for the point y = 1, z = 0.5 corresponding to the Voter model shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L^2 log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x,t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.

  7. Nash equilibria in quantum games with generalized two-parameter strategies

    International Nuclear Information System (INIS)

    Flitney, Adrian P.; Hollenberg, Lloyd C.L.

    2007-01-01

    In the Eisert protocol for 2x2 quantum games [J. Eisert, et al., Phys. Rev. Lett. 83 (1999) 3077], a number of authors have investigated the features arising from making the strategic space a two-parameter subset of single qubit unitary operators. We argue that the new Nash equilibria and the classical-quantum transitions that occur are simply an artifact of the particular strategy space chosen. By choosing a different, but equally plausible, two-parameter strategic space we show that different Nash equilibria with different classical-quantum transitions can arise. We generalize the two-parameter strategies and also consider these strategies in a multiplayer setting

  8. Eigenstates of the higher power of the annihilation operator of two-parameter deformed harmonic oscillator

    International Nuclear Information System (INIS)

    Wang Jisuo; Sun Changyong; He Jinyu

    1996-01-01

    The eigenstates of the higher powers of the annihilation operator a_qs^k (k ≥ 3) of the two-parameter deformed harmonic oscillator are constructed. Their completeness is demonstrated in terms of the qs-integration.

  9. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa; Alouini, Mohamed-Slim

    2015-01-01

    Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it has become of interest to investigate the behavior of such a metric at high SNR.

  10. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric; Haakon, Hoel; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error.

  11. Asymptotic Expansions of the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Cyril Grunspan

    2011-01-01

    We invert the Black-Scholes formula. We consider the cases of low strike, large strike, short maturity and large maturity. We give explicitly the first 5 terms of the expansions. A method to compute all the terms by induction is also given. At the money, we have a closed-form formula for the implied lognormal volatility in terms of a power series in the call price.
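
    Any such expansion is naturally checked against a brute-force numerical inversion of the Black-Scholes formula; a generic sketch (the at-the-money leading term sigma*sqrt(T) ≈ sqrt(2*pi)*C/S quoted in the comment is the classical approximation, not a result taken from this paper):

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import norm

        def bs_call(S, K, T, r, sigma):
            """Black-Scholes European call price under lognormal dynamics."""
            d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        def implied_vol(price, S, K, T, r):
            """Implied lognormal volatility by root bracketing."""
            return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

        iv = implied_vol(price=4.0, S=100.0, K=100.0, T=1.0, r=0.0)
        atm_seed = np.sqrt(2.0 * np.pi) * 4.0 / 100.0   # leading-order ATM term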

  12. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    Science.gov (United States)

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time to donating blood plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 samples of first-time donors in the Shahrekord Blood Transfusion Center, capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these samples, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed based on a log-normal hazard model with gamma correlated frailty. In this model, the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA program in R. Age, job and education had significant effects on the chance to donate blood (P < 0.05); the chance of donation was higher for higher-aged donors, clerical workers, laborers, the self-employed, students and educated donors, and, in turn, the time intervals between their blood donations were shorter. Due to the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.

  13. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    [...] model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  14. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment.

  15. EVIDENCE FOR TWO LOGNORMAL STATES IN MULTI-WAVELENGTH FLUX VARIATION OF FSRQ PKS 1510-089

    Energy Technology Data Exchange (ETDEWEB)

    Kushwaha, Pankaj; Misra, Ranjeev [Inter University Center for Astronomy and Astrophysics, Pune 411007 (India); Chandra, Sunil; Singh, K. P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Sahayanathan, S. [Astrophysical Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Baliyan, K. S., E-mail: pankajk@iucaa.in [Physical Research Laboratory, Ahmedabad 380009 (India)

    2016-05-01

    We present a systematic characterization of multi-wavelength emission from the blazar PKS 1510-089 using well-sampled data at near-infrared (NIR), optical, X-ray, and γ-ray energies. The resulting flux distributions, except at X-rays, show two distinct lognormal profiles corresponding to a high and a low flux level. The dispersions exhibit energy-dependent behavior except in the LAT γ-ray and optical B-band. During the low flux states, the dispersion is higher toward the peak of the spectral energy distribution, with γ-rays being intrinsically more variable, followed by IR and then optical, consistent with mainly being a result of a varying bulk Lorentz factor. On the other hand, the dispersions during the high state are similar in all bands except the optical B-band, where thermal emission still dominates. The centers of the distributions are a factor of ∼4 apart, consistent with expectations from studies of the extragalactic γ-ray background, with the high state showing a relatively harder mean spectral index compared to the low state.

  16. An Application of a Multidimensional Extension of the Two-Parameter Logistic Latent Trait Model.

    Science.gov (United States)

    McKinley, Robert L.; Reckase, Mark D.

    A latent trait model is described that is appropriate for use with tests that measure more than one dimension, and its application to both real and simulated test data is demonstrated. Procedures for estimating the parameters of the model are presented. The research objectives are to determine whether the two-parameter logistic model more…

  17. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir B.

    2017-02-15

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric that is used to evaluate the performance of wireless systems. One difficulty in assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case of the Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, the accuracies of which are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of a mean shifting and a covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
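
    The mean-shifting half of such a hybrid scheme is compact to write down: sample the Gaussian exponents from a shifted law that favors the rare event and reweight by the likelihood ratio. A simplified sketch showing only that ingredient (covariance scaling omitted; the shift is fixed by hand here, whereas the paper optimizes the proposal):

        import numpy as np

        def exceedance_is(mu, cov, gamma, shift, n=100_000, seed=None):
            """Estimate P(sum_i exp(Y_i) > gamma), Y ~ N(mu, cov), by sampling
            Y ~ N(mu + shift, cov) and reweighting with the Gaussian likelihood
            ratio; 'shift' should push mass toward the rare region."""
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(cov)
            y = mu + shift + rng.standard_normal((n, len(mu))) @ L.T
            ci_shift = np.linalg.solve(cov, shift)
            logw = -(y - mu) @ ci_shift + 0.5 * shift @ ci_shift   # f/g weights
            hit = np.exp(y).sum(axis=1) > gamma                    # rare event
            return np.mean(np.exp(logw) * hit)

        cov = np.array([[1.0, 0.5], [0.5, 1.0]])
        p = exceedance_is(np.zeros(2), cov, gamma=50.0, shift=np.array([3.0, 3.0]))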

  18. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2017-01-01

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric that is used to evaluate the performance of wireless systems. One difficulty toward assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case of the Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, the accuracies of which are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of a mean shifting and a covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.

  19. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim; Rached, Nadhir B.; Kammoun, Abla; Tempone, Raul

    2018-01-01

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions.

  20. Bending of an Infinite beam on a base with two parameters in the absence of a part of the base

    Directory of Open Access Journals (Sweden)

    Aleksandrovskiy Maxim

    2018-01-01

    Full Text Available Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods become topical. The rigor of analytical methods allows a more detailed and accurate characterization of the behavior of structures, which affects the reliability of objects and can lead to a reduction in their cost. In the article, a model with two parameters is used as a computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shift parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter voided base. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed value of the length of the dip section. The paper analyzes the obtained results for various values of the coefficient taking into account the cohesion of the ground.

  1. Bending of an Infinite beam on a base with two parameters in the absence of a part of the base

    Science.gov (United States)

    Aleksandrovskiy, Maxim; Zaharova, Lidiya

    2018-03-01

    Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods become topical. The rigor of analytical methods allows a more detailed and accurate characterization of the behavior of structures, which affects the reliability of objects and can lead to a reduction in their cost. In the article, a model with two parameters is used as a computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shift parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter voided base. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed value of the length of the dip section. The paper analyzes the obtained results for various values of the coefficient taking into account the cohesion of the ground.

  2. Possible Lognormal Distribution of Fermi-LAT Data of OJ 287 G. G. ...

    Indian Academy of Sciences (India)

    random noise is helpful in the search for periodicity and provides implications for the physical process in the jet or the accretion disk. OJ 287 was also monitored in the … understanding the central engine of a blazar (Figures 1, 2 and 3).

  3. A two-parameter family of double-power-law biorthonormal potential-density expansions

    Science.gov (United States)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn

    2018-05-01

    We present a two-parameter family of biorthonormal double-power-law potential-density expansions. Both the potential and density are given in closed analytic form and may be rapidly computed via recurrence relations. We show that this family encompasses all the known analytic biorthonormal expansions: the Zhao expansions (themselves generalizations of ones found earlier by Hernquist & Ostriker and by Clutton-Brock) and the recently discovered Lilley et al. (2017a) expansion. Our new two-parameter family includes expansions based around many familiar spherical density profiles as zeroth-order models, including the γ models and the Jaffe model. It also contains a basis expansion that reproduces the famous Navarro-Frenk-White (NFW) profile at zeroth order. The new basis expansions have been found via a systematic methodology which has wide applications in finding other new expansions. In the process, we also uncovered a novel integral transform solution to Poisson's equation.

  4. Thermodynamics of two-parameter quantum group Bose and Fermi gases

    International Nuclear Information System (INIS)

    Algin, A.

    2005-01-01

    The high and low temperature thermodynamic properties of the two-parameter deformed quantum group Bose and Fermi gases with SU_{p/q}(2) symmetry are studied. Starting with an SU_{p/q}(2)-invariant bosonic as well as fermionic Hamiltonian, several thermodynamic functions of the system such as the average number of particles, internal energy and equation of state are derived. The effects of the two real independent deformation parameters p and q on the properties of the systems are discussed. Particular emphasis is given to a discussion of the Bose-Einstein condensation phenomenon for the two-parameter deformed quantum group Bose gas. The results are also compared with earlier undeformed and one-parameter deformed versions of Bose and Fermi gas models. (author)

  5. Band head spin assignment of superdeformed bands in 133Pr using two-parameter formulae

    Science.gov (United States)

    Sharma, Honey; Mittal, H. M.

    2018-03-01

    The two-parameter formulae, viz. the power index formula, the nuclear softness formula and the VMI model, are adopted to assign the band head spin (I0) of four superdeformed rotational bands in 133Pr. The technique of least-squares fitting is used to assign the band head spins. The root-mean-square deviations between the computed transition energies and the well-known experimental transition energies are obtained by extracting the model parameters from the two-parameter formulae. The determined transition energies are in excellent agreement with the experimental transition energies whenever the exact spins are assigned. The power index formula coincides well with the experimental data and provides the minimum root-mean-square deviation, making it a more efficient tool than the nuclear softness formula and the VMI model. The variation of the dynamic moment of inertia J(2) with rotational frequency is also examined.

  6. Quantum-classical crossover of the escape rate in the two-parameter doubly periodic potential

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Bin [Department of Physics, Hubei University, Wuhan 430062, Hubei (China)]. E-mail: binzhoucn@yahoo.com

    2005-05-09

    The transition from quantum tunneling to classical hopping for a two-parameter doubly periodic potential is investigated. According to Chudnovsky's criterion for the first-order transition, the transition is found to be of first or second order depending on the parameter region. The phase boundary lines between first- and second-order transitions are calculated, and a complete phase diagram is presented.

  7. Quantum-classical crossover of the escape rate in the two-parameter doubly periodic potential

    International Nuclear Information System (INIS)

    Zhou Bin

    2005-01-01

    The transition from quantum tunneling to classical hopping for a two-parameter doubly periodic potential is investigated. According to Chudnovsky's criterion for the first-order transition, the transition is found to be of first or second order depending on the parameter region. The phase boundary lines between first- and second-order transitions are calculated, and a complete phase diagram is presented.

  8. Quantum classical crossover of the escape rate in the two-parameter doubly periodic potential

    Science.gov (United States)

    Zhou, Bin

    2005-05-01

    The transition from quantum tunneling to classical hopping for a two-parameter doubly periodic potential is investigated. According to Chudnovsky's criterion for the first-order transition, the transition is found to be of first or second order depending on the parameter region. The phase boundary lines between first- and second-order transitions are calculated, and a complete phase diagram is presented.

  9. The two-parameter deformation of GL(2), its differential calculus, and Lie algebra

    International Nuclear Information System (INIS)

    Schirrmacher, A.; Wess, J.

    1991-01-01

    The Yang-Baxter equation is solved in two dimensions giving rise to a two-parameter deformation of GL(2). The transformation properties of quantum planes are briefly discussed. Non-central determinant and inverse are constructed. A right-invariant differential calculus is presented and the role of the different deformation parameters investigated. While the corresponding Lie algebra relations are simply deformed, the comultiplication exhibits both quantization parameters. (orig.)

  10. Bending analysis of agglomerated carbon nanotube-reinforced beam resting on two parameters modified Vlasov model foundation

    Science.gov (United States)

    Ghorbanpour Arani, A.; Zamani, M. H.

    2018-06-01

    The present work deals with the bending behavior of a nanocomposite beam resting on a two-parameter modified Vlasov model foundation (MVMF), with consideration of the agglomeration and distribution of carbon nanotubes (CNTs) in the beam matrix. An equivalent fiber based on the Eshelby-Mori-Tanaka approach is employed to determine the influence of CNT agglomeration on the elastic properties of the CNT-reinforced beam. The governing equations are deduced using the principle of minimum potential energy under the assumptions of Euler-Bernoulli beam theory. The MVMF requires the estimation of the γ parameter; to this purpose, an iterative technique based on variational principles is utilized to compute the value of γ, and subsequently the fourth-order differential equation is solved analytically. Eventually, the transverse displacements and bending stresses are obtained and compared for different agglomeration parameters and various boundary conditions simultaneously, and for various elastic foundations without the need to specify values for the foundation parameters.

  11. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for the year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes; biases in long-range transport also contribute. Radiative properties of the dust aerosol are derived from the approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which show good consistency with model estimates from other studies.

  12. Generating log-normal mock catalog of galaxies in redshift space

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun; Komatsu, Eiichiro [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Jeong, Donghui, E-mail: aniket@mpa-garching.mpg.de, E-mail: makiya@mpa-garching.mpg.de, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: djeong@psu.edu, E-mail: ssaito@mpa-garching.mpg.de, E-mail: komatsu@mpa-garching.mpg.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
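
    The core recipe (exponentiate a Gaussian field, then Poisson-sample galaxies) can be illustrated in one dimension. Below is a minimal sketch, with a white-noise Gaussian field standing in for the correlated field that the actual code draws from an input power spectrum, and with the velocity computation omitted; the field rms and mean galaxy density are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

n_cells = 512
sigma_g = 0.8   # hypothetical rms of the Gaussian field
nbar = 2.0      # hypothetical mean number of galaxies per cell

# Log-normal density with unit mean: <exp(g)> = exp(sigma^2 / 2),
# so subtracting sigma^2/2 in the exponent normalizes the field.
g = rng.normal(0.0, sigma_g, n_cells)
density = np.exp(g - 0.5 * sigma_g**2)

# Poisson-sample galaxy counts cell by cell
counts = rng.poisson(nbar * density)
print(counts.sum(), "galaxies in", n_cells, "cells")
```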

  13. On the low SNR capacity of log-normal turbulence channels with full CSI

    KAUST Repository

    Benkhelifa, Fatma; Tall, Abdoulaye; Rezki, Zouheir; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression of the capacity and we show that it scales essentially as λ·SNR, where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression of λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.

  14. On the low SNR capacity of log-normal turbulence channels with full CSI

    KAUST Repository

    Benkhelifa, Fatma

    2014-09-01

    In this paper, we characterize the low signal-to-noise ratio (SNR) capacity of wireless links undergoing log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed-form asymptotic expression of the capacity and we show that it scales essentially as λ·SNR, where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression of λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.

  15. Indirect estimation of the Convective Lognormal Transfer function model parameters for describing solute transport in unsaturated and undisturbed soil.

    Science.gov (United States)

    Mohammadi, Mohammad Hossein; Vanclooster, Marnik

    2012-05-01

    Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of the soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions the air-filled pore volume acts as an inert solid phase, allowing the use of the pragmatic approach of Arya et al. (1999) to estimate solute travel time statistics from the saturation degree and the SMC parameters. The approach is evaluated using a set of partially saturated transport experiments presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested breakthrough curves, solute transport predicted with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. In view of the fact that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
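
    For reference, the two forward models considered (polynomial and division) can be sketched as below; correcting an image requires inverting these maps, which the paper does inside its iterative optimization. The distortion coefficients, points and center are hypothetical.

```python
import numpy as np

def polynomial_model(points, center, k1, k2):
    """Two-parameter polynomial radial model: points are displaced
    radially by a factor (1 + k1*r^2 + k2*r^4) about the center."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2**2)

def division_model(points, center, k1, k2):
    """Two-parameter division model: radial factor 1/(1 + k1*r^2 + k2*r^4)."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2 + k2 * r2**2)

pts = np.array([[100.0, 50.0], [620.0, 400.0]])
center = np.array([320.0, 240.0])
print(polynomial_model(pts, center, k1=1e-7, k2=1e-13))
print(division_model(pts, center, k1=1e-7, k2=1e-13))
```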

  17. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  18. A two-parameter nondiffusive heat conduction model for data analysis in pump-probe experiments

    Science.gov (United States)

    Ma, Yanbao

    2014-12-01

    Nondiffusive heat transfer has attracted intensive research interest in the last 50 years because of its importance in fundamental physics and engineering applications. It has unique features that cannot be described by the Fourier law. However, current studies of nondiffusive heat transfer still focus on the effective thermal conductivity within the framework of the Fourier law, due to the lack of a well-accepted replacement. Here, we show that nondiffusive heat conduction can be characterized by two inherent material properties: a diffusive thermal conductivity and a ballistic transport length. We also present a two-parameter heat conduction model and demonstrate its validity in different pump-probe experiments. This model not only offers new insights into nondiffusive heat conduction but also opens up new avenues for the study of nondiffusive heat transfer outside the framework of the Fourier law.

  19. Vibrations And Stability Of Bernoulli-Euler And Timoshenko Beams On Two-Parameter Elastic Foundation

    Directory of Open Access Journals (Sweden)

    Obara P.

    2014-12-01

    Full Text Available The vibration and stability analysis of uniform beams supported on a two-parameter elastic foundation is performed. The second foundation parameter is a function of the total rotation of the beam. The effects of axial force, foundation stiffness parameters, transverse shear deformation and rotatory inertia are incorporated into the vibration analysis. The work addresses the important question of the relationships between the parameters describing the beam vibration, the compressive force and the foundation parameters. For the freely supported beam, exact formulas for the natural vibration frequencies and the critical forces are derived, together with the formula defining the relationship between the vibration frequency and the compressive force. For other beam support conditions, conditional equations are obtained; these equations determine the dependence of the vibration frequency on the compressive force for the assumed parameters of the elastic foundation and the slenderness of the beam.

  20. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-05-01

    Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of this metric at high signal-to-noise ratio (SNR). In this work, we propose simple closed-form asymptotic expressions for the ergodic capacity of dual-branch correlated Log-Normal channels corresponding to selection combining and switch-and-stay combining. Furthermore, we capitalize on these new results to find the asymptotic ergodic capacity of a correlated dual-branch free-space optical communication system under the impact of pointing error, with both heterodyne and intensity modulation/direct detection. © 2015 IEEE.

  1. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Two-parameter nonlinear spacetime perturbations: gauge transformations and gauge invariance

    International Nuclear Information System (INIS)

    Bruni, Marco; Gualtieri, Leonardo; Sopuerta, Carlos F

    2003-01-01

    An implicit fundamental assumption in relativistic perturbation theory is that there exists a parametric family of spacetimes that can be Taylor expanded around a background. The choice of the latter is crucial to obtain a manageable theory, so that it is sometimes convenient to construct a perturbative formalism based on two (or more) parameters. The study of perturbations of rotating stars is a good example: in this case one can treat the stationary axisymmetric star using a slow rotation approximation (expansion in the angular velocity Ω), so that the background is spherical. Generic perturbations of the rotating star (say, parametrized by λ) are then built on top of the axisymmetric perturbations in Ω. Clearly, any interesting physics requires nonlinear perturbations, as at least terms of order λΩ need to be considered. In this paper, we analyse the gauge dependence of nonlinear perturbations depending on two parameters, derive explicit higher-order gauge transformation rules and define gauge invariance. The formalism is completely general and can be used in different applications of general relativity or any other spacetime theory

  3. TWO-PARAMETER ISOTHERMS OF METHYL ORANGE SORPTION BY PINECONE DERIVED ACTIVATED CARBON

    Directory of Open Access Journals (Sweden)

    M. R. Samarghandi, M. Hadi, S. Moayedi, F. Barjasteh Askari

    2009-10-01

    Full Text Available The adsorption of the mono azo dye methyl orange (MeO) onto granular pinecone-derived activated carbon (GPAC) from aqueous solutions was studied in a batch system. Seven two-parameter isotherm models (Langmuir, Freundlich, Dubinin-Radushkevich, Temkin, Halsey, Jovanovic and Harkins-Jura) were used to fit the experimental data. The results revealed that the isotherm models fitted the data in the order Jovanovic (X² = 1.374) > Langmuir > Dubinin-Radushkevich > Temkin > Freundlich > Halsey > Harkins-Jura. Adsorption isotherm modeling showed that the interaction of the dye with the activated carbon surface is localized monolayer adsorption. A comparison of kinetic models was evaluated for the pseudo-second-order, Elovich and Lagergren kinetic models; the Lagergren first-order model was found to agree well with the experimental data (X² = 9.231). In order to determine the best-fit isotherm and kinetic models, two error analysis methods, residual mean square error and the chi-square statistic (X²), were used to evaluate the data.
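
    Fitting two-parameter isotherms of this kind is a small nonlinear least-squares problem. The sketch below fits the Langmuir and Freundlich models with scipy, using hypothetical equilibrium data and RMSE in place of the paper's chi-square criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n)."""
    return kf * ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce (mg/L), qe (mg/g)
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([12.0, 20.0, 30.0, 38.0, 43.0])

for model, p0 in [(langmuir, (50.0, 0.05)), (freundlich, (5.0, 2.0))]:
    popt, _ = curve_fit(model, ce, qe, p0=p0)
    rmse = np.sqrt(np.mean((qe - model(ce, *popt)) ** 2))
    print(model.__name__, popt, "RMSE:", rmse)
```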

  4. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of paramount interest, as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of the left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
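
    A minimal sketch of the mean-shifting importance sampling ingredient follows (the paper combines it with a control variate, omitted here): sample the Gaussian exponents from a shifted distribution under which the left-tail event is no longer rare, and reweight by the exact likelihood ratio. The dimension, covariance, threshold and shift below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical correlated log-normal setup (Gaussian copula)
K = 4
mu = np.zeros(K)
cov = 0.4 * np.ones((K, K)) + 0.6 * np.eye(K)
cov_inv = np.linalg.inv(cov)

def left_tail_is(gamma, shift, n=10**5):
    """Estimate P(sum_k exp(X_k) < gamma) by sampling X from
    N(mu + shift, cov) and reweighting by p(x)/q(x)."""
    x = rng.multivariate_normal(mu + shift, cov, size=n)
    s = np.exp(x).sum(axis=1)
    # log p(x)/q(x) = -(x - mu)' C^-1 shift + 0.5 * shift' C^-1 shift
    llr = -(x - mu) @ cov_inv @ shift + 0.5 * shift @ cov_inv @ shift
    return np.mean((s < gamma) * np.exp(llr))

# Shift all means downward so that small sums become typical under q
print(left_tail_is(gamma=0.5, shift=-2.0 * np.ones(K)))
```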

  5. Statistical distributions as applied to environmental surveillance data

    International Nuclear Information System (INIS)

    Speer, D.R.; Waite, D.A.

    1976-01-01

    Application of normal, lognormal, and Weibull distributions to radiological environmental surveillance data was investigated for approximately 300 nuclide-medium-year-location combinations. The fit of the data to the distributions was compared through probability plotting (special graph paper provides a visual check) and W test calculations. Results show that 25% of the data fit the normal distribution, 50% fit the lognormal, and 90% fit the Weibull. A demonstration of how to plot each distribution shows that the normal and lognormal distributions are comparatively easy to use, while the Weibull distribution is complicated and difficult to use. Although current practice is to use normal distribution statistics, the normal distribution fit the least number of the data groups considered in this study
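
    A modern equivalent of that comparison can be sketched with scipy: fit each candidate distribution and screen the fits with a one-sample Kolmogorov-Smirnov statistic. The study itself used probability plots and the W test; the surrogate data below are synthetic, and KS p-values computed with fitted parameters are only a rough guide.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=0.7, size=300)  # synthetic surrogate

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    ks = stats.kstest(data, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```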

  6. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
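
    The generative side of the hierarchical Poisson log-normal model is easy to sketch: latent log-means are drawn from a multivariate normal whose covariance encodes the dependence structure, and counts are Poisson given those log-means. The dimensions and covariance below are hypothetical, and the Lasso-penalized inference step is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# p genes, n samples; one correlated gene pair encoded in Sigma
p, n = 5, 100
mu = np.full(p, 2.0)
Sigma = 0.3 * np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.2

Z = rng.multivariate_normal(mu, Sigma, size=n)  # latent log-means
counts = rng.poisson(np.exp(Z))                 # observed counts

# Overdispersion: per-gene variance exceeds the mean, as in RNA-seq
print("means:    ", counts.mean(axis=0))
print("variances:", counts.var(axis=0))
```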

  7. Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-02-07

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.

  8. Energy-harvesting in cooperative AF relaying networks over log-normal fading channels

    KAUST Repository

    Rabie, Khaled M.; Salem, Abdelhamid; Alsusa, Emad; Alouini, Mohamed-Slim

    2016-01-01

    Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and of the power splitting factor in the PSR protocol, is the key to achieving the best system performance. © 2016 IEEE.

  9. Energy-harvesting in cooperative AF relaying networks over log-normal fading channels

    KAUST Repository

    Rabie, Khaled M.

    2016-07-26

    Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and of the power splitting factor in the PSR protocol, is the key to achieving the best system performance. © 2016 IEEE.

  10. Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.; Adebisi, Bamidele; Alouini, Mohamed-Slim

    2017-01-01

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.

  11. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performance similar to that of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to tackle problems with rough coefficients efficiently as well, significantly improving the performance of a standard Monte Carlo scheme.
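
    A generic sketch of the control variate idea follows, with a simple analytic surrogate standing in for the adaptive sparse grid approximation; the functions and distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def control_variate_mc(f, g, g_mean, sampler, n=10**4):
    """Monte Carlo estimate of E[f(Y)] using a surrogate g with
    known mean g_mean as a control variate."""
    y = sampler(n)
    fy, gy = f(y), g(y)
    beta = np.cov(fy, gy)[0, 1] / gy.var()  # near-optimal coefficient
    return fy.mean() - beta * (gy.mean() - g_mean)

# Toy quantity of interest and a linear surrogate with known mean 0
f = lambda y: np.log1p(np.exp(y))   # nonlinear functional of the input
g = lambda y: y
sampler = lambda n: rng.normal(0.0, 1.0, n)
print(control_variate_mc(f, g, g_mean=0.0, sampler=sampler))
```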

  12. Two-parameter quantum affine algebra U_{r,s}(\hat{sl}_n), Drinfeld realization and quantum affine Lyndon basis

    International Nuclear Information System (INIS)

    Hu Naihong; Rosso, M.; Zhang Honglian

    2006-12-01

    We further find the defining structure of a two-parameter quantum affine algebra U_{r,s}(\hat{sl}_n) (n > 2) in the sense of Benkart-Witherspoon [BW1] after the work of [BGH1], [HS] and [BH], which turns out to be a Drinfeld double. Of more importance for the 'affine' cases is that we work out the compatible two-parameter version of the Drinfeld realization as a quantum affinization of U_{r,s}(sl_n) and establish the Drinfeld isomorphism theorem in the two-parameter setting by developing a new remarkable combinatorial approach, the quantum 'affine' Lyndon basis, with an explicit valid algorithm based on the Drinfeld realization. (author)

  13. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of the implied normal volatility in terms of the time value of a European call option. Next, we formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.

  14. Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.

    Science.gov (United States)

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. Univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and are modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and to identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them, and it significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes the estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and

  15. Investigation of time and weather effects on crash types using full Bayesian multivariate Poisson lognormal models.

    Science.gov (United States)

    El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul

    2014-12-01

    Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A two-parameter family of exact asymptotically flat solutions to the Einstein-scalar field equations

    International Nuclear Information System (INIS)

    Nikonov, V V; Tchemarina, Ju V; Tsirulev, A N

    2008-01-01

    We consider a static spherically symmetric real scalar field, minimally coupled to Einstein gravity. A two-parameter family of exact asymptotically flat solutions is obtained by using the inverse problem method. This family includes non-singular solutions, black holes and naked singularities. For each of these solutions the respective potential is partially negative but positive near spatial infinity.

  17. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    Science.gov (United States)

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods was examined. Overall results showed that…

  18. Expectation values of local fields for a two-parameter family of integrable models and related perturbed conformal field theories

    International Nuclear Information System (INIS)

    Baseilhac, P.; Fateev, V.A.

    1998-01-01

    We calculate the vacuum expectation values of local fields for the two-parameter family of integrable field theories introduced and studied by Fateev (1996). Using this result we propose an explicit expression for the vacuum expectation values of local operators in parafermionic sine-Gordon models and in integrable perturbed SU(2) coset conformal field theories. (orig.)

  19. Determining prescription durations based on the parametric waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-01-01

    two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users … in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies … (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide …

  20. An alternative factorization of the quantum harmonic oscillator and two-parameter family of self-adjoint operators

    International Nuclear Information System (INIS)

    Arcos-Olalla, Rafael; Reyes, Marco A.; Rosu, Haret C.

    2012-01-01

    We introduce an alternative factorization of the Hamiltonian of the quantum harmonic oscillator which leads to a two-parameter self-adjoint operator from which the standard harmonic oscillator, the one-parameter oscillators introduced by Mielnik, and the Hermite operator are obtained in certain limits of the parameters. In addition, a single Bernoulli-type parameter factorization, which is different from the one introduced by M.A. Reyes, H.C. Rosu, and M.R. Gutiérrez [Phys. Lett. A 375 (2011) 2145], is briefly discussed in the final part of this work. -- Highlights: ► Factorizations with operators which are not mutually adjoint are presented. ► New two-parameter and one-parameter self-adjoint oscillator operators are introduced. ► Their eigenfunctions are two- and one-parameter deformed Hermite functions.

  1. An alternative factorization of the quantum harmonic oscillator and two-parameter family of self-adjoint operators

    Energy Technology Data Exchange (ETDEWEB)

    Arcos-Olalla, Rafael, E-mail: olalla@fisica.ugto.mx [Departamento de Física, DCI Campus León, Universidad de Guanajuato, Apdo. Postal E143, 37150 León, Gto. (Mexico); Reyes, Marco A., E-mail: marco@fisica.ugto.mx [Departamento de Física, DCI Campus León, Universidad de Guanajuato, Apdo. Postal E143, 37150 León, Gto. (Mexico); Rosu, Haret C., E-mail: hcr@ipicyt.edu.mx [IPICYT, Instituto Potosino de Investigacion Cientifica y Tecnologica, Apdo. Postal 3-74 Tangamanga, 78231 San Luis Potosí, S.L.P. (Mexico)

    2012-10-01

    We introduce an alternative factorization of the Hamiltonian of the quantum harmonic oscillator which leads to a two-parameter self-adjoint operator from which the standard harmonic oscillator, the one-parameter oscillators introduced by Mielnik, and the Hermite operator are obtained in certain limits of the parameters. In addition, a single Bernoulli-type parameter factorization, which is different from the one introduced by M.A. Reyes, H.C. Rosu, and M.R. Gutiérrez [Phys. Lett. A 375 (2011) 2145], is briefly discussed in the final part of this work. -- Highlights: ► Factorizations with operators which are not mutually adjoint are presented. ► New two-parameter and one-parameter self-adjoint oscillator operators are introduced. ► Their eigenfunctions are two- and one-parameter deformed Hermite functions.

  2. Study on a resource allocation scheme in multi-hop MIMO-OFDM systems over lognormal-rayleigh compound channels

    Directory of Open Access Journals (Sweden)

    LIU Jun

    2015-10-01

    Full Text Available For new-generation wireless communication networks, this paper studies the optimization of the capacity and end-to-end throughput of MIMO-OFDM based multi-hop relay systems. A water-filling power allocation method is proposed to improve the channel capacity and throughput of the MIMO-OFDM based multi-hop relay system in Lognormal-Rayleigh shadowing compound channels. Simulations on the capacity and throughput show that the water-filling algorithm can effectively improve the system throughput in the MIMO-OFDM multi-hop relay system.
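
    A standard water-filling allocation can be sketched as below: each subchannel receives power max(0, μ − 1/g_k), with the water level μ found by bisection to meet the total power budget. The channel gains and budget are hypothetical.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Water-filling over parallel subchannels: p_k = max(0, mu - 1/g_k),
    with the water level mu found by bisection so sum(p_k) = total_power."""
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

# Hypothetical per-subcarrier gains (|h|^2 / noise power)
gains = np.array([2.0, 1.0, 0.5, 0.1])
p = water_filling(gains, total_power=4.0)
print("allocation:", p)
print("capacity:", np.sum(np.log2(1.0 + gains * p)), "bit/s/Hz")
```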

  3. With timing options and heterogeneous costs, the lognormal diffusion is hardly an equilibrium price process for exhaustible resources

    International Nuclear Information System (INIS)

    Lund, D.

    1992-01-01

    The report analyses the possibility that the lognormal diffusion process should be an equilibrium spot price process for an exhaustible resource. A partial equilibrium model is used under the assumption that the resource deposits have different extraction costs. Two separate problems have been pointed out. Under full certainty, when the process reduces to an exponentially growing price, the equilibrium places a very strong restriction on a relationship between the demand function and the cost density function. Under uncertainty there is an additional problem that during periods in which the price is lower than its previously recorded high, no new deposits will start extraction. 30 refs., 1 fig

  4. On the generation of log-Levy distributions and extreme randomness

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2011-01-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Levy distributions. The log-Levy distributions are the Levy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Levy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness. (paper)

  5. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    Science.gov (United States)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.

  6. Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-08-15

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyzing the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and the ideal relaying receiver, which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel has been shown to deteriorate the performance of all the relaying and EH protocols considered.

  7. Perfect-fluid models admitting a non-Abelian and maximal two-parameter group of isometries

    International Nuclear Information System (INIS)

    Van den Bergh, N.

    1988-01-01

    A proof is given that, when a spacetime admits an invariant timelike congruence orthogonal to the orbits of a non-Abelian two-parameter group of isometries, the given congruence is vorticity-free provided the group is maximal. The result is used to derive a canonical coordinate form for perfect-fluid solutions satisfying the above condition. It is also shown that such a group of isometries cannot be orthogonally transitive and a brief discussion is given of the self-similar case. (author)

  8. A NEW STATISTICAL PERSPECTIVE TO THE COSMIC VOID DISTRIBUTION

    International Nuclear Information System (INIS)

    Pycke, J-R; Russell, E.

    2016-01-01

    In this study, we obtain the size distribution of voids as a three-parameter, redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on the counts in randomly placed cells, the log-normal VPF that we obtain here is independent of the shape of the voids due to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC generated by the Halo Occupation Distribution (HOD) Mocks, which are tuned to three mock SDSS samples, to investigate the void distribution statistically and the effects of the environments on the size distribution. As a result, it is shown that the void size distributions obtained from the HOD Mock samples are well described by the three-parameter log-normal distribution. In addition, we find that there may be a relation between the hierarchical formation, skewness, and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the three-parameter distribution from the samples is strikingly similar to the galaxy log-normal mass distribution obtained from numerical studies. This similarity between void size and galaxy mass distributions may possibly indicate evidence of nonlinear mechanisms affecting both voids and galaxies, such as large-scale accretion and tidal effects. Considering the fact that in this study all voids are generated by galaxy mocks and show hierarchical structures at different levels, it may be possible that the same nonlinear mechanisms of mass distribution affect the void size distribution.

  9. An investigation into the population abundance distribution of mRNAs, proteins, and metabolites in biological systems.

    Science.gov (United States)

    Lu, Chuan; King, Ross D

    2009-08-15

    Distribution analysis is one of the most basic forms of statistical analysis. Thanks to improved analytical methods, accurate and extensive quantitative measurements can now be made of the mRNAs, proteins and metabolites of biological systems. Here, we report a large-scale analysis of the population abundance distributions of the transcriptomes, proteomes and metabolomes from varied biological systems. We compared the observed empirical distributions with a number of distributions: power law, lognormal, loglogistic, loggamma, right Pareto-lognormal (PLN) and double PLN (dPLN). The best fit for the mRNA, protein and metabolite population abundance distributions was found to be the dPLN. This distribution behaves like a lognormal distribution around the centre, and like a power-law distribution in the tails. To better understand the cause of this observed distribution, we explored a simple stochastic model based on geometric Brownian motion. The distribution indicates that multiplicative effects are causally dominant in biological systems. We speculate that these effects arise from chemical reactions: the central limit theorem then explains the central lognormal, and a number of possible mechanisms could explain the long tails: positive feedback, network topology, etc. Many of the components in the central lognormal parts of the empirical distributions are unidentified and/or have unknown function. This indicates that much more biology awaits discovery.

  10. Empirical analysis on the runners' velocity distribution in city marathons

    Science.gov (United States)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analyses of the finish-time record datasets, we captured some statistical features of human behavior in marathons: (1) The velocity distributions of all finishers, and of the subset of finishers in the fastest age group, both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer interval timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, a transition from the Gaussian distribution back to a log-normal one occurs at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.
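
    A minimal sketch of the kind of model comparison reported here, with synthetic log-normal velocities standing in for the actual split-time records (all parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
v = rng.lognormal(mean=np.log(10.5), sigma=0.15, size=2000)  # km/h, synthetic

# Fit both candidates and compare one-sample KS statistics (smaller = better).
mu, s = np.log(v).mean(), np.log(v).std(ddof=1)
ks_logn = stats.kstest(v, 'lognorm', args=(s, 0, np.exp(mu))).statistic
ks_norm = stats.kstest(v, 'norm', args=(v.mean(), v.std(ddof=1))).statistic
print(f'log-normal KS = {ks_logn:.3f}, Gaussian KS = {ks_norm:.3f}')
```

    Note that estimating the parameters from the same sample biases the KS p-values, so only the statistics are compared here.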

  11. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. - Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment

  12. Localization of the dynamic two-parameter subgrid-scale model and application to near-wall turbulent flows

    International Nuclear Information System (INIS)

    Wang, B.; Bergstrom, D.J.

    2002-01-01

    The dynamic two-parameter mixed model (DTPMM) has recently been introduced in large eddy simulation (LES). However, current approaches in the literature are mathematically inconsistent. In this paper, the DTPMM has been optimized using the functional variational method. The mathematical inconsistency has been removed, and a governing system of two integral equations for the model coefficients of the DTPMM, along with some of its significant features, has been obtained. Coherent structures relating to the vortex motion of large vortices have been investigated using the vortex λ₂-definition of Jeong and Hussain (1995). The numerical results agree with the classical wall law of von Karman (1939) and the experimental correlation of Aydin and Leutheusser (1991). (author)

  13. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    According to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions ... and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian ...

  14. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  15. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    Science.gov (United States)

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Book review: A new view on the species abundance distribution

    Science.gov (United States)

    DeAngelis, Donald L.

    2018-01-01

    The sampled relative abundances of species of a taxonomic group, whether birds, trees, or moths, in a natural community at a particular place vary in a way that suggests a consistent underlying pattern, referred to as the species abundance distribution (SAD). Preston [1] conjectured that the numbers of species, plotted as a histogram of logarithmic abundance classes called octaves, seemed to fit a lognormal distribution; that is, the histograms look like normal distributions, although truncated on the left-hand, or low-species-abundance, end. Although other specific curves for the SAD have been proposed in the literature, Preston’s lognormal distribution is widely cited in textbooks and has stimulated attempts at explanation. An important aspect of Preston’s lognormal distribution is the ‘veil line’, a vertical line drawn exactly at the point of the left-hand truncation in the distribution, to the left of which would be species missing from the sample. Dewdney rejects the lognormal conjecture. Instead, starting with the long-recognized fact that the number of species sampled from a community, when plotted as histograms against population abundance, resembles an inverted J, he presents a mathematical description of an alternative that he calls the ‘J distribution’, a hyperbolic density function truncated at both ends. When multiplied by species richness, R, it becomes the SAD of the sample.

  17. Firm Size Distribution in Fortune Global 500

    Science.gov (United States)

    Chen, Qinghua; Chen, Liujun; Liu, Kai

    By analyzing data on Fortune Global 500 firms from 1996 to 2008, we found that their ranks and revenues always obey the same distribution, which implies that the worldwide firm structure has been stable for a long time. The fitting results show that the simple Zipf distribution is not an ideal model for global firms, while the SCL and FSS distributions achieve better goodness of fit, and the lognormal fit is the best. We then propose a simple explanation.

  18. Two parameters Lie group analysis and numerical solution of unsteady free convective flow of non-Newtonian fluid

    Directory of Open Access Journals (Sweden)

    M.J. Uddin

    2016-09-01

    Full Text Available The two-dimensional unsteady laminar free convective heat and mass transfer fluid flow of a non-Newtonian fluid adjacent to a vertical plate has been analyzed numerically. The two-parameter Lie group transformation method, which transforms the three independent variables into a single variable, is used to transform the continuity, momentum, energy and concentration equations into a set of coupled similarity equations. The transformed equations have been solved by the Runge–Kutta–Fehlberg fourth-fifth order numerical method with a shooting technique. Numerical calculations were carried out for the various parameters entering into the problem. The dimensionless velocity, temperature and concentration profiles are shown graphically, and the skin friction and heat and mass transfer rates are given in tables. It is found that the friction factor and heat transfer (mass transfer) rate for methanol are higher (lower) than those of hydrogen and water vapor. The friction factor decreases while the heat and mass transfer rates increase as the Prandtl number increases. The friction factor (heat and mass transfer rate) of a Newtonian fluid is higher (lower) than that of a dilatant fluid.

  19. Probabilistic distributions of wind velocity for the evaluation of the wind power potential; Distribuicoes probabilisticas de velocidades do vento para avaliacao do potencial energetico eolico

    Energy Technology Data Exchange (ETDEWEB)

    Vendramini, Elisa Zanuncio

    1986-10-01

    Theoretical models of wind speed distributions provide valuable information about the probability of events involving the variable under study, eliminating the need for a new experiment. The most used distributions have been the Weibull and the Rayleigh. These distributions are examined in the present investigation, as well as the exponential, gamma, chi-square and lognormal distributions. Three years of hourly average wind data, recorded by an anemometer installed at the city of Ataliba Leonel, Sao Paulo State, Brazil, were used. From the wind speed data, the theoretical relative frequency was calculated for each of the distributions examined. Results from the Kolmogorov–Smirnov test allow the conclusion that the lognormal distribution fits the wind speed data best, followed by the gamma and Rayleigh distributions. Using the lognormal probability density function, the yearly energy output of a wind generator installed at the site was calculated. 30 refs, 4 figs, 14 tabs
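
    Once a log-normal wind-speed model is fitted, the yearly energy output follows from the closed-form third moment E[v³] = exp(3μ + 9σ²/2); the sketch below uses illustrative parameters rather than the Ataliba Leonel data.

```python
import numpy as np

mu, sigma = np.log(6.0), 0.5      # assumed log-normal fit (log of m/s scale)
rho, area = 1.225, 2000.0         # air density (kg/m^3) and rotor area (m^2)

ev3 = np.exp(3 * mu + 4.5 * sigma ** 2)   # E[v^3] for a log-normal speed
mean_power_w = 0.5 * rho * area * ev3     # ideal wind power, no turbine losses
print(f'{mean_power_w * 8760 / 1000:.0f} kWh/year (upper bound)')
```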

  20. Forms and genesis of species abundance distributions

    Directory of Open Access Journals (Sweden)

    Evans O. Ochiaga

    2015-12-01

    Full Text Available Species abundance distribution (SAD) is one of the most important metrics in community ecology. SAD curves take a hollow or hyperbolic shape in a histogram plot, with many rare species and only a few common species. In general, the shape of the SAD is largely log-normal, although the mechanism behind this particular shape still remains elusive. Here, we aim to review four major parametric forms of the SAD and three contending mechanisms that could potentially explain this highly skewed form. The parametric forms reviewed here include the log series, negative binomial, lognormal and geometric distributions. The mechanisms reviewed here include the maximum entropy theory of ecology, neutral theory and the theory of proportionate effect.

  1. Change of particle size distribution during Brownian coagulation

    International Nuclear Information System (INIS)

    Lee, K.W.

    1984-01-01

    Change in particle size distribution due to Brownian coagulation in the continuum regime has been studied analytically. A simple analytic solution for the size distribution of an initially lognormal distribution is obtained, based on the assumption that the size distribution during the coagulation process attains, or can at least be represented by, a time-dependent lognormal function. The results are found to be in a form that corrects Smoluchowski's solution for both polydispersity and a size-dependent kernel. It is further shown that, regardless of whether the initial distribution is narrow or broad, the spread of the distribution is characterized by the geometric standard deviation approaching a fixed value. This result has been compared with the self-preserving distribution obtained by similarity theory. (Author)

  2. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    Science.gov (United States)

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  3. Lognormal Kalman filter for assimilating phase space density data in the radiation belts

    Science.gov (United States)

    Kondrashov, D.; Ghil, M.; Shprits, Y.

    2011-11-01

    Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transitioning from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt (both simulated and observed) are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller than observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to those of the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
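
    A minimal scalar sketch of the log-transformed Kalman idea: filter the log of the positive observations so that order-of-magnitude (multiplicative) variability becomes additive. The dynamics and error variances below are illustrative, not the paper's radial diffusion model.

```python
import numpy as np

def log_kalman_step(x, P, y, F=1.0, Q=0.1, H=1.0, R=0.2):
    """One predict/update step on log-transformed data (scalar case).

    x, P: log-space state estimate and its variance; y: positive observation.
    F, Q: log-space dynamics and model-error variance; H, R: observation
    operator and observational-error variance in log space.
    """
    x_pred, P_pred = F * x, F * P * F + Q         # predict in log space
    K = P_pred * H / (H * P_pred * H + R)         # Kalman gain
    x_new = x_pred + K * (np.log(y) - H * x_pred) # update with log(observation)
    return x_new, (1 - K * H) * P_pred

x, P = np.log(1e4), 1.0                           # illustrative initial state
for y in [1.2e4, 0.8e4, 3.0e4]:
    x, P = log_kalman_step(x, P, y)
    print(f'{np.exp(x):.3g}')                     # back-transform to PSD units
```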

  4. Use of three-dimensional lognormal dose-response surfaces in lifetime studies of radiation-induced cancer

    International Nuclear Information System (INIS)

    Raabe, O.G.

    1986-01-01

    The three-dimensional lognormal cumulative probability power function was used to provide a unifying dose-response description of the lifetime cancer risk for chronic exposure of experimental animals and people, for risk evaluation, and for scaling between species. Bone tumor fatalities, primarily from alpha irradiation of the skeleton in lifetime studies of beagles injected with 226 Ra, were shown to be well described by this function. This function described cancer risk in lifetime studies as a smooth curved surface depending on radiation exposure rate and elapsed time, such that the principal risk at low dose rates occurred near the end of the normal life span without significant life shortening. Essentially identical functions, with the median value of the power function displaced according to appropriate RBE values, were shown to describe bone-cancer induction primarily from alpha irradiation of the skeleton in lifetime beagle studies with injected 226 Ra, 228 Th, 239 Pu and 241 Am, and with inhaled 238 Pu. Application of this model to human exposures to 226 Ra yielded a response ratio of 3.6; that is, the time required for development of bone cancer in people was 3.6 times longer than for beagles at the same average skeletal dose rate. It was suggested that similar techniques are appropriate for other carcinogens and other critical organs. 20 refs., 8 figs., 3 tabs

  5. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
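
    The practical difference between the two candidates can be seen by matching the first two moments; a sketch with illustrative mean and standard deviation (note that the zero-truncated normal's own mean and standard deviation shift away from the parent's, which is exactly the parameter-matching difficulty the authors point out):

```python
import numpy as np
from scipy import stats

m, s = 1.0, 0.4   # target mean and standard deviation (illustrative)

# Log-normal with exactly matched mean and variance.
s2 = np.log(1 + (s / m) ** 2)
logn = stats.lognorm(np.sqrt(s2), scale=m * np.exp(-s2 / 2))

# Normal truncated at zero, using the parent's (m, s) as location and scale.
trunc = stats.truncnorm(a=(0 - m) / s, b=np.inf, loc=m, scale=s)

print('log-normal mean/std :', logn.mean(), logn.std())
print('trunc-norm mean/std :', trunc.mean(), trunc.std())
```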

  6. TWO-PARAMETER IRT MODEL APPLICATION TO ASSESS PROBABILISTIC CHARACTERISTICS OF PROHIBITED ITEMS DETECTION BY AVIATION SECURITY SCREENERS

    Directory of Open Access Journals (Sweden)

    Alexander K. Volkov

    2017-01-01

    Full Text Available Modern approaches to assessing aviation security screeners' efficiency have been analyzed and certain drawbacks have been considered. The main drawback is the complexity of implementing the ICAO recommendations on taking shadow x-ray image complexity factors into account during the preparation and evaluation of prohibited items detection efficiency by aviation security screeners. X-ray image based factors are the specific properties of the x-ray image that influence the ability of aviation security screeners to detect prohibited items. The most important complexity factors are: geometric characteristics of a prohibited item; view difficulty of prohibited items; superposition of prohibited items by other objects in the bag; bag content complexity; and the color similarity of prohibited and usual items in the luggage. The one-dimensional two-parameter IRT model and the related criterion of aviation security screeners' qualification have been suggested. Within the suggested model, the probabilistic detection characteristics of aviation security screeners are considered as functions of such parameters as the difference between the level of qualification and the level of x-ray image complexity, and also between the aviation security screeners' responsibility and the structure of their professional knowledge. On the basis of the given model it is possible to consider two characteristic functions: first, the characteristic function of qualification level, which describes the multi-complexity-level x-ray image interpretation competency of the aviation security screener; second, the characteristic function of x-ray image complexity, which describes the range of x-ray image interpretation competency of aviation security screeners having various training levels to interpret an x-ray image of a certain level of complexity. The suggested complex criterion for assessing the level of aviation security screener qualification allows one to evaluate his or
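
    For reference, a minimal sketch of the underlying two-parameter logistic (2PL) IRT response function; the qualification, discrimination and complexity values are hypothetical.

```python
import numpy as np

def p_detect(theta, a, b):
    """2PL IRT model: detection probability for a screener with qualification
    theta facing an image of difficulty b, with discrimination a."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 1.0                          # screener qualification (logit scale)
for b in (-1.0, 0.0, 1.0, 2.0):      # increasing image complexity
    print(b, round(p_detect(theta, a=1.5, b=b), 3))
```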

  7. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  8. Effect of particle size distribution on sintering of tungsten

    International Nuclear Information System (INIS)

    Patterson, B.R.; Griffin, J.A.

    1984-01-01

    To date, very little is known about the effect of the nature of the particle size distribution on sintering. It is reasonable that there should be an effect of size distribution, and theory and prior experimental work examining the effects of variations in bimodal and continuous distributions have shown marked effects on sintering. Most importantly, even with constant mean particle size, variations in distribution width, or standard deviation, have been shown to produce marked variations in microstructure and sintering rate. In the latter work, in which spherical copper powders were blended to produce lognormal distributions of constant geometric mean particle size by weight frequency, blends with larger values of the geometric standard deviation, ln σ, sintered more rapidly. The goals of the present study were to examine in more detail the effects of variations in the width of lognormal particle size distributions of tungsten powder and to determine the effects of ln σ on the microstructural evolution during sintering

  9. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    Science.gov (United States)

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…

  10. Rationalisation of distribution functions for models of nanoparticle magnetism

    International Nuclear Information System (INIS)

    El-Hilo, M.; Chantrell, R.W.

    2012-01-01

    A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution, a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from a misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.

  11. Testing the Beta-Lognormal Model in Amazonian Rainfall Fields Using the Generalized Space q-Entropy

    Directory of Open Access Journals (Sweden)

    Hernán D. Salas

    2017-12-01

    Full Text Available We study spatial scaling and complexity properties of Amazonian radar rainfall fields using the Beta-Lognormal Model (BL-Model) with the aim to characterize and model the process at a broad range of spatial scales. The Generalized Space q-Entropy Function (GSEF), an entropic measure defined as a continuous set of power laws covering a broad range of spatial scales, S_q(λ) ∼ λ^Ω(q), is used as a tool to check the ability of the BL-Model to represent observed 2-D radar rainfall fields. In addition, we evaluate the effect of the amount of zeros, the variability of rainfall intensity, the number of bins used to estimate the probability mass function, and the record length on the GSEF estimation. Our results show that: (i) the BL-Model adequately represents the scaling properties of the q-entropy, S_q, for Amazonian rainfall fields across a range of spatial scales λ from 2 km to 64 km; (ii) the q-entropy in rainfall fields can be characterized by a non-additivity value, q_sat, at which rainfall reaches a maximum scaling exponent, Ω_sat; (iii) the maximum scaling exponent Ω_sat is directly related to the amount of zeros in rainfall fields and is not sensitive to either the number of bins used to estimate the probability mass function or the variability of rainfall intensity; and (iv) for small samples, the GSEF of rainfall fields may incur considerable bias. Finally, for synthetic 2-D rainfall fields from the BL-Model, we look for a connection between intermittency, using a metric based on generalized Hurst exponents, M(q1, q2), and the non-extensive order (q-order) of a system, Θ_q, which relates to the GSEF. Our results do not exhibit evidence of such a relationship.

  12. The mathematical formula of the intravaginal ejaculation latency time (IELT distribution of lifelong premature ejaculation differs from the IELT distribution formula of men in the general male population

    Directory of Open Access Journals (Sweden)

    Paddy K.C. Janssen

    2016-03-01

    Full Text Available Purpose: To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods: We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF). The smaller the GOF, the more accurate is the fitness. Results: The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is another gamma distribution than the IELT distribution of men in the general male population. The Lognormal distribution of the gamma distributions most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions: Men with lifelong PE have a distinct IELT distribution, e.g., a Gumbel Max IELT distribution, that can only be retrieved from the general male population Lognormal IELT distribution when thousands of men would participate in a IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research of the IELT.

  13. The mathematical formula of the intravaginal ejaculation latency time (IELT) distribution of lifelong premature ejaculation differs from the IELT distribution formula of men in the general male population

    Science.gov (United States)

    Janssen, Paddy K.C.

    2016-01-01

    Purpose To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF). The smaller the GOF, the more accurate is the fitness. Results The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is another gamma distribution than the IELT distribution of men in the general male population. The Lognormal distribution of the gamma distributions most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions Men with lifelong PE have a distinct IELT distribution, e.g., a Gumbel Max IELT distribution, that can only be retrieved from the general male population Lognormal IELT distribution when thousands of men would participate in a IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research of the IELT. PMID:26981594
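
    A sketch of this style of goodness-of-fit comparison across candidate families, using synthetic latency data in place of the stopwatch records and the KS statistic as a stand-in for the abstract's unspecified GOF measure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = rng.gamma(shape=1.8, scale=180.0, size=500)   # synthetic latencies (s)

# Smaller KS statistic = better fit; loc is pinned at 0 for positive-only data
# (the Gumbel Max family, gumbel_r, keeps its location free).
for name in ('gamma', 'lognorm', 'gumbel_r'):
    dist = getattr(stats, name)
    params = dist.fit(t, floc=0) if name != 'gumbel_r' else dist.fit(t)
    print(f'{name:9s} GOF = {stats.kstest(t, name, args=params).statistic:.3f}')
```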

  14. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
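
    A minimal sketch of a stick-breaking DPMM on log travel times. scikit-learn's BayesianGaussianMixture uses variational inference rather than the MCMC sampler employed in the study, and the two-regime data are synthetic:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
t = np.concatenate([rng.lognormal(np.log(8.0), 0.08, 600),    # free flow
                    rng.lognormal(np.log(14.0), 0.20, 400)])  # congestion

dpmm = BayesianGaussianMixture(
    n_components=6,                                  # cap, as in the study
    weight_concentration_prior_type='dirichlet_process',
    max_iter=500, random_state=0,
).fit(np.log(t).reshape(-1, 1))

print(np.round(dpmm.weights_, 3))  # unneeded components get ~zero weight
```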

  15. Environmental Transmission Electron Microscopy Study of the Origins of Anomalous Particle Size Distributions in Supported Metal Catalysts

    DEFF Research Database (Denmark)

    Benavidez, Angelica D.; Kovarik, Libor; Genc, Arda

    2012-01-01

    of the particle size distribution (PSD). The abundance of the larger particles did not fit the log-normal distribution. We can rule out sample nonuniformity as a cause for the growth of these large particles, since images were recorded prior to heat treatments. The anomalous growth of these particles may help...

  16. Reliability Implications in Wood Systems of a Bivariate Gaussian-Weibull Distribution and the Associated Univariate Pseudo-truncated Weibull

    Science.gov (United States)

    Steve P. Verrill; James W. Evans; David E. Kretschmann; Cherilyn A. Hatfield

    2014-01-01

    Two important wood properties are the modulus of elasticity (MOE) and the modulus of rupture (MOR). In the past, the statistical distribution of the MOE has often been modeled as Gaussian, and that of the MOR as lognormal or as a two- or three-parameter Weibull distribution. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior...

  17. Income Distribution and Consumption Deprivation: An Analytical Link

    OpenAIRE

    Sushanta K. Mallick

    2008-01-01

    This article conceives poverty in terms of the consumption of essential food, makes use of a new deprivation (or poverty) function, and examines the effects of changes in the mean and the variance of the income distribution on poverty, assuming a log-normal income distribution. The presence of a saturation level of consumption can be treated as a poverty-line threshold as opposed to an exogenous income-based poverty line. Within such a consumption deprivation approach, the article proves anal...

  18. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

    The paper discusses the nature of the concentration distributions of trace elements in biomedical samples, which were measured using X-ray fluorescence techniques (XRF, TXRF). Our earlier observation that the lognormal distribution describes the measured concentration distributions well is explained here on more general grounds. In particular, the role of the random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the lognormal distribution, which appears when the multiplicative process is driven by a normal distribution, can be generalized to the so-called log-stable distribution. Such a distribution describes a random multiplicative process that is driven, instead of by a normal distribution, by a more general stable distribution, known as Levy flights. The presented ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, obtained using the conventional (XRF) and total-reflection (TXRF) X-ray fluorescence methods. In particular, the first observation of a log-stable concentration distribution of trace elements is reported and discussed here in detail.
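
    Log-stable variates can be generated by exponentiating α-stable draws; a sketch with illustrative stability and skewness parameters (α = 2 would recover the log-normal):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(4)
alpha, beta = 1.7, 1.0                      # illustrative Levy-flight driver
s = levy_stable.rvs(alpha, beta, size=10_000, random_state=rng)
x = np.exp(s / 10)                          # damped to keep values finite

print(np.percentile(x, [50, 90, 99, 99.9]))  # much heavier upper tail
```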

  19. Stability of the laws for the distribution of the cumulative failures in railway transport

    OpenAIRE

    Kirill VOYNOV

    2008-01-01

    There are many different laws of distribution, for example the bell-shaped (Gaussian), lognormal, Weibull, exponential, uniform, Poisson and Student distributions, which help to describe the real picture of failures of elements in various mechanical systems, including locomotives and carriages. To reduce the possibility of gross errors in the output of mathematical data treatment, a new method is demonstrated in this article. The task is solved bo...

  20. The Czech Wage Distribution and the Minimum Wage Impacts: the Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Kateřina Duspivová

    2013-06-01

    Full Text Available A well-fitting wage distribution is a crucial precondition for economic modeling of labour market processes. In the first part, this paper provides evidence that, as for wages in the Czech Republic, the most often used log-normal distribution failed and the best-fitting one is the Dagum distribution. Then we investigate the role of the wage distribution in the process of economic modeling. By way of an example of the minimum wage impacts on the Czech labour market, we examine the response of Meyer and Wise's (1983) model to the Dagum and log-normal distributions. The results suggest that the wage distribution has important implications for the effects of the minimum wage on the shape of the lower tail of the measured wage distribution and is thus an important feature for interpreting the effects of minimum wages.

  1. THE DENSITY DISTRIBUTION IN TURBULENT BISTABLE FLOWS

    International Nuclear Information System (INIS)

    Gazol, Adriana; Kim, Jongsoo

    2013-01-01

    We numerically study the volume density probability distribution function (n-PDF) and the column density probability distribution function (Σ-PDF) resulting from thermally bistable turbulent flows. We analyze three-dimensional hydrodynamic models in periodic boxes of 100 pc by side, where turbulence is driven in Fourier space at a wavenumber corresponding to 50 pc. At low densities (n ∼ … cm⁻³), the n-PDF is well described by a lognormal distribution for an average local Mach number ranging from ∼0.2 to ∼5.5. As a consequence of the nonlinear development of thermal instability (TI), the logarithmic variance of the distribution of the diffuse gas increases with M faster than in the well-known isothermal case. The average local Mach number for the dense gas (n ≳ 7.1 cm⁻³) goes from ∼1.1 to ∼16.9, and the shape of the high-density zone of the n-PDF changes from a power law at low Mach numbers to a lognormal at high M values. In the latter case, the width of the distribution is smaller than in the isothermal case and grows more slowly with M. At high column densities, the Σ-PDF is well described by a lognormal for all of the Mach numbers we consider and, due to the presence of TI, the width of the distribution is systematically larger than in the isothermal case but follows a qualitatively similar behavior as M increases. Although a relationship between the width of the distribution and M can be found for each one of the cases mentioned above, these relations are different from those of the isothermal case.

  2. The particle size distribution of fragmented melt debris from molten fuel coolant interactions

    International Nuclear Information System (INIS)

    Fletcher, D.F.

    1984-04-01

    Results are presented of a study of the types of statistical distributions which arise when examining debris from Molten Fuel Coolant Interactions. The lognormal probability distribution, and the modifications of this distribution which result from the mixing of two distributions or the removal of some debris, are described. Methods of fitting these distributions to real data are detailed. A two-stage fragmentation model has been developed in an attempt to distinguish between the debris produced by coarse mixing and by fine-scale fragmentation. However, attempts to fit this model to real data have proved unsuccessful. It was found that the debris particle size distributions from experiments at Winfrith with thermite-generated uranium dioxide/molybdenum melts were Upper Limit Lognormal. (U.K.)

  3. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    Science.gov (United States)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.

  4. [A study on the departmental distribution of mortality by cause: some evidence concerning two populations].

    Science.gov (United States)

    Damiani, P; Masse, H; Aubenque, M

    1984-01-01

    The distributions of proportions of deaths by cause are analyzed for each department of France by sex for the age group 45 to 64. The data are official French departmental data on causes of death for the period 1968-1970. The authors conclude that these distributions are the sum of two log-normal distributions. They also identify the existence of two populations according to whether the cause of death was endogenous or exogenous. (summary in ENG)

  5. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  6. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide the guidance necessary for selecting an error distribution, by analyzing the influence of the statistical distribution of a type of bioassay measurement error on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases where the error distributions are normal and lognormal, and comparisons between the two distributions were made for the estimated intakes. According to the results of this study, in cases where the measurement results for lung retention are somewhat greater than the limit of detection, it appeared that the distribution type has negligible influence on the results. For measurement results for the daily excretion rate, however, the results obtained from the assumption of a lognormal distribution were 10% higher than those obtained from the assumption of a normal distribution. In view of these facts, where the uncertainty component is governed by counting statistics, it is considered that the distribution type has no influence on the intake estimation; where other components are predominant, it is concluded that it is clearly desirable to estimate the intake assuming a lognormal distribution

  7. Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean

    Science.gov (United States)

    Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim

    2006-07-01

    A massive earthquake of magnitude 9.3, which occurred on December 26, 2004 off northern Sumatra, generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys were performed after this tsunami event; in particular, several surveys on the south/east coast of India, the Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand were organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. Theoretical interpretation of this distribution, which takes into account the random coastal bathymetry and coastline, leads to log-normal functions. Observed data are also in very good agreement with the log-normal distribution, confirming the important role of variable ocean bathymetry in the formation of the irregular wave height distribution along the coasts.

  8. The magnetized sheath of a dusty plasma with grains size distribution

    International Nuclear Information System (INIS)

    Ou, Jing; Gan, Chunyun; Lin, Binbin; Yang, Jinhong

    2015-01-01

    The structure of a plasma sheath in the presence of a dust grain size distribution (DGSD) is investigated in the multi-fluid framework. It is shown that the effect of dust grains with different sizes on the sheath structure is a collective behavior. The spatial distributions of the electric potential, the electron and ion densities and velocities, and the dust grain surface potential are strongly affected by the DGSD. The dynamics of dust grains with different sizes in the sheath depend not only on the DGSD but also on their radius. By comparing the sheath structures, it is found that, for the same expected value of the DGSD, the sheath length is longer in the case of a lognormal distribution than in the case of a uniform distribution. In the two cases of normal and lognormal distributions, the sheath length is almost equal for small variances of the DGSD, and the difference in sheath length then increases gradually as the variance increases

  9. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    International Nuclear Information System (INIS)

    Scarpelli, M; Eickhoff, J; Perlman, S; Jeraj, R

    2016-01-01

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to the log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log

  10. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Scarpelli, M; Eickhoff, J; Perlman, S; Jeraj, R

    2016-06-15

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to the log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log
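
    A sketch of the selection procedure the abstract describes: scan the one-parameter Box-Cox family and keep the parameter that maximizes the Shapiro-Wilk W statistic (synthetic SUVs; note that scipy's built-in boxcox maximizes a log-likelihood instead, so the grid search is written out):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
suv = rng.lognormal(np.log(2.5), 0.35, size=40)   # synthetic gSUVmean values

def box_cox(x, lam):
    """One-parameter Box-Cox transform; lam = 0 is the log transform."""
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

lams = np.linspace(-2, 2, 81)
w = [stats.shapiro(box_cox(suv, lam)).statistic for lam in lams]
print(f'optimal lambda = {lams[int(np.argmax(w))]:.2f}')  # ~0 => log transform
```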

  11. Estimating central tendency from a single spot measure: A closed-form solution for lognormally distributed biomarker data for risk assessment at the individual level.

    Science.gov (United States)

    Pleil, Joachim D; Sobus, Jon R

    2016-01-01

    Exposure-based risk assessment employs large cross-sectional data sets of environmental and biomarker measurements to predict population statistics for adverse health outcomes. The underlying assumption is that long-term (many years) latency health problems including cancer, autoimmune and cardiovascular disease, diabetes, and asthma are triggered by lifetime exposures to environmental stressors that interact with the genome. The aim of this study was to develop a specific predictive method that provides the statistical parameters for chronic exposure at the individual level based upon a single spot measurement and knowledge of global summary statistics as derived from large data sets. This is a profound shift in exposure and health statistics in that it begins to answer the question "How large is my personal risk?" rather than just providing an overall population-based estimate. This approach also holds value for interpreting exposure-based risks for small groups of individuals within a community in comparison to random individuals from the general population.
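
    On the log scale this kind of individual-level estimate reduces, in one standard empirical-Bayes reading, to a normal-normal shrinkage formula. The sketch below assumes known population summary statistics; the values of gm_pop, gsd_b, gsd_w and the spot measurement are illustrative, and the formula is not necessarily the authors' exact closed form.

    ```python
    # Hypothetical sketch: estimate an individual's long-term geometric mean
    # from one spot measurement, assuming lognormal data and known population
    # statistics (standard normal-normal shrinkage on the log scale).
    import numpy as np

    gm_pop = 2.0   # assumed population geometric mean of the biomarker
    gsd_b = 2.5    # assumed between-person geometric standard deviation
    gsd_w = 1.8    # assumed within-person geometric standard deviation
    spot = 6.3     # the individual's single spot measurement

    mu_pop = np.log(gm_pop)
    s2_b, s2_w = np.log(gsd_b) ** 2, np.log(gsd_w) ** 2

    # Posterior mean and SD of the individual's log-scale mean:
    mu_i = (s2_b * np.log(spot) + s2_w * mu_pop) / (s2_b + s2_w)
    se_i = np.sqrt(s2_b * s2_w / (s2_b + s2_w))
    print("estimated individual GM:", np.exp(mu_i))
    print("95% interval:", np.exp(mu_i - 1.96 * se_i), "-", np.exp(mu_i + 1.96 * se_i))
    ```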

  12. Temporal Statistical Analysis of Degree Distributions in an Undirected Landline Phone Call Network Graph Series

    Directory of Open Access Journals (Sweden)

    Orgeta Gjermëni

    2017-10-01

    Full Text Available This article aims to provide new results about the intraday degree sequence distribution considering phone call network graph evolution in time. More specifically, it tackles the following problem. Given a large amount of landline phone call data records, what is the best way to summarize the distinct number of calling partners per client per day? In order to answer this question, a series of undirected phone call network graphs is constructed based on data from a local telecommunication source in Albania. All network graphs of the series are simplified. Further, a longitudinal temporal study of the degree distributions is made on this series of network graphs. Power-law and log-normal distribution fits to the degree sequence are compared on each of the network graphs of the series. The maximum likelihood method is used to estimate the parameters of the distributions, and a Kolmogorov–Smirnov test associated with a p-value is used to define the plausible models. A direct distribution comparison is made through a Vuong test in the case that both distributions are plausible. A further goal was to describe the shape of the fitted parameters' distributions. A Shapiro-Wilk test is used to test the normality of the data, and measures of shape are used to define the distributions' shape. Study findings suggest that the log-normal distribution better models the intraday degree sequence data of the network graphs. It is not possible to conclude that the distributions of the log-normal parameters are normal.
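
    The fit-and-compare pipeline described here can be sketched compactly. In the snippet below the degrees are simulated and x_min is taken as given; a strict comparison would also truncate the lognormal at x_min, which is skipped for brevity.

    ```python
    # Sketch: maximum-likelihood fits of a (continuous) power law and a
    # lognormal to a degree sequence, then a Vuong-type comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    degrees = rng.lognormal(mean=1.2, sigma=0.9, size=2000)
    x_min = 1.0
    degrees = degrees[degrees >= x_min]

    # Power-law (Pareto) MLE: alpha = 1 + n / sum(ln(x / x_min))
    alpha = 1.0 + degrees.size / np.sum(np.log(degrees / x_min))
    ll_pl = np.log((alpha - 1.0) / x_min) - alpha * np.log(degrees / x_min)

    # Lognormal MLE (floc=0 keeps the usual two-parameter form)
    shape, _, scale = stats.lognorm.fit(degrees, floc=0)
    ll_ln = stats.lognorm.logpdf(degrees, shape, 0, scale)

    # Vuong statistic: positive favors the lognormal, negative the power law
    r = ll_ln - ll_pl
    z = np.sqrt(r.size) * r.mean() / r.std(ddof=1)
    print(f"Vuong z = {z:.2f} (|z| > 1.96 indicates a significant preference)")
    ```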

  13. Effects of network topology on wealth distributions

    International Nuclear Information System (INIS)

    Garlaschelli, Diego; Loffredo, Maria I

    2008-01-01

    We focus on the problem of how the wealth is distributed among the units of a networked economic system. We first review the empirical results documenting that in many economies the wealth distribution is described by a combination of the log-normal and power-law behaviours. We then focus on the Bouchaud-Mezard model of wealth exchange, describing an economy of interacting agents connected through an exchange network. We report analytical and numerical results showing that the system self-organizes towards a stationary state whose associated wealth distribution depends crucially on the underlying interaction network. In particular, we show that if the network displays a homogeneous density of links, the wealth distribution displays either the log-normal or the power-law form. This means that the first-order topological properties alone (such as the scale-free property) are not enough to explain the emergence of the empirically observed mixed form of the wealth distribution. In order to reproduce this nontrivial pattern, the network has to be heterogeneously divided into regions with a variable density of links. We show new results detailing how this effect is related to the higher-order correlation properties of the underlying network. In particular, we analyse assortativity by degree and the pairwise wealth correlations, and discuss the effects that these properties have on each other
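
    A rough numerical version of the Bouchaud-Mezard dynamics on a homogeneous random graph can be written in a few lines; all parameter values below are illustrative, not taken from the paper.

    ```python
    # Euler-Maruyama simulation of the Bouchaud-Mezard wealth model,
    #   dW_i = J * sum_j A_ij (W_j - W_i) dt + sqrt(2) * sigma * W_i dB_i,
    # on a homogeneous Erdos-Renyi-style graph.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 500, 0.02                       # agents, link probability
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                            # symmetric adjacency, no self-links

    J, sigma, dt, steps = 0.05, 0.3, 0.01, 10000
    W = np.ones(n)
    for _ in range(steps):
        drift = J * (A @ W - A.sum(axis=1) * W)
        noise = sigma * np.sqrt(2.0 * dt) * W * rng.standard_normal(n)
        W = np.maximum(W + drift * dt + noise, 1e-12)
        W *= n / W.sum()                   # keep mean wealth fixed at 1

    # On a dense homogeneous graph the stationary wealth distribution is close
    # to lognormal or power-law tailed, depending on J / sigma^2:
    print("mean/median ratio:", W.mean() / np.median(W))
    ```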

  14. Free vibration analysis of a cracked shear deformable beam on a two-parameter elastic foundation using a lattice spring model

    Science.gov (United States)

    Attar, M.; Karrech, A.; Regenauer-Lieb, K.

    2014-05-01

    The free vibration of a shear deformable beam with multiple open edge cracks is studied using a lattice spring model (LSM). The beam is supported by a so-called two-parameter elastic foundation, where normal and shear foundation stiffnesses are considered. Through application of Timoshenko beam theory, the effects of transverse shear deformation and rotary inertia are taken into account. In the LSM, the beam is discretised into a one-dimensional assembly of segments interacting via rotational and shear springs. These springs represent the flexural and shear stiffnesses of the beam. The supporting action of the elastic foundation is described also by means of normal and shear springs acting on the centres of the segments. The relationship between the stiffnesses of the springs and the elastic properties of the one-dimensional structure is identified by comparing the homogenised equations of motion of the discrete system with Timoshenko beam theory.

  15. The PDF of fluid particle acceleration in turbulent flow with underlying normal distribution of velocity fluctuations

    International Nuclear Information System (INIS)

    Aringazin, A.K.; Mazhitov, M.I.

    2003-01-01

    We describe a formal procedure to obtain and specify the general form of a marginal distribution for the Lagrangian acceleration of a fluid particle in developed turbulent flow using a Langevin type equation and the assumption that the velocity fluctuation u follows a normal distribution with zero mean, in accord with the Heisenberg-Yaglom picture. For a particular representation, β=exp[u], of the fluctuating parameter β, we reproduce the underlying log-normal distribution and the associated marginal distribution, which was found to be in very good agreement with the new experimental data by Crawford, Mordant, and Bodenschatz on acceleration statistics. We discuss the resulting possibilities for refining the log-normal model

  16. Outage and Capacity Performance Evaluation of Distributed MIMO Systems over a Composite Fading Channel

    Directory of Open Access Journals (Sweden)

    Wenjie Peng

    2014-01-01

    Full Text Available The exact closed-form expressions regarding the outage probability and capacity of distributed MIMO (DMIMO systems over a composite fading channel are derived. This is achieved firstly by using a lognormal approximation to a gamma-lognormal distribution when a mobile station (MS in the cell is in a fixed position, and the so-called maximum ratio transmission/selected combining (MRT-SC and selected transmission/maximum ratio combining (ST-MRC schemes are adopted in uplink and downlink, respectively. Then, based on a newly proposed nonuniform MS cell distribution model, which is more consistent with the MS cell hotspot distribution in an actual communication environment, the average outage probability and capacity formulas are further derived. Finally, the accuracy of the approximation method and the rationality of the corresponding theoretical analysis regarding the system performance are proven and illustrated by computer simulations.
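
    One standard moment-matching route to such a lognormal approximation is to match the mean and variance of the log of the composite; the sketch below does this for z = g·s with a unit-mean gamma g (shape m) and lognormal s, using E[ln g] = ψ(m) − ln m and Var[ln g] = ψ′(m). The values of m, μ, σ are examples, and this is not necessarily the derivation used in the paper.

    ```python
    # Moment-matched lognormal approximation of a gamma-lognormal composite,
    # with a Monte Carlo check.
    import numpy as np
    from scipy import special

    m = 4.0                  # gamma shape (small-scale fading)
    mu, sigma = 0.0, 0.8     # log-scale parameters of the shadowing term

    mu_eff = mu + special.digamma(m) - np.log(m)
    sigma_eff = np.sqrt(sigma**2 + special.polygamma(1, m))

    rng = np.random.default_rng(3)
    z = np.log(rng.gamma(m, 1.0 / m, 200000) * rng.lognormal(mu, sigma, 200000))
    print("matched   mu, sigma:", mu_eff, sigma_eff)
    print("simulated mu, sigma:", z.mean(), z.std())
    ```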

  17. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
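
    The Bayesian step itself is easy to sketch on a grid: a lognormal prior on the true result combined with a normal measurement likelihood. All numbers below (prior GM/GSD, measurement error) are illustrative and are not the actual parameters of the Los Alamos code.

    ```python
    # Grid-based Bayesian update with a lognormal prior and a normal
    # measurement likelihood (illustrative parameters).
    import numpy as np
    from scipy.integrate import trapezoid

    true_vals = np.geomspace(1e-3, 1e2, 2000)     # grid over the true result
    gm, gsd = 0.5, 3.0                            # assumed lognormal prior
    prior = np.exp(-0.5 * ((np.log(true_vals) - np.log(gm)) / np.log(gsd))**2) \
            / true_vals                           # unnormalized lognormal density

    measured, meas_sd = 1.2, 0.4                  # one measurement, known error
    likelihood = np.exp(-0.5 * ((measured - true_vals) / meas_sd)**2)

    posterior = prior * likelihood
    posterior /= trapezoid(posterior, true_vals)
    print("posterior mean:", trapezoid(true_vals * posterior, true_vals))
    ```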

  18. [A study on the distribution of the consumption of tobacco and alcohol].

    Science.gov (United States)

    Damiani, P; Masse, H; Aubenque, M

    1983-01-01

    An analysis of the distribution of tobacco consumption and alcohol-related mortality in France by sex and department is presented for the population aged 45 to 64. It is shown that the "population can be decomposed into two sets such that, for each of them, tobacco and alcohol consumption distributions are log-normal. [It is suggested] that consumption is normal for one set and an endogenous predisposition for the other." (summary in ENG) excerpt

  19. Reappraisal of the reference dose distribution in the UNSCEAR 1977 report

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    2008-01-01

    This paper provides an update of the reference dose distribution proposed by G.A.M. Webb and D. Beninson in Annex E to the UNSCEAR 1977 Report. To demonstrate compliance with regulatory obligations regarding doses to individuals, they defined it with the following properties: 1) the distribution of annual doses is log-normal; 2) the mean of the annual dose distribution is 5 mGy (10% of the ICRP 1977 dose limit); 3) the proportion of workers exceeding 50 mGy is 0.1%. The concept of the reference dose distribution is still important for understanding the inherent variation of individual doses to workers controlled by source-related and individual-related efforts at best dose reduction. In commercial nuclear power plants, the dose distribution departs further from the log-normal owing to stronger ALARA efforts and the revised dose limits; the monitored workers show an annual mean of about 1 mSv, and far fewer than 0.1% of workers are above 20 mSv. The updated models of the dose distribution are: log-normal (no feedback on dose X), ln(X) ∼ N(μ,σ²); hybrid log-normal (feedback by ρ on higher X), hyb(ρX) = ρX + ln(ρX) ∼ N(μ,σ²); hybrid S_B (feedback by ρ on the higher dose quotient X/(D−X), for doses not close to D), hyb[ρX/(D−X)] ∼ N(μ,σ²); and Johnson's S_B (limit to D), ln[X/(D−X)] ∼ N(μ,σ²). These models allow the degree of dose control, including dose constraints/limits, to be interpreted against the reference distribution. Some of the distributions are examined to characterize the variation, with uncertainty, of doses to members of the public. (author)
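
    The hybrid log-normal model can be sampled directly, since u + ln u = y inverts through the Lambert W function: u·e^u = e^y, so u = W(e^y). The parameters below are illustrative, not values from the paper.

    ```python
    # Sampling from the hybrid log-normal: if y = rho*X + ln(rho*X) is
    # N(mu, sigma^2), then rho*X = W(exp(y)) with W the Lambert W function.
    import numpy as np
    from scipy.special import lambertw

    rng = np.random.default_rng(4)
    mu, sigma, rho = 0.0, 1.0, 2.0
    y = rng.normal(mu, sigma, size=100000)
    X = lambertw(np.exp(y)).real / rho     # exp(y) is safe for moderate y

    # Small rho*X: y ~ ln(rho*X), the log-normal limit (no feedback).
    # Large rho*X: y ~ rho*X, the normal limit (strong feedback on high doses).
    print("median dose:", np.median(X))
    ```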

  20. Frequency distribution of Radium-226, Thorium-228 and Potassium-40 concentration in ploughed soils

    International Nuclear Information System (INIS)

    Drichko, V.F.; Krisyuk, B.E.; Travnikova, I.G.; Lisachenko, E.P.; Dubenskaya, M.A.

    1977-01-01

    The results of studying the Ra-226, Th-228 and K-40 concentration distribution laws in podsol, chernozem and saline soils are considered. Radionuclide concentrations were determined by a gamma-spectrometric method in samples taken from the arable soil layer according to the generally accepted agrotechnical procedure. The measuring procedure is described. The results show that the frequency distributions of radionuclide concentrations transform from an asymmetric form in normal coordinates into a symmetric form in logarithmic coordinates. The use of the lognormal law to describe the frequency distributions of concentration is substantiated, and the values of the distribution parameters are given. Analysis of the data shows that Ra-226 and Th-228 concentrations in soils are distributed lognormally, and K-40 concentrations both normally and lognormally. In order of decreasing mean concentration of Ra-226 and Th-228, the soils rank: chernozems = chernozem salterns > podsols; in order of decreasing mean quadratic deviation: podsols > chernozems = salterns. For a full characterization of the radioactivity of the studied soils it is necessary to determine the mean quadratic deviation and the distribution type

  1. Distributions of energy losses of electrons and pions in the CBM TRD

    International Nuclear Information System (INIS)

    Akishina, E.P.; Akishina, T.P.; Ivanov, V.V.; Denisova, O.Yu.

    2007-01-01

    The distributions of energy losses of electrons and pions in the TRD detector of the CBM experiment are considered. We analyze the measurements of the energy deposits in a one-layer TRD prototype obtained during the test beam (GSI, Darmstadt, February 2006) and Monte Carlo simulations for the n-layered TRD realized with the help of GEANT within the CBM ROOT framework. We show that 1) energy losses for both real measurements and GEANT simulations are approximated with high accuracy by a log-normal distribution for π and a weighted sum of two log-normal distributions for e; 2) GEANT simulations noticeably differ from real measurements and, as a result, we have a significant loss in the efficiency of e/π identification. A procedure to control and correct the process of the energy deposit of electrons in the TRD is developed

  2. A two-parameter model to predict fatigue life of high-strength steels in a very high cycle fatigue regime

    Science.gov (United States)

    Sun, Chengqi; Liu, Xiaolong; Hong, Youshi

    2015-06-01

    In this paper, ultrasonic (20 kHz) fatigue tests were performed on specimens of a high-strength steel in very high cycle fatigue (VHCF) regime. Experimental results showed that for most tested specimens failed in a VHCF regime, a fatigue crack originated from the interior of specimen with a fish-eye pattern, which contained a fine granular area (FGA) centered by an inclusion as the crack origin. Then, a two-parameter model is proposed to predict the fatigue life of high-strength steels with fish-eye mode failure in a VHCF regime, which takes into account the inclusion size and the FGA size. The model was verified by the data of present experiments and those in the literature. Furthermore, an analytic formula was obtained for estimating the equivalent crack growth rate within the FGA. The results also indicated that the stress intensity factor range at the front of the FGA varies within a small range, which is irrespective of stress amplitude and fatigue life.

  3. Logarithmic distributions prove that intrinsic learning is Hebbian [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Gabriele Scheler

    2017-10-01

    Full Text Available In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
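
    The mechanism behind the claim can be illustrated with a toy Gibrat-style model (this generic sketch is not the authors' published model): multiplicative, Hebbian-like updates, where each change is proportional to the current value, drive weights toward a lognormal distribution under homeostatic normalization.

    ```python
    # Toy multiplicative-growth model of lognormal synaptic weights.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    w = np.full(10000, 1.0)                # initial synaptic weights
    for _ in range(2000):
        w *= np.exp(0.02 * rng.standard_normal(w.size))  # multiplicative step
        w /= w.mean()                      # homeostatic normalization

    # ln(w) should now be close to normal, i.e. w close to lognormal:
    print("Shapiro-Wilk p on ln(w):", stats.shapiro(np.log(w[:500])).pvalue)
    ```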

  4. Football goal distributions and extremal statistics

    Science.gov (United States)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  5. Stability of the laws for the distribution of the cumulative failures in railway transport

    Directory of Open Access Journals (Sweden)

    Kirill VOYNOV

    2008-01-01

    Full Text Available There are many different laws of distribution (for example, the bell-shaped (Gaussian), lognormal, Weibull, exponential, uniform, Poisson and Student distributions, and so on) which help to describe the real picture of failures of elements in various mechanical systems, including locomotives and carriages. To reduce the possibility of gross errors in the statistical treatment of the data, a new method is demonstrated in this article. The task is solved for both discrete and continuous distributions.

  6. A revisited Johnson-Mehl-Avrami-Kolmogorov model and the evolution of grain-size distributions in steel

    OpenAIRE

    Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studi...

  7. Distribution functions for the linear region of the S-N curve

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Christian; Waechter, Michael; Masendorf, Rainer; Esderts, Alfons [TU Clausthal, Clausthal-Zellerfeld (Germany). Inst. for Plant Engineering and Fatigue Analysis

    2017-08-01

    This study establishes a database containing the results of fatigue tests from the linear region of the S-N curve using sources from the literature. Each set of test results originates from testing metallic components on a single load level. Eighty-nine test series with sample sizes of 14 ≤ n ≤ 500 are included in the database, resulting in a sum of 6,086 individual test results. The test series are tested in terms of the type of distribution function (log-normal or 2-parameter Weibull) using the Shapiro-Wilk test, the Anderson-Darling test and probability plots. The majority of the tested individual test results follows a log-normal distribution.

  8. Transformation of Bayesian posterior distribution into a basic analytical distribution

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2002-01-01

    Bayesian estimation is a well-known approach that is widely used in Probabilistic Safety Analyses for the estimation of input model reliability parameters, such as component failure rates or probabilities of failure upon demand. In this approach, a prior distribution, which contains some generic knowledge about a parameter, is combined with a likelihood function, which contains plant-specific data about the parameter. Depending on the type of prior distribution, the resulting posterior distribution can be estimated numerically or analytically. In many instances only a numerical Bayesian integration can be performed, in which case the posterior is provided in the form of a tabular discrete distribution. On the other hand, it is much more convenient for a parameter's uncertainty distribution that is to be input into a PSA model to be provided in the form of some basic analytical probability distribution, such as a lognormal, gamma or beta distribution. One reason is that this enables much more convenient propagation of parameters' uncertainties through the model up to the so-called top events, such as plant system unavailability or core damage frequency. Additionally, software tools used to run PSA models often require that a parameter's uncertainty distribution be defined as one of several allowed basic types of distributions. In such a case the posterior distribution produced by Bayesian estimation needs to be transformed into an appropriate basic analytical form. In this paper, some approaches to the transformation of a posterior distribution into a basic probability distribution are proposed and discussed. They are illustrated by an example from the NPP Krško PSA model. (author)
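
    The most direct such transformation is a moment match on the log scale: compute the first two moments of ln(x) under the tabular posterior and read them off as the (μ, σ) of a lognormal. The grid and weights below are hypothetical stand-ins for a numerical Bayesian integration result, and the paper discusses several alternatives.

    ```python
    # Moment-matching a tabular discrete posterior to a lognormal.
    import numpy as np

    x = np.geomspace(1e-6, 1e-3, 50)       # failure-rate grid (per hour)
    p = np.exp(-0.5 * ((np.log(x) + 9.5) / 0.8)**2)
    p /= p.sum()                           # tabular discrete posterior

    mu = np.sum(p * np.log(x))
    sigma = np.sqrt(np.sum(p * (np.log(x) - mu)**2))
    print("lognormal fit: median =", np.exp(mu),
          " error factor =", np.exp(1.645 * sigma))  # EF = 95th/50th percentile
    ```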

  9. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best fitting distribution. Next, the suitability of six distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitting distributions to represent the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.

  10. Probability Distribution Function of the Upper Equatorial Pacific Current Speeds

    National Research Council Canada - National Science Library

    Chu, Peter C

    2005-01-01

    ...), constructed from hourly ADCP data (1990-2007) at six stations for the Tropical Atmosphere Ocean project satisfies the two-parameter Weibull distribution reasonably well with different characteristics between El Nino and La Nina events...

  11. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    Science.gov (United States)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.

  12. Multiplicity distributions in small phase-space domains in central nucleus-nucleus collisions

    International Nuclear Information System (INIS)

    Baechler, J.; Hoffmann, M.; Runge, K.; Schmoetten, E.; Bartke, J.; Gladysz, E.; Kowalski, M.; Stefanski, P.; Bialkowska, H.; Bock, R.; Brockmann, R.; Sandoval, A.; Buncic, P.; Ferenc, D.; Kadija, K.; Ljubicic, A. Jr.; Vranic, D.; Chase, S.I.; Harris, J.W.; Odyniec, G.; Pugh, H.G.; Rai, G.; Teitelbaum, L.; Tonse, S.; Derado, I.; Eckardt, V.; Gebauer, H.J.; Rauch, W.; Schmitz, N.; Seyboth, P.; Seyerlein, J.; Vesztergombi, G.; Eschke, J.; Heck, W.; Kabana, S.; Kuehmichel, A.; Lahanas, M.; Lee, Y.; Le Vine, M.; Margetis, S.; Renfordt, R.; Roehrich, D.; Rothard, H.; Schmidt, E.; Schneider, I.; Stock, R.; Stroebele, H.; Wenig, S.; Fleischmann, B.; Fuchs, M.; Gazdzicki, M.; Kosiec, J.; Skrzypczak, E.; Keidel, R.; Piper, A.; Puehlhofer, F.; Nappi, E.; Posa, F.; Paic, G.; Panagiotou, A.D.; Petridis, A.; Vassileiadis, G.; Pfenning, J.; Wosiek, B.

    1992-10-01

    Multiplicity distributions of negatively charged particles have been studied in restricted phase space intervals for central S + S, O + Au and S + Au collisions at 200 GeV/nucleon. It is shown that multiplicity distributions are well described by a negative binomial form irrespective of the size and dimensionality of the phase space domain. A clan structure analysis reveals interesting similarities between complex nuclear collisions and a simple partonic shower. The lognormal distribution agrees reasonably well with the multiplicity data in large domains, but fails in the case of small intervals. No universal scaling function was found to describe the shape of multiplicity distributions in phase space intervals of varying size. (orig.)

  13. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data, while the width predicted by HERWIG is in significant disagreement.

  14. Adjustments of probability distribution functions to global solar radiation in Rio Grande do Sul State

    Directory of Open Access Journals (Sweden)

    Alberto Cargnelutti Filho

    2004-12-01

    Full Text Available The objective of this work was to verify the adjustment of data series of ten-day average global solar radiation to the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions. Data were collected from 22 cities in Rio Grande do Sul State, Brazil. The Kolmogorov-Smirnov test was applied to the 792 series of data (22 localities x 36 ten-day periods) of average global solar radiation to verify the fit of the data to the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions, totaling 3,960 tests. The ten-day average global solar radiation data fit the normal, log-normal, gamma, Gumbel and Weibull probability distribution functions, and show the best fit to the normal probability distribution function.

  15. The stochastic distribution of available coefficient of friction on quarry tiles for human locomotion.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    The available coefficient of friction (ACOF) for human locomotion is the maximum coefficient of friction that can be supported without a slip at the shoe and floor interface. A statistical model was introduced to estimate the probability of slip by comparing the ACOF with the required coefficient of friction, assuming that both coefficients have stochastic distributions. This paper presents an investigation of the stochastic distributions of the ACOF of quarry tiles under dry, water and glycerol conditions. One hundred friction measurements were performed on a walkway under the surface conditions of dry, water and 45% glycerol concentration. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF appears to fit the normal and log-normal distributions better than the Weibull distribution for the water and glycerol conditions. However, no match was found between the distribution of ACOF under the dry condition and any of the three continuous distributions evaluated. Based on limited data, a normal distribution might be more appropriate due to its simplicity, practicality and familiarity among the three distributions evaluated.

  16. The effect of multi-directional nanocomposite materials on the vibrational response of thick shell panels with finite length and rested on two-parameter elastic foundations

    Science.gov (United States)

    Tahouneh, Vahid; Naei, Mohammad Hasan

    2016-03-01

    The main purpose of this paper is to investigate the effect of bidirectional continuously graded nanocomposite materials on the free vibration of thick shell panels resting on elastic foundations. The elastic foundation is considered as a Pasternak model, obtained by adding a shear layer to the Winkler model. Panels reinforced by randomly oriented straight single-walled carbon nanotubes (SWCNTs) are considered. The volume fractions of SWCNTs are assumed to be graded not only in the radial direction, but also in the axial direction of the curved panel. This study presents a 2-D six-parameter power-law distribution for the CNT volume fraction of the 2-D continuously graded nanocomposite, which gives designers a powerful tool for the flexible design of structures under multi-functional requirements. The benefit of using a generalized power-law distribution is to illustrate and present useful results arising from symmetric, asymmetric and classic profiles. The material properties are determined in terms of local volume fractions and material properties by the Mori-Tanaka scheme. The 2-D differential quadrature method, as an efficient numerical tool, is used to discretize the governing equations and to implement the boundary conditions. The fast rate of convergence of the method is shown and results are compared against existing results in the literature. Some new results for natural frequencies of the shell are prepared, which include the effects of the elastic coefficients of the foundation, boundary conditions, and material and geometrical parameters. The interesting results indicate that a graded nanocomposite volume fraction in two directions has a higher capability to reduce the natural frequency than conventional 1-D functionally graded nanocomposite materials.

  17. GROWTH RATE DISTRIBUTION OF BORAX SINGLE CRYSTALS ON THE (001) FACE UNDER VARIOUS FLOW RATES

    Directory of Open Access Journals (Sweden)

    Suharso Suharso

    2010-06-01

    Full Text Available The growth rates of borax single crystals from aqueous solutions at various flow rates in the (001) direction were measured using the in situ cell method. From the growth rate data obtained, the growth rate distribution of borax crystals was investigated using Minitab and SPSS software at a relative supersaturation of 0.807 and a temperature of 25 °C. The results show that the normal, gamma, and log-normal distributions give a reasonably good fit to the GRD. However, there is no correlation between the growth rate distribution and the flow rate of the solution.   Keywords: growth rate dispersion (GRD), borax, flow rate

  18. Log-concavity property for some well-known distributions

    Directory of Open Access Journals (Sweden)

    G. R. Mohtashami Borzadaran

    2011-12-01

    Full Text Available Interesting properties and propositions in many branches of science, such as economics, have been obtained based on the property of the cumulative distribution function of a random variable being a concave function. Caplin and Nalebuff (1988, 1989), Bagnoli and Khanna (1989) and Bagnoli and Bergstrom (1989, 2005) have discussed the log-concavity property of probability distributions and their applications, especially in economics. Log-concavity concerns a twice differentiable real-valued function g whose domain is an interval on the extended real line. Such a function g is said to be log-concave on the interval (a,b) if ln(g) is a concave function on (a,b). Log-concavity of g on (a,b) is equivalent to g'/g being monotone decreasing on (a,b), i.e. (ln g)'' ≤ 0. Previous authors have obtained log-concavity for distributions such as the normal, logistic, extreme-value, exponential, Laplace, Weibull, power function, uniform, gamma, beta, Pareto, log-normal, Student's t, Cauchy and F distributions. We have discussed and introduced the continuous versions of the Pearson family, found the log-concavity property for this family in general cases, and then obtained the log-concavity property for each distribution that is a member of the Pearson family. For the Burr family these cases have been calculated, even for each distribution that belongs to the Burr family. Also, log-concavity results for distributions such as generalized gamma distributions, Feller-Pareto distributions, generalized inverse Gaussian distributions and generalized log-normal distributions have been obtained.

  19. Changes of firm size distribution: The case of Korea

    Science.gov (United States)

    Kang, Sang Hoon; Jiang, Zhuhua; Cheong, Chongcheul; Yoon, Seong-Min

    2011-01-01

    In this paper, the distribution and inequality of firm sizes is evaluated for the Korean firms listed on the stock markets. Using the amount of sales, total assets, capital, and the number of employees, respectively, as a proxy for firm sizes, we find that the upper tail of the Korean firm size distribution can be described by power-law distributions rather than lognormal distributions. Then, we estimate the Zipf parameters of the firm sizes and assess the changes in the magnitude of the exponents. The results show that the calculated Zipf exponents over time increased prior to the financial crisis, but decreased after the crisis. This pattern implies that the degree of inequality in Korean firm sizes had severely deepened prior to the crisis, but lessened after the crisis. Overall, the distribution of Korean firm sizes changes over time, and Zipf’s law is not universal but does hold as a special case.

  20. The shape of terrestrial abundance distributions

    Science.gov (United States)

    Alroy, John

    2015-01-01

    Ecologists widely accept that the distribution of abundances in most communities is fairly flat but heavily dominated by a few species. The reason for this is that species abundances are thought to follow certain theoretical distributions that predict such a pattern. However, previous studies have focused on either a few theoretical distributions or a few empirical distributions. I illustrate abundance patterns in 1055 samples of trees, bats, small terrestrial mammals, birds, lizards, frogs, ants, dung beetles, butterflies, and odonates. Five existing theoretical distributions make inaccurate predictions about the frequencies of the most common species and of the average species, and most of them fit the overall patterns poorly, according to the maximum likelihood–related Kullback-Leibler divergence statistic. Instead, the data support a low-dominance distribution here called the “double geometric.” Depending on the value of its two governing parameters, it may resemble either the geometric series distribution or the lognormal series distribution. However, unlike any other model, it assumes both that richness is finite and that species compete unequally for resources in a two-dimensional niche landscape, which implies that niche breadths are variable and that trait distributions are neither arrayed along a single dimension nor randomly associated. The hypothesis that niche space is multidimensional helps to explain how numerous species can coexist despite interacting strongly. PMID:26601249

  1. STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
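
    The closing point, that log-normality follows from the stochastic differential equation of the halo spin, is easy to demonstrate numerically: a quantity evolving by Geometric Brownian Motion is lognormally distributed at any fixed time. The drift and random-walk amplitude below are illustrative, not fitted values.

    ```python
    # GBM ensemble: ln(lambda) stays normal, so lambda stays lognormal.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n, steps, dt = 5000, 1000, 0.01
    a, b = 0.0, 0.5                        # drift and volatility
    lam = np.full(n, 0.035)                # typical initial spin parameter

    for _ in range(steps):
        lam *= np.exp((a - 0.5 * b**2) * dt
                      + b * np.sqrt(dt) * rng.standard_normal(n))

    # ln(lambda) should be normal, i.e. the spin distribution lognormal:
    print("Shapiro-Wilk p on ln(lam):", stats.shapiro(np.log(lam[:500])).pvalue)
    ```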

  2. Particle size distributions of radioactive aerosols measured in workplaces

    International Nuclear Information System (INIS)

    Dorrian, M.-D.; Bailey, M.R.

    1995-01-01

    A survey of published values of Activity Median Aerodynamic Diameter (AMAD) measured in working environments was conducted to assist in the selection of a realistic default AMAD for occupational exposures. Results were compiled from 52 publications covering a wide variety of industries and workplaces. Reported values of AMAD from all studies ranged from 0.12 μm to 25 μm, and most were well fitted by a log-normal distribution with a median value of 4.4 μm. This supports the choice of a 5 μm default AMAD, as a realistic rounded value for occupational exposures, by the ICRP Task Group on Human Respiratory Tract Models for Radiological Protection and its acceptance by ICRP. Both the nuclear power and nuclear fuel handling industries gave median values of approximately 4 μm. Uranium mills gave a median value of 6.8 μm with AMADs frequently greater than 10 μm. High temperature and arc saw cutting operations generated submicron particles and, occasionally, bimodal log-normal particle size distributions. It is concluded that in view of the wide range of AMADs found in the surveyed literature, greater emphasis should be placed on air sampling to characterise aerosol particle size distributions for individual work practices, especially as doses estimated with the new 5 μm default AMAD will not always be conservative. (author)

  3. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)
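
    The intake model itself is straightforward to reproduce by Monte Carlo: the number of particles collected is Poisson with mean w, and each particle's activity is lognormal, so intake is a Poisson-mixed k-sum of lognormals. Parameter values below are illustrative.

    ```python
    # Monte Carlo version of the Poisson-mixed k-sum of lognormals.
    import numpy as np

    rng = np.random.default_rng(7)
    w = 5.0                          # expected number of particles collected
    gm, gsd = 1.0, 2.5               # lognormal activity per particle
    trials = 20000

    k = rng.poisson(w, size=trials)
    intake = np.array([rng.lognormal(np.log(gm), np.log(gsd), size=n).sum()
                       for n in k])
    print("P(intake = 0):", np.mean(intake == 0))
    print("median, mean intake:", np.median(intake), intake.mean())
    ```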

  4. THE IMPACT OF SPATIAL AND TEMPORAL RESOLUTIONS IN TROPICAL SUMMER RAINFALL DISTRIBUTION: PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    Q. Liu

    2017-10-01

    Full Text Available The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations at various spatial and temporal scales are crucial for a lot of decision making processes. Climate models show a warmer and wetter climate due to increases of Greenhouse Gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) at 0.25 degree, 3 hourly grid rainfall data for a summer is aggregated to 0.5, 1.0, 2.0 and 2.5 degree and at 6, 12, 24 hourly, pentad (five days) and monthly resolutions. The probability distributions (PDF) and cumulative distribution functions (CDF) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distribution. These distributions are shown to be distinct. The marginal distributions are fitted with Lognormal and Gamma distributions and it is found that the Gamma distributions fit much better than the Lognormal.

  5. The Impact of Spatial and Temporal Resolutions in Tropical Summer Rainfall Distribution: Preliminary Results

    Science.gov (United States)

    Liu, Q.; Chiu, L. S.; Hao, X.

    2017-10-01

    The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations at various spatial and temporal scales are crucial for a lot of decision making processes. Climate models show a warmer and wetter climate due to increases of Greenhouse Gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) at 0.25 degree, 3 hourly grid rainfall data for a summer is aggregated to 0.5, 1.0, 2.0 and 2.5 degree and at 6, 12, 24 hourly, pentad (five days) and monthly resolutions. The probability distributions (PDF) and cumulative distribution functions (CDF) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distribution. These distributions are shown to be distinct. The marginal distributions are fitted with Lognormal and Gamma distributions and it is found that the Gamma distributions fit much better than the Lognormal.

  6. The thermal pressure distribution of a simulated cold neutral medium

    Energy Technology Data Exchange (ETDEWEB)

    Gazol, Adriana, E-mail: a.gazol@crya.unam.mx [Centro de Radioastronomía y Astrofísica, UNAM, A. P. 3-72, c.p. 58089 Morelia, Michoacán (Mexico)

    2014-07-01

    We numerically study the thermal pressure distribution in a gas with thermal properties similar to those of the cold neutral interstellar gas by analyzing three-dimensional hydrodynamic models in boxes with sides of 100 pc with turbulent compressible forcing at 50 pc and different Mach numbers. We find that at high pressures and for large Mach numbers, both the volume-weighted and the density-weighted distributions can be appropriately described by a log-normal distribution, whereas for small Mach numbers they are better described by a power law. Thermal pressure distributions resulting from similar simulations but with self-gravity differ only for low Mach numbers; in this case, they develop a high pressure tail.

  7. Probability distributions of placental morphological measurements and origins of variability of placental shapes.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O

    2013-06-01

    While the mean shape of human placenta is round with centrally inserted umbilical cord, significant deviations from this ideal are fairly common, and may be clinically meaningful. Traditionally, they are explained by trophotropism. We have proposed a hypothesis explaining typical variations in placental shape by randomly determined fluctuations in the growth process of the vascular tree. It has been recently reported that umbilical cord displacement in a birth cohort has a log-normal probability distribution, which indicates that the displacement between an initial point of origin and the centroid of the mature shape is a result of accumulation of random fluctuations of the dynamic growth of the placenta. To confirm this, we investigate statistical distributions of other features of placental morphology. In a cohort of 1023 births at term digital photographs of placentas were recorded at delivery. Excluding cases with velamentous cord insertion, or missing clinical data left 1001 (97.8%) for which placental surface morphology features were measured. Best-fit statistical distributions for them were obtained using EasyFit. The best-fit distributions of umbilical cord displacement, placental disk diameter, area, perimeter, and maximal radius calculated from the cord insertion point are of heavy-tailed type, similar in shape to log-normal distributions. This is consistent with a stochastic origin of deviations of placental shape from normal. Deviations of placental shape descriptors from average have heavy-tailed distributions similar in shape to log-normal. This evidence points away from trophotropism, and towards a spontaneous stochastic evolution of the variants of placental surface shape features. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Statistical study of spatio-temporal distribution of precursor solar flares associated with major flares

    Science.gov (United States)

    Gyenge, N.; Ballai, I.; Baranyi, T.

    2016-07-01

    The aim of the present investigation is to study the spatio-temporal distribution of precursor flares during the 24 h interval preceding M- and X-class major flares and the evolution of follower flares. Information on associated (precursor and follower) flares is provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) flare list, while the major flares are observed by the Geostationary Operational Environmental Satellite (GOES) system satellites between 2002 and 2014. There are distinct evolutionary differences between the spatio-temporal distributions of associated flares over the roughly one-day period, depending on the type of the main flare. The spatial distribution was characterized by the normalized frequency distribution of the quantity δ (the distance between the major flare and its precursor flare, normalized by the sunspot group diameter) in four 6 h time intervals before the major event. The precursors of X-class flares have a double-peaked spatial distribution for more than half a day prior to the major flare, but it changes to a lognormal-like distribution roughly 6 h prior to the event. The precursors of M-class flares show a lognormal-like distribution in each 6 h subinterval. The most frequent sites of the precursors in the active region are within about 0.1 sunspot-group diameters of the site of the major flare in each case. Our investigation shows that the build-up of energy is more effective than the release of energy by precursors.

  9. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps and the modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution; the Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most calculation-effective. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 for full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must

  10. Crystallite size distribution of clay minerals from selected Serbian clay deposits

    Directory of Open Access Journals (Sweden)

    Simić Vladimir

    2006-01-01

    Full Text Available The BWA (Bertaut-Warren-Averbach) technique for the measurement of the mean crystallite thickness and thickness distributions of phyllosilicates was applied to a set of kaolin and bentonite minerals. Six samples of kaolinitic clays, one sample of halloysite, and five bentonite samples from selected Serbian deposits were analyzed. These clays are of sedimentary, volcano-sedimentary (diagenetic), and hydrothermal origin. Two different shapes of thickness distribution were found - lognormal, typical for bentonite and halloysite, and polymodal, typical for kaolinite. The mean crystallite thickness (T_BWA) seems to be influenced by the genetic type of the clay sample.

  11. Modification of the natural radionuclide distribution by some human activities in Canada

    International Nuclear Information System (INIS)

    Knight, G.B.; Makepeace, C.E.

    1980-01-01

    Examples are presented of three types of human activity that have resulted in elevated natural radiation levels. Investigations carried out by a Federal-Provincial Task Force are described. The distributions of grab sample measurements of radon and radon daughter concentrations are compared for the Bancroft area, Cobourg, Deloro, Elliot Lake, and Port Hope in Ontario, and Uranium City in Saskatchewan; it is concluded that the major point of difference between the communities that were investigated and the reference community of Cobourg is the departure from a symmetrical lognormal distribution at the higher concentrations

  12. Frequency distribution analysis of the long-lived beta-activity of air dust

    International Nuclear Information System (INIS)

    Bunzl, K.; Hoetzl, H.; Winkler, R.

    1977-01-01

    In order to compare the average annual beta activities of air dust, a frequency distribution analysis of the data was carried out to select a representative quantity for the average value of each data group. The data were found to be consistent with a log-normal frequency distribution; therefore, the median of the beta activity of each year was calculated, as the representative average, as the antilog of the arithmetic mean of the logarithms, log x, of the analytical values x. The 95% confidence limits were also obtained. The quantities thus calculated are summarized in tabular form. (U.K.)
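
The median-as-antilog-of-mean-log procedure described above is straightforward to reproduce. The sketch below is our illustration, not the authors' code, and the data are hypothetical: it estimates the median of log-normally distributed activity values together with 95% confidence limits derived from the t-distribution of the mean logarithm.

```python
import numpy as np
from scipy import stats

def lognormal_median_ci(x, confidence=0.95):
    """Median estimate and confidence limits for log-normally distributed data."""
    logs = np.log10(x)                      # work on log10, as in the abstract
    n = len(logs)
    mean_log = logs.mean()
    se_log = logs.std(ddof=1) / np.sqrt(n)  # standard error of the mean log
    t = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    median = 10 ** mean_log                 # antilog of the arithmetic mean of logs
    return median, 10 ** (mean_log - t * se_log), 10 ** (mean_log + t * se_log)

# Hypothetical weekly beta-activity values for one year (arbitrary units)
rng = np.random.default_rng(0)
activity = 10 ** rng.normal(loc=0.5, scale=0.3, size=52)
print(lognormal_median_ci(activity))
```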

  13. THE INTRINSIC EDDINGTON RATIO DISTRIBUTION OF ACTIVE GALACTIC NUCLEI IN STAR-FORMING GALAXIES FROM THE SLOAN DIGITAL SKY SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Mackenzie L.; Hickox, Ryan C.; Black, Christine S.; Hainline, Kevin N.; DiPompeo, Michael A. [Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755 (United States); Goulding, Andy D. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2016-07-20

    An important question in extragalactic astronomy concerns the distribution of black hole accretion rates of active galactic nuclei (AGNs). Based on observations at X-ray wavelengths, the observed Eddington ratio distribution appears as a power law, while optical studies have often yielded a lognormal distribution. There is increasing evidence that these observed discrepancies may be due to contamination by star formation and other selection effects. Using a sample of galaxies from the Sloan Digital Sky Survey Data Release 7, we test whether or not an intrinsic Eddington ratio distribution that takes the form of a Schechter function is consistent with previous work suggesting that young galaxies in optical surveys have an observed lognormal Eddington ratio distribution. We simulate the optical emission line properties of a population of galaxies and AGNs using a broad, instantaneous luminosity distribution described by a Schechter function near the Eddington limit. This simulated AGN population is then compared to observed galaxies via their positions on an emission line excitation diagram and Eddington ratio distributions. We present an improved method for extracting the AGN distribution using BPT diagnostics that allows us to probe over one order of magnitude lower in Eddington ratio, counteracting the effects of dilution by star formation. We conclude that for optically selected AGNs in young galaxies, the intrinsic Eddington ratio distribution is consistent with a possibly universal, broad power law with an exponential cutoff, as this distribution is observed in old, optically selected galaxies and in X-rays.

  14. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles. In this article, we studied the speed distribution for mixed bicycle traffic, which has been ignored in the past. Field speed data were collected in Hangzhou, China, across different survey sites, traffic conditions, and percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used for testing the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models can fit the field data very well. Then, the relationships between mean speed and electric bicycle proportions were proposed using linear regression models, and the mean speed for purely electric bicycles or purely regular bicycles can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
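
For readers who want to replicate this kind of screening, here is a hedged sketch (synthetic speeds, not the Hangzhou field data) that fits the four named models by maximum likelihood and compares them with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical mixed-bicycle speeds (km/h), centred near the reported 17 km/h
speeds = rng.lognormal(mean=np.log(17.0), sigma=0.25, size=500)

candidates = {
    "normal": stats.norm,
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(speeds)          # maximum-likelihood fit
    ks = stats.kstest(speeds, dist.cdf, args=params)
    print(f"{name:10s} KS={ks.statistic:.3f} p={ks.pvalue:.3f}")
```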

  15. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures

  16. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã(TCP=0.99)) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu, it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At were affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus and to lesser extents when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã(TCP=0.99) increased when the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã(TCP=0.99) was not affected for 103mRh and 211At when the activity distribution became more heterogeneous. Ã(TCP=0.99) for 90Y and 177Lu were not affected by different activity distributions, neither macroscopic nor subcellular

  17. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    Science.gov (United States)

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  18. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
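
The abstract's master formulas are not reproduced here, but the basic Hessian-to-replica idea can be sketched. The snippet below is our schematic reading only: f0 is a central PDF value, f_plus/f_minus are its values along each Hessian eigenvector direction, and each replica draws one standard normal number per eigenvector; the log-normal variant shown is a crude positivity-preserving stand-in, not the paper's prescription.

```python
import numpy as np

def hessian_to_replicas(f0, f_plus, f_minus, n_rep, sampling="normal", seed=0):
    """Generate Monte Carlo replicas of one PDF value from Hessian error sets."""
    rng = np.random.default_rng(seed)
    f_plus, f_minus = np.asarray(f_plus), np.asarray(f_minus)
    replicas = []
    for _ in range(n_rep):
        r = rng.standard_normal(len(f_plus))       # one Gaussian per eigenvector
        shift = 0.5 * np.dot(r, f_plus - f_minus)  # symmetrized displacement
        if sampling == "normal":
            replicas.append(f0 + shift)
        else:  # crude log-normal variant keeping replicas positive
            replicas.append(f0 * np.exp(shift / f0))
    return np.array(replicas)

# Toy usage: one PDF value with three hypothetical eigenvector pairs
print(hessian_to_replicas(0.5, [0.55, 0.52, 0.51], [0.46, 0.49, 0.48], n_rep=5))
```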

  19. Notes on representing grain size distributions obtained by electron backscatter diffraction

    International Nuclear Information System (INIS)

    Toth, Laszlo S.; Biswas, Somjeet; Gu, Chengfan; Beausir, Benoit

    2013-01-01

    Grain size distributions measured by electron backscatter diffraction are commonly represented by histograms using either number or area fraction definitions. It is shown here that they should be presented as density distribution functions for direct quantitative comparisons between different measurements. We also interpret the frequently seen parabolic tails of the area distributions of bimodal grain structures, and a transformation formula between the two distributions is given in this paper. - Highlights: • Grain size distributions are represented by density functions. • The parabolic tails correspond to an equal number of grains in a bin of the histogram. • A simple transformation formula is given between number- and area-weighted distributions. • The particularities of uniform and lognormal distributions are examined

  20. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight of ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
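
One plausible reading of the "weight of ranks" step (our assumption; the paper's exact criteria are not listed here) is to rank every candidate distribution under each goodness-of-fit criterion and pick the smallest rank sum, as in this sketch with synthetic API values and two example criteria.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
api = rng.gamma(shape=4.0, scale=15.0, size=365)   # hypothetical daily API values

candidates = {"log-normal": stats.lognorm, "exponential": stats.expon,
              "gamma": stats.gamma, "weibull": stats.weibull_min}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(api)
    nll = -np.sum(dist.logpdf(api, *params))                 # criterion 1
    ks = stats.kstest(api, dist.cdf, args=params).statistic  # criterion 2
    scores[name] = (ks, nll)

# Rank candidates per criterion (1 = best) and sum the ranks
rank_sum = dict.fromkeys(scores, 0)
for i in range(2):
    for rank, name in enumerate(sorted(scores, key=lambda n: scores[n][i]), 1):
        rank_sum[name] += rank
print(rank_sum, "->", min(rank_sum, key=rank_sum.get))
```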

  1. Power laws in citation distributions: evidence from Scopus.

    Science.gov (United States)

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account for only a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.

  2. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Science.gov (United States)

    Ultsch, Alfred; Lötsch, Jörn

    2017-01-01

    The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity via lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of incomes of the World's countries over several years indicated firstly, that the Gini indices are centered on a value of 33.33% corresponding to the Gini index of the uniform distribution and secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.
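
As background for the log-normal standardization, recall the standard closed form: a log-normal distribution with log-scale sigma has Gini index 2Φ(sigma/√2) − 1. The sketch below is ours, not the UGini implementation; it checks this formula against a classical empirical Gini computed from synthetic incomes.

```python
import numpy as np
from scipy import stats

def gini_empirical(x):
    """Classical Gini index from the sorted-income formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def gini_lognormal(sigma):
    """Exact Gini of a log-normal distribution with log-scale sigma."""
    return 2 * stats.norm.cdf(sigma / np.sqrt(2)) - 1

rng = np.random.default_rng(3)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)  # hypothetical incomes
print(gini_empirical(incomes), gini_lognormal(0.8))         # should nearly agree
```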

  3. A data science based standardized Gini index as a Lorenz dominance preserving measure of the inequality of distributions.

    Directory of Open Access Journals (Sweden)

    Alfred Ultsch

    Full Text Available The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it suffers from ambiguity via lack of Lorenz dominance preservation. Here, investigation of large sets of empirical distributions of incomes of the World's countries over several years indicated firstly, that the Gini indices are centered on a value of 33.33% corresponding to the Gini index of the uniform distribution and secondly, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be employed to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the World's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the World's countries' income distributions. The UGini index can be re-transferred into the classical index to preserve comparability with previous research.

  4. Aerosol formation from high-velocity uranium drops: Comparison of number and mass distributions. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rader, D.J.; Benson, D.A.

    1995-05-01

    This report presents the results of an experimental study of the aerosol produced by the combustion of high-velocity molten-uranium droplets produced by the simultaneous heating and electromagnetic launch of uranium wires. These tests are intended to simulate the reduction of high-velocity fragments into aerosol in high-explosive detonations or reactor accidents involving nuclear materials. As reported earlier, the resulting aerosol consists mainly of web-like chain agglomerates. A condensation nucleus counter was used to investigate the decay of the total particle concentration due to coagulation and losses. Number size distributions based on mobility equivalent diameter obtained soon after launch with a Differential Mobility Particle Sizer showed lognormal distributions with an initial count median diameter (CMD) of 0.3 μm and a geometric standard deviation, σg, of about 2; the CMD was found to increase and σg decrease with time due to coagulation. Mass size distributions based on aerodynamic diameter were obtained for the first time with a Microorifice Uniform Deposit Impactor, which showed lognormal distributions with mass median aerodynamic diameters of about 0.5 μm and an aerodynamic geometric standard deviation of about 2. Approximate methods for converting between number and mass distributions and between mobility and aerodynamic equivalent diameters are presented.

  5. Explaining the power-law distribution of human mobility through transportation modality decomposition

    Science.gov (United States)

    Zhao, Kai; Musolesi, Mirco; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu

    2015-03-01

    Human mobility has been empirically observed to exhibit Lévy flight characteristics and behaviour with power-law distributed jump sizes. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we propose to explain the Lévy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation for the emergence of the Lévy walk patterns that characterize human mobility.
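
The decomposition argument is easy to see numerically. This toy reconstruction (our assumptions: four hypothetical modes with made-up log-normal parameters and shares) mixes per-mode log-normal flight lengths and prints tail survival probabilities; on a log-log plot the mixture's tail is far heavier than any single mode's.

```python
import numpy as np

rng = np.random.default_rng(4)
modes = {  # hypothetical (log-mean, log-sd, population share) per transport mode
    "walk":  (np.log(0.5),  0.5, 0.45),
    "bike":  (np.log(2.0),  0.5, 0.25),
    "car":   (np.log(10.0), 0.6, 0.20),
    "train": (np.log(40.0), 0.5, 0.10),
}
samples = np.concatenate([
    rng.lognormal(mu, s, size=int(100_000 * w)) for mu, s, w in modes.values()
])

# Survival probabilities of the mixture; roughly straight in log-log coordinates
for km in (1, 5, 20, 80, 300):
    print(f"P(length > {km:3d} km) = {np.mean(samples > km):.4f}")
```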

  6. A Dual Power Law Distribution for the Stellar Initial Mass Function

    Science.gov (United States)

    Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett

    2018-05-01

    We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.

  7. Aerosol formation from high-velocity uranium drops: Comparison of number and mass distributions. Final report

    International Nuclear Information System (INIS)

    Rader, D.J.; Benson, D.A.

    1995-05-01

    This report presents the results of an experimental study of the aerosol produced by the combustion of high-velocity molten-uranium droplets produced by the simultaneous heating and electromagnetic launch of uranium wires. These tests are intended to simulate the reduction of high-velocity fragments into aerosol in high-explosive detonations or reactor accidents involving nuclear materials. As reported earlier, the resulting aerosol consists mainly of web-like chain agglomerates. A condensation nucleus counter was used to investigate the decay of the total particle concentration due to coagulation and losses. Number size distributions based on mobility equivalent diameter obtained soon after launch with a Differential Mobility Particle Sizer showed lognormal distributions with an initial count median diameter (CMD) of 0.3 μm and a geometric standard deviation, σg, of about 2; the CMD was found to increase and σg decrease with time due to coagulation. Mass size distributions based on aerodynamic diameter were obtained for the first time with a Microorifice Uniform Deposit Impactor, which showed lognormal distributions with mass median aerodynamic diameters of about 0.5 μm and an aerodynamic geometric standard deviation of about 2. Approximate methods for converting between number and mass distributions and between mobility and aerodynamic equivalent diameters are presented

  8. The Italian primary school-size distribution and the city-size: a complex nexus

    Science.gov (United States)

    Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.

    2014-06-01

    We characterize the statistical law according to which Italian primary school sizes are distributed. We find that school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially, and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape, suggesting some source of heterogeneity in school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.

  9. Characterization of 3-D particle distribution and effects on recrystallization studied by computer simulation

    International Nuclear Information System (INIS)

    Fridy, J.M.; Marthinsen, K.; Rouns, T.N.; Lippert, K.B.; Nes, E.; Richmond, O.

    1992-12-01

    Artificial particle distributions in three dimensions with different degrees of clustering have been generated and used as nucleation sites for the simulation of particle-stimulated recrystallization with site-saturation nucleation kinetics. The clustering has a strong effect on both the Avrami exponent and the resulting sectioned grain size distributions. The Avrami exponent decreases rapidly from the expected value of 3 with the degree of clustering. A value of less than 1.5 is obtained for the Avrami exponent with a strongly clustered distribution of nucleation sites. The size distributions of sectioned grain areas are considerably broadened with clustering, but are still far from the log-normal distributions observed experimentally. A computer program has been developed to generate particle distributions whose pair correlation functions match experimentally measured functions. 15 refs., 6 figs.

  10. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
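
A schematic version of the sampling procedure (our toy inputs, not the evaluated nuclear physics data) looks like this: draw each input contribution from its assigned density, sum to a total rate, then report the median, the 0.16/0.84 quantiles, and the log-normal summary parameters μ and σ.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Hypothetical inputs: two resonance terms with log-normal uncertainties and a
# small Gaussian term truncated at zero (units arbitrary)
rate = (rng.lognormal(np.log(1.0e-12), 0.3, n)
        + rng.lognormal(np.log(4.0e-13), 0.5, n)
        + np.clip(rng.normal(1.0e-13, 2.0e-14, n), 0.0, None))

low, median, high = np.quantile(rate, [0.16, 0.50, 0.84])
mu, sigma = np.log(rate).mean(), np.log(rate).std()  # log-normal approximation
print(f"rate = {median:.3e} (low {low:.3e}, high {high:.3e}); "
      f"mu = {mu:.2f}, sigma = {sigma:.2f}")
```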

  11. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best method for estimation using different values of the parameters and different sample sizes
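
Two of the estimators compared above have simple closed forms for the two-parameter exponential; the sketch below is ours (the regression-based variants are omitted) and contrasts the maximum-likelihood and moment estimators by Monte Carlo mean square error.

```python
import numpy as np

def mle(x):
    """Maximum likelihood: location = sample minimum, scale = mean excess."""
    loc = x.min()
    return loc, x.mean() - loc

def moments(x):
    """Moment estimators: scale from the standard deviation, then location."""
    scale = x.std(ddof=1)
    return x.mean() - scale, scale

rng = np.random.default_rng(6)
true_loc, true_scale = 2.0, 5.0
mse = np.zeros((2, 2))
for _ in range(1000):                    # Monte Carlo comparison
    x = true_loc + rng.exponential(true_scale, size=30)
    for i, estimator in enumerate((mle, moments)):
        loc, scale = estimator(x)
        mse[i] += [(loc - true_loc) ** 2, (scale - true_scale) ** 2]
print(mse / 1000)                        # rows: MLE, moments; cols: location, scale
```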

  12. Generalised extreme value distributions provide a natural hypothesis for the shape of seed mass distributions.

    Directory of Open Access Journals (Sweden)

    Will Edwards

    Full Text Available Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and are generally positively skewed. Such data are usually log-transformed "for normality" but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well as or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise the disturbance events experienced in a location to which individual species' life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm.

  13. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    Science.gov (United States)

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility in describing the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fits were obtained with the log-normal function.

  14. A two-parameter approach for the analysis of the effect of the weld metal on the constraint; Una enfoque de dos parametros para el analisis del efecto del cordon de soldadura sobre el constrenimiento

    Energy Technology Data Exchange (ETDEWEB)

    Leiva, R.; Donoso, J. R.; Muehlich, U.; Labbe, F.

    2004-07-01

    The effect of the mismatched weld metal on the stress field close to the crack tip in an idealized weld joint made up of base metal (BM) and weld metal (WM), with the crack located in the WM, parallel to the BM/WM interface, was numerically analyzed. The analysis was performed with a J-Q type two-parameter approach using a Modified Boundary Layer (MBL) model subject to a remote displacement field solely controlled by K_1, in order to eliminate the effect of the geometry constraint. The numerical results show that the constraint level decreases for overmatched welds (yield stress of WM higher than that of BM), and increases for undermatched welds (yield stress of WM lower than that of BM). The constraint level depends on the degree of mismatch, on the width of the weld, and on the applied load level. (Author) 21 refs.

  15. Particles size distribution effect on 3D packing of nanoparticles in to a bounded region

    International Nuclear Information System (INIS)

    Farzalipour Tabriz, M.; Salehpoor, P.; Esmaielzadeh Kandjani, A.; Vaezi, M. R.; Sadrnezhaad, S. K.

    2007-01-01

    In this paper, the effects of two different particle size distributions on the packing behavior of ideal rigid spherical nanoparticles are reported, using a novel packing model based on parallel algorithms. A Mersenne Twister algorithm was used to generate pseudo-random numbers for the particles' initial coordinates. A nano-sized tetragonal confined container with a square floor (300 × 300 nm) was used for this purpose. The Andreasen and lognormal particle size distributions were chosen to investigate the packing behavior in a 3D bounded region. The effects of particle numbers on the packing behavior of these two particle size distributions have been investigated, and the reproducibility and the distribution of the packing factor of these particle size distributions were compared

  16. Econophysical anchoring of unimodal power-law distributions

    International Nuclear Information System (INIS)

    Eliazar, Iddo I; Cohen, Morrel H

    2013-01-01

    The sciences are abundant with size distributions whose densities have a unimodal shape and power-law tails both at zero and at infinity. The quintessential examples of such unimodal and power-law (UPL) distributions are the sizes of income and wealth in human societies. While the tails of UPL distributions are precisely quantified by their corresponding power-law exponents, their bulks are only qualitatively characterized as unimodal. Consequently, different statistical models of UPL distributions exist, the most popular considering lognormal bulks. In this paper we present a general econophysical framework for UPL distributions termed ‘the anchoring method’. This method: (i) universally approximates UPL distributions via three ‘anchors’ set at zero, at infinity, and at an intermediate point between zero and infinity (e.g. the mode); (ii) is highly versatile and broadly applicable; (iii) encompasses the existing statistical models of UPL distributions as special cases; (iv) facilitates the introduction of new statistical models of UPL distributions and (v) yields a socioeconophysical analysis of UPL distributions. (paper)

  17. Ventilation-perfusion distribution in normal subjects.

    Science.gov (United States)

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. ρ increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
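
For jointly log-normal ventilation and perfusion, the LogSD of the Va/Q ratio follows directly from the quoted parameters: LogSD Va/Q = √(σ(V)² + σ(q)² − 2ρσ(V)σ(q)). The check below is ours; it evaluates the closed form with the exercise values quoted above and confirms it by sampling the bivariate normal of (log V, log Q).

```python
import numpy as np

def logsd_vaq(sigma_v, sigma_q, rho):
    return np.sqrt(sigma_v**2 + sigma_q**2 - 2 * rho * sigma_v * sigma_q)

# Exercise values from the abstract: sigma_V = 0.43, sigma_q = 0.85, rho = 0.87
print(logsd_vaq(0.43, 0.85, 0.87))   # ~0.52

# Monte Carlo confirmation from the bivariate normal of (log V, log Q)
rng = np.random.default_rng(7)
cov = [[0.43**2, 0.87 * 0.43 * 0.85],
       [0.87 * 0.43 * 0.85, 0.85**2]]
logv, logq = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
print((logv - logq).std())           # agrees with the closed form
```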

  18. Modeling wind speed and wind power distributions in Rwanda

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Bonfils [Department of Physics, National University of Rwanda, P.O. Box 117, Huye District, South Province (Rwanda)

    2011-02-15

    Utilization of wind energy as an alternative energy source may offer many environmental and economic advantages compared to fossil-fuel-based energy sources that pollute the lower layers of the atmosphere. Wind energy, like other forms of alternative energy, may offer the promise of meeting energy demand in direct, grid-connected modes as well as in stand-alone and remote applications. Wind speed is the most significant parameter of wind energy. Hence, an accurate determination of the probability distribution of wind speed values is very important in estimating the wind speed energy potential over a region. In the present study, parameters of five probability density distribution functions, namely Weibull, Rayleigh, lognormal, normal and gamma, were calculated in the light of long-term hourly observed data at four meteorological stations in Rwanda for the period of the year with fairly useful wind energy potential (monthly hourly mean wind speed v̄ ≥ 2 m s⁻¹). In order to select good-fitting probability density distribution functions, graphical comparisons to the empirical distributions were made. In addition, RMSE and MBE were computed for each distribution and the magnitudes of the errors were compared. Residuals of the theoretical distributions were analyzed graphically. Finally, a selection of the three distributions best fitting the empirical distribution of measured wind speed data was performed with the aid of a χ² goodness-of-fit test for each station. (author)

  19. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size and with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit well the area distribution of landslide sources along the Atlantic continental margin, if we assume that the slope has been subjected to earthquakes of magnitude ≤ 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area

  20. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    International Nuclear Information System (INIS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-01-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in Trapezium and Ophiucus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  1. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.

    Directory of Open Access Journals (Sweden)

    Xiliang Zheng

    2015-04-01

    Full Text Available We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding, and its optimization becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or, equivalently, a specific ligand binding with different receptors. The elucidation of the distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.

  2. Moduli stabilization, large-volume dS minimum without D3-bar branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's

    CERN Document Server

    Misra, Aalok

    2008-01-01

    We consider issues of moduli stabilization and "area codes" for type II flux compactifications, and the "Inverse Problem" and "Fake Superpotentials" for extremal (non)supersymmetric black holes in type II compactifications on (an orientifold of) a compact two-parameter Calabi-Yau expressed as a degree-18 hypersurface in WCP^4[1,1,1,6,9], which has multiple singular loci in its moduli space. We argue the existence of extended "area codes" [1] wherein, for the same set of large NS-NS and RR fluxes, one can stabilize all the complex structure moduli and the axion-dilaton modulus (to different sets of values) for points in the moduli space away from as well as near the different singular conifold loci, leading to the existence of domain walls. By including non-perturbative alpha' and instanton corrections in the Kaehler potential and superpotential [2], we show the possibility of getting a large-volume non-supersymmetric (A)dS minimum. Further, using techniques of [3] we explicitly show that given a set of moduli and choice...

  3. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    Science.gov (United States)

    Penson, Karol A.; Życzkowski, Karol

    2011-06-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions Ps(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions Ps(x) in terms of a combination of s hypergeometric functions of the type ₛFₛ₋₁. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
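
The moment statement can be spot-checked numerically. The sketch below is ours; it uses the standard Fuss-Catalan formula FC_s(n) = C((s+1)n, n)/(sn+1) and compares it with the empirical moments of the squared singular values of a product of s normalized complex Ginibre matrices; agreement improves with the matrix dimension N.

```python
import numpy as np
from math import comb

def fuss_catalan(s, n):
    """Fuss-Catalan number of order s: binom((s+1)n, n) / (s*n + 1)."""
    return comb((s + 1) * n, n) // (s * n + 1)

rng = np.random.default_rng(8)
N, s = 300, 2
prod = np.eye(N, dtype=complex)
for _ in range(s):
    g = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    prod = prod @ (g / np.sqrt(2 * N))       # complex Ginibre, entry variance 1/N

x = np.linalg.svd(prod, compute_uv=False) ** 2   # squared singular values
for n in (1, 2, 3):
    print(n, np.mean(x ** n).round(3), fuss_catalan(s, n))
```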

  4. Escort entropies and divergences and related canonical distribution

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2011-01-01

    We discuss two families of two-parameter entropies and divergences, derived from the standard Renyi and Tsallis entropies and divergences. These divergences and entropies are found as divergences or entropies of escort distributions. Exploiting the nonnegativity of the divergences, we derive the expression of the canonical distribution associated with the new entropies and an observable given as an escort mean value. We show that this canonical distribution extends, and smoothly connects, the results obtained in nonextensive thermodynamics for the standard and generalized mean value constraints. -- Highlights: → Two-parameter entropies are derived from q-entropies and escort distributions. → The related canonical distribution is derived. → This connects and extends known results in nonextensive statistics.

  5. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions—where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10²¹ Mx (10²² Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection)

  6. Global patterns of city size distributions and their fundamental drivers.

    Directory of Open Access Journals (Sweden)

    Ethan H Decker

    2007-09-01

    Full Text Available Urban areas and their voracious appetites are increasingly dominating the flows of energy and materials around the globe. Understanding the size distribution and dynamics of urban areas is vital if we are to manage their growth and mitigate their negative impacts on global ecosystems. For over 50 years, city size distributions have been assumed to universally follow a power function, and many theories have been put forth to explain what has become known as Zipf's law (the instance where the exponent of the power function equals unity). Most previous studies, however, only include the largest cities that comprise the tail of the distribution. Here we show that national, regional and continental city size distributions, whether based on census data or inferred from cluster areas of remotely-sensed nighttime lights, are in fact lognormally distributed through the majority of cities and only approach power functions for the largest cities in the distribution tails. To explore generating processes, we use a simple model incorporating only two basic human dynamics, migration and reproduction, that nonetheless generates distributions very similar to those found empirically. Our results suggest that macroscopic patterns of human settlements may be far more constrained by fundamental ecological principles than more fine-scale socioeconomic factors.

  7. Degree and wealth distribution in a network induced by wealth

    Science.gov (United States)

    Lee, Gyemin; Kim, Gwang Il

    2007-09-01

    A network induced by wealth is a social network model in which wealth induces individuals to participate as nodes, and every node in the network produces and accumulates wealth utilizing its links. More specifically, at every time step a new node is added to the network, and a link is created between one of the existing nodes and the new node. Innate wealth-producing ability is randomly assigned to every new node, and the node to be connected to the new node is chosen randomly, with odds proportional to the accumulated wealth of each existing node. Analyzing this network using the mean value and continuous flow approaches, we derive a relation between the conditional expectations of the degree and the accumulated wealth of each node. From this relation, we show that the degree distribution of the network induced by wealth is scale-free. We also show that the wealth distribution has a power-law tail and satisfies the 80/20 rule. We also show that, over the whole range, the cumulative wealth distribution exhibits the same topological characteristics as the wealth distributions of several networks based on the Bouchaud-Mézard model, even though the mechanism for producing wealth is quite different in our model. Further, we show that the cumulative wealth distribution for the poor and middle class seems likely to follow a log-normal distribution, while for the richest, the cumulative wealth distribution has a power-law behavior.
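
A toy simulation of the model as described (our reading; the production rule and parameter values here are invented for illustration) already reproduces the qualitative features: a heavy-tailed degree distribution and a wealth distribution whose bulk looks log-normal with a stretched upper tail.

```python
import numpy as np

rng = np.random.default_rng(9)
n_nodes = 2000
ability = np.empty(n_nodes)
wealth = np.zeros(n_nodes)
degree = np.zeros(n_nodes)
ability[0], wealth[0] = rng.uniform(0.5, 1.5), 1.0

for t in range(1, n_nodes):
    ability[t], wealth[t] = rng.uniform(0.5, 1.5), 1.0
    # attach the new node to an existing one with wealth-proportional odds
    target = rng.choice(t, p=wealth[:t] / wealth[:t].sum())
    degree[t] += 1
    degree[target] += 1
    # every node produces wealth through its links, scaled by innate ability
    wealth[:t + 1] += 0.01 * ability[:t + 1] * degree[:t + 1]

print("top degrees:", np.sort(degree)[-5:])
print("median / max wealth:", np.median(wealth), wealth.max())
```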

  8. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull distribution in only three out of 15 cases with statistical significance. The results are far more complex than what had heretofore been published, and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values, without statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
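
    The fitting-plus-testing procedure described (fit each candidate distribution, then apply the Kolmogorov-Smirnov test) can be sketched with SciPy as below, on stand-in data rather than the friction measurements; note that K-S p-values are optimistic when the parameters are estimated from the same sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
acof = rng.lognormal(np.log(0.45), 0.15, 100)   # stand-in for 100 friction measurements

candidates = {
    "normal":     (stats.norm,        {}),
    "log-normal": (stats.lognorm,     {"floc": 0}),
    "Weibull":    (stats.weibull_min, {"floc": 0}),
}
for name, (dist, fit_kw) in candidates.items():
    params = dist.fit(acof, **fit_kw)
    # caveat: K-S p-values are optimistic when parameters come from the same data
    stat, p = stats.kstest(acof, dist.cdf, args=params)
    print(f"{name:10s}  D = {stat:.3f}  p = {p:.3f}")
```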

  9. Simulation of depth distribution of geological strata. HQSW program

    International Nuclear Information System (INIS)

    Czubek, J.A.; Kolakowski, L.

    1987-01-01

    The method of simulation of the layered geological formation for a given geological parameter is presented. The geological formation contains at least two types of layers and is given with the depth resolution Δh corresponding to the thickness of the hypothetical elementary layer. Two types of geostatistical distributions of the rock parameters are considered: modified normal and modified lognormal for which the input data are expected value and the variance. The HQSW simulation program given in the paper generates in a random way (but in a given repeatable sequence) the thicknesses of a given type of strata, their average specific radioactivity and the variance of specific radioactivity within a given layer. 8 refs., 14 figs., 1 tab. (author)

  10. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except for at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern with a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern with a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.

  11. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq crop/kg)/(Bq soil/kg) and a standard deviation value of 3.33 (Bq crop/kg)/(Bq soil/kg).
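
    A sketch of sampling the four converted inputs from the reported distributions follows; it assumes the lognormal's "3.33" is a geometric standard deviation, and it is not the STORM code's actual sampling machinery.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000   # Monte Carlo realizations

consumption = rng.normal(102.96, 2.65, n)      # kg/yr
crop_yield  = rng.normal(3.23, 0.442, n)       # kg edible/m^2
land_ratio  = rng.normal(0.0312, 0.00292, n)   # fraction of land use
# assumption: the reported 3.33 is a geometric standard deviation
uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)

for name, x in [("consumption", consumption), ("crop yield", crop_yield),
                ("land ratio", land_ratio), ("uptake", uptake)]:
    print(f"{name:12s} mean = {x.mean():.3g}, sd = {x.std():.3g}")
```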

  12. Grain-size distributions and grain boundaries of chalcopyrite-type thin films

    International Nuclear Information System (INIS)

    Abou-Ras, D.; Schorr, S.; Schock, H.W.

    2007-01-01

    CuInSe2, CuGaSe2, Cu(In,Ga)Se2 and CuInS2 thin-film solar absorbers in completed solar cells were studied in cross section by means of electron backscatter diffraction. From the data acquired, grain-size distributions were extracted, and also the most frequent grain boundaries were determined. The grain-size distributions of all chalcopyrite-type thin films studied can be described well by lognormal distribution functions. The most frequent grain-boundary types in these thin films are 60°⟨221⟩tet and 71°⟨110⟩tet (near) Σ3 twin boundaries. These results can be related directly to the importance of {112}tet planes during the topotactical growth of chalcopyrite-type thin films. Based on energetic considerations, it is assumed that the most frequent twin boundaries exhibit a 180°⟨221⟩tet constellation. (orig.)

  13. Comparison of the Gini and Zenga Indexes using Some Theoretical Income Distributions

    Directory of Open Access Journals (Sweden)

    Katarzyna Ostasiewicz

    2013-01-01

    Full Text Available The most common measure of inequality used in scientific research is the Gini index. In 2007, Zenga proposed a new index of inequality that has all the appropriate properties of a measure of inequality. In this paper, we compare the Gini and Zenga indexes, calculating these quantities for several distributions frequently used for approximating distributions of income, that is, the lognormal, gamma, inverse Gaussian, Weibull and Burr distributions. Within this limited examination, we have observed three main differences. First, the Zenga index increases more rapidly for low values of the variation and decreases more slowly when the variation approaches intermediate values from above. Second, the Zenga index seems to be better predicted by the variation. Third, although the Zenga index is always higher than the Gini one, the ordering of some pairs of cases may be inverted. (original abstract)
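
    For the Gini side of such a comparison, a sample-based computation is easy to sketch; the distribution parameters below are arbitrary, and the closed-form log-normal Gini, 2Φ(σ/√2) − 1, serves as a check. The Zenga index is omitted here.

```python
import numpy as np
from scipy import stats

def gini(x):
    """Gini index of a sample via the sorted cumulative-share formula."""
    x = np.sort(x)
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

rng = np.random.default_rng(5)
samples = {
    "lognormal": rng.lognormal(0.0, 0.8, 200_000),
    "gamma":     rng.gamma(2.0, 1.0, 200_000),
    "Weibull":   rng.weibull(1.5, 200_000),
}
for name, x in samples.items():
    print(f"{name:9s} Gini = {gini(x):.3f}")

# closed-form check for the log-normal: Gini = 2*Phi(sigma/sqrt(2)) - 1
print("exact log-normal Gini:", 2 * stats.norm.cdf(0.8 / np.sqrt(2)) - 1)
```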

  14. Do wealth distributions follow power laws? Evidence from ‘rich lists’

    Science.gov (United States)

    Brzezinski, Michal

    2014-07-01

    We use data on the wealth of the richest persons taken from the 'rich lists' provided by business magazines like Forbes to verify if the upper tails of wealth distributions follow, as often claimed, a power-law behaviour. The data sets used cover the world's richest persons over 1996-2012, the richest Americans over 1988-2012, the richest Chinese over 2006-2012, and the richest Russians over 2004-2011. Using a recently introduced comprehensive empirical methodology for detecting power laws, which allows for testing the goodness of fit as well as for comparing the power-law model with rival distributions, we find that a power-law model is consistent with data only in 35% of the analysed data sets. Moreover, even if wealth data are consistent with the power-law model, they are usually also consistent with some rivals like the log-normal or stretched exponential distributions.
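
    The methodology referred to (Clauset-style tail fitting with likelihood-ratio comparisons) is available in the third-party `powerlaw` Python package; the sketch below runs it on synthetic stand-in data, not the rich-list datasets, and assumes the package is installed.

```python
import numpy as np
import powerlaw  # third-party package: pip install powerlaw

rng = np.random.default_rng(6)
wealth = rng.lognormal(10.0, 1.5, 5000)   # synthetic stand-in for a 'rich list'

fit = powerlaw.Fit(wealth)                # picks xmin and alpha by KS minimization
print("xmin =", fit.xmin, " alpha =", fit.power_law.alpha)

# likelihood-ratio tests of the power law against rival distributions
for rival in ("lognormal", "stretched_exponential"):
    R, p = fit.distribution_compare("power_law", rival)
    print(f"power law vs {rival}: R = {R:.2f}, p = {p:.3f}")
```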

  15. Multiplicity distributions of charged hadrons in vp and charged current interactions

    Science.gov (United States)

    Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; Morrison, D. R. O.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H. P.; Myatt, G.; Radojicic, D.; Burke, S.

    1992-03-01

    Using data on νp and anti νp charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ≈20 000 events with incident ν and ≈10 000 events with incident anti ν. The invariant mass W of the total hadronic system ranges from 3 GeV to ≈14 GeV. The experimental multiplicity distributions are fitted by the binomial function (for different intervals of W and in different intervals of the rapidity y), by the Levy function and the lognormal function. All three parametrizations give acceptable values for χ². For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling.

  16. Scaling theory of quantum resistance distributions in disordered systems

    International Nuclear Information System (INIS)

    Jayannavar, A.M.

    1991-01-01

    The large-scale distribution of the quantum Ohmic resistance of a disordered one-dimensional conductor is derived explicitly. It is shown that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder is single-parameter scaling, consistent with existing theoretical treatments, recovered. (author). 33 refs., 4 figs

  17. Scaling theory of quantum resistance distributions in disordered systems

    International Nuclear Information System (INIS)

    Jayannavar, A.M.

    1990-05-01

    We have derived explicitly the large-scale distribution of the quantum Ohmic resistance of a disordered one-dimensional conductor. We show that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder do we recover single-parameter scaling, consistent with existing theoretical treatments. (author). 32 refs, 4 figs

  18. On the link between column density distribution and density scaling relation in star formation regions

    Science.gov (United States)

    Veltchev, Todor; Donkov, Sava; Stanchev, Orlin

    2017-07-01

    We present a method to derive the density scaling relation ∝ L^{-α} in regions of star formation or in their turbulent vicinities from straightforward binning of the column-density distribution (N-pdf). The outcome of the method is studied for three types of N-pdf: power law (7/5 ≤ α ≤ 5/3), lognormal (0.7 ≲ α ≲ 1.4) and a combination of lognormals. In the last case, the method of Stanchev et al. (2015) was also applied for comparison and a very weak (or close to zero) correlation was found. We conclude that the considered 'binning approach' reflects rather the local morphology of the N-pdf, with no reference to the physical conditions in a considered region. The rough consistency of the derived slopes with the widely adopted Larson's (1981) value α ≈ 1.1 is suggested to support claims that the density-size relation in molecular clouds is indeed an artifact of the observed N-pdf.

  19. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers for the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values of <1000 mL/g. (2) Set the range to an order of magnitude for radionuclides with Ksp values of <10^-6 M and to a factor of two for Ksp values of >10^-6 M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from on-going site-specific field/laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007

  20. X-ray diffraction microstructural analysis of bimodal size distribution MgO nano powder

    International Nuclear Information System (INIS)

    Suminar Pratapa; Budi Hartono

    2009-01-01

    Investigation of the characteristics of X-ray diffraction data for an MgO powdered mixture of nano and sub-nano particles has been carried out to reveal the crystallite-size-related microstructural information. The MgO powders were prepared by a co-precipitation method followed by heat treatment at 500 degree Celsius and 1200 degree Celsius for 1 hour, the difference in temperature being intended to obtain two powders with distinct crystallite size and size distribution. The powders were then blended in air to give a presumably bimodal-size-distribution MgO nano powder. High-quality laboratory X-ray diffraction data for the powders were collected and then analysed with the Rietveld-based MAUD software using the lognormal size distribution. Results show that the single-mode powders exhibit spherical crystallite sizes (R) of 20(1) nm and 160(1) nm for the 500 degree Celsius and 1200 degree Celsius data respectively, with the nanometric powder displaying the narrower crystallite size distribution character, indicated by a lognormal dispersion parameter of 0.21 as compared to 0.01 for the sub-nanometric powder. The mixture exhibits relatively more asymmetric peak broadening. Analysing the X-ray diffraction data for the latter specimen using a single-phase approach gives unrealistic results. Introducing a two-phase model for the double-phase mixture to accommodate the bimodal-size-distribution characteristics gives R = 100(6) nm and σ = 0.62 for the nanometric phase and R = 170(5) nm and σ = 0.12 for the sub-nanometric phase. (author)

  1. Distribution of 226Ra, 232Th, and 40K in soils of Rio Grande do Norte (Brazil)

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1996-01-01

    A survey programme aimed at studying the environmental radioactivity in the Brazilian state of Rio Grande do Norte was undertaken. Fifty-two soil samples, together with two rock and two uraniferous ore samples, were collected from the eastern and central regions of this state. Concentrations of radioelements in the samples were determined by γ-ray spectrometry. The average concentrations of 226Ra, 232Th, and 40K in the surveyed soils were 29.2 ± 19.5 (SD), 47.8 ± 37.3, and 704 ± 437 Bq kg^-1, respectively. Higher values were found in the rock samples. The distributions of 226Ra and 232Th were fitted by log-normal curves. Radiological measurements carried out with a portable scintillometer at the sampled sites revealed an average absorbed dose rate of 55 ± 27 (SD) nGy h^-1. Computed dose rates obtained through the Beck formula ranged from 15-179 nGy h^-1, with a mean value of 72.6 ± 38.7 (SD) nGy h^-1, and their distribution fitted a log-normal curve. An annual average effective dose equivalent of 552 μSv (range: 117-1361 μSv) was estimated for 51 sites in Rio Grande do Norte. (author)

  2. Distributional Inference

    NARCIS (Netherlands)

    Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.

    1995-01-01

    The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is

  3. Multiplicity distributions of charged hadrons in νp and anti νp charged current interactions

    International Nuclear Information System (INIS)

    Jones, G.T.; Jones, R.W.L.; Kennedy, B.W.; Morrison, D.R.O.; Mobayyen, M.M.; Wainstein, S.; Borner, H.P.; Myatt, G.; Radojicic, D.; Burke, S.; Aderholz, M.; Hantke, D.; Katz, U.F.; Kern, J.; Schmitz, N.; Wittek, W.

    1991-10-01

    Using data on νp and anti νp charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ≈20 000 events with incident ν and ≈10 000 events with incident anti ν. The invariant mass W of the total hadronic system ranges from 3 GeV to ≈14 GeV. The experimental multiplicity distributions are fitted by the binomial function (for different intervals of W and in different intervals of the rapidity y), by the Levy function and the lognormal function. All three parametrizations give acceptable values for χ²/NDF. For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling. (orig.)

  4. Multiplicity distributions of charged hadrons in νp and anti νp charged current interactions

    International Nuclear Information System (INIS)

    Jones, G.T.; Jones, R.W.L.; Kennedy, B.W.; Morrison, D.R.O.; Mobayyen, M.M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U.F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H.P.; Myatt, G.; Radojicic, D.; Burke, S.

    1992-01-01

    Using data on νp and anti νp charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ≈20 000 events with incident ν and ≈10 000 events with incident anti ν. The invariant mass W of the total hadronic system ranges from 3 GeV to ≈14 GeV. The experimental multiplicity distributions are fitted by the binomial function (for different intervals of W and in different intervals of the rapidity y), by the Levy function and the lognormal function. All three parametrizations give acceptable values for χ²/NDF. For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling. (orig.)

  5. An appraisal of wind speed distribution prediction by soft computing methodologies: A comparative study

    International Nuclear Information System (INIS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Saboohi, Hadi; Abdul Wahab, Ainuddin Wahid; Protić, Milan; Zalnezhad, Erfan; Mirhashemi, Seyed Mohammad Amin

    2014-01-01

    Highlights: • Probabilistic distribution functions of wind speed. • Two-parameter Weibull probability distribution. • To build an effective prediction model of the distribution of wind speed. • Support vector regression application as probability function for wind speed. - Abstract: The probabilistic distribution of wind speed is among the more significant wind characteristics in examining wind energy potential and the performance of wind energy conversion systems. When the wind speed probability distribution is known, the wind energy distribution can be easily obtained. Therefore, the probability distribution of wind speed is a very important piece of information required in assessing wind energy potential. For this reason, a large number of studies have been conducted concerning the use of a variety of probability density functions to describe wind speed frequency distributions. Although the two-parameter Weibull distribution is a widely used and accepted method, solving the function is very challenging. In this study, the polynomial and radial basis functions (RBF) are applied as the kernel function of support vector regression (SVR) to estimate two parameters of the Weibull distribution function according to previously established analytical methods. Rather than minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound, so as to achieve generalized performance. According to the experimental results, enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared to other soft computing methodologies.
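
    For reference, the conventional moment-based estimate of the two Weibull parameters, the analytical baseline against which SVR estimates of this kind are compared, can be sketched on synthetic wind speeds as follows; the empirical shape relation k = (σ/μ)^(-1.086) is the standard approximation, not the paper's SVR model.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(7)
wind = 7.0 * rng.weibull(2.1, 10_000)   # synthetic hourly wind speeds (m/s)

mean, sd = wind.mean(), wind.std(ddof=1)
k = (sd / mean) ** -1.086               # empirical shape-factor relation
c = mean / gamma(1 + 1 / k)             # scale parameter from the mean
print(f"Weibull shape k = {k:.2f}, scale c = {c:.2f} m/s")
```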

  6. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Full Text Available Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested and the Weibull distribution is found to be the best-fit distribution. To verify this result the behaviors of numerical slider-block and sand-pile models are investigated and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.

  7. Frequency distribution of rainfall for the South-Central region of Ceará, Brazil

    Directory of Open Access Journals (Sweden)

    Ítalo Nunes Silva

    2013-09-01

    Full Text Available Seven probability distributions were analysed: Exponential, Gamma, Log-normal, Normal, Weibull, Gumbel and Beta, for monthly and annual rainfall in the south-central region of Ceará, Brazil. In order to verify the adjustments of the data to the probability density functions, the non-parametric Kolmogorov-Smirnov test was used with a 5% level of significance. The rainfall data were obtained from the database at SUDENE, recorded from 1913 to 1989. For the total annual rainfall, adjustment of the data to the Gamma, Gumbel, Normal and Weibull distributions was satisfactory, and there was no adjustment to the Exponential, Log-normal and Beta distributions. Use of the Normal distribution is recommended to estimate the values of probable annual rainfall in the region, this being a procedure of easy application which also performed well in the tests. The Gumbel distribution best represented the monthly rainfall data, with the largest number of satisfactory fits in the rainy season; in the dry season, the rainfall data were best represented by the Exponential distribution.

  8. Coupling of mass and charge distributions for low excited nuclear fission

    International Nuclear Information System (INIS)

    Salamatin, V.S.; )

    2000-01-01

    A simple model for calculating the charge distributions of fission fragments for low-excited nuclear fission from experimental mass distributions is offered. The model contains two parameters, determining the amplitude of the even-odd effect of the charge distributions and its dependence on excitation energy. Results are presented for the reactions 233U(nth,f), 235U(nth,f), 229Th(nth,f) and 249Cf(nth,f).

  9. Distribution of the active liquid waste discharge concentration

    International Nuclear Information System (INIS)

    Chan, A.H.C.

    1985-03-01

    In assessing the proposal for removing the on-line liquid effluent monitor (LEM) from the Darlington NGS-A design, it was required to estimate the probability that the concentration of β-γ emitters in the active liquid waste (ALW) tank discharges exceeds a specified level. To achieve this, it was necessary to know the underlying distribution of the ALW discharge concentration. Since the distribution could only be estimated from the historical data, it was also important to provide the confidence interval for the estimated probability. Using the ALW discharge records of Pickering and Bruce NGS-A, it was found that the log-normal distribution provided the best fit for the data. The frequency of the tank concentration exceeding the specified level of 24 000 μCi/m^3 was estimated to be 1 in 200,000 years at Bruce NGS-A and 1 in 100,000 years at Pickering. The 99% upper confidence limits are 1 in 2777 years and 1 in 77 years, respectively

  10. Cell-size distribution in epithelial tissue formation and homeostasis.

    Science.gov (United States)

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. © 2017 The Author(s).

  11. Poisson distribution

    NARCIS (Netherlands)

    Hallin, M.; Piegorsch, W.; El Shaarawi, A.

    2012-01-01

    The random variable X taking values 0, 1, 2, …, x, … with probabilities p_λ(x) = e^(-λ) λ^x / x!, where λ ∈ R_0^+, is called a Poisson variable, and its distribution a Poisson distribution, with parameter λ. The Poisson distribution with parameter λ can be obtained as the limit, as n → ∞ and p → 0 in such a way that np → λ, of the binomial distribution with parameters n and p.
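
    The limit can be checked numerically; the sketch below compares binomial and Poisson probability mass functions as n grows with np = λ held fixed.

```python
import numpy as np
from scipy import stats

# binomial(n, p) -> Poisson(lambda) as n -> inf, p -> 0 with n*p = lambda
lam = 3.0
x = np.arange(10)
for n in (10, 100, 10_000):
    p = lam / n
    err = np.abs(stats.binom.pmf(x, n, p) - stats.poisson.pmf(x, lam)).max()
    print(f"n = {n:6d}   max |binomial - Poisson| = {err:.2e}")
```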

  12. Distributed Visualization

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...

  13. Comprehensive evaluation of wind speed distribution models: A case study for North Dakota sites

    International Nuclear Information System (INIS)

    Zhou Junyi; Erdem, Ergin; Li Gong; Shi Jing

    2010-01-01

    Accurate analysis of long term wind data is critical to the estimation of wind energy potential for a candidate location and its nearby area. Investigating the wind speed distribution is one critical task for this purpose. This paper presents a comprehensive evaluation on probability density functions for the wind speed data from five representative sites in North Dakota. Besides the popular Weibull and Rayleigh distributions, we also include other distributions such as gamma, lognormal, inverse Gaussian, and maximum entropy principle (MEP) derived probability density functions (PDFs). Six goodness-of-fit (GOF) statistics are used to determine the appropriate distributions for the wind speed data for each site. It is found that no particular distribution outperforms others for all five sites, while Rayleigh distribution performs poorly for most of the sites. Similar to other models, the performances of MEP-derived PDFs in fitting wind speed data varies from site to site. Also, the results demonstrate that MEP-derived PDFs are flexible and have the potential to capture other possible distribution patterns of wind speed data. Meanwhile, different GOF statistics may generate inconsistent ranking orders of fit performance among the candidate PDFs. In addition, one comprehensive metric that combines all individual statistics is proposed to rank the overall performance for the chosen statistical distributions.

  14. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (Kd) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The Kd values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of Kd ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The Kd best estimates were defined as the calculated GM values, assuming that Kd values are always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the Kd parameter, a suitable continuous function that contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best explain the distribution of the Kd values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide Kd in soils. Therefore, the aim of this work is to create CDFs for radionuclide Kd in soils.
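
    A sketch of the kind of CDF proposed, here a log-normal fitted to illustrative Kd values (not the TRS 364 dataset), shows how a best estimate and a confidence range both fall out of one fitted function.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
kd = rng.lognormal(np.log(200.0), 1.0, 150)   # illustrative Cs Kd values (mL/g)

# fit a log-normal CDF; its parameters carry both the best estimate
# (geometric mean) and the spread needed for confidence ranges
sigma, _, scale = stats.lognorm.fit(kd, floc=0)
print(f"geometric mean = {scale:.0f} mL/g, geometric SD = {np.exp(sigma):.2f}")
print("central 95% range (mL/g):",
      stats.lognorm.ppf([0.025, 0.975], sigma, scale=scale))
```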

  15. Stochastic distribution of the required coefficient of friction for level walking--an in-depth study.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    This study investigated the stochastic distribution of the required coefficient of friction (RCOF) which is a critical element for estimating slip probability. Fifty participants walked under four walking conditions. The results of the Kolmogorov-Smirnov two-sample test indicate that 76% of the RCOF data showed a difference in distribution between both feet for the same participant under each walking condition; the data from both feet were kept separate. The results of the Kolmogorov-Smirnov goodness-of-fit test indicate that most of the distribution of the RCOF appears to have a good match with the normal (85.5%), log-normal (84.5%) and Weibull distributions (81.5%). However, approximately 7.75% of the cases did not have a match with any of these distributions. It is reasonable to use the normal distribution for representation of the RCOF distribution due to its simplicity and familiarity, but each foot had a different distribution from the other foot in 76% of cases. The stochastic distribution of the required coefficient of friction (RCOF) was investigated for use in a statistical model to improve the estimate of slip probability in risk assessment. The results indicate that 85.5% of the distribution of the RCOF appears to have a good match with the normal distribution.

  16. Dyadic distributions

    International Nuclear Information System (INIS)

    Golubov, B I

    2007-01-01

    On the basis of the concept of the pointwise dyadic derivative, dyadic distributions are introduced as continuous linear functionals on the linear space D_d(R_+) of infinitely differentiable functions compactly supported by the positive half-axis R_+ together with all dyadic derivatives. The completeness of the space D'_d(R_+) of dyadic distributions is established. It is shown that a locally integrable function on R_+ generates a dyadic distribution. In addition, the space S_d(R_+) of infinitely dyadically differentiable functions on R_+ rapidly decreasing in the neighbourhood of +∞ is defined. The space S'_d(R_+) of dyadic distributions of slow growth is introduced as the space of continuous linear functionals on S_d(R_+). The completeness of the space S'_d(R_+) is established; it is proved that each integrable function on R_+ with polynomial growth at +∞ generates a dyadic distribution of slow growth. Bibliography: 25 titles.

  17. On the capacity of FSO links under lognormal and Rician-lognormal turbulences

    KAUST Repository

    Ansari, Imran Shafique

    2014-09-01

    A unified capacity analysis under weak and composite turbulences of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, a unified exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, unified approximate and simple closed-form results are offered for the ergodic capacity at the high SNR regime as well as at the low SNR regime. All the presented results are verified via computer-based Monte-Carlo simulations.

  18. On the capacity of FSO links under lognormal and Rician-lognormal turbulences

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Cheng, Julian

    2014-01-01

    A unified capacity analysis under weak and composite turbulences of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, a unified exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, unified approximate and simple closed-form results are offered for the ergodic capacity at high and low SNR regimes.

  19. Oxide particle size distribution from shearing irradiated and unirradiated LWR fuels in Zircaloy and stainless steel cladding: significance for risk assessment

    International Nuclear Information System (INIS)

    Davis, W. Jr.; West, G.A.; Stacy, R.G.

    1979-01-01

    Sieve fractionation was performed with oxide particles dislodged during shearing of unirradiated or irradiated fuel bundles or single rods of UO2 or 96 to 97% ThO2 - 3 to 4% UO2. Analyses of these data by nonlinear least-squares techniques demonstrated that the particle size distribution is lognormal. Variables involved in the numerical analyses include the lognormal median size, the lognormal standard deviation, and the shear cut length. Sieve-fractionation data are presented for unirradiated bundles of stainless-steel-clad or Zircaloy-2-clad UO2 or ThO2-UO2 sheared into lengths from 0.5 to 2.0 in. Data are also presented for irradiated single rods (sheared into lengths of 0.25 to 2.0 in.) of Zircaloy-2-clad UO2 from BWRs and of Zircaloy-4-clad UO2 from PWRs. Median particle sizes of UO2 from shearing irradiated stainless-steel-clad fuel ranged from 103 to 182 μm; particle sizes of ThO2-UO2, under these same conditions, ranged from 137 to 202 μm. Similarly, median particle sizes of UO2 from shearing unirradiated Zircaloy-2-clad fuel ranged from 230 to 957 μm. Irradiation levels of fuels from reactors ranged from 9,000 to 28,000 MWd/MTU. In general, particle sizes from shearing these irradiated fuels are larger than those from the unirradiated fuels. In addition, variations in particle size parameters pertaining to samples of a single vendor varied as much as those between different vendors. The fraction of fuel dislodged from the cladding is nearly proportional to the reciprocal of the shear cut length, until the cut length attains some minimum value below which all fuel is dislodged. Particles of fuel are generally elongated, with a long-to-short axis ratio usually less than 3. Using parameters of the lognormal distribution deduced from experimental data, realistic estimates can be made of fractions of dislodged fuel having dimensions less than specified values
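
    A least-squares fit of a log-normal to cumulative sieve data of this kind can be sketched as below; the sieve sizes and mass fractions are invented for illustration, not the report's measurements.

```python
import numpy as np
from scipy import optimize, stats

# invented cumulative sieve data: mass fraction finer than each opening (um)
size = np.array([45.0, 90.0, 150.0, 250.0, 425.0, 710.0])
finer = np.array([0.08, 0.22, 0.45, 0.68, 0.88, 0.97])

def lognormal_cdf(d, median, sigma):
    """Cumulative log-normal: fraction of mass below particle size d."""
    return stats.norm.cdf(np.log(d / median) / sigma)

(median, sigma), _ = optimize.curve_fit(lognormal_cdf, size, finer, p0=[150.0, 1.0])
print(f"log-normal median = {median:.0f} um, sigma = {sigma:.2f}")
```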

  20. Oxide particle size distribution from shearing irradiated and unirradiated LWR fuels in Zircaloy and stainless steel cladding: significance for risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Davis, W. Jr.; West, G.A.; Stacy, R.G.

    1979-03-22

    Sieve fractionation was performed with oxide particles dislodged during shearing of unirradiated or irradiated fuel bundles or single rods of UO2 or 96 to 97% ThO2 - 3 to 4% UO2. Analyses of these data by nonlinear least-squares techniques demonstrated that the particle size distribution is lognormal. Variables involved in the numerical analyses include the lognormal median size, the lognormal standard deviation, and the shear cut length. Sieve-fractionation data are presented for unirradiated bundles of stainless-steel-clad or Zircaloy-2-clad UO2 or ThO2-UO2 sheared into lengths from 0.5 to 2.0 in. Data are also presented for irradiated single rods (sheared into lengths of 0.25 to 2.0 in.) of Zircaloy-2-clad UO2 from BWRs and of Zircaloy-4-clad UO2 from PWRs. Median particle sizes of UO2 from shearing irradiated stainless-steel-clad fuel ranged from 103 to 182 μm; particle sizes of ThO2-UO2, under these same conditions, ranged from 137 to 202 μm. Similarly, median particle sizes of UO2 from shearing unirradiated Zircaloy-2-clad fuel ranged from 230 to 957 μm. Irradiation levels of fuels from reactors ranged from 9,000 to 28,000 MWd/MTU. In general, particle sizes from shearing these irradiated fuels are larger than those from the unirradiated fuels; however, unirradiated fuel from vendors was not available for performing comparative shearing experiments. In addition, variations in particle size parameters pertaining to samples of a single vendor varied as much as those between different vendors. The fraction of fuel dislodged from the cladding is nearly proportional to the reciprocal of the shear cut length, until the cut length attains some minimum value below which all fuel is dislodged. Particles of fuel are generally elongated, with a long-to-short axis ratio usually less than 3. Using parameters of the lognormal distribution deduced from experimental data, realistic estimates can be made of fractions of dislodged fuel having dimensions less than specified values

  1. Sensitivity Weaknesses in Application of some Statistical Distribution in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

    It has been observed and shown that in some examples a sensitivity analysis of the first order reliability index results in an increasing reliability index when the standard deviation for a stochastic variable is increased while the expected value is fixed. This unfortunate behaviour can occur when a stochastic variable is modelled by an asymmetrical density function. For lognormally, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical application the behaviour is probably rather infrequent. A simple example is shown as illustration and to exemplify that for second order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent.

  2. Simulation study of effects of initial particle size distribution on dissolution

    International Nuclear Information System (INIS)

    Wang, G.; Xu, D.S.; Ma, N.; Zhou, N.; Payton, E.J.; Yang, R.; Mills, M.J.; Wang, Y.

    2009-01-01

    Dissolution kinetics of γ' particles in binary Ni-Al alloys with different initial particle size distributions (PSD) is studied using a three-dimensional (3D) quantitative phase field model. By linking model inputs directly to thermodynamic and atomic mobility databases, microstructural evolution during dissolution is simulated in real time and length scales. The model is first validated against analytical solution for dissolution of a single γ' particle in 1D and numerical solution in 3D before it is applied to investigate the effects of initial PSD on dissolution kinetics. Four different types of PSD, uniform, normal, log-normal and bimodal, are considered. The simulation results show that the volume fraction of γ' particles decreases exponentially with time, while the temporal evolution of average particle size depends strongly on the initial PSD

  3. Supply-cost curves for geographically distributed renewable-energy resources

    International Nuclear Information System (INIS)

    Izquierdo, Salvador; Dopazo, Cesar; Fueyo, Norberto

    2010-01-01

    The supply-cost curves of renewable-energy sources are an essential tool to synthesize and analyze large-scale energy-policy scenarios, both in the short and long terms. Here, we suggest and test a parametrization of such curves that allows their representation for modeling purposes with a minimal set of information. In essence, an economic potential is defined based on the mode of the marginal supply-cost curves, and, using this definition, a normalized log-normal distribution function is used to model these curves. The feasibility of this proposal is assessed with data from a GIS-based analysis of solar, wind and biomass technologies in Spain. The best agreement is achieved for solar energy.
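
    A sketch of the proposed parametrization follows: a normalized log-normal cumulative curve pinned by the mode of the marginal cost curve. All numbers are assumed for illustration and are not the paper's Spanish GIS results.

```python
import numpy as np
from scipy import stats

c_mode, sigma = 60.0, 0.5   # assumed mode of marginal cost (EUR/MWh) and spread
Q_total = 120.0             # assumed economic potential (TWh/yr)

# log-normal mode = exp(mu - sigma^2), so recover mu from the mode
mu = np.log(c_mode) + sigma**2
cost = np.linspace(10.0, 200.0, 400)
supply = Q_total * stats.lognorm.cdf(cost, sigma, scale=np.exp(mu))

print("potential available below 80 EUR/MWh:",
      round(float(np.interp(80.0, cost, supply)), 1), "TWh/yr")
```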

  4. Asymmetric Bimodal Exponential Power Distribution on the Real Line

    Directory of Open Access Journals (Sweden)

    Mehmet Niyazi Çankaya

    2018-01-01

    Full Text Available The asymmetric bimodal exponential power (ABEP) distribution is an extension of the generalized gamma distribution to the real line, obtained by adding two parameters that fit the shape of peakedness in bimodality on the real line. For special values of the peakedness parameters, the distribution is a combination of half Laplace and half normal distributions on the real line. The distribution has two parameters fitting the height of bimodality, so the capacity for bimodality is enhanced by using these parameters. A skewness parameter is added to model asymmetry in data. The location-scale form of this distribution is proposed. The Fisher information matrix of these parameters in ABEP is obtained explicitly. Properties of ABEP are examined. Real data examples are given to illustrate the modelling capacity of ABEP. Replicated artificial data from maximum likelihood estimates of the parameters of ABEP and of other distributions having an artificial data generation procedure are provided to test the similarity with real data. A brief simulation study is presented.

  5. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity.

    Directory of Open Access Journals (Sweden)

    James D Englehardt

    Full Text Available Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study.

  6. Spatial distribution

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger

    2008-01-01

    Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, they are strongly dependent on the biology and ecology of the species, and vary over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly…

  7. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on how a Distribution Automation (DA) system enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements

  8. Asymptotic distribution of products of sums of independent random ...

    Indian Academy of Sciences (India)

    Suitably normalized products of sums of i.i.d. positive, integrable random variables (r.v.) are asymptotically log-normal. This fact is extended to the product of the partial sums of i.i.d. positive random variables.

  9. Effect of particle size distribution on permeability in the randomly packed porous media

    Science.gov (United States)

    Markicevic, Bojan

    2017-11-01

    The question of how porous-medium heterogeneity influences permeability remains unresolved: both increases and decreases in the permeability value have been reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum to minimum particle size ratio ranging from three to eight for the different distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where a uniform distribution of x-, y- and z-positions is found. The medium surface area remains essentially constant except for the bi-modal distribution, in which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in such a domain, and after checking for the pressure axial linearity, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher compared to the other media, which can be explained by the volumetric contribution of larger particles and larger passages for the fluid flow to take place.

  10. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    Science.gov (United States)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
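
    The standard way to exhibit a stretched exponential, a straight line of log P(X > x) against x^c, suggests a simple fit, sketched below on synthetic data; the sample and starting values are assumptions.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(9)
data = rng.weibull(0.6, 50_000)   # synthetic heavy-tailed sample

# empirical complementary CDF, P(X > x)
x = np.sort(data)
ccdf = 1.0 - np.arange(1, x.size + 1) / (x.size + 1)
mask = ccdf > 1e-4                # drop the noisiest extreme-tail points

def log_ccdf(x, c, x0):
    """Stretched exponential: log P(X > x) = -(x / x0)**c."""
    return -(x / x0) ** c

(c, x0), _ = optimize.curve_fit(log_ccdf, x[mask], np.log(ccdf[mask]), p0=[1.0, 1.0])
print(f"stretch exponent c = {c:.2f}, scale x0 = {x0:.2f}")
```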

  11. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1,0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1,0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four trimmed smallest values from the conceptual sample (TL-moments [4,0]) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.

  12. Distributed creativity

    DEFF Research Database (Denmark)

    Glaveanu, Vlad Petre

    This book challenges the standard view that creativity comes only from within an individual by arguing that creativity also exists ‘outside’ of the mind or, more precisely, that the human mind extends through the means of action into the world. The notion of ‘distributed creativity’ is not commonly used within the literature and yet it has the potential to revolutionise the way we think about creativity, from how we define and measure it to what we can practically do to foster and develop creativity. Drawing on cultural psychology, ecological psychology and advances in cognitive science, this book offers a basic framework for the study of distributed creativity that considers three main dimensions of creative work: sociality, materiality and temporality. Starting from the premise that creativity is distributed between people, between people and objects and across time, the book reviews

  13. Distributed systems

    CERN Document Server

    Van Steen, Maarten

    2017-01-01

    For this third edition of "Distributed Systems," the material has been thoroughly revised and extended, integrating principles and paradigms into nine chapters: 1. Introduction 2. Architectures 3. Processes 4. Communication 5. Naming 6. Coordination 7. Replication 8. Fault tolerance 9. Security A separation has been made between basic material and more specific subjects. The latter have been organized into boxed sections, which may be skipped on first reading. To assist in understanding the more algorithmic parts, example programs in Python have been included. The examples in the book leave out many details for readability, but the complete code is available through the book's Website, hosted at www.distributed-systems.net.

  14. Adhesively bonded joints composed of pultruded adherends: Considerations at the upper tail of the material strength statistical distribution

    Energy Technology Data Exchange (ETDEWEB)

    Vallee, T.; Keller, Th. [Ecole Polytech Fed Lausanne, CCLab, CH-1015 Lausanne, (Switzerland); Fourestey, G. [Ecole Polytech Fed Lausanne, IACS, Chair Modeling and Sci Comp, CH-1015 Lausanne, (Switzerland); Fournier, B. [CEA SACLAY ENSMP, DEN, DANS, DMN, SRMA, LC2M, F-91191 Gif Sur Yvette, (France); Correia, J.R. [Univ Tecn Lisbon, Inst Super Tecn, Civil Engn and Architecture Dept, P-1049001 Lisbon, (Portugal)

    2009-07-01

    The Weibull distribution, used to describe the scaling of strength of materials, has been verified on a wide range of materials and geometries; however, the quality of the fit tends to deteriorate toward the upper tail. Based on a previously developed probabilistic strength prediction method for adhesively bonded joints composed of pultruded glass fiber-reinforced polymer (GFRP) adherends, where it was verified that a two-parameter Weibull probabilistic distribution was not able to accurately model the upper tail of a material strength distribution, different improved probabilistic distributions were compared to enhance the quality of strength predictions. The following probabilistic distributions were examined: a two-parameter Weibull (as a reference), an m-fold Weibull, a grafted distribution, a Birnbaum-Saunders distribution and a generalized lambda distribution. The generalized lambda distribution turned out to be the best analytical approximation for the strength data, providing a good fit to the experimental data and leading to more accurate joint strength predictions than the original two-parameter Weibull distribution. It was found that proper modeling of the upper tail leads to a noticeable increase in the quality of the predictions. (authors)
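
    A minimal sketch of the reference case, assuming synthetic strength data: fit a two-parameter Weibull with scipy and compare empirical and modeled quantiles in the upper tail, where the abstract reports the fit degrading.

```python
# Sketch: fit a two-parameter Weibull to strength data and inspect the
# upper tail. Synthetic data stand in for measured adherend strengths.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
strength = rng.weibull(8.0, size=200) * 300.0 + rng.normal(0, 5, 200)

shape, loc, scale = stats.weibull_min.fit(strength, floc=0)  # 2-parameter fit
for q in (0.90, 0.95, 0.99):
    emp = np.quantile(strength, q)
    mod = stats.weibull_min.ppf(q, shape, loc=loc, scale=scale)
    print(f"q={q:.2f}  empirical={emp:7.1f}  Weibull={mod:7.1f}")
```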

  15. Adhesively bonded joints composed of pultruded adherends: Considerations at the upper tail of the material strength statistical distribution

    International Nuclear Information System (INIS)

    Vallee, T.; Keller, Th.; Fourestey, G.; Fournier, B.; Correia, J.R.

    2009-01-01

    The Weibull distribution, used to describe the scaling of strength of materials, has been verified on a wide range of materials and geometries; however, the quality of the fit tends to deteriorate toward the upper tail. Based on a previously developed probabilistic strength prediction method for adhesively bonded joints composed of pultruded glass fiber-reinforced polymer (GFRP) adherends, where it was verified that a two-parameter Weibull probabilistic distribution was not able to accurately model the upper tail of a material strength distribution, different improved probabilistic distributions were compared to enhance the quality of strength predictions. The following probabilistic distributions were examined: a two-parameter Weibull (as a reference), an m-fold Weibull, a grafted distribution, a Birnbaum-Saunders distribution and a generalized lambda distribution. The generalized lambda distribution turned out to be the best analytical approximation for the strength data, providing a good fit to the experimental data and leading to more accurate joint strength predictions than the original two-parameter Weibull distribution. It was found that proper modeling of the upper tail leads to a noticeable increase in the quality of the predictions. (authors)

  16. A two-parameter extension of classical nucleation theory

    Science.gov (United States)

    Lutsko, James F.; Durán-Olivencia, Miguel A.

    2015-06-01

    A two-variable stochastic model for diffusion-limited nucleation is developed using a formalism derived from fluctuating hydrodynamics. The model is a direct generalization of the standard classical nucleation theory (CNT). The nucleation rate and pathway are calculated in the weak-noise approximation and are shown to be in good agreement with direct numerical simulations for the weak-solution/strong-solution transition in globular proteins. We find that CNT underestimates the time needed for the formation of a critical cluster by two orders of magnitude and that this discrepancy is due to the more complex dynamics of the two-variable model and not, as is often assumed, a result of errors in the estimation of the free energy barrier.

  17. A two-parameter extension of classical nucleation theory

    International Nuclear Information System (INIS)

    Lutsko, James F; Durán-Olivencia, Miguel A

    2015-01-01

    A two-variable stochastic model for diffusion-limited nucleation is developed using a formalism derived from fluctuating hydrodynamics. The model is a direct generalization of the standard classical nucleation theory (CNT). The nucleation rate and pathway are calculated in the weak-noise approximation and are shown to be in good agreement with direct numerical simulations for the weak-solution/strong-solution transition in globular proteins. We find that CNT underestimates the time needed for the formation of a critical cluster by two orders of magnitude and that this discrepancy is due to the more complex dynamics of the two-variable model and not, as is often assumed, a result of errors in the estimation of the free energy barrier. (paper)

  18. Chiral Recognition by Fluorescence: One Measurement for Two Parameters

    Directory of Open Access Journals (Sweden)

    Shanshan Yu

    2014-01-01

    Full Text Available This outlook describes two strategies to simultaneously determine the enantiomeric composition and concentration of a chiral substrate by a single fluorescence measurement. One strategy utilizes a pseudoenantiomeric sensor pair that is composed of a 1,1′-bi-2-naphthol-based amino alcohol and a partially hydrogenated 1,1′-bi-2-naphthol-based amino alcohol. These two molecules have opposite chiral configurations and show fluorescence enhancement at two different emission wavelengths when treated with the enantiomers of mandelic acid. Using the sum and difference of the fluorescence intensities at the two wavelengths allows simultaneous determination of both the concentration and the enantiomeric composition of the chiral acid. The other strategy employs a 1,1′-bi-2-naphthol-based trifluoromethyl ketone that exhibits fluorescence enhancement at two emission wavelengths upon interaction with a chiral diamine. One emission responds mostly to the concentration of the chiral diamine, and the ratio of the two emissions depends on the chiral configuration of the enantiomer but is independent of the concentration, allowing both the concentration and enantiomeric composition of the chiral diamine to be simultaneously determined. These strategies would significantly simplify the practical application of enantioselective fluorescent sensors in high-throughput chiral assays.
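
    The "one measurement, two parameters" idea can be illustrated with a toy linear response model (an assumption for illustration, not the authors' calibration): if the two emission channels respond in a mirrored way to the two enantiomers, the sum and difference of the intensities recover the total concentration and the enantiomeric excess.

```python
# Toy illustration: mirrored linear channel responses to the R and S
# enantiomers let a single two-channel measurement yield both the total
# concentration and the enantiomeric excess via a 2x2 solve.
import numpy as np

a, b = 1.0, 0.3          # assumed sensitivities to "own"/"opposite" enantiomer
cR, cS = 0.7, 0.3        # true concentrations (to be recovered)

I1 = a * cR + b * cS     # intensity at emission wavelength 1
I2 = b * cR + a * cS     # intensity at emission wavelength 2

c_total = (I1 + I2) / (a + b)            # sum -> total concentration
ee = (I1 - I2) / ((a - b) * c_total)     # difference -> enantiomeric excess
print(c_total, ee)       # 1.0 and 0.4 = (cR - cS)/(cR + cS)
```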

  19. Bubbling and bistability in two parameter discrete systems

    Indian Academy of Sciences (India)

    The characteristics of two-parameter 1-d maps that exhibit bubbling and bistability are analyzed; the onset of bistability and the higher-order bistability points in the parameter plane are marked.

  20. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
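
    The curve-fitting comparison reported here can be sketched as follows; the counts are synthetic, and the three candidate fits (exponential, linear, quadratic) are scored by the coefficient of determination, mirroring the abstract's procedure in spirit only.

```python
# Sketch: compare exponential, linear and quadratic fits to a decaying
# boundary curve via the coefficient of determination.
import numpy as np

rng = np.random.default_rng(0)
score = np.arange(0, 30)                        # total depressive symptom score
counts = 2000.0 * np.exp(-0.18 * score) * rng.lognormal(0, 0.05, score.size)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# An exponential fit is a linear fit on a log scale.
b1, b0 = np.polyfit(score, np.log(counts), 1)
fits = {
    "exponential": np.exp(b0 + b1 * score),
    "linear": np.polyval(np.polyfit(score, counts, 1), score),
    "quadratic": np.polyval(np.polyfit(score, counts, 2), score),
}
for name, yhat in fits.items():
    print(name, round(r_squared(counts, yhat), 4))
```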

  1. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2016-10-01

    Full Text Available Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items. The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  2. Distribution of genotype network sizes in sequence-to-structure genotype-phenotype maps.

    Science.gov (United States)

    Manrubia, Susanna; Cuesta, José A

    2017-04-01

    An essential quantity to ensure evolvability of populations is the navigability of the genotype space. Navigability, understood as the ease with which alternative phenotypes are reached, relies on the existence of sufficiently large and mutually attainable genotype networks. The size of genotype networks (e.g. the number of RNA sequences folding into a particular secondary structure or the number of DNA sequences coding for the same protein structure) is astronomically large in all functional molecules investigated: an exhaustive experimental or computational study of all RNA folds or all protein structures becomes impossible even for moderately long sequences. Here, we analytically derive the distribution of genotype network sizes for a hierarchy of models which successively incorporate features of increasingly realistic sequence-to-structure genotype-phenotype maps. The main feature of these models relies on the characterization of each phenotype through a prototypical sequence whose sites admit a variable fraction of letters of the alphabet. Our models interpolate between two limit distributions: a power-law distribution, when the ordering of sites in the prototypical sequence is strongly constrained, and a lognormal distribution, as suggested for RNA, when different orderings of the same set of sites yield different phenotypes. Our main result is the qualitative and quantitative identification of those features of sequence-to-structure maps that lead to different distributions of genotype network sizes. © 2017 The Author(s).

  3. Fitting Statistical Distributions Functions on Ozone Concentration Data at Coastal Areas

    International Nuclear Information System (INIS)

    Muhammad Yazid Nasir; Nurul Adyani Ghazali; Muhammad Izwan Zariq Mokhtar; Norhazlina Suhaimi

    2016-01-01

    Ozone is known as one of the pollutants that contribute to the air pollution problem. Therefore, it is important to carry out studies on ozone. The objective of this study is to find the best statistical distribution for ozone concentration. Three distributions, namely the inverse Gaussian, Weibull and lognormal, were chosen to fit one year of hourly average ozone concentration data from 2010 at Port Dickson and Port Klang. The maximum likelihood estimation (MLE) method was used to estimate the parameters and to develop the probability density function (PDF) and cumulative density function (CDF) graphs. Three performance indicators (PI), namely the normalized absolute error (NAE), prediction accuracy (PA), and coefficient of determination (R²), were used to assess the goodness of fit of the distributions. Results show that the Weibull distribution is the best distribution, with the smallest error measure (NAE) of 0.08 at Port Klang and 0.31 at Port Dickson. It also scores the highest adequacy measure (PA: 0.99), with R² values of 0.98 (Port Klang) and 0.99 (Port Dickson). These results provide useful information to local authorities for prediction purposes. (author)
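
    A rough reconstruction of the fitting pipeline, with synthetic hourly data standing in for the measurements: MLE fits of the three candidate distributions followed by a normalized-absolute-error comparison. The NAE formula below is one common definition and an assumption here.

```python
# Sketch: MLE fits of the three candidate distributions to hourly ozone
# data and an NAE comparison on the fitted quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ozone = rng.weibull(2.2, 8760) * 0.03 + 1e-4     # synthetic hourly means, ppm-scale

candidates = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "inverse gaussian": stats.invgauss,
}
probs = (np.arange(1, ozone.size + 1) - 0.5) / ozone.size
obs = np.sort(ozone)
for name, dist in candidates.items():
    params = dist.fit(ozone, floc=0)             # MLE with location fixed at 0
    pred = dist.ppf(probs, *params)              # predicted quantiles
    nae = np.sum(np.abs(pred - obs)) / np.sum(obs)   # assumed NAE definition
    print(f"{name:17s} NAE = {nae:.3f}")
```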

  4. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, like the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
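
    The transform-and-smooth idea can be sketched as follows. The two-component lognormal mixture, its parameter values, and the bandwidth are placeholders, and the beta kernel is Chen's (1999) estimator, which may differ in detail from the authors' choice.

```python
# Sketch: map claims through a mixture CDF to [0, 1], smooth there with a
# beta kernel, and back-transform to a claims density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
claims = np.concatenate([rng.lognormal(7.0, 0.5, 800),
                         rng.lognormal(9.0, 0.8, 200)])

# Assume the parsimonious mixture has already been fitted; these values
# are placeholders matching the simulation above.
w, (m1, s1), (m2, s2) = 0.8, (7.0, 0.5), (9.0, 0.8)

def F(x):   # mixture CDF
    return (w * stats.lognorm.cdf(x, s1, scale=np.exp(m1))
            + (1 - w) * stats.lognorm.cdf(x, s2, scale=np.exp(m2)))

def f(x):   # mixture pdf
    return (w * stats.lognorm.pdf(x, s1, scale=np.exp(m1))
            + (1 - w) * stats.lognorm.pdf(x, s2, scale=np.exp(m2)))

u = F(claims)          # data mapped to the unit interval
h = 0.02               # bandwidth; placeholder for the paper's bandwidth rule

def beta_kernel_pdf(t):
    """Chen-style beta-kernel density estimate at point(s) t in (0, 1)."""
    t = np.atleast_1d(t)[:, None]
    return stats.beta.pdf(u[None, :], t / h + 1, (1 - t) / h + 1).mean(axis=1)

x = np.linspace(500.0, 30000.0, 5)
density = beta_kernel_pdf(F(x)) * f(x)   # back-transformed claims density
print(density)
```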

  5. Distributed propulsion.

    OpenAIRE

    Lindström, Robin; Rosvall, Tobias

    2013-01-01

    A performance analysis was carried out on a SAAB 2000 as a reference object. Various methods for powering aircraft in a more environmentally friendly way were evaluated together with distributed propulsion. After this investigation, electric motors together with zinc-air batteries were chosen to power the SAAB 2000 with distributed propulsion. A performance analysis was carried out on this aircraft in the same way as for the original SAAB 2000. The results were compared, and the conclusion was that the range was too short for the configuration to …

  6. Quasihomogeneous distributions

    CERN Document Server

    von Grudzinski, O

    1991-01-01

    This is a systematic exposition of the basics of the theory of quasihomogeneous (in particular, homogeneous) functions and distributions (generalized functions). A major theme is the method of taking quasihomogeneous averages. It serves as the central tool for the study of the solvability of quasihomogeneous multiplication equations and of quasihomogeneous partial differential equations with constant coefficients. Necessary and sufficient conditions for solvability are given. Several examples are treated in detail, among them the heat and the Schrödinger equation. The final chapter is devoted to quasihomogeneous wave front sets and their application to the description of singularities of quasihomogeneous distributions, in particular to quasihomogeneous fundamental solutions of the heat and of the Schrödinger equation.

  7. Distributed SLAM

    Science.gov (United States)

    Binns, Lewis A.; Valachis, Dimitris; Anderson, Sean; Gough, David W.; Nicholson, David; Greenway, Phil

    2002-07-01

    Previously, we have developed techniques for Simultaneous Localization and Map Building based on the augmented state Kalman filter. Here we report the results of experiments conducted over multiple vehicles each equipped with a laser range finder for sensing the external environment, and a laser tracking system to provide highly accurate ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location, and to distribute this process over several vehicles. We have constructed an on-line, distributed implementation to demonstrate the principle. In this paper we describe the system architecture, the nature of the experimental set up, and the results obtained. These are compared with the estimated ground truth. We show that distributed SLAM has a clear advantage in the sense that it offers a potential super-linear speed-up over single vehicle SLAM. In particular, we explore the time taken to achieve a given quality of map, and consider the repeatability and accuracy of the method. Finally, we discuss some practical implementation issues.

  8. Multiscale probability distribution of pressure fluctuations in fluidized beds

    International Nuclear Information System (INIS)

    Ghasemi, Fatemeh; Sahimi, Muhammad; Reza Rahimi Tabar, M; Peinke, Joachim

    2012-01-01

    Analysis of flow in fluidized beds, a common chemical reactor, is of much current interest due to its fundamental as well as industrial importance. Experimental data for the successive increments of the pressure fluctuations time series in a fluidized bed are analyzed by computing a multiscale probability density function (PDF) of the increments. The results demonstrate the evolution of the shape of the PDF from the short to long time scales. The deformation of the PDF across time scales may be modeled by the log-normal cascade model. The results are also in contrast to the previously proposed PDFs for the pressure fluctuations that include a Gaussian distribution and a PDF with a power-law tail. To understand better the properties of the pressure fluctuations, we also construct the shuffled and surrogate time series for the data and analyze them with the same method. It turns out that long-range correlations play an important role in the structure of the time series that represent the pressure fluctuation. (paper)

  9. THE MASS DISTRIBUTION OF STELLAR-MASS BLACK HOLES

    International Nuclear Information System (INIS)

    Farr, Will M.; Sravan, Niharika; Kalogera, Vicky; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya

    2011-01-01

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov chain Monte Carlo calculations, we model the mass distribution both parametrically (as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution) and non-parametrically (as histograms with varying numbers of bins). We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a 'gap' between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M_1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 M_sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 M_sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution.

  10. The aerosol distribution in Europe derived with the Community Multiscale Air Quality (CMAQ) model: comparison to near surface in situ and sunphotometer measurements

    Directory of Open Access Journals (Sweden)

    V. Matthias

    2008-09-01

    Full Text Available The aerosol distribution in Europe was simulated with the Community Multiscale Air Quality (CMAQ) model system version 4.5 for the years 2000 and 2001. The results were compared with daily averages of PM10 measurements taken in the framework of EMEP and with aerosol optical depth (AOD) values measured within AERONET. The modelled total aerosol mass is typically about 30–60% lower than the corresponding measurements. However, a comparison of the chemical composition of the aerosol revealed a considerably better agreement between the modelled and the measured aerosol components for ammonium, nitrate and sulfate, which are on average only 15–20% underestimated. Slightly worse agreement was found for sea salt, which was only available at two sites. The largest discrepancies result from the aerosol mass which was not chemically specified by the measurements. The agreement between measurements and model is better in winter than in summer. The modelled organic aerosol mass is higher in summer than in winter but it is significantly underestimated by the model. This could be one of the main reasons for the discrepancies between measurements and model results. The other is that primary coarse particles are underestimated in the emissions. The probability distribution function of the PM10 measurements follows a log-normal distribution at most sites. The model is only able to reproduce this distribution function at non-coastal low altitude stations. The AOD derived from the model results is 20–70% lower than the values observed within AERONET. This is mainly attributed to the missing aerosol mass in the model. The day-to-day variability of the AOD and the log-normal distribution functions are quite well reproduced by the model. The seasonality, on the other hand, is underestimated by the model results because better agreement is achieved in winter.

  11. Distributional Replication

    OpenAIRE

    Beare, Brendan K.

    2009-01-01

    Suppose that X and Y are random variables. We define a replicating function to be a function f such that f(X) and Y have the same distribution. In general, the set of replicating functions for a given pair of random variables may be infinite. Suppose we have some objective function, or cost function, defined over the set of replicating functions, and we seek to estimate the replicating function with the lowest cost. We develop an approach to estimating the cheapest replicating function that i...

  12. Mail distribution

    CERN Multimedia

    2007-01-01

    Please note that starting from 1 March 2007, the mail distribution and collection times will be modified for the following buildings: 6, 8, 9, 10, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 29, 69, 40, 70, 101, 102, 109, 118, 152, 153, 154, 155, 166, 167, 169, 171, 174, 261, 354, 358, 576, 579 and 580. Complementary Information on the new times will be posted on the entry doors and left in the mail boxes of each building. TS/FM Group

  13. Distribution switchgear

    CERN Document Server

    Stewart, Stan

    2004-01-01

    Switchgear plays a fundamental role within the power supply industry. It is required to isolate faulty equipment, divide large networks into sections for repair purposes, reconfigure networks in order to restore power supplies and control other equipment. This book begins with the general principles of the switchgear function and leads on to discuss topics such as interruption techniques, fault level calculations, switching transients and electrical insulation, making this an invaluable reference source. Solutions to practical problems associated with distribution switchgear are also included.

  14. Transition in the waiting-time distribution of price-change events in a global socioeconomic system

    Science.gov (United States)

    Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.

    2013-12-01

    The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes, in particular the waiting times in some collective dynamical system, is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair's distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders' decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.
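
    A sketch of the two-regime diagnostic, assuming synthetic waiting times: fit a lognormal to the full sample, then estimate the tail exponent of long waits with a Hill estimator (one standard choice; the authors' estimator is not specified in the abstract).

```python
# Sketch: lognormal fit to waiting times plus a Hill estimate of the
# power-law tail exponent for the long-waiting-time regime.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
waits = rng.lognormal(mean=-1.0, sigma=1.2, size=50000)   # seconds between changes

shape, loc, scale = stats.lognorm.fit(waits, floc=0)
print("lognormal mu, sigma:", np.log(scale), shape)

# Hill estimator over the largest k observations: for a pdf tail ~ x^-(alpha+1),
# the survival-function exponent alpha is 1 / mean(log(x_i / x_min)).
k = 500
xs = np.sort(waits)
tail, xmin = xs[-k:], xs[-k - 1]
alpha = 1.0 / np.mean(np.log(tail / xmin))
print("tail exponent (pdf):", alpha + 1)
```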

  15. Spatial distribution and vertical variation of arsenic in Guangdong soil profiles, China

    International Nuclear Information System (INIS)

    Zhang, H.H.; Yuan, H.X.; Hu, Y.G.; Wu, Z.F.; Zhu, L.A.; Zhu, L.; Li, F.B.; LI, D.Q.

    2006-01-01

    A total of 260 soil profiles were analyzed to investigate the arsenic spatial distribution and vertical variation in Guangdong province. The arsenic concentration followed an approximately lognormal distribution. The arsenic geometric mean concentration of 10.4 mg/kg is higher than that of China as a whole. An upper baseline concentration of 23.4 mg/kg was estimated for surface soils. The influence of soil properties on arsenic concentration was not important. Arsenic spatial distributions presented similar patterns, with high arsenic concentrations mainly located in limestone and sandshale areas, indicating that the soil arsenic distribution depended more on bedrock properties than on anthropogenic inputs. Moreover, from the A- to the C-horizon, arsenic geometric mean concentrations had an increasing tendency, from 10.4 and 10.7 to 11.3 mg/kg. This vertical variation may be related to the lower soil organic matter and to soil degradation and erosion. Consequently, the soil arsenic export into surface and ground waters would reach 1040 t year⁻¹ in the study area. Soil arsenic export is a potential threat to the water quality of the study area.

  16. Uranium distribution in mined deposits and in the earth's crust. Final report

    International Nuclear Information System (INIS)

    Deffeyes, K.; MacGregor, I.

    1978-08-01

    Examination of both the global distribution of uranium in various geological units and the distribution of uranium ore grades mined in the U.S. shows that both distributions can be described by a single lognormal curve. The slope of that distribution indicates approximately a 300-fold increase in the amount of uranium contained for each 10-fold decrease in ore grade. Dividing up the U.S. production by depth zones, by geologic setting, by mineralogical types, by geographic regions, and by deposit thicknesses shows substantially the same 300-fold increase in contained uranium for each 10-fold decrease in ore grade. Lieberman's (1976) analysis of uranium discoveries as an exponentially declining function of the feet of borehole drilled was extended. The analysis, in current dollars and also in constant-value dollars, using exploration expenditures and acreage leases as well as drilling effort, shows that a wide range of estimates results. The conclusion suggests that the total uranium available in the 300 to 800 part-per-million range will expand through byproduct and coproduct mining of uranium, through increased exploitation of low-grade ores in known areas, and through the exploration of terrains which historically never produced high-grade ores. These sources of uranium (coupled with efficient reactors like the heavy-water reactors) could postpone the economic need for mining 100 part-per-million deposits, and the need for the breeder reactor and fuel reprocessing, well into the next century

  17. Temporal distribution of earthquakes using renewal process in the Dasht-e-Bayaz region

    Science.gov (United States)

    Mousavi, Mehdi; Salehi, Masoud

    2018-01-01

    The temporal distribution of earthquakes with Mw > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models. In models of this type, it is assumed that the times between consecutive large earthquakes follow a certain statistical distribution. For this purpose, four time-dependent inter-event distributions, the Weibull, gamma, lognormal, and Brownian passage time (BPT) distributions, are used in this study, and the associated parameters are estimated using the method of maximum likelihood estimation. The most suitable distribution is selected based on the log-likelihood function and the Bayesian information criterion. The probability of the occurrence of the next large earthquake during a specified interval of time was calculated for each model. Then, the concept of conditional probability was applied to forecast the next major (Mw > 6) earthquake at the site of interest. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the results obtained, the probability of occurrence of an earthquake with Mw > 6 in the near future is significantly high.
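
    The model-selection and forecasting steps can be sketched generically as below. The inter-event times are made up, the BPT model is represented by scipy's inverse Gaussian family (to which it corresponds), and the conditional probability uses the standard renewal formula P(t0 < T ≤ t0 + Δt | T > t0).

```python
# Sketch: fit candidate renewal models to inter-event times, rank them by
# BIC, and compute the conditional probability of an event within the
# next dt years given an elapsed time t0.
import numpy as np
from scipy import stats

inter_event = np.array([12.3, 8.7, 21.5, 15.2, 9.8, 30.1, 18.4, 11.0])  # years (made up)
n = inter_event.size

models = {
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "BPT (inv. Gaussian)": stats.invgauss,
}

best = None
for name, dist in models.items():
    params = dist.fit(inter_event, floc=0)        # location fixed at 0
    loglik = np.sum(dist.logpdf(inter_event, *params))
    n_free = len(params) - 1                      # location was fixed
    bic = n_free * np.log(n) - 2.0 * loglik
    print(f"{name:20s} logL={loglik:7.2f}  BIC={bic:6.2f}")
    if best is None or bic < best[0]:
        best = (bic, dist, params)

# Conditional probability of an event in (t0, t0 + dt] given quiescence to t0.
_, dist, params = best
t0, dt = 10.0, 5.0
p = (dist.cdf(t0 + dt, *params) - dist.cdf(t0, *params)) / dist.sf(t0, *params)
print("P(event within 5 yr | 10 yr elapsed) =", round(p, 3))
```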

  18. Millimeter-wave Line Ratios and Sub-beam Volume Density Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, Adam K.; Gallagher, Molly [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Usero, Antonio [Observatorio Astronmico Nacional (IGN), C/Alfonso XII, 3, E-28014 Madrid (Spain); Schruba, Andreas [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstraße 1, D-85748 Garching (Germany); Bigiel, Frank [Institute für theoretische Astrophysik, Zentrum für Astronomie der Universität Heidelberg, Albert-Ueberle Str. 2, D-69120 Heidelberg (Germany); Kruijssen, J. M. Diederik; Schinnerer, Eva [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstraße 12-14, D-69120 Heidelberg (Germany); Kepley, Amanda [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Blanc, Guillermo A. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Bolatto, Alberto D. [Department of Astronomy, Laboratory for Millimeter-wave Astronomy, and Joint Space Institute, University of Maryland, College Park, MD 20742 (United States); Cormier, Diane; Jiménez-Donaire, Maria J. [Max Planck Institute für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Hughes, Annie [CNRS, IRAP, 9 av. du Colonel Roche, BP 44346, F-31028 Toulouse cedex 4 (France); Rosolowsky, Erik [Department of Physics, University of Alberta, Edmonton, AB (Canada)

    2017-02-01

    We explore the use of mm-wave emission line ratios to trace molecular gas density when observations integrate over a wide range of volume densities within a single telescope beam. For observations targeting external galaxies, this case is unavoidable. Using a framework similar to that of Krumholz and Thompson, we model emission for a set of common extragalactic lines from lognormal and power law density distributions. We consider the median density of gas that produces emission and the ability to predict density variations from observed line ratios. We emphasize line ratio variations because these do not require us to know the absolute abundance of our tracers. Patterns of line ratio variations have the potential to illuminate the high-end shape of the density distribution, and to capture changes in the dense gas fraction and median volume density. Our results with and without a high-density power law tail differ appreciably; we highlight better knowledge of the probability density function (PDF) shape as an important area. We also show the implications of sub-beam density distributions for isotopologue studies targeting dense gas tracers. Differential excitation often implies a significant correction to the naive case. We provide tabulated versions of many of our results, which can be used to interpret changes in mm-wave line ratios in terms of adjustments to the underlying density distributions.

  19. Spatial Distribution of Soil Fauna In Long Term No Tillage

    Science.gov (United States)

    Corbo, J. Z. F.; Vieira, S. R.; Siqueira, G. M.

    2012-04-01

    The soil is a complex system constituted by living beings, organic and mineral particles, whose components define its physical, chemical and biological properties. Soil fauna plays an important role in soil and may reflect and interfere in its functionality. These organisms' populations may be influenced by management practices, fertilization, liming and porosity, among others. Such changes may reduce the composition and distribution of the soil fauna community. Thus, this study aimed to determine the spatial variability of soil fauna in a consolidated no-tillage system. The experimental area is located at the Instituto Agronômico in Campinas (São Paulo, Brazil). The sampling was conducted in a Rhodic Eutrudox under a no-tillage system, and 302 points distributed over a 3.2-hectare area in a regular grid of 10.00 m x 10.00 m were sampled. The soil fauna was sampled with the "pitfall trap" method, and the traps remained in the area for seven days. Data were analyzed using descriptive statistics to determine the main statistical moments (mean, variance, coefficient of variation, standard deviation, skewness and kurtosis). Geostatistical tools were used to determine the spatial variability of the attributes using the experimental semivariogram. For the biodiversity analysis, the Shannon and Pielou indices and the richness were calculated for each sample. Geostatistics proved to be a great tool for mapping the spatial variability of groups of the epigeal soil fauna. The family Formicidae proved to be the most abundant and dominant in the study area. The descriptive statistics showed that all attributes studied had a lognormal frequency distribution for the groups of the epigeal soil fauna. The exponential model was the best suited to the obtained data, both for the groups of the epigeal soil fauna (Acari, Araneae, Coleoptera, Formicidae and Coleoptera larva) and for the other biodiversity indices. The sampling scheme (10.00 m x 10.00 m) was not sufficient to detect the spatial …

  20. Distributions of component failure rates estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter gamma distributions. In this study, a more complicated distributional form is considered, a mixture of gammas. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single gamma distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution. 9 refs

  1. Universality in the tail of musical note rank distribution

    Science.gov (United States)

    Beltrán del Río, M.; Cocho, G.; Naumis, G. G.

    2008-09-01

    Although power laws have been used to fit rank distributions in many different contexts, they usually fail at the tails. Languages as sequences of symbols have been a popular subject for ranking distributions, and for this purpose music can be treated as such. Here we show that more than 1800 musical compositions are very well fitted by the two-parameter beta distribution of the first kind, which arises in the ranking of multiplicative stochastic processes. The parameters a and b are obtained for classical, jazz and rock music, revealing interesting features. In particular, we obtained a clear trend in the values of the parameters for major and minor tonal modes. Finally, we discuss the distribution of notes for each octave and its connection with the ranking of the notes.
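
    A sketch of fitting this two-parameter rank form, often written f(r) = A(N + 1 - r)^b / r^a; the rank profile is synthetic, and the linearized least-squares route is one convenient choice, not necessarily the authors'.

```python
# Sketch: least-squares fit of the two-parameter beta-like rank form
# f(r) = A * (N + 1 - r)**b / r**a on a synthetic rank profile.
import numpy as np

N = 88                                   # e.g. number of distinct pitches
r = np.arange(1, N + 1)
a_true, b_true = 0.9, 0.4
freq = 1000.0 * (N + 1 - r) ** b_true / r ** a_true

# Linearize: log f = log A + b*log(N+1-r) - a*log r, then solve least squares.
X = np.column_stack([np.ones(N), np.log(N + 1 - r), -np.log(r)])
coef, *_ = np.linalg.lstsq(X, np.log(freq), rcond=None)
logA, b_hat, a_hat = coef
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")   # recovers (0.9, 0.4)
```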

  2. Topology Counts: Force Distributions in Circular Spring Networks

    Science.gov (United States)

    Heidemann, Knut M.; Sageman-Furnas, Andrew O.; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F.; Wardetzky, Max

    2018-02-01

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  3. Topology Counts: Force Distributions in Circular Spring Networks.

    Science.gov (United States)

    Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max

    2018-02-09

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  4. Distributions of component failure rates, estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter γ distributions. In this study, a more complicated distributional form is considered, a mixture of γs. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single γ distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution

  5. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechano-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model incurs large errors. By contrast, a variety of engine failure modes can be taken into account with a mixed Weibull distribution model, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
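
    A minimal sketch of a two-component mixed Weibull fit by direct likelihood maximization; the dynamic-weight and correlation-coefficient refinements described in the abstract are not reproduced here, and the failure times are synthetic.

```python
# Sketch: maximum-likelihood fit of a two-component mixed Weibull model,
# pdf(t) = w*f1(t) + (1-w)*f2(t), by minimizing the negative log-likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
t = np.concatenate([rng.weibull(1.5, 300) * 100.0,     # early failure mode
                    rng.weibull(4.0, 700) * 400.0])    # wear-out mode

def nll(theta):
    w = 1.0 / (1.0 + np.exp(-theta[0]))                # logit keeps w in (0, 1)
    k1, s1, k2, s2 = np.exp(theta[1:])                 # log keeps shapes/scales > 0
    pdf = (w * stats.weibull_min.pdf(t, k1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(t, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

x0 = [0.0, np.log(1.0), np.log(150.0), np.log(3.0), np.log(350.0)]
res = optimize.minimize(nll, x0, method="Nelder-Mead",
                        options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
w = 1.0 / (1.0 + np.exp(-res.x[0]))
print("weight:", round(w, 3), "shape/scale params:", np.exp(res.x[1:]).round(2))
```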

  6. Pricing American Asian options with higher moments in the underlying distribution

    Science.gov (United States)

    Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng

    2009-01-01

    We develop a modified Edgeworth binomial model with higher moment consideration for pricing American Asian options. With lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] if the number of the time steps increases. If the underlying distribution displays negative skewness and leptokurtosis as often observed for stock index returns, our estimates can work better than those in Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31]. The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.

  7. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
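
    A sketch of approximating an FDC with the four-parameter kappa distribution, using scipy's kappa4 and synthetic daily flows; the paper's regional fitting procedure is not reproduced.

```python
# Sketch: fit the four-parameter kappa distribution to daily flows and
# compare modeled and empirical flow duration curve points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
flows = rng.lognormal(mean=2.0, sigma=1.0, size=1000)    # surrogate daily flows

params = stats.kappa4.fit(flows)                         # (h, k, loc, scale) by MLE

exceed = np.array([0.01, 0.10, 0.50, 0.90, 0.99])        # exceedance probabilities
empirical = np.quantile(flows, 1 - exceed)
modeled = stats.kappa4.ppf(1 - exceed, *params)
for p, e, m in zip(exceed, empirical, modeled):
    print(f"exceeded {p:5.0%} of days: empirical={e:8.2f}  kappa4={m:8.2f}")
```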

  8. Characterization of tropical precipitation using drop size distribution and rain rate-radar reflectivity relation

    Science.gov (United States)

    Das, Saurabh; Maitra, Animesh

    2018-04-01

    Characterization of precipitation is important for proper interpretation of rain information from remotely sensed data. Rain attenuation and radar reflectivity (Z) depend directly on the drop size distribution (DSD). The relation between radar reflectivity/rain attenuation and rain rate (R) varies widely depending upon the origin, topography, and drop evolution mechanism, and requires further understanding of the precipitation characteristics. The present work utilizes 2 years of concurrent measurements of DSD using a ground-based disdrometer at five diverse climatic locations in the Indian subcontinent and explores the possibility of rain classification based on the microphysical characteristics of precipitation. It is observed that the gamma and lognormal distributions perform almost equally well for the Indian region, with one model performing marginally better than the other depending upon the location. It has also been found that the shape-slope relationship of the gamma distribution can be a good indicator of rain type. The Z-R relation, Z = AR^b, is found to vary widely for different precipitation systems, with convective rain having higher values of A than stratiform rain at two locations, whereas the reverse is observed at the other three. Further, the results indicate that the majority of rainfall (>50%) in the Indian region is due to convective rain, although the occurrence time of convective rain is low (<10%).
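
    Estimating the Z-R coefficients from paired (R, Z) samples reduces to a linear fit in log-log space, as sketched below with synthetic values.

```python
# Sketch: estimate the Z-R coefficients A and b by linear regression in
# log-log space, since Z = A * R**b implies log Z = log A + b * log R.
import numpy as np

rng = np.random.default_rng(8)
R = rng.lognormal(1.0, 0.8, 500)                   # rain rate, mm/h
Z = 250.0 * R ** 1.4 * rng.lognormal(0, 0.1, 500)  # reflectivity with scatter

b, logA = np.polyfit(np.log(R), np.log(Z), 1)
print(f"A = {np.exp(logA):.1f}, b = {b:.2f}")      # near (250, 1.4)
```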

  9. Handbook of distribution

    International Nuclear Information System (INIS)

    Mo, In Gyu

    1992-01-01

    This book tells of business strategy and distribution innovation, purpose of intelligent distribution, intelligent supply distribution, intelligent production distribution, intelligent sale distribution software for intelligence and future and distribution. It also introduces component technology keeping intelligent distribution such as bar cord, OCR, packing, and intelligent auto-warehouse, system technology, and cases in America, Japan and other countries.

  10. Modeling the Hydrological Cycle in the Atmosphere of Mars: Influence of a Bimodal Size Distribution of Aerosol Nucleation Particles

    Science.gov (United States)

    Shaposhnikov, Dmitry S.; Rodin, Alexander V.; Medvedev, Alexander S.; Fedorova, Anna A.; Kuroda, Takeshi; Hartogh, Paul

    2018-02-01

    We present a new implementation of the hydrological cycle scheme in a general circulation model of the Martian atmosphere. The model includes a semi-Lagrangian transport scheme for water vapor and ice and accounts for the microphysics of phase transitions between them. The hydrological scheme includes processes of saturation, nucleation, particle growth, sublimation, and sedimentation under the assumption of a variable size distribution. The scheme has been implemented in the Max Planck Institute Martian general circulation model and tested assuming monomodal and bimodal lognormal distributions of ice condensation nuclei. We present a comparison of the simulated annual variations and the horizontal and vertical distributions of water vapor and ice clouds with the available observations from instruments on board Mars orbiters. Accounting for the bimodality of the aerosol particle distribution improves the simulations of the annual hydrological cycle, including the predicted ice cloud mass, opacity, number density, and particle radii. The increased number density and lower nucleation rates bring the simulated cloud opacities closer to observations. Simulations show a weak effect of the excess of small aerosol particles on the simulated water vapor distributions.
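
    A bimodal lognormal number-size distribution of the kind assumed for the condensation nuclei can be written down directly; the mode parameters below are illustrative, not those of the Martian GCM.

```python
# Sketch: a bimodal lognormal number-size distribution, n(r) = sum of
# two lognormal modes.
import numpy as np

def lognormal_mode(r, n_tot, r_m, sigma_g):
    """dN/dln r for one lognormal mode with median radius r_m."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(r / r_m) ** 2 / (2 * np.log(sigma_g) ** 2)))

r = np.logspace(-2, 1, 200)                       # radius, micrometers
n = lognormal_mode(r, 100.0, 0.05, 1.8) + lognormal_mode(r, 1.0, 0.7, 2.0)
print(r[np.argmax(n)])                            # radius of the dominant mode
```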

  11. A measurement of the turbulence-driven density distribution in a non-star-forming molecular cloud

    Energy Technology Data Exchange (ETDEWEB)

    Ginsburg, Adam; Darling, Jeremy [CASA, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Federrath, Christoph, E-mail: Adam.G.Ginsburg@gmail.com [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Vic 3800 (Australia)

    2013-12-10

    Molecular clouds are supersonically turbulent. This turbulence governs the initial mass function and the star formation rate. In order to understand the details of star formation, it is therefore essential to understand the properties of turbulence, in particular the probability distribution of density in turbulent clouds. We present H2CO volume density measurements of a non-star-forming cloud along the line of sight toward W49A. We use these measurements in conjunction with total mass estimates from 13CO to infer the shape of the density probability distribution function. This method is complementary to measurements of turbulence via the column density distribution and should be applicable to any molecular cloud with detected CO. We show that turbulence in this cloud is probably compressively driven, with a compressive-to-total Mach number ratio b = M_C/M > 0.4. We measure the standard deviation of the density distribution, constraining it to the range 1.5 < σ_s < 1.9, assuming that the density is lognormally distributed. This measurement represents an essential input into star formation laws. The method of averaging over different excitation conditions to produce a model of emission from a turbulent cloud is generally applicable to optically thin line observations.

  12. Aerosol Size Distributions During ACE-Asia: Retrievals From Optical Thickness and Comparisons With In-situ Measurements

    Science.gov (United States)

    Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.

    2002-12-01

    As part of the ACE-Asia experiment, conducted off the coast of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths, within the 354-1558 nm wavelength range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about the particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and multimodal retrieval. In the constrained linear inversion algorithm, no assumption about the mathematical form of the distribution to be retrieved is made. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of the mode radii and geometric standard deviations. The mode amplitudes are varied to obtain the best fit of the sum of the optical thicknesses due to the individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of the retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and a differential mobility analyzer system aboard the Twin Otter aircraft.
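
    The multimodal technique amounts to a nonnegative linear inversion, sketched below with hypothetical kernels; real kernels would come from Mie calculations for the predefined lognormal modes.

```python
# Sketch: represent tau(lambda) as a nonnegative linear combination of
# per-mode optical-thickness kernels and solve with NNLS.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(0.354, 1.558, 13)          # AATS-14 channels, micrometers

# Hypothetical kernels: fine modes fall off faster with wavelength.
exponents = np.array([3.0, 2.0, 1.0, 0.3])           # one pseudo-Angstrom slope per mode
K = wavelengths[:, None] ** -exponents[None, :]      # (13 channels x 4 modes)

true_amp = np.array([0.02, 0.0, 0.08, 0.03])
tau = K @ true_amp * (1 + np.random.default_rng(9).normal(0, 0.01, 13))

amp, resid = nnls(K, tau)
print("mode amplitudes:", amp.round(3), " residual:", round(resid, 4))
```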

  13. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  14. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.
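
    For reference, the unmodified maximum likelihood estimators that such modifications start from have a closed form. A minimal sketch on simulated data (not the paper's data):

        import numpy as np

        def power_mle(x):
            # closed-form MLE for f(x) = g * x**(g - 1) / t**g, 0 < x <= t
            x = np.asarray(x, dtype=float)
            t_hat = x.max()                           # scale: sample max
            g_hat = x.size / np.log(t_hat / x).sum()  # shape
            return g_hat, t_hat

        # quick check on simulated data, using X = t * U**(1/g)
        rng = np.random.default_rng(1)
        g, t = 2.5, 4.0
        print(power_mle(t * rng.uniform(size=5000) ** (1.0 / g)))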

  15. Quantifying the distribution of paste-void spacing of hardened cement paste using X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Tae Sup, E-mail: taesup@yonsei.ac.kr [School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-749 (Korea, Republic of); Kim, Kwang Yeom, E-mail: kimky@kict.re.kr [Korea Institute of Construction Technology, 283 Goyangdae-ro, Ilsanseo-gu, Goyang, 411-712 (Korea, Republic of); Choo, Jinhyun, E-mail: jinhyun@stanford.edu [Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305 (United States); Kang, Dong Hun, E-mail: timeriver@naver.com [School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-749 (Korea, Republic of)

    2012-11-15

    The distribution of paste-void spacing in cement-based materials is an important feature related to the freeze-thaw durability of these materials, but its reliable estimation remains an unresolved problem. Herein, we evaluate the capability of X-ray computed tomography (CT) for reliable quantification of the distribution of paste-void spacing. Using X-ray CT images of three mortar specimens having different air-entrainment characteristics, we calculate the distributions of paste-void spacing of the specimens by applying previously suggested methods for deriving the exact spacing of air-void systems. This methodology is assessed by comparing the 95th percentile of the cumulative distribution function of the paste-void spacing with spacing factors computed by applying the linear-traverse method to the 3D air-void system and reconstructing the equivalent air-void distribution in 3D. Results show that the distributions of equivalent void diameter and paste-void spacing follow lognormal and normal distributions, respectively, and the ratios between the 95th percentile paste-void spacing value and the spacing factors reside within the ranges reported by previous numerical studies. This experimental finding indicates that the distribution of paste-void spacing quantified using X-ray CT has the potential to be the basis for a statistical assessment of the freeze-thaw durability of cement-based materials. - Highlights: • The paste-void spacing in 3D can be quantified by X-ray CT. • The distribution of the paste-void spacing follows a normal distribution. • The spacing factor and the 95th percentile of the CDF of paste-void spacing are correlated.

  16. Multiphase flow modeling of a crude-oil spill site with a bimodal permeability distribution

    Science.gov (United States)

    Dillard, Leslie A.; Essaid, Hedeff I.; Herkelrath, William N.

    1997-01-01

    Fluid saturation, particle-size distribution, and porosity measurements were obtained from 269 core samples collected from six boreholes along a 90-m transect at a subregion of a crude-oil spill site, the north pool, near Bemidji, Minnesota. The oil saturation data, collected 11 years after the spill, showed an irregularly shaped oil body that appeared to be affected by sediment spatial variability. The particle-size distribution data were used to estimate the permeability (k) and retention curves for each sample. An additional 344 k estimates were obtained from samples previously collected at the north pool. The 613 k estimates followed a bimodal lognormal distribution, with the two population distributions corresponding to the two predominant lithologies: a coarse glacial outwash deposit and fine-grained interbedded lenses. A two-step geostatistical approach was used to generate a conditioned realization of k representing the bimodal heterogeneity. A cross-sectional multiphase flow model was used to simulate the flow of oil and water in the presence of air along the north pool transect for an 11-year period. The inclusion of a representation of the bimodal aquifer heterogeneity was crucial for reproduction of general features of the observed oil body. If the bimodal heterogeneity was characterized, hysteresis did not have to be incorporated into the model, because a hysteretic effect was produced by the sediment spatial variability. By revising the relative permeability functional relation, an improved reproduction of the observed oil saturation distribution was achieved. The inclusion of water table fluctuations in the model did not significantly affect the simulated oil saturation distribution.
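
    A bimodal lognormal permeability model of this kind can be fit as a two-component Gaussian mixture on ln k. A small sketch on synthetic data; the means, spreads and mode sizes are assumptions chosen only to mimic a two-lithology sample of 613 values:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        # synthetic stand-in for 613 ln(k) estimates from two lithologies
        lnk = np.concatenate([rng.normal(-25.0, 0.8, 450),   # coarse outwash
                              rng.normal(-29.0, 1.0, 163)])  # fine lenses

        gm = GaussianMixture(n_components=2, random_state=0).fit(lnk.reshape(-1, 1))
        for w, m, v in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
            print("weight %.2f  mean ln k %.2f  sd %.2f" % (w, m, v ** 0.5))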

  17. Effects of statistical distribution of joint trace length on the stability of tunnel excavated in jointed rock mass

    Directory of Open Access Journals (Sweden)

    Kayvan Ghorbani

    2015-12-01

    The rock masses at the construction site of an underground cavern are generally not continuous, due to the presence of discontinuities such as bedding, joints, faults, and fractures. The performance of an underground cavern is principally governed by the mechanical behavior of the discontinuities in the vicinity of the cavern. During underground excavation, many surrounding rock failures are closely related to joints. The study of tunnel stability in jointed rock mass is of importance to rock engineering, especially tunneling and underground space development. In this study, using negative exponential, log-normal and normal probability density functions for the joint trace length, we investigated the effect of joint trace length on stability parameters such as stress and displacement of a tunnel constructed in rock mass, using UDEC (Universal Distinct Element Code). We found that a normally distributed joint trace length is the most critical for tunnel stability, while the exponential distribution has the least effect on tunnel stability of the three distribution functions.

  18. Microfracture spacing distributions and the evolution of fracture patterns in sandstones

    Science.gov (United States)

    Hooker, J. N.; Laubach, S. E.; Marrett, R.

    2018-03-01

    Natural fracture patterns in sandstone were sampled using scanning electron microscope-based cathodoluminescence (SEM-CL) imaging. All fractures are opening-mode and are fully or partially sealed by quartz cement. Most sampled fractures are too small to be height-restricted by sedimentary layers. At very low strains, large (n > 100) datasets show spacings that are best fit by log-normal size distributions, compared to exponential, power law, or normal distributions. The clustering of fractures suggests that the locations of natural fractures are not determined by a random process. To investigate natural fracture localization, we reconstructed the opening history of a cluster of fractures within the Huizachal Group in northeastern Mexico, using fluid inclusions from synkinematic cements and thermal-history constraints. The largest fracture, which is the only fracture in the cluster visible to the naked eye, among 101 present, opened relatively late in the sequence. This result suggests that the growth of sets of fractures is a self-organized process, in which small, initially isolated fractures grow and progressively interact, with preferential growth of a subset of fractures developing at the expense of growth of the rest. Size-dependent sealing of fractures within sets suggests that synkinematic cementation may contribute to fracture clustering.

  19. Airborne methane remote measurements reveal heavy-tail flux distribution in Four Corners region.

    Science.gov (United States)

    Frankenberg, C.

    2016-12-01

    Methane (CH4) impacts climate as the second strongest anthropogenic greenhouse gas and affects air quality by influencing tropospheric ozone levels. Space-based observations have identified the Four Corners region in the Southwest United States as an area of large CH4 enhancements. We conducted an airborne campaign in Four Corners during April 2015 with the next-generation Airborne Visible/Infrared Imaging Spectrometer (near-infrared) and the Hyperspectral Thermal Emission Spectrometer (thermal infrared) imaging spectrometers to better understand the source of methane by measuring methane plumes at 1- to 3-m spatial resolution. Our analysis detected more than 250 individual methane plumes from fossil fuel harvesting, processing, and distributing infrastructures, spanning an emission range from the detection limit (~2 to 5 kg/h) through ~5,000 kg/h. Observed sources include gas processing facilities, storage tanks, pipeline leaks, natural seeps and well pads, as well as a coal mine venting shaft. Overall, plume enhancements and inferred fluxes follow a lognormal distribution, with the top 10% of emitters contributing 49 to 66% of the inferred total point source flux of 0.23 Tg/y to 0.39 Tg/y. We will summarize the campaign results and provide an overview of how airborne remote sensing can be used to detect and infer methane fluxes over widespread geographic areas and how new instrumentation could be used to perform similar observations from space.
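
    Under a lognormal flux model, the share of total emission contributed by sources above the p-th quantile is 1 − Φ(Φ⁻¹(p) − σ), where σ is the log-space standard deviation. The reported 49-66% top-decile share then corresponds to σ roughly between 1.25 and 1.7:

        from scipy.stats import norm

        def top_share(p, sigma):
            # fraction of the total flux from sources above the p-th
            # quantile of a lognormal with log-space std dev sigma
            return 1.0 - norm.cdf(norm.ppf(p) - sigma)

        for sigma in (1.0, 1.25, 1.7):
            print(sigma, round(top_share(0.90, sigma), 2))  # 0.39, 0.49, 0.66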

  20. Methods for obtaining distributions of uranium occurrence from estimates of geologic features

    International Nuclear Information System (INIS)

    Ford, C.E.; McLaren, R.A.

    1980-04-01

    The problem addressed in this paper is the determination of a quantitative estimate of a resource from estimates of fundamental variables which describe the resource. Due to uncertainty about the estimates, these basic variables are stochastic. The evaluation of random equations involving these variables is the core of the analysis process. The basic variables are originally described in terms of a low and a high percentile (the 5th and 95th, for example) and a central value (the mode, mean or median). The variable thus described is then generally assumed to be represented by a three-parameter lognormal distribution. Expressions involving these variables are evaluated by computing the first four central moments of the random functions (which are usually products and sums of variables). Stochastic independence is discussed. From the final set of moments a Pearson distribution is obtained; the high values of skewness and kurtosis resulting from uranium data require obtaining Pearson curves beyond those described in published tables. A cubic spline solution to the Pearson differential equation accomplishes this task. A sample problem is used to illustrate the application of the process; sensitivity to the estimated values of the basic variables is discussed. Appendices contain details of the methods and descriptions of computer programs

  2. Natural background radiation and population dose distribution in India

    International Nuclear Information System (INIS)

    Nambi, K.S.V.; Bapat, V.N.; David, M.; Sundaram, V.K.; Sunta, C.M.; Soman, S.D.

    1986-01-01

    A country-wide survey of the outdoor natural background gamma radiation levels has been made using mailed thermoluminescent dosimeters (TLDs). The salient features of the results are: (1) The air-kerma levels and the population doses in various states follow log-normal and normal distributions respectively. (2) The national average value for the air dose (air-kerma) is 775 ± 370 (1σ) μGy/y. (3) The lowest air-kerma recorded is 0.23 mGy/y at Minicoy (Laccadive Islands) and the highest is 26.73 mGy/y at Chavra (monazite areas, Kerala). (4) There are significant temporal variations (even as high as ±40 per cent) of the background radiation level at many locations, and in at least 10 locations where radon/thoron measurements are available, these could be associated with the seasonal variations in radon/thoron levels. (5) The mail control TLDs indicate a country-wide average value of 785 ± 225 μGy/y for the air-kerma, which can be considered to provide a truly national average value for the natural background radiation level in India. (6) The mean natural radiation dose per caput for the country works out to be 690 ± 200 (1σ) μSv/y. (7) The natural radiation dose per caput seems to be maximum for Andhra Pradesh (1065 ± 325 μSv/y) and minimum for Maharashtra (370 ± 80 μSv/y). (8) The population dose from the external natural background radiation is estimated to be half a million person-sievert. (9) Assuming the ICRP risk factor, it can be estimated that just one out of the 43 cancer deaths occurring on average per 100,000 population in India can be attributed to the external natural background radiation. (author). 18 refs., 13 tabs., 9 figs

  3. Temporal and spatial distribution of high energy electrons at Jupiter

    Science.gov (United States)

    Jun, I.; Garrett, H. B.; Ratliff, J. M.

    2003-04-01

    Measurements of the high energy, omni-directional electron environment by the Galileo spacecraft Energetic Particle Detector (EPD) were used to study the high energy electron environment in the Jovian magnetosphere, especially in the region between 8 and 18 Rj (1 Rj = 1 Jovian radius = 71,400 km). 10-minute averages of the EPD data collected between Jupiter orbit insertion (JOI) in 1995 and orbit number 33 (I33) in 2002 form an extensive dataset, which has been extremely useful for observing temporal and spatial variability of the Jovian high energy electron environment. The count rates of the EPD electron channels (0.174, 0.304, 0.527, 1.5, 2.0, and 11 MeV) were grouped into 0.5 Rj or 0.5 L bins and analyzed statistically. The results indicate that: (1) a log-normal (Gaussian in the logarithm) distribution well describes the statistics of the high energy electron environment (for example, electron differential fluxes) in the Jovian magnetosphere, in the region studied here; (2) the high energy electron environments inferred from the Galileo EPD measurements are in close agreement with the data obtained using the Divine model, which was developed more than 30 years ago from Pioneer 10, 11 and Voyager 1, 2 data; (3) the data are better organized when plotted against the magnetic radial parameter L than against Rj; (4) the standard deviations of the 0.174, 0.304, and 0.527 MeV channel count rates are larger than those of the 1.5, 2.0, and 11 MeV count rates inside 12 Rj. These observations are very helpful for understanding the short-term, long-term, and local variability of the Jovian high energy electron environment, and are discussed in detail.

  4. Determination of material distribution in heading process of small bimetallic bar

    Science.gov (United States)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. Riveting a bimetallic object, however, creates new conditions in the riveting process. In the analyzed example it is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interactions of Plastic Deformations, and a method for its determination are proposed. The parameter is determined on the basis of two-parameter stress-strain curves and is a function of these parameters and the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the selection of a pair of materials that achieves the desired distribution.

  5. Sunflower petals: Some physical properties and modeling distribution of their number, dimensions, and mass

    Directory of Open Access Journals (Sweden)

    Amir Hossein Mirzabe

    2018-06-01

    The sunflower petal is one of the parts of the sunflower that has drawn attention and has several applications these days. These applications justify gathering information about physical properties, mechanical properties, drying trends, etc., in order to design new machines and use new methods to harvest or dry sunflower petals. For three varieties of sunflower, the picking force of petals was measured; the number of petals on each head was counted; the unit mass and 1000-unit mass of fresh petals were measured; and the length, width, and projected area of fresh petals were calculated based on an image processing technique. Frequency distributions of these parameters were modeled using statistical distribution models, namely Gamma, Generalized Extreme Value (G. E. V), Lognormal, and Weibull. Results for picking force showed that with an increasing number of days after the appearance of the first petal on each head (from 5 to 14) and a decreasing loading rate (from 150 g min−1 to 50 g min−1), values of picking force decreased for all three varieties, but the diameter of the sunflower head had different effects on picking force for each variety. Length, width, and number of petals of the Dorsefid variety ranged from 38.52 to 95.44 mm, 3.80 to 9.28 mm and 29 to 89, respectively. The corresponding values ranged from 34.19 to 88.18 mm, 4.28 to 10.60 mm and 21 to 89 for the Shamshiri variety, and from 44.47 to 114.63 mm, 7.03 to 20.31 mm and 29 to 89 for the Sirena variety. Results of frequency distribution modeling indicated that in most cases, the G. E. V and Weibull distributions performed better than the other distributions. Keywords: Sunflower (Helianthus annuus L.) petal, Picking force, Image processing, Fibonacci sequence, Lucas sequence

  6. A methodology to quantify the stochastic distribution of friction coefficient required for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon; Lesch, Mary F

    2008-11-01

    The required friction coefficient is defined as the minimum friction needed at the shoe and floor interface to support human locomotion. The available friction is the maximum friction coefficient that can be supported without a slip at the shoe and floor interface. A statistical model was recently introduced to estimate the probability of slip and fall incidents by comparing the available friction with the required friction, assuming that both the available and required friction coefficients have stochastic distributions. This paper presents a methodology to investigate the stochastic distributions of the required friction coefficient for level walking. In this experiment, a walkway with a layout of three force plates was specially designed in order to capture a large number of successful strikes without causing fatigue in participants. The required coefficient of friction data of one participant, who repeatedly walked on this walkway under four different walking conditions, is presented as an example demonstrating the methodology examined in this paper. The results of the Kolmogorov-Smirnov goodness-of-fit test indicated that the required friction coefficient generated from each foot and walking condition by this participant appears to fit the normal, log-normal or Weibull distributions, with few exceptions. Among these three distributions, the normal distribution appears to fit all the data generated with this participant. The average number of successful strikes per walk achieved with the three force plates in this experiment was 2.49, ranging from 2.14 to 2.95 across walking conditions. The methodology and layout of the experimental apparatus presented in this paper are suitable for application to a full-scale study.
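
    The distribution comparison described here can be reproduced with standard tools. A sketch on synthetic required-COF data (the sample values are invented; note that estimating parameters from the same sample makes the K-S p-values approximate):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        rcof = rng.normal(0.18, 0.03, 120)   # stand-in for per-strike COF values

        for name in ("norm", "lognorm", "weibull_min"):
            dist = getattr(stats, name)
            params = dist.fit(rcof)
            d, p = stats.kstest(rcof, name, args=params)
            # caveat: fitting and testing on the same sample makes these
            # p-values approximate (a Lilliefors-type correction is needed)
            print("%-12s D = %.3f  p = %.3f" % (name, d, p))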

  7. THE X-RAY FLUX DISTRIBUTION OF SAGITTARIUS A* AS SEEN BY CHANDRA

    International Nuclear Information System (INIS)

    Neilsen, J.; Markoff, S.; Nowak, M. A.; Baganoff, F. K.; Dexter, J.; Witzel, G.; Barrière, N.; Li, Y.; Degenaar, N.; Fragile, P. C.; Gammie, C.; Goldwurm, A.; Grosso, N.; Haggard, D.

    2015-01-01

    We present a statistical analysis of the X-ray flux distribution of Sgr A* from the Chandra X-Ray Observatory's 3 Ms Sgr A* X-ray Visionary Project in 2012. Our analysis indicates that the observed X-ray flux distribution can be decomposed into a steady quiescent component, represented by a Poisson process with rate Q = (5.24 ± 0.08) × 10⁻³ counts s⁻¹, and a variable component, represented by a power law process (dN/dF ∝ F^(−ξ), ξ = 1.92, −0.02/+0.03). This slope matches our recently reported distribution of flare luminosities. The variability may also be described by a log-normal process with a median unabsorbed 2-8 keV flux of 1.8 (−0.6/+0.8) × 10⁻¹⁴ erg s⁻¹ cm⁻² and a shape parameter σ = 2.4 ± 0.2, but the power law provides a superior description of the data. In this decomposition of the flux distribution, all of the intrinsic X-ray variability of Sgr A* (spanning at least three orders of magnitude in flux) can be attributed to flaring activity, likely in the inner accretion flow. We confirm that at the faint end, the variable component contributes ∼10% of the apparent quiescent flux, as previously indicated by our statistical analysis of X-ray flares in these Chandra observations. Our flux distribution provides a new and important observational constraint on theoretical models of Sgr A*, and we use simple radiation models to explore the extent to which a statistical comparison of the X-ray and infrared can provide insights into the physics of the X-ray emission mechanism

  8. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even when as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
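
    A Monte Carlo estimate of the coincidence count distribution for two independent gamma renewal processes might look like the sketch below; the rates, gamma shape, coincidence window and trial count are arbitrary illustration values:

        import numpy as np

        rng = np.random.default_rng(4)

        def gamma_train(rate, shape, t_max):
            # spike times of a gamma renewal process with mean rate `rate`
            isi = rng.gamma(shape, 1.0 / (shape * rate), int(2 * rate * t_max) + 50)
            t = np.cumsum(isi)
            return t[t < t_max]

        def coincidences(t1, t2, width):
            # spikes in train 1 that have a train-2 spike within +/- width
            idx = np.searchsorted(t2, t1)
            lo = np.abs(t1 - t2[np.clip(idx - 1, 0, t2.size - 1)])
            hi = np.abs(t2[np.clip(idx, 0, t2.size - 1)] - t1)
            return int(np.sum(np.minimum(lo, hi) <= width))

        counts = np.array([coincidences(gamma_train(20.0, 4.0, 10.0),
                                        gamma_train(20.0, 4.0, 10.0), 0.002)
                           for _ in range(1000)])
        print("FF_c =", counts.var() / counts.mean())   # Fano factor of counts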

  9. Distribution functions of magnetic nanoparticles determined by a numerical inversion method

    International Nuclear Information System (INIS)

    Bender, P; Balceris, C; Ludwig, F; Posth, O; Bogart, L K; Szczerba, W; Castro, A; Nilsson, L; Costo, R; Gavilán, H; González-Alonso, D; Pedro, I de; Barquín, L Fernández; Johansson, C

    2017-01-01

    In the present study, we applied a regularized inversion method to extract the particle size, magnetic moment and relaxation-time distribution of magnetic nanoparticles from small-angle x-ray scattering (SAXS), DC magnetization (DCM) and AC susceptibility (ACS) measurements. For the measurements, the particles were colloidally dispersed in water. To a first approximation, the particles could be assumed to be spherically shaped and homogeneously magnetized single-domain particles. As model functions for the inversion, we used the particle form factor of a sphere (SAXS), the Langevin function (DCM) and the Debye model (ACS). The extracted distributions exhibited features/peaks that could be distinctly attributed to the individually dispersed and non-interacting nanoparticles. Further analysis of these peaks enabled, in combination with a prior characterization of the particle ensemble by electron microscopy and dynamic light scattering, a detailed structural and magnetic characterization of the particles. Additionally, all three extracted distributions featured peaks, which indicated deviations of the scattering (SAXS), magnetization (DCM) or relaxation (ACS) behavior from the one expected for individually dispersed, homogeneously magnetized nanoparticles. These deviations could be mainly attributed to partial agglomeration (SAXS, DCM, ACS), uncorrelated surface spins (DCM) and/or intra-well relaxation processes (ACS). The main advantage of the numerical inversion method is that no ad hoc assumptions regarding the line shape of the extracted distribution functions are required, which enabled the detection of these contributions. We highlighted this by comparing the results with the results obtained by standard model fits, where the functional form of the distributions was a priori assumed to be log-normal shaped. (paper)
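
    The DCM part of such an inversion can be sketched as a non-negative, Tikhonov-regularized least-squares problem over a grid of trial moments, with the Langevin function as the kernel. Everything below (field range, moment grid, noise level, regularization strength) is an assumed illustration, not the authors' setup:

        import numpy as np
        from scipy.optimize import nnls

        kB, T = 1.380649e-23, 300.0
        B = np.linspace(0.002, 1.0, 40)          # field grid [T] (assumed)
        mu = np.logspace(-20, -17.5, 60)         # trial moments [A m^2]

        def langevin(x):
            return 1.0 / np.tanh(x) - 1.0 / x

        A = langevin(np.outer(B, mu) / (kB * T))     # kernel L(mu B / kB T)

        # synthetic "DCM measurement" from a lognormal moment distribution
        w_true = np.exp(-0.5 * ((np.log(mu) - np.log(3e-19)) / 0.5) ** 2)
        w_true /= w_true.sum()
        m_obs = A @ w_true + np.random.default_rng(5).normal(0.0, 2e-3, B.size)

        # Tikhonov regularisation folded into non-negative least squares:
        # min ||A w - m||^2 + lam^2 ||w||^2  subject to  w >= 0
        lam = 0.05
        w_est, _ = nnls(np.vstack([A, lam * np.eye(mu.size)]),
                        np.concatenate([m_obs, np.zeros(mu.size)]))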

  10. Time distributions of recurrences of immunogenic and nonimmunogenic tumors following local irradiation

    International Nuclear Information System (INIS)

    Suit, H.D.; Sedlacek, R.; Fagundes, L.; Goitein, M.; Rothman, K.J.

    1978-01-01

    Three hundred and fourteen mice received single-dose irradiation of the right leg and thigh as treatment of an 8-mm mammary carcinoma isotransplant, and were then observed until death, usually by 1000 days. The time distributions of death due to local recurrence, radiation-induced sarcoma, distant metastasis in the absence of local regrowth, second primary, intercurrent disease, and unknown causes have been evaluated. The times for the transplant tumor inoculum to grow to an 8-mm tumor and the times of death due to local regrowth, distant metastasis, or induced tumor were all approximately log-normally distributed. Of the 128 recurrences, the latest-appearing 3 were at 300, 323, and 436 days; no recurrences were noted during the time period from 436 to 1000 days. These findings have been interpreted to mean that in some cases absolute cure of mice of the tumor in the leg was achieved by radiation alone at the dose levels employed. Radiation-induced sarcomas began to appear after 300 days. The time of appearance of the radiation-induced tumors was inversely related to radiation dose. Similar data for an immunogenic fibrosarcoma show that recurrences appeared earlier and were more closely bunched with respect to time than the recurrences of the mammary carcinoma. The time distribution of the development of radiation-induced tumors in non-tumor-bearing animals was also approximately log-normally distributed; the slope of the time distribution curve was the same as that for radiation-induced tumors in mice which had been treated for tumor

  11. Bimodal Nanoparticle Size Distributions Produced by Laser Ablation of Microparticles in Aerosols

    International Nuclear Information System (INIS)

    Nichols, William T.; Malyavanatham, Gokul; Henneke, Dale E.; O'Brien, Daniel T.; Becker, Michael F.; Keto, John W.

    2002-01-01

    Silver nanoparticles were produced by laser ablation of a continuously flowing aerosol of microparticles in nitrogen at varying laser fluences. Transmission electron micrographs were analyzed to determine the effect of laser fluence on the nanoparticle size distribution. These distributions exhibited bimodality, with a large number of particles in a mode at small sizes (3-6 nm) and a second, less populated mode at larger sizes (11-16 nm). Both modes shifted to larger sizes with increasing laser fluence, with the small size mode shifting by 35% and the larger size mode by 25% over a fluence range of 0.3-4.2 J/cm². Size histograms for each mode were found to be well represented by log-normal distributions. The distribution of mass displayed a striking shift from the large to the small size mode with increasing laser fluence. These results are discussed in terms of a model of nanoparticle formation from two distinct laser-solid interactions. Initially, laser vaporization of material from the surface leads to condensation of nanoparticles in the ambient gas. Material evaporation occurs until the plasma breakdown threshold of the microparticles is reached, generating a shock wave that propagates through the remaining material. Rapid condensation of the vapor in the low-pressure region occurs behind the traveling shock wave. Measurement of particle size distributions versus gas pressure in the ablation region, as well as versus microparticle feedstock size, confirmed the assignment of the larger size mode to surface-vaporization and the smaller size mode to shock-formed nanoparticles

  12. Vertical random variability of the distribution coefficient in the soil and its effect on the migration of fallout radionuclides

    International Nuclear Information System (INIS)

    Bunzl, K.

    2002-01-01

    In the field, the distribution coefficient, Kd, for the sorption of a radionuclide by the soil cannot be expected to be constant. Even in a well defined soil horizon, Kd will vary stochastically in the horizontal as well as the vertical direction around a mean value. While the horizontal random variability of Kd produces a pronounced tailing effect in the concentration depth profile of a fallout radionuclide, much less is known about the corresponding effect of the vertical random variability. To analyze this effect theoretically, the classical convection-dispersion model in combination with the random-walk particle method was applied. The concentration depth profile of a radionuclide was calculated one year after deposition, assuming constant values of the pore water velocity and the diffusion/dispersion coefficient, with the distribution coefficient either constant (Kd = 100 cm³ g⁻¹) or exhibiting a vertical random variability according to a log-normal distribution with a geometric mean of 100 cm³ g⁻¹ and a coefficient of variation of CV = 0.53. The results show that these two concentration depth profiles are only slightly different: the location of the peak is shifted somewhat upwards, and the dispersion of the concentration depth profile is slightly larger. A substantial tailing effect of the concentration depth profile is not perceivable. Especially with respect to the location of the peak, a very good approximation of the concentration depth profile is obtained if the arithmetic mean of the Kd values (Kd = 113 cm³ g⁻¹) and a slightly increased dispersion coefficient are used in the analytical solution of the classical convection-dispersion equation with constant Kd. The evaluation of an observed concentration depth profile with the analytical solution of the classical convection-dispersion equation with constant parameters will, within the usual experimental limits, hardly reveal the presence of a log-normal random distribution of Kd in the vertical direction.
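
    A minimal sketch of the random-walk particle method with a vertically varying, log-normally distributed Kd is given below. The transport numbers are illustration values only (not Bunzl's), but the Kd statistics mirror those quoted above (geometric mean 100, CV of 0.53, i.e. a log-space standard deviation near 0.5):

        import numpy as np

        rng = np.random.default_rng(6)

        # 1-D random-walk particle tracking, one year after deposition,
        # with a depth-dependent, log-normally distributed Kd
        v, D = 500.0, 100.0          # pore-water velocity [cm/yr], dispersion [cm2/yr]
        rho_b, theta = 1.4, 0.3      # bulk density [g/cm3], water content
        dt, steps, dz = 0.01, 100, 0.2
        kd_layers = rng.lognormal(np.log(100.0), 0.5, 200)  # GM 100, CV ~0.53

        z = np.zeros(20000)          # all particles start at the surface
        for _ in range(steps):
            layer = np.clip((z / dz).astype(int), 0, kd_layers.size - 1)
            R = 1.0 + rho_b * kd_layers[layer] / theta   # local retardation
            z += (v / R) * dt + rng.normal(0.0, np.sqrt(2.0 * D / R * dt))
            z = np.abs(z)            # reflect particles at the soil surface
        print("median profile depth: %.2f cm" % np.median(z))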

  13. Multilevel quadrature of elliptic PDEs with log-normal diffusion

    KAUST Repository

    Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus

    2015-01-01

    Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number

  14. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution, using as data points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to the original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates.
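
    The favored approach, fitting the parametric CDF to the empirical cumulative fractions at the interval bounds, can be sketched as follows (the interval edges and counts are from a hypothetical retention-time experiment):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        # hypothetical retention-time data: propagules recovered per
        # sampling interval (hours); edges are the interval boundaries
        edges = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0, 48.0])
        counts = np.array([5, 18, 25, 20, 22, 10])
        cum_p = np.cumsum(counts) / counts.sum()   # empirical CDF at upper bounds

        def ln_cdf(t, s, scale):
            return lognorm.cdf(t, s, scale=scale)

        popt, _ = curve_fit(ln_cdf, edges[1:], cum_p, p0=(1.0, 8.0))
        print("sigma = %.2f, median = %.1f h" % tuple(popt))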

  15. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important for estimating extreme values. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions (Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull) are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by statistical test criteria in frequency analysis. Therefore, the best plotting position formula which can be used to select the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE

  16. An exponential distribution

    International Nuclear Information System (INIS)

    Anon

    2009-01-01

    In this presentation, the author deals with the probabilistic evaluation of product life using the example of the exponential distribution. The exponential distribution is a special one-parameter case of the Weibull distribution.
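
    The relationship is easy to verify numerically: a Weibull distribution with shape parameter k = 1 collapses to the exponential law F(t) = 1 − exp(−t/s). A two-line check with scipy:

        from scipy.stats import expon, weibull_min

        # Weibull with shape c = 1 reduces to the exponential distribution
        print(expon(scale=2.0).cdf(3.0))               # 0.7768698...
        print(weibull_min(c=1.0, scale=2.0).cdf(3.0))  # identical value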

  17. Centrality dependence of baryon and meson momentum distributions in proton-nucleus collisions

    International Nuclear Information System (INIS)

    Hwa, Rudolph C.; Yang, C.B.

    2002-01-01

    The proton and neutron inclusive distributions in the projectile fragmentation region of pA collisions are studied in the valon model. Momentum degradation and flavor changes due to the nuclear medium are described at the valon level using two parameters. Particle production is treated by means of the recombination subprocess. The centrality dependences of the net proton and neutron spectra of the NA49 data are satisfactorily reproduced. The effective degradation length is determined to be 17 fm. Pion inclusive distributions can be calculated without any adjustable parameters

  18. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first one is concerned with the continuous distributions and their relations. The second one presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.

  19. Statistical Methods and Software for the Analysis of Occupational Exposure Data with Non-detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, EL

    2005-09-20

    Environmental exposure measurements are, in general, positive and may be subject to left censoring; i.e., the measured value is less than a "detection limit". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. Parametric methods used to determine acceptable levels of exposure are often based on a two-parameter lognormal distribution. The mean exposure level, an upper percentile, and the exceedance fraction are used to characterize exposure levels, and confidence limits are used to describe the uncertainty in these estimates. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left-censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left-censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on an upper percentile (i.e., the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known, but computational complexity has limited their use in routine data analysis with left-censored data. The recent development of the R environment for statistical data analysis and graphics has greatly enhanced the availability of high-quality nonproprietary (open source) software that serves as the basis for implementing the methods in this paper.
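
    The core maximum likelihood step for randomly left-censored lognormal data treats detected values through the normal density of ln X and non-detects through the normal CDF at the detection limit. A minimal sketch (in Python rather than the report's R, with invented data):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def censored_lognormal_mle(x, detected):
            # ML fit of a lognormal to left-censored data: `x` holds the
            # measured value where detected, the detection limit where not
            y = np.log(x)

            def nll(p):
                mu, s = p[0], np.exp(p[1])       # keep sigma positive
                ll = norm.logpdf(y[detected], mu, s).sum()
                ll += norm.logcdf(y[~detected], mu, s).sum()  # P(X < DL)
                return -ll

            res = minimize(nll, x0=(y.mean(), 0.0), method="Nelder-Mead")
            return res.x[0], np.exp(res.x[1])    # mu, sigma of ln X

        # invented example: about half the shift samples fall below a
        # detection limit of 0.05 (units arbitrary)
        rng = np.random.default_rng(7)
        x = rng.lognormal(-3.0, 0.9, 200)
        det = x >= 0.05
        print(censored_lognormal_mle(np.where(det, x, 0.05), det))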

  20. Critical assessment of the pore size distribution in the rim region of high burnup UO{sub 2} fuels

    Energy Technology Data Exchange (ETDEWEB)

    Cappia, F. [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, 76125 Karlsruhe (Germany); Department of Nuclear Engineering, Faculty of Mechanical Engineering, Technische Universität München, D-85748 Garching bei München (Germany); Pizzocri, D. [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, 76125 Karlsruhe (Germany); Nuclear Engineering Division, Energy Department, Politecnico di Milano, 20156 Milano (Italy); Schubert, A.; Van Uffelen, P.; Paperini, G.; Pellottiero, D. [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, 76125 Karlsruhe (Germany); Macián-Juan, R. [Department of Nuclear Engineering, Faculty of Mechanical Engineering, Technische Universität München, D-85748 Garching bei München (Germany); Rondinella, V.V., E-mail: Vincenzo.RONDINELLA@ec.europa.eu [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, 76125 Karlsruhe (Germany)

    2016-11-15

    A new methodology is introduced to analyse porosity data in the high burnup structure. Image analysis is coupled with the adaptive kernel density estimator to obtain a detailed characterisation of the pore size distribution, without an a priori assumption on the functional form of the distribution. Subsequently, stereological analysis is carried out. The method shows advantages compared to the classical histogram-based approach in terms of detail of the description and accuracy within the experimental limits. Results are compared to the approximation of a log-normal distribution. In the investigated local burnup range (80–200 GWd/tHM), the agreement of the two approaches is satisfactory. From the obtained total pore density and mean pore diameter as a function of local burnup, pore coarsening is observed starting from ≈100 GWd/tHM, in agreement with a previous investigation. - Highlights: • A new methodology to analyse porosity is introduced. • The method shows advantages compared to the histogram. • Pore density and mean diameter data vs. burnup are presented. • Pore coarsening is observed starting from ≈100 GWd/tHM.

  1. Signatures of self-assembly in size distributions of wood members in dam structures of Castor canadensis

    Directory of Open Access Journals (Sweden)

    David M. Blersch

    2014-12-01

    Beavers (Castor canadensis) construct dams on rivers throughout most of their historical range in North America, and their impact on water patterns in the landscape is considerable. Dam formation by beavers involves two processes: (1) intentional construction through the selection and placement of wood and sediment, which facilitates (2) the passive capture and accretion of suspended wood and sediment. The second process is a self-assembly mechanism that the beavers leverage by utilizing energy subsidies of watershed transport processes. The relative proportion of beaver activity to self-assembly processes in dam construction, however, is unknown. Here we show that lotic self-assembly processes account for a substantial portion of the work expended in beaver dam construction. We found, through comprehensive measurement of stick dimensions, that the distributions of diameter, length, and volume are log-normal. By noting evidence of teeth markings, we determined that size distributions skewed significantly larger for wood handled by beavers compared to wood that was not. Subsequent mass calculations suggest that beavers perform 50%–70% of the work of wood member placement for dam assembly, with riparian self-assembly processes contributing the remainder. Additionally, our results establish a benchmark for assessing the proportion of self-assembly work in similar riparian structures. Keywords: Beaver dam, Construction, Castor canadensis, Self-assembly, Distribution, Wood

  3. Reactor power distribution monitor

    International Nuclear Information System (INIS)

    Hoizumi, Atsushi.

    1986-01-01

    Purpose: To grasp the margin to the limit value of the power distribution peaking factor inside the reactor during operation by using the reactor power distribution monitor. Constitution: The monitor is composed of a 'constant' file (which stores in-reactor power distributions obtained from analysis), the TIP and thermocouples, a lateral output distribution calibrating apparatus, an axial output distribution synthesizer and a peaking factor synthesizer. The lateral output distribution calibrating apparatus performs calibration by comparing the power distribution obtained from the thermocouples with the power distribution obtained from the TIP, and then provides the lateral power distribution peaking factors. The axial output distribution synthesizer provides the axial power distribution peaking factors in accordance with the signals from the out-of-pile neutron flux detector. These axial and lateral power peaking factors are synthesized with high precision in a three-dimensional format and can be monitored at any time. (Kamimura, M.)

  4. Comparable Analysis of the Distribution Functions of Runup Heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku Area

    Science.gov (United States)

    Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.

    2012-04-01

    Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al., 2002]. The characteristics of the distribution functions derived from the runup-height data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations from the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.

  5. Comparable analysis of the distribution functions of runup heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku area

    Directory of Open Access Journals (Sweden)

    B. H. Choi

    2012-05-01

    Data from a field survey of the 2011 Tohoku-oki tsunami in the Sanriku area of Japan is used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated by a theoretical log-normal curve. The characteristics of the distribution functions of the 2011 event are compared with data from two previous catastrophic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large, which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and the number of observed runup heights suggested by Kajiura (1983) based on a small amount of data on previous tsunamis. The distribution function of the 2011 event demonstrates sensitivity to the number of measurements (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.

  6. Chemical Composition Based Aerosol Optical Properties According to Size Distribution and Mixture Types during Smog and Asian Dust Events in Seoul, Korea

    Science.gov (United States)

    Jung, Chang Hoon; Lee, Ji Yi; Um, Junshik; Lee, Seung Soo; Kim, Yong Pyo

    2018-02-01

    This study investigated the optical properties of aerosols involved in different meteorological events, including smog and Asian dust days. Carbonaceous components and inorganic species were measured in Seoul, Korea between 25 and 31 March 2012. Based on the measurements, the optical properties of aerosols were calculated by considering composition, size distribution, and mixing state of aerosols. To represent polydisperse size distributions of aerosols, a lognormal size distribution with a wide range of geometric mean diameters and geometric standard deviations was used. For the optical property calculations, the Mie theory was used to compute single-scattering properties of aerosol particles with varying size and composition. Analysis of the sampled data showed that the water-soluble components of organic matter increased on smog days, whereas crustal elements increased on dust days. The water content significantly influenced the optical properties of aerosols during the smog days as a result of high relative humidity and an increase in the water-soluble component. The absorption coefficients depended on the aerosol mixture type and the aerosol size distributions. Therefore, to improve our knowledge on radiative impacts of aerosols, especially the regional impacts of aerosols in East Asia, accurate measurements of aerosols, such as size distribution, composition, and mixture type, under different meteorological conditions are required.

  7. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with bivariate geometric marginals are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin's bivariate exponential, Downton's bivariate exponential and Hawkes' bivariate exponential, are presented.

  8. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of ME random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with information entropies and order statistics. Parameter estimation is carried out using the maximum likelihood procedure. Finally, we illustrate a real-data application of the distribution.

  9. RANGE AND DISTRIBUTION OF TECHNETIUM KD VALUES IN THE SRS SUBSURFACE ENVIRONMENT

    International Nuclear Information System (INIS)

    Kaplan, D.

    2008-01-01

    Performance assessments (PAs) are risk calculations used to estimate the amount of low-level radioactive waste that can be disposed of at DOE sites. Distribution coefficients (Kd values) are input parameters used in PA calculations to provide a measure of radionuclide sorption to sediment; the greater the Kd value, the greater the sorption and the slower the estimated movement of the radionuclide through sediment. Understanding and quantifying Kd value variability is important for estimating the uncertainty of PA calculations. Without this information, it is necessary to make overly conservative estimates about the possible limits of Kd values, which in turn may increase disposal costs. Finally, technetium is commonly found to be among the radionuclides posing potential risk at waste disposal locations because it is believed to be highly mobile in its anionic form (pertechnetate, TcO₄⁻), it exists in relatively high concentrations in SRS waste, and it has a long half-life (213,000 years). The objectives of this laboratory study were to determine, under SRS environmental conditions: (1) whether and to what extent TcO₄⁻ sorbs to sediments, (2) the range of Tc Kd values, (3) the distribution (normal or log-normal) of Tc Kd values, and (4) how strongly Tc sorbs to SRS sediments, through desorption experiments. Objective 3, to identify the Tc Kd distribution, is important because it provides a statistical description that influences stochastic modeling of estimated risk. The approach taken was to collect 26 sediments from a non-radioactive sediment core collected from E-Area, measure Tc Kd values and then perform statistical analysis to describe the measured Tc Kd values. The mean Kd value was 3.4 ± 0.5 mL/g and ranged from -2.9 to 11.2 mL/g. The data did not have a normal distribution (as defined by the Shapiro-Wilk statistic) and had a 95-percentile range of 2.4 to 4.4 mL/g. The E-Area subsurface is subdivided into three hydrostratigraphic units
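
    A normality check of this kind is a one-liner with scipy; the sample below is a synthetic stand-in for the 26 measured Kd values (its spread is an assumption consistent with the quoted range):

        import numpy as np
        from scipy.stats import shapiro

        rng = np.random.default_rng(8)
        kd = rng.normal(3.4, 2.6, 26)    # stand-in for the 26 Kd values [mL/g]
        W, p = shapiro(kd)
        print("W = %.3f, p = %.3f" % (W, p))  # p < 0.05 rejects normality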

  10. Environmental distributions and the practical utilisation of detection limited environment measurement data

    International Nuclear Information System (INIS)

    Daniels, W.M.; Higgins, N.A.

    2002-01-01

    This study investigated methods of providing summary statistics for measurements of radioactive contamination in food when the available measurements are incomplete. Several techniques are discussed for calculating, for instance, the mean level of contamination when a significant number of samples are reported only as less than the minimum level detectable by measurement. To support the estimation of summary statistics, the study identifies physical processes that give rise to the observed distributions, e.g. the lognormal for the range of radioactivity levels found in environmental and food samples. The improved estimates made possible by the methods reviewed will allow the Food Standards Agency to gain a better understanding of the levels of radioactivity in the environment and, if required, to direct effort towards minimising the most significant uncertainties in these estimates. The Ministry of Agriculture, Fisheries and Food, Radiological Safety and Nutrition Division (now part of the Food Standards Agency) funded this study under contract RP0342. The work was undertaken under the Environmental Assessments and Emergency Response Group's Quality Management System, which has been approved by Lloyd's Register Quality Assurance to the Quality Management Standards ISO 9001:2000 and TickIT Guide Issue 5, certificate number 956546. (author)
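
    One standard technique of this kind is maximum likelihood for a lognormal with left-censoring at the detection limit: non-detects contribute the CDF at the limit rather than a density term. A hedged sketch on synthetic data; the detection limit and parameters are hypothetical, not from the study.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(1)
    dl = 0.5                                      # hypothetical detection limit
    true = rng.lognormal(mean=-0.5, sigma=1.0, size=200)
    observed = np.where(true < dl, np.nan, true)  # NaN marks "<DL" results
    censored = np.isnan(observed)

    def negloglik(theta):
        mu, sigma = theta[0], abs(theta[1])
        # Detected values contribute the lognormal density;
        # non-detects contribute P(X < DL) = CDF at the detection limit.
        ll = stats.lognorm.logpdf(observed[~censored], s=sigma, scale=np.exp(mu)).sum()
        ll += censored.sum() * stats.lognorm.logcdf(dl, s=sigma, scale=np.exp(mu))
        return -ll

    res = optimize.minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
    mu_hat, sig_hat = res.x[0], abs(res.x[1])
    print("estimated mean:", np.exp(mu_hat + sig_hat**2 / 2))  # lognormal mean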

  11. Spatial and size distributions of garnets grown in a pseudotachylyte generated during a lower crust earthquake

    Science.gov (United States)

    Clerc, Adriane; Renard, François; Austrheim, Håkon; Jamtveit, Bjørn

    2018-05-01

    In the Bergen Arc, western Norway, rocks exhumed from the lower crust record earthquakes that formed during the Caledonian collision. These earthquakes occurred at about 30-50 km depth under granulite- or amphibolite-facies metamorphic conditions. Coseismic frictional heating produced pseudotachylytes in this area. We describe pseudotachylytes using field data to infer earthquake magnitude (M ≥ 6.6), low dynamic friction during rupture propagation (μd earthquake arrest. High-resolution 3D X-ray microtomography imaging reveals the microstructure of a pseudotachylyte sample, including numerous garnets and their coronae of plagioclase that we infer crystallized in the pseudotachylyte. These garnets 1) have dendritic shapes and are surrounded by plagioclase coronae almost fully depleted in iron, 2) have a log-normal volume distribution, 3) increase in volume with increasing distance away from the pseudotachylyte-host rock boundary, and 4) decrease in number with increasing distance away from the pseudotachylyte-host rock boundary. These characteristics indicate fast mineral growth, likely within seconds. We propose that these new quantitative criteria may assist in the unambiguous identification of pseudotachylytes in the field.

  12. Towards the use of radon distribution in schools as a health indicator

    International Nuclear Information System (INIS)

    Milu, C.; Gheorghe, R.; Dumitrescu, A.

    2006-01-01

    A pilot study concerning indoor radon and gamma dose measurements in schools and kindergartens was performed in the Bucharest metropolitan area, within a bilateral cooperation between the Institute of Public Health, Bucharest, Romania, and the Jožef Stefan Institute, Ljubljana, Slovenia. One hundred schools and kindergartens were included in the study. Because the geological structure of the Bucharest subsoil is the same for the whole selected area (a loess platform), the school selection criteria were the age of the buildings and the type of building materials. Indoor radon concentrations were measured with nuclear track detectors over one month during the winter. The data presented a lognormal distribution in the range 43-477 Bq/m3, with an arithmetic mean of 146 Bq/m3 and a geometric mean of 128.18 Bq/m3. Concomitant with the indoor radon measurements, gamma dose rate measurements were carried out using thermoluminescence dosimeters. The results ranged from 65.51 to 127.45 nSv/h, with an arithmetic mean of 91.11 nSv/h (SD 12.22) and a geometric mean of 90.31 nSv/h. The results give a preliminary picture of indoor radon and gamma levels in schools and kindergartens in Bucharest, and as such a good basis for a national monitoring programme. (authors)

  13. Particle size distribution of dust collected from Alcator C-MOD

    International Nuclear Information System (INIS)

    Gorman, S.V.; Carmack, W.J.; Hembree, P.B.

    1998-01-01

    There are important safety issues associated with tokamak dust, which accumulates primarily from sputtering and disruptions. The dust may contain tritium and may be activated, chemically toxic, and chemically reactive. The purpose of this paper is to present results from analyses of particulate collected from the Alcator C-MOD tokamak located at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. The sample obtained from C-MOD was not originally intended for examination outside of MIT; it was collected with the intent of performing only a composition analysis, but MIT provided the INEEL with it for particle analysis. The sample was collected by vacuuming a section of the machine (covering approximately 1/3 of the machine surface) with a coarse fiber filter as the collection surface. The sample was then analyzed using an optical microscope, a scanning electron microscope, and a Microtrac FRA particle size analyzer. The data fit a log-normal distribution. The count median diameter (CMD) of the samples ranged from 0.3 µm to 1.1 µm, with geometric standard deviations (GSD) ranging from 2.8 to 5.2 and mass median diameters (MMD) ranging from 7.22 to 176 µm
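
    For a lognormal size distribution, the count and mass medians are linked by the Hatch-Choate relation MMD = CMD * exp(3 ln^2 GSD). Assuming the smallest CMD pairs with the smallest GSD (an assumption; the abstract does not state the pairing), this reproduces the lower MMD quoted above.

    import math

    def mmd_from_cmd(cmd_um, gsd):
        # Hatch-Choate: mass median diameter of a lognormal count distribution.
        return cmd_um * math.exp(3.0 * math.log(gsd) ** 2)

    print(mmd_from_cmd(0.3, 2.8))  # ~7.22 um, matching the lower MMD quoted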

  14. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management, and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  15. Unifying distribution functions: some lesser known distributions.

    Science.gov (United States)

    Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P

    2008-08-01

    We show that there is a way to unify distribution functions that describe simultaneously a classical signal in space and (spatial) frequency and position and momentum for a quantum system. Probably the most well known of them is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.

  16. Cumulative Poisson Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
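
    The identity such a program can exploit is that, for integer shape n, the gamma cdf (and hence the chi-square cdf with 2n degrees of freedom) equals one minus a cumulative Poisson sum. A quick check against library routines, with arbitrary example values:

    from scipy import stats

    n, x = 4, 6.3   # integer shape n, evaluation point x (arbitrary example)

    # Gamma(n, scale=1) cdf via the cumulative Poisson:
    gamma_cdf = 1.0 - stats.poisson.cdf(n - 1, mu=x)
    print(gamma_cdf, stats.gamma.cdf(x, a=n))         # identical

    # Chi-square with 2n degrees of freedom:
    chi2_cdf = 1.0 - stats.poisson.cdf(n - 1, mu=x / 2.0)
    print(chi2_cdf, stats.chi2.cdf(x, df=2 * n))      # identical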

  17. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    ...trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions... Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing...

  18. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    Science.gov (United States)

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of performing the back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed; even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
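
    For the log-transformation case, the back-transformation amounts to computing E[g(X)] for normal X, which Gauss-Hermite quadrature handles with a handful of nodes. A sketch with illustrative parameters, not the paper's implementation:

    import numpy as np

    mu, sigma = 1.2, 0.4          # parameters on the transformed (normal) scale
    nodes, weights = np.polynomial.hermite.hermgauss(20)

    def back_transformed_mean(g, mu, sigma):
        # E[g(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature.
        x = mu + np.sqrt(2.0) * sigma * nodes
        return (weights * g(x)).sum() / np.sqrt(np.pi)

    # Log-transform case: usual-intake mean = E[exp(X)] = exp(mu + sigma^2/2)
    print(back_transformed_mean(np.exp, mu, sigma))
    print(np.exp(mu + sigma**2 / 2))   # closed form for comparison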

  19. Mercury distribution and transport in the North Atlantic Ocean along the GEOTRACES-GA01 transect

    Science.gov (United States)

    Cossa, Daniel; Heimbürger, Lars-Eric; Pérez, Fiz F.; García-Ibáñez, Maribel I.; Sonke, Jeroen E.; Planquette, Hélène; Lherminier, Pascale; Boutorh, Julia; Cheize, Marie; Menzel Barraqueta, Jan-Lukas; Shelley, Rachel; Sarthou, Géraldine

    2018-04-01

    We report here the results of total mercury (HgT) determinations along the 2014 GEOTRACES GEOVIDE cruise (GA01 transect) in the North Atlantic Ocean (NA) from Lisbon (Portugal) to the coast of Labrador (Canada). HgT concentrations in unfiltered samples (HgTUNF) were log-normally distributed and ranged between 0.16 and 1.54 pmol L-1, with a geometric mean of 0.51 pmol L-1 for the 535 samples analysed. The dissolved fraction (waters. The HgTUNF concentrations were similarly low in the subpolar gyre waters (~0.45 pmol L-1), whereas they exceeded 0.60 pmol L-1 in the subtropical gyre waters. The HgTUNF distribution mirrored that of dissolved oxygen concentration, with the highest concentration levels associated with oxygen-depleted zones. The relationship between HgTF and the apparent oxygen utilization confirms the nutrient-like behaviour of Hg in the NA. An extended optimum multiparameter analysis allowed us to characterize HgTUNF concentrations in the different source water types (SWTs) present along the transect. The distribution pattern of HgTUNF, modelled by the mixing of SWTs, shows Hg enrichment in Mediterranean waters and North East Atlantic Deep Water and low concentrations in young waters formed in the subpolar gyre and Nordic seas. The change in anthropogenic Hg concentrations in the Labrador Sea Water during its eastward journey suggests a continuous decrease in the Hg content of this water mass over the last decades. Calculation of the water transport driven by the Atlantic Meridional Overturning Circulation across the Portugal-Greenland transect indicates northward Hg transport within the upper limb and southward Hg transport within the lower limb, with a resulting net northward transport of about 97.2 kmol yr-1.

  20. Collection strategy, inner morphology, and size distribution of dust particles in ASDEX Upgrade

    Science.gov (United States)

    Balden, M.; Endstrasser, N.; Humrickhouse, P. W.; Rohde, V.; Rasinski, M.; von Toussaint, U.; Elgeti, S.; Neu, R.; the ASDEX Upgrade Team

    2014-07-01

    The dust collection and analysis strategy in ASDEX Upgrade (AUG) is described. During five consecutive operation campaigns (2007-2011), Si collectors were installed, which were supported by filtered vacuum sampling and collection with adhesive tapes in 2009. The outer and inner morphology (e.g. shape) and elemental composition of the collected particles were analysed by scanning electron microscopy. The majority of the ˜50 000 analysed particles on the Si collectors of campaign 2009 contain tungsten—the plasma-facing material in AUG—and show basically two different types of outer appearance: spheroids and irregularly shaped particles. By far most of the W-dominated spheroids consist of a solid W core, i.e. solidified W droplets. A part of these particles is coated with a low-Z material; a process that seems to happen presumably in the far scrape-off layer plasma. In addition, some conglomerates of B, C and W appear as spherical particles after their contact with plasma. By far most of the particles classified as B-, C- and W-dominated irregularly shaped particles consist of the same conglomerate with varying fraction of embedded W in the B-C matrix and some porosity, which can exceed 50%. The fragile structures of many conglomerates confirm the absence of intensive plasma contact. Both the ablation and mobilization of conglomerate material and the production of W droplets are proposed to be triggered by arcing. The size distribution of each dust particle class is best described by a log-normal distribution allowing an extrapolation of the dust volume and surface area. The maximum in this distribution is observed above the resolution limit of 0.28 µm only for the W-dominated spheroids, at around 1 µm. The amount of W-containing dust is extrapolated to be less than 300 mg on the horizontal areas of AUG.

  1. Collection strategy, inner morphology, and size distribution of dust particles in ASDEX Upgrade

    International Nuclear Information System (INIS)

    Balden, M.; Endstrasser, N.; Rohde, V.; Rasinski, M.; Von Toussaint, U.; Elgeti, S.; Neu, R.; Humrickhouse, P.W.

    2014-01-01

    The dust collection and analysis strategy in ASDEX Upgrade (AUG) is described. During five consecutive operation campaigns (2007–2011), Si collectors were installed, which were supported by filtered vacuum sampling and collection with adhesive tapes in 2009. The outer and inner morphology (e.g. shape) and elemental composition of the collected particles were analysed by scanning electron microscopy. The majority of the ∼50 000 analysed particles on the Si collectors of campaign 2009 contain tungsten—the plasma-facing material in AUG—and show basically two different types of outer appearance: spheroids and irregularly shaped particles. By far most of the W-dominated spheroids consist of a solid W core, i.e. solidified W droplets. A part of these particles is coated with a low-Z material; a process that seems to happen presumably in the far scrape-off layer plasma. In addition, some conglomerates of B, C and W appear as spherical particles after their contact with plasma. By far most of the particles classified as B-, C- and W-dominated irregularly shaped particles consist of the same conglomerate with varying fraction of embedded W in the B–C matrix and some porosity, which can exceed 50%. The fragile structures of many conglomerates confirm the absence of intensive plasma contact. Both the ablation and mobilization of conglomerate material and the production of W droplets are proposed to be triggered by arcing. The size distribution of each dust particle class is best described by a log-normal distribution allowing an extrapolation of the dust volume and surface area. The maximum in this distribution is observed above the resolution limit of 0.28 µm only for the W-dominated spheroids, at around 1 µm. The amount of W-containing dust is extrapolated to be less than 300 mg on the horizontal areas of AUG. (paper)

  2. Drinking Water Distribution Systems

    Science.gov (United States)

    An overview of drinking water distribution systems, the factors that degrade water quality in the distribution system, assessments of risk, future research about these risks, and how to reduce risk through cross-connection control.

  3. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  4. TRANSMUTED EXPONENTIATED EXPONENTIAL DISTRIBUTION

    OpenAIRE

    MEROVCI, FATON

    2013-01-01

    In this article, we generalize the exponentiated exponential distribution using the quadratic rank transmutation map studied by Shaw et al. [6] to develop a transmuted exponentiated exponential distribution. The properties of this distribution are derived and the estimation of the model parameters is discussed. An application to a real data set is finally presented for illustration

  5. Leadership for Distributed Teams

    NARCIS (Netherlands)

    De Rooij, J.P.G.

    2009-01-01

    The aim of this dissertation was to study the little examined, yet important issue of leadership for distributed teams. Distributed teams are defined as: “teams of which members are geographically distributed and are therefore working predominantly via mediated communication means on an

  6. Extreme value distributions

    CERN Document Server

    Ahsanullah, Mohammad

    2016-01-01

    The aim of the book is to give a thorough account of the basic theory of extreme value distributions. The book covers a wide range of material available to date and presents the central ideas and results of extreme value distributions. It will be useful to applied statisticians as well as statisticians interested in working in the area of extreme value distributions. The monograph gives a self-contained treatment of the theory and applications of extreme value distributions.

  7. Distributed plot-making

    DEFF Research Database (Denmark)

    Jensen, Lotte Groth; Bossen, Claus

    2016-01-01

    ...different socio-technical systems (paper-based and electronic patient records). Drawing on the theory of distributed cognition and narrative theory, primarily inspired by the work done within health care by Cheryl Mattingly, we propose that the creation of overview may be conceptualised as 'distributed plot-making'. Distributed cognition focuses on the role of artefacts, humans and their interaction in information processing, while narrative theory focuses on how humans create narratives through plot construction. Hence, the concept of distributed plot-making highlights the distribution of information processing...

  8. Fitting and Analyzing Randomly Censored Geometric Extreme Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Yameen Danish

    2016-06-01

    Full Text Available The paper presents a Bayesian analysis of the two-parameter geometric extreme exponential distribution with randomly censored data. A continuous conjugate prior for the scale and shape parameters of the model does not exist; in computing the Bayes estimates, it is assumed that the scale and shape parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators are not possible, we suggest Lindley's approximation to obtain the Bayes estimates. However, Bayesian credible intervals cannot be constructed with this method, so we propose Gibbs sampling to obtain the Bayes estimates and also to construct the Bayesian credible intervals. A Monte Carlo simulation study is carried out to observe the behavior of the Bayes estimators and to compare them with the maximum likelihood estimators. A real data analysis is performed for illustration.
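
    The full conditionals for the geometric extreme exponential are model-specific, but the logic of posterior draws for Bayes estimates and credible intervals can be sketched on the simpler conjugate one-parameter censored exponential model (hypothetical data below; a Gibbs sampler for the two-parameter model would alternate draws of this kind between the scale and shape conditionals):

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical randomly censored lifetimes: t = observed time, d = 1 if failure
    t = np.array([0.8, 1.5, 0.3, 2.2, 1.1, 0.6, 3.0, 0.9, 1.7, 2.5])
    d = np.array([1,   1,   0,   1,   0,   1,   1,   0,   1,   1])

    a, b = 1.0, 1.0                  # Gamma(a, b) prior on the rate
    post_a = a + d.sum()             # censored exponential model is conjugate:
    post_b = b + t.sum()             # rate | data ~ Gamma(a + failures, b + total time)

    draws = rng.gamma(post_a, 1.0 / post_b, size=50_000)
    print("Bayes estimate (posterior mean):", draws.mean())
    print("95% credible interval:", np.percentile(draws, [2.5, 97.5]))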

  9. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  10. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    Science.gov (United States)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as hydraulic fracturing and the understanding of induced seismicity. Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial-slip asperities along a nominally flat frictional interface. The shear behavior of our partial-slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose locations and sizes were directly measured using a pressure-sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We furthered our understanding by synthetically modeling lognormal size distributions of asperities randomly distributed in space. The synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. The traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to theoretical estimates for multi-contact interfaces with non-interacting asperities.

  11. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    Science.gov (United States)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and the Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implications for labor market outcomes are considered critically. The robustness of the empirical results that lead to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely

  12. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    Energy Technology Data Exchange (ETDEWEB)

    Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory

    2009-01-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  13. Quantitative imaging reveals heterogeneous growth dynamics and treatment-dependent residual tumor distributions in a three-dimensional ovarian cancer model

    Science.gov (United States)

    Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba

    2010-09-01

    Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.
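
    The original analysis used custom MATLAB routines; a rough Python analogue of the bimodal log-normal size fit is a two-component Gaussian mixture on log diameters. The sizes below are synthetic stand-ins, not the imaging data.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Synthetic nodule diameters (um) from a bimodal lognormal
    sizes = np.concatenate([rng.lognormal(3.0, 0.35, 1500),
                            rng.lognormal(4.2, 0.25, 500)])

    gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(sizes)[:, None])
    for w, m, v in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
        # On the log scale: mean -> median diameter, sd -> geometric SD
        print(f"weight {w:.2f}: median {np.exp(m):.1f} um, GSD {np.exp(np.sqrt(v)):.2f}")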

  14. Species-abundance distribution patterns of soil fungi: contribution to the ecological understanding of their response to experimental fire in Mediterranean maquis (southern Italy).

    Science.gov (United States)

    Persiani, Anna Maria; Maggi, Oriana

    2013-01-01

    Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.

  15. Weighted Lomax distribution.

    Science.gov (United States)

    Kilany, N M

    2016-01-01

    The Lomax distribution (Pareto Type II) is widely applicable in reliability and life-testing problems in engineering, as well as in survival analysis as an alternative distribution. In this paper, the Weighted Lomax distribution is proposed and studied. The density function and its behavior, moments, hazard and survival functions, mean residual life and reversed failure rate, extreme value distributions and order statistics are derived and studied. The parameters of this distribution are estimated by the method of moments and the maximum likelihood estimation method, and the observed information matrix is derived. Moreover, simulation schemes are derived. Finally, an application of the model to a real data set is presented and compared with some other well-known distributions.
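
    For the unweighted Lomax baseline, maximum likelihood fitting is available directly in scipy (the weighted version would need a custom likelihood). The parameters below are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    sample = stats.lomax.rvs(c=2.5, scale=1.8, size=1000, random_state=rng)

    # Maximum likelihood fit of the (unweighted) Lomax, location pinned at 0
    c_hat, loc, scale_hat = stats.lomax.fit(sample, floc=0)
    print(f"shape = {c_hat:.2f}, scale = {scale_hat:.2f}")  # near (2.5, 1.8)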

  16. Are Parton Distributions Positive?

    CERN Document Server

    Forte, Stefano; Ridolfi, Giovanni; Altarelli, Guido; Forte, Stefano; Ridolfi, Giovanni

    1999-01-01

    We show that the naive positivity conditions on polarized parton distributions which follow from their probabilistic interpretation in the naive parton model are reproduced in perturbative QCD at the leading log level if the quark and gluon distribution are defined in terms of physical processes. We show how these conditions are modified at the next-to-leading level, and discuss their phenomenological implications, in particular in view of the determination of the polarized gluon distribution

  17. Are parton distributions positive?

    International Nuclear Information System (INIS)

    Forte, Stefano; Altarelli, Guido; Ridolfi, Giovanni

    1999-01-01

    We show that the naive positivity conditions on polarized parton distributions which follow from their probabilistic interpretation in the naive parton model are reproduced in perturbative QCD at the leading log level if the quark and gluon distribution are defined in terms of physical processes. We show how these conditions are modified at the next-to-leading level, and discuss their phenomenological implications, in particular in view of the determination of the polarized gluon distribution

  18. dftools: Distribution function fitting

    Science.gov (United States)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

  19. Raindrop Size Distribution in Different Climatic Regimes from Disdrometer and Dual-Polarized Radar Analysis.

    Science.gov (United States)

    Bringi, V. N.; Chandrasekar, V.; Hubbert, J.; Gorgucci, E.; Randeu, W. L.; Schoenhuber, M.

    2003-01-01

    The application of polarimetric radar data to the retrieval of raindrop size distribution parameters and rain rate in samples of convective and stratiform rain types is presented. Data from the Colorado State University (CSU) CHILL, NCAR S-band polarimetric (S-Pol), and NASA Kwajalein radars are analyzed for the statistics and functional relations of these parameters with rain rate. Surface drop size distribution measurements using two different disdrometers (2D video and RD-69) from a number of climatic regimes are analyzed and compared with the radar retrievals in a statistical and functional approach. The composite statistics based on disdrometer and radar retrievals suggest that, on average, the two parameters (generalized intercept and median volume diameter) for stratiform rain distributions lie on a straight line with negative slope, which appears to be consistent with variations in the microphysics of stratiform precipitation (melting of larger, dry snow particles versus smaller, rimed ice particles). In convective rain, 'maritime-like' and 'continental-like' clusters could be identified in the same two-parameter space, consistent with the different multiplicative coefficients in the Z = aR^1.5 relations quoted in the literature for maritime and continental regimes.
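
    Inverting a Z = aR^1.5 relation to estimate rain rate from reflectivity is a one-liner; the two values of a below are hypothetical stand-ins for maritime and continental coefficients, not the paper's fitted values.

    def rain_rate(dbz, a=250.0, b=1.5):
        # Invert Z = a * R**b, with Z (mm^6 m^-3) obtained from reflectivity in dBZ.
        z_linear = 10.0 ** (dbz / 10.0)
        return (z_linear / a) ** (1.0 / b)

    for a in (250.0, 400.0):   # hypothetical maritime vs continental coefficients
        print(a, round(rain_rate(40.0, a=a), 1), "mm/h")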

  20. Marine aerosol distribution and variability over the pristine Southern Indian Ocean

    Science.gov (United States)

    Mallet, Paul-Étienne; Pujol, Olivier; Brioude, Jérôme; Evan, Stéphanie; Jensen, Andrew

    2018-06-01

    This paper presents an 8-year (2005-2012 inclusive) study of the marine aerosol distribution and variability over the Southern Indian Ocean, specifically in the area {10°S-40°S; 50°E-110°E}, which has been identified as one of the most pristine regions of the globe. A large dataset consisting of satellite data (POLDER, CALIOP), AERONET measurements at Saint-Denis (French Réunion Island) and model reanalysis (MACC) has been used. In spite of a positive bias of about 0.05 between the AOD (aerosol optical depth) given by POLDER and MACC on one hand and the AOD measured by AERONET on the other, consistent results for aerosol distribution and variability over the area considered have been obtained. First, aerosols are mainly confined below 2 km asl (above sea level) and are dominated by sea salt, especially in the center of the area of interest, with AOD ≤ 0.1. This zone is the most pristine and is associated with the position of the Mascarene anticyclone. There, the direct radiative effect is assessed at around -9 W m-2 at the top of the atmosphere, and probability density functions of the AOD are leptokurtic lognormal functions without any significant seasonal variation. It is also suggested that the Madden-Julian oscillation impacts sea salt emissions in the northern part of the area considered by modifying the state of the ocean surface. Finally, this area is surrounded in the northeast and the southwest by seasonal Australian and South African intrusions (AOD > 0.1); throughout the year, the ITCZ seems to limit continental contamination from Asia. Due to the long period of time considered (almost a decade), this paper completes and strengthens the results of studies based on observations performed during previous specific field campaigns.

  1. Sorting a distribution theory

    CERN Document Server

    Mahmoud, Hosam M

    2011-01-01

    A cutting-edge look at the emerging distributional theory of sorting Research on distributions associated with sorting algorithms has grown dramatically over the last few decades, spawning many exact and limiting distributions of complexity measures for many sorting algorithms. Yet much of this information has been scattered in disparate and highly specialized sources throughout the literature. In Sorting: A Distribution Theory, leading authority Hosam Mahmoud compiles, consolidates, and clarifies the large volume of available research, providing a much-needed, comprehensive treatment of the

  2. Electric distribution systems

    CERN Document Server

    Sallam, A A

    2010-01-01

    "Electricity distribution is the penultimate stage in the delivery of electricity to end users. The only book that deals with the key topics of interest to distribution system engineers, Electric Distribution Systems presents a comprehensive treatment of the subject with an emphasis on both the practical and academic points of view. Reviewing traditional and cutting-edge topics, the text is useful to practicing engineers working with utility companies and industry, undergraduate graduate and students, and faculty members who wish to increase their skills in distribution system automation and monitoring."--

  3. Cooling water distribution system

    Science.gov (United States)

    Orr, Richard

    1994-01-01

    A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.

  4. Distributed Structure Searchable Toxicity

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Distributed Structure Searchable Toxicity (DSSTox) online resource provides high quality chemical structures and annotations in association with toxicity data....

  5. Distributed Energy Technology Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Distributed Energy Technologies Laboratory (DETL) is an extension of the power electronics testing capabilities of the Photovoltaic System Evaluation Laboratory...

  6. Distribution of Chinese names

    Science.gov (United States)

    Huang, Ding-wei

    2013-03-01

    We present a statistical model for the distribution of Chinese names. Both family names and given names are studied on the same basis. With naive expectation, the distribution of family names can be very different from that of given names. One is affected mostly by genealogy, while the other can be dominated by cultural effects. However, we find that both distributions can be well described by the same model. Various scaling behaviors can be understood as a result of stochastic processes. The exponents of different power-law distributions are controlled by a single parameter. We also comment on the significance of full-name repetition in Chinese population.

  7. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  8. Análise de distribuição de chuva para Santa Maria, RS Analysis of rainfall distribution for Santa Maria, RS, Brazil

    Directory of Open Access Journals (Sweden)

    Joel C. da Silva

    2007-02-01

    Full Text Available The objectives of this study were to analyze the distribution of total daily rainfall data and the number of rainy days, and to determine the probability variation of daily precipitation during the months of the year in Santa Maria, Rio Grande do Sul State, Brazil. A 36-year rainfall database measured at the Climatological Station of the 8th District of Meteorology, located at Santa Maria Federal University (29º 43' 23" S and 53º 43' 15" W, altitude 95 m), was used in the study. The following probability distribution functions were tested: gamma, Weibull, normal, lognormal and exponential. The functions that best described the frequency distribution were gamma and Weibull. There were more rainy days in the winter, but with smaller amounts of rainfall on those days, resulting in similar monthly total precipitation for the twelve months of the year.
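
    A minimal sketch of the gamma/Weibull comparison on daily rainfall amounts, using synthetic data in place of the Santa Maria record:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    rain = rng.gamma(shape=0.8, scale=12.0, size=500)  # synthetic daily totals (mm)

    for name, dist in [("gamma", stats.gamma), ("weibull", stats.weibull_min)]:
        params = dist.fit(rain, floc=0)                # location pinned at zero
        ks = stats.kstest(rain, dist.name, args=params)
        print(f"{name}: params = {np.round(params, 3)}, KS p = {ks.pvalue:.3f}")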

  9. Thickness distributions and evolution of growth mechanisms of NH4-illite from the fossil hydrothermal system of Harghita Bai, Eastern Carpathians, Romania

    Science.gov (United States)

    Bobos, Iuliu; Eberl, Dennis D.

    2013-01-01

    The crystal growth of NH4-illite (NH4-I) from the hydrothermal system of Harghita Bãi (Eastern Carpathians) was deduced from the shapes of crystal thickness distributions (CTDs). The NH4-I occurs in illite-smectite (I-S) interstratified structures (R1, R2, and R3-type ordering) with a variable smectite-layer content. The NH4-I-S (40-5% S) structures were identified underground in a hydrothermal breccia structure, whereas the K-I/NH4-I mixtures were found at the deepest level sampled (-110 m). The percentage of smectite interlayers generally decreases with increasing depth in the deposit. This decrease in smectite content is related to the increase in the degree of fracturing in the breccia structure and corresponds to a general increase in mean illite crystal thickness. In order to determine the thickness distributions of the NH4-I crystals (fundamental illite particles) which make up the NH4-I-S interstratified structures and the NH4-I/K-I mixtures, 27 samples were saturated with Li+ and aqueous solutions of PVP-10 to remove swelling and then were analyzed by X-ray diffraction. The profiles of the mean crystallite thickness (Tmean) and crystallite thickness distribution (CTD) of NH4-I crystallites were determined by the Bertaut-Warren-Averbach method using the MudMaster computer code. The Tmean of NH4-I from NH4-I-S samples ranges from 3.4 to 7.8 nm. The Tmean measured for the NH4-I/K-I mixture phase ranges from 7.8 to 11.7 nm (NH4-I) and from 12.1 to 24.7 nm (K-I). The CTD shapes of NH4-I fundamental particles are asymptotic and lognormal, whereas illites from NH4-I/K-I mixtures have bimodal shapes related to the presence of two lognormal-like CTDs corresponding to NH4-I and K-I. The crystal-growth mechanism for NH4-I samples was simulated using the Galoper code. Reaction pathways for NH4-I crystal nucleation and growth could be determined for each sample by plotting their CTD parameters on an α-β2 diagram constructed using Galoper. This analysis shows that NH4-I crystals

  10. Consideration of time-evolving capacity distributions and improved degradation models for seismic fragility assessment of aging highway bridges

    International Nuclear Information System (INIS)

    Ghosh, Jayadipta; Sood, Piyush

    2016-01-01

    This paper presents a methodology to develop seismic fragility curves for deteriorating highway bridges by uniquely accounting for realistic pitting corrosion deterioration and time-dependent capacity distributions for reinforced concrete columns under chloride attack. The proposed framework offers distinct improvements over state-of-the-art procedures for fragility assessment of degrading bridges, which typically assume a simplified uniform corrosion deterioration model and pristine limit state capacities. Depending on the time in service life and the deterioration mechanism, this study finds that capacity limit states for deteriorating bridge columns follow either a lognormal distribution or a generalized extreme value distribution (particularly for pitting corrosion). The impact of the column degradation mechanism on the seismic response and fragility of bridge components and system is assessed using nonlinear time history analysis of three-dimensional finite element bridge models reflecting the uncertainties across structural modeling parameters, deterioration parameters and ground motion. Comparisons are drawn between the proposed methodology and traditional approaches to developing aging bridge fragility curves. Results indicate considerable underestimation of system-level fragility across different damage states using the traditional approach compared to the proposed realistic pitting model for chloride-induced corrosion. Time-dependent predictive functions are provided to interpolate logistic regression coefficients for continuous seismic reliability evaluation along the service life with reasonable accuracy. - Highlights: • Realistic modeling of chloride-induced corrosion deterioration in the form of pitting. • Time-evolving capacity distribution for aging bridge columns under chloride attack. • Time-dependent seismic fragility estimation of highway bridges at component and system level. • Mathematical functions for continuous tracking of seismic fragility along service
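
    Under the conventional lognormal assumption, a component fragility curve is just a normal cdf in log intensity. A sketch with hypothetical parameters (a GEV capacity model, as found here for pitting corrosion, would replace this closed form with the corresponding capacity-demand integral):

    import numpy as np
    from scipy.stats import norm

    def fragility(im, theta, beta):
        # P(damage state exceeded | intensity measure im) for a lognormal model:
        # theta = median capacity, beta = logarithmic standard deviation.
        return norm.cdf(np.log(im / theta) / beta)

    im = np.linspace(0.05, 2.0, 5)             # e.g. spectral acceleration (g)
    print(fragility(im, theta=0.6, beta=0.5))  # illustrative pristine-column values
    # Aging would shift theta downward and/or inflate beta over the service life.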

  11. Activity size distributions of some naturally occurring radionuclides 7Be, 40K and 212Pb in indoor and outdoor environments

    International Nuclear Information System (INIS)

    Mohamed, A.

    2005-01-01

    The activity size distributions of the natural radionuclides 7Be and 40K were measured outdoors in El-Minia city, Egypt, by means of gamma spectroscopy. A low-pressure Berner cascade impactor was used as the sampling device. The activity size distribution of both 7Be and 40K was described by one log-normal distribution, represented by the accumulation mode. The activity median aerodynamic diameters (AMAD) of 7Be and 40K were determined to be 530 and 1550 nm, with relative geometric standard deviations (δ, defined as the dispersion of the peak) of 2.4 and 2, respectively. The same sampling device (Berner impactor) and a screen diffusion battery were used to measure the activity size distribution, activity concentration and unattached fraction (fp) of 212Pb in indoor air of El-Minia city, Egypt. The mean activity median aerodynamic diameter (AMAD) of the accumulation mode for attached 212Pb was determined to be 250 nm, with a mean geometric standard deviation (δ) of 2.6. The mean specific concentration of 212Pb associated with that mode was determined to be 460 ± 20 mBq/m3. The activity median thermodynamic diameter (AMTD) of unattached 212Pb was determined to be 1.25 nm, with δ of 1.4. A mean unattached fraction (fp) of 0.13 ± 0.02 was obtained at a mean aerosol particle concentration of 1.8×10^3 cm^-3. The mean activity concentration of unattached 212Pb was found to be 19 ± 3 mBq/m3. It was found that the aerosol concentration played an important role in varying the unattached and attached activity concentrations and the unattached fraction (fp)

  12. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    Directory of Open Access Journals (Sweden)

    Eckhard Limpert

    Full Text Available BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similarly to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ (times-divide), and corresponding notation. Analogous to mean ± SD, this connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation, s*, in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
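
    A minimal sketch of the recommended characterization, mean* x/ s*, computed from skewed (illustrative) data:

    import numpy as np

    x = np.array([2.0, 3.1, 4.5, 2.7, 8.9, 3.8, 5.6, 2.2, 12.4, 4.1])  # skewed data

    logs = np.log(x)
    gmean = np.exp(logs.mean())        # multiplicative (geometric) mean, mean*
    gsd = np.exp(logs.std(ddof=1))     # multiplicative standard deviation, s*

    # For a lognormal, ~68% of values lie within [mean*/s*, mean* * s*]
    print(f"mean* = {gmean:.2f}, s* = {gsd:.2f}")
    print(f"68% interval: {gmean / gsd:.2f} to {gmean * gsd:.2f}")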

  13. Distributed security in closed distributed systems

    DEFF Research Database (Denmark)

    Hernandez, Alejandro Mario

    The goal of the present thesis is to discuss, argue and conclude about ways to provide security to the information travelling around computer systems consisting of several known locations. When developing software systems, security of the information managed by these plays an important role... properties. This is also restricted to distributed systems in which the set of locations is known a priori. All this follows techniques borrowed from both the model checking and the static analysis communities. In the end, we reach a step towards solving the problem of enforcing security in distributed systems. We achieve the goal of showing how this can be done, though we restrict ourselves to closed systems and with a limited set of enforceable security policies. In this setting, our approach proves to be efficient. Finally, we achieve all this by bringing together several fields of Computer Science...

  14. Distributed intelligence in CAMAC

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1977-01-01

    The CAMAC digital interface standard has served us well since 1969. During this time there have been enormous advances in digital electronics. In particular, low cost microprocessors now make it feasible to consider use of distributed intelligence even in simple data acquisition systems. This paper describes a simple extension of the CAMAC standard which allows distributed intelligence at the crate level

  15. Distributed intelligence in CAMAC

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1977-01-01

    A simple extension of the CAMAC standard is described which allows distributed intelligence at the crate level. By distributed intelligence is meant that there is more than one source of control in a system. This standard is just now emerging from the NIM Dataway Working Group and its European counterpart. 1 figure

  16. Wigner distribution in optics

    NARCIS (Netherlands)

    Bastiaans, M.J.; Testorf, M.; Hennelly, B.; Ojeda-Castañeda, J.

    2009-01-01

    In 1932 Wigner introduced a distribution function in mechanics that permitted a description of mechanical phenomena in a phase space. Such a Wigner distribution was introduced in optics by Dolin and Walther in the sixties, to relate partial coherence to radiometry. A few years later, the Wigner

  17. Cache Oblivious Distribution Sweeping

    DEFF Research Database (Denmark)

    Brodal, G.S.; Fagerberg, R.

    2002-01-01

    We adapt the distribution sweeping method to the cache oblivious model. Distribution sweeping is the name used for a general approach for divide-and-conquer algorithms where the combination of solved subproblems can be viewed as a merging process of streams. We demonstrate by a series of algorithms...

  18. Distributed Energy Implementation Options

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Chandralata N [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-13

    This presentation covers the options for implementing distributed energy projects. It distinguishes between options available for distributed energy that is government owned versus privately owned, with a focus on the privately owned options including Energy Savings Performance Contract Energy Sales Agreements (ESPC ESAs). The presentation covers the new ESPC ESA Toolkit and other Federal Energy Management Program resources.

  19. Distributed Operating Systems

    NARCIS (Netherlands)

    Mullender, Sape J.

    1987-01-01

    In the past five years, distributed operating systems research has gone through a consolidation phase. On a large number of design issues there is now considerable consensus between different research groups. In this paper, an overview of recent research in distributed systems is given. In turn, the

  20. Intelligent distribution network design

    NARCIS (Netherlands)

    Provoost, F.

    2009-01-01

    Distribution networks (medium voltage and low voltage) are subject to changes caused by re-regulation of the energy supply, economical and environmental constraints, more sensitive equipment, power quality requirements and the increasing penetration of distributed generation. The latter is seen as

  1. Smart Distribution Systems

    Directory of Open Access Journals (Sweden)

    Yazhou Jiang

    2016-04-01

    The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.
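
    The spanning-tree-based restoration mentioned in this record lends itself to a compact illustration. The sketch below is an assumed, minimal Python example (hypothetical feeder graph and node names, not the paper's actual algorithm): a BFS spanning tree rooted at an energized source yields the ordered path of switches to close in order to re-energize an out-of-service load.

        # Minimal sketch of spanning-tree search for service restoration.
        # The feeder topology and node names are hypothetical.
        from collections import deque

        def restoration_path(feeder, source, deenergized_load):
            """Build a BFS spanning tree rooted at `source` and return the
            path of buses/switches that re-energizes `deenergized_load`."""
            parent = {source: None}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                if node == deenergized_load:
                    path = []
                    while node is not None:      # walk back up to the root
                        path.append(node)
                        node = parent[node]
                    return list(reversed(path))  # switches to close, in order
                for neighbor in feeder.get(node, []):
                    if neighbor not in parent:
                        parent[neighbor] = node
                        queue.append(neighbor)
            return None  # load cannot be restored from this source

        # Six-bus example with a tie between B3 and B5 (all names invented):
        feeder = {"SRC": ["B1"], "B1": ["B2", "B3"], "B2": [],
                  "B3": ["B5"], "B5": ["B4"], "B4": []}
        print(restoration_path(feeder, "SRC", "B4"))  # ['SRC', 'B1', 'B3', 'B5', 'B4']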

  2. The Distributed Criterion Design

    Science.gov (United States)

    McDougall, Dennis

    2006-01-01

    This article describes and illustrates a novel form of the changing criterion design called the distributed criterion design, which represents perhaps the first advance in the changing criterion design in four decades. The distributed criterion design incorporates elements of the multiple baseline and A-B-A-B designs and is well suited to applied…

  3. Evaluating Distributed Timing Constraints

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  4. Advanced Distribution Management System

    OpenAIRE

    Avazov, Artur; Sobinova, Lubov Anatolievna

    2016-01-01

    This article describes the advisability of using advanced distribution management systems in the electricity distribution networks area and considers premises of implementing ADMS within the Smart Grid era. Also, it gives the big picture of ADMS and discusses the ADMS advantages and functionalities.

  5. Advanced Distribution Management System

    Science.gov (United States)

    Avazov, Artur R.; Sobinova, Liubov A.

    2016-02-01

    This article describes the advisability of using advanced distribution management systems in the electricity distribution networks area and considers premises of implementing ADMS within the Smart Grid era. Also, it gives the big picture of ADMS and discusses the ADMS advantages and functionalities.

  6. Advanced Distribution Management System

    Directory of Open Access Journals (Sweden)

    Avazov Artur R.

    2016-01-01

    This article describes the advisability of using advanced distribution management systems in the electricity distribution networks area and considers premises of implementing ADMS within the Smart Grid era. Also, it gives the big picture of ADMS and discusses the ADMS advantages and functionalities.

  7. Mathematical modeling and comparison of protein size distribution in different plant, animal, fungal and microbial species reveals a negative correlation between protein size and protein number, thus providing insight into the evolution of proteomes

    Directory of Open Access Journals (Sweden)

    Tiessen Axel

    2012-02-01

    Background: The sizes of proteins are relevant to their biochemical structure and to their biological function. The statistical distribution of protein lengths across a diverse set of taxa can provide hints about the evolution of proteomes. Results: Using the full genomic sequences of 1,302 prokaryotic and 140 eukaryotic species, two datasets containing 1.2 and 6.1 million proteins were generated and analyzed statistically. The lengthwise distribution of proteins can be roughly described with a gamma-type or log-normal model, depending on the species. However, the shape parameter of the gamma model does not have a fixed value of 2, as previously suggested, but varies between 1.5 and 3 in different species. A gamma model with unrestricted shape parameter best described the distributions in ~48% of the species, whereas the log-normal distribution better described the observed protein sizes in 42% of the species. The restricted gamma function and the sum-of-exponentials distribution fitted better in only ~5% of the species. Eukaryotic proteins have an average size of 472 aa, whereas bacterial (320 aa) and archaeal (283 aa) proteins are significantly smaller (33-40% on average). Average protein sizes in different phylogenetic groups were: Alveolata (628 aa), Amoebozoa (533 aa), Fornicata (543 aa), Placozoa (453 aa), Eumetazoa (486 aa), Fungi (487 aa), Stramenopila (486 aa), Viridiplantae (392 aa). Amino acid composition is biased according to protein size. Protein length correlated negatively with %C, %M, %K, %F, %R, %W, %Y and positively with %D, %E, %Q, %S and %T. Prokaryotic proteins had a different protein size bias for %E, %G, %K and %M as compared to eukaryotes. Conclusions: Mathematical modeling of protein length empirical distributions can be used to assess the quality of small ORF annotation in genomic releases (detection of too many false-positive small ORFs). There is a negative correlation between average protein size and total number of
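
    As a hedged illustration of the model comparison reported above, the following sketch fits gamma and log-normal models to a vector of protein lengths by maximum likelihood and compares them by AIC (the data are synthetic and the AIC comparison is an assumption, not the authors' exact procedure):

        # Sketch: compare gamma vs. log-normal fits to protein lengths (illustrative).
        import numpy as np
        from scipy import stats

        # Synthetic stand-in for a species' protein lengths (amino acids).
        lengths = np.random.default_rng(0).gamma(shape=2.2, scale=180, size=5000)

        gamma_params = stats.gamma.fit(lengths, floc=0)      # (shape, loc=0, scale)
        lognorm_params = stats.lognorm.fit(lengths, floc=0)  # (sigma, loc=0, scale)

        def aic(dist, params, x):
            # AIC = 2k - 2 log L; loc is fixed, so k = 2 free parameters here.
            return 2 * 2 - 2 * np.sum(dist.logpdf(x, *params))

        print("gamma AIC:   ", aic(stats.gamma, gamma_params, lengths))
        print("lognorm AIC: ", aic(stats.lognorm, lognorm_params, lengths))
        # The lower AIC indicates the better-fitting model for this sample.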

  8. Development of distributed target

    CERN Document Server

    Yu Hai Jun; Li Qin; Zhou Fu Xin; Shi Jin Shui; Ma Bing; Chen Nan; Jing Xiao Bing

    2002-01-01

    A linear induction accelerator is expected to generate small-diameter X-ray spots with high intensity. The interaction of the electron beam with plasmas generated at the X-ray converter makes the spot on the target grow with time, degrading the X-ray dose and the imaging resolving power. A distributed target was developed which has about 24 thin 0.05 mm tantalum films distributed over 1 cm. Because this structure spreads the target material over a larger volume, it decreases the energy deposition per unit volume, reduces the temperature of the target surface, and thus slows the initial plasma formation and its expansion velocity. A comparison of the two kinds of target structures, using numerical calculation and experiments, shows that the X-ray dose and normalized angular distribution of the two are basically the same, while the surface of the distributed target is not destroyed as the previous block target was.

  9. Distributed Analysis in CMS

    CERN Document Server

    Fanfani, Alessandra; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giusepppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; Brew, Chris; Calloni, Marco; Cesini, Daniele; Cinquilli, Mattia; Codispoti, Giuseppe; D'Hondt, Jorgen; Dong, Liang; Dongiovanni, Danilo; Donvito, Giacinto; Dykstra, David; Edelmann, Erik; Egeland, Ricky; Elmer, Peter; Eulisse, Giulio; Evans, Dave; Fanzago, Federica; Farina, Fabio; Feichtinger, Derek; Fisk, Ian; Flix, Josep; Grandi, Claudio; Guo, Yuyi; Happonen, Kalle; Hernandez, Jose M; Huang, Chih-Hao; Kang, Kejing; Karavakis, Edward; Kasemann, Matthias; Kavka, Carlos; Khan, Akram; Kim, Bockjoo; Klem, Jukka; Koivumaki, Jesper; Kress, Thomas; Kreuzer, Peter; Kurca, Tibor; Kuznetsov, Valentin; Lacaprara, Stefano; Lassila-Perini, Kati; Letts, James; Linden, Tomas; Lueking, Lee; Maes, Joris; Magini, Nicolo; Maier, Gerhild; McBride, Patricia; Metson, Simon; Miccio, Vincenzo; Padhi, Sanjay; Pi, Haifeng; Riahi, Hassen; Riley, Daniel; Rossman, Paul; Saiz, Pablo; Sartirana, Andrea; Sciaba, Andrea; Sekhri, Vijay; Spiga, Daniele; Tuura, Lassi; Vaandering, Eric; Vanelderen, Lukas; Van Mulders, Petra; Vedaee, Aresh; Villella, Ilaria; Wicklund, Eric; Wildish, Tony; Wissing, Christoph; Wurthwein, Frank

    2009-01-01

    The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.

  10. Managing Distributed Software Projects

    DEFF Research Database (Denmark)

    Persson, John Stouby

    Increasingly, software projects are becoming geographically distributed, with limited face-to-face interaction between participants. These projects face particular challenges that need careful managerial attention. This PhD study reports on how we can understand and support the management of distributed software projects, based on a literature study and a case study. The main emphasis of the literature study was on how to support the management of distributed software projects, but it also contributed to an understanding of these projects. The main emphasis of the case study was on how to understand the management of distributed software projects, but it also contributed to supporting the management of these projects. The literature study integrates what we know about risks and risk-resolution techniques into a framework for managing risks in distributed contexts. This framework was developed iteratively...

  11. Distributed Control Diffusion

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2007-01-01

    Programming a modular, self-reconfigurable robot is however a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate the task of programming modular, self-reconfigurable robots, we present the concept of distributed control diffusion: distributed queries are used to identify modules that play a specific role in the robot, and behaviors that implement specific control strategies are diffused throughout the robot based on these role assignments. This approach allows the programmer to dynamically distribute behaviors throughout a robot and moreover provides a partial abstraction over the concrete physical shape of the robot. We have implemented a prototype of a distributed control diffusion system for the ATRON modular, self-reconfigurable robot.

  12. Distributed Language and Dialogism

    DEFF Research Database (Denmark)

    Steffensen, Sune Vork

    2015-01-01

    This article takes a starting point in Per Linell’s (2013) review article on the book Distributed Language (Cowley, 2011a) and other contributions to the field of ‘Distributed Language’, including Cowley et al. (2010) and Hodges et al. (2012). The Distributed Language approach is a naturalistic and anti-representational approach to language that builds on recent developments in the cognitive sciences. With a starting point in Linell’s discussion of the approach, the article aims to clarify four aspects of a distributed view of language vis-à-vis the tradition of Dialogism, as presented by Linell. The article addresses Linell’s critique of Distributed Language as rooted in biosemiotics and in theories of organism-environment systems. It is argued that Linell’s sense-based approach entails an individualist view of how conspecific Others acquire their status as prominent parts of the sense-maker’s environment.

  13. Pervasive Electricity Distribution System

    Directory of Open Access Journals (Sweden)

    Muhammad Usman Tahir

    2017-06-01

    Nowadays, a country cannot become economically strong unless it has enough electrical power to fulfil industrial and domestic needs. Electrical power, being the pillar of any country’s economy, needs to be used in an efficient way. The same step is taken here by proposing a new system for energy distribution from substation to consumer houses that also monitors consumer consumption and records the data. Unlike traditional manual electrical systems, the pervasive electricity distribution system (PEDS) introduces a fresh perspective for monitoring feeder line status at the distribution and consumer levels. The system addresses the issues of electricity theft, manual billing, online monitoring of the electrical distribution system and automatic control of electrical distribution points. The project is designed using a microcontroller and different sensors; its GUI is designed in LabVIEW software.

  14. Angular reflectance of suspended gold, aluminum and silver nanospheres on a gold film: Effects of concentration and size distribution

    International Nuclear Information System (INIS)

    Aslan, Mustafa M.; Wriedt, Thomas

    2010-01-01

    In this article, we describe a parametric study of the effects of the size distribution (SD) and the concentration of nanospheres in ethanol on the angular reflectance. Calculations are based on an effective medium approach in which the effective dielectric constant of the mixture is obtained using the Maxwell-Garnett formula. The detectable size limits of gold, aluminum, and silver nanospheres on a 50-nm-thick gold film are calculated to investigate the sensitivity of the reflectance to the SD and the concentration of the nanospheres. The following assumptions are made: (1) the total number of particles in the unit volume of suspension is constant, (2) the nanospheres in the suspension on a gold film have a SD with three different concentrations, and (3) there is no agglomeration and the particles have a log-normal SD with given effective diameter d_eff and effective variance ν_eff. The dependence of the reflectance on d_eff, ν_eff, and the width of the SD is also investigated numerically. The angular variation of the reflectance as a function of the incident angle shows a strong dependence on the effective size of the metallic nanospheres. The results confirm that the size of the nanospheres (d_eff) ... and 75° for a given concentration with a particular SD.
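
    The Maxwell-Garnett mixing rule used above has a standard closed form; the sketch below computes the effective permittivity of a dilute suspension of spheres in a host medium (the permittivity values are illustrative placeholders, not those used in the article):

        # Sketch: Maxwell-Garnett effective permittivity of spheres in a host medium.
        def maxwell_garnett(eps_particle, eps_host, fill_fraction):
            """Effective dielectric constant of a dilute suspension of spheres.
            eps_* may be complex; fill_fraction is the particle volume fraction."""
            delta = eps_particle - eps_host
            num = eps_particle + 2 * eps_host + 2 * fill_fraction * delta
            den = eps_particle + 2 * eps_host - fill_fraction * delta
            return eps_host * num / den

        # Illustrative values: gold-like particles in an ethanol-like host
        # at a single wavelength (placeholder numbers).
        eps_eff = maxwell_garnett(eps_particle=-11.6 + 1.2j,
                                  eps_host=1.85,
                                  fill_fraction=0.01)
        print(eps_eff)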

  15. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    Science.gov (United States)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of porosity parameter distribution, using oilfield data, is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to experimental models using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation tests for validation of the fitted theoretical model. All the results obtained by the analysis, as well as all the graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps; the numerical values of any point in a map can be monitored using the mouse and text boxes. The program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
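
    Of the theoretical models listed above, the spherical variogram is among the most commonly fitted; a minimal sketch of the model function follows (parameter values are illustrative, not JeoStat output):

        # Sketch: spherical semivariogram model, one of the seven listed above.
        import numpy as np

        def spherical_variogram(h, nugget, sill, range_a):
            """gamma(h) = nugget + (sill - nugget)*(1.5*h/a - 0.5*(h/a)^3)
            for h < a, and gamma(h) = sill for h >= a (a = range_a)."""
            h = np.asarray(h, dtype=float)
            ratio = np.minimum(h / range_a, 1.0)  # clamps to the sill beyond the range
            return nugget + (sill - nugget) * (1.5 * ratio - 0.5 * ratio**3)

        lags = np.linspace(0.0, 500.0, 6)  # lag distances, e.g. in meters
        print(spherical_variogram(lags, nugget=0.01, sill=0.08, range_a=300.0))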

  16. The distribution of committed dose equivalents to workers exposed to tritium in the luminising industry in the United Kingdom

    International Nuclear Information System (INIS)

    Hipkin, J.

    1977-01-01

    In the United Kingdom tritium has become almost the only radionuclide used in luminising. Two distinct methods of luminising are used, one involving tritium gas and the other tritium-activated luminous paint. All major luminisers have voluntarily taken part in urine monitoring programmes. The analyses have been carried out by the National Radiological Protection Board and estimates of committed dose equivalent have been made from the results. The work presented is an analysis of the committed dose equivalents received by all the individuals monitored in the years 1974, 1975 and 1976. It is shown that doses follow, in general, a lognormal distribution modified only at the high-dose end by what must be described as dose management. Further evidence for dose management is seen when the patterns of dose versus time are analysed for selected individuals. It is shown that the maximum permissible dose, as recommended by the International Commission on Radiological Protection, is only rarely exceeded. It is also shown that there is a substantial difference in the degree of exposure between workers involved in gaseous tritium luminising and workers using paint luminising. A comparison is made between exposure in gaseous tritium luminising and exposure in another common use of gaseous tritium, i.e. the filling of electronic devices with tritium gas. It is shown that exposure is very much less in the electronic device work.
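
    A hedged sketch of the kind of analysis described in this record: fitting a log-normal distribution to a set of committed dose equivalents and reading off the fraction above a limit. The data below are synthetic placeholders, not the UK luminising results:

        # Sketch: log-normal fit to annual committed dose equivalents (synthetic data).
        import numpy as np
        from scipy import stats

        doses_mSv = np.random.default_rng(1).lognormal(mean=0.5, sigma=0.9, size=200)

        shape, loc, scale = stats.lognorm.fit(doses_mSv, floc=0)
        median = scale  # for a log-normal, scale = exp(mu) = the median
        frac_above_limit = stats.lognorm.sf(50.0, shape, loc, scale)  # e.g. 50 mSv

        print(f"median dose {median:.2f} mSv; "
              f"P(dose > 50 mSv) = {frac_above_limit:.2e}")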

  17. Distribution Integration | Grid Modernization | NREL

    Science.gov (United States)

    The goal of NREL's distribution integration research is to tackle the challenges facing the widespread integration of distributed energy resources.

  18. Distributed Propulsion Vehicles

    Science.gov (United States)

    Kim, Hyun Dae

    2010-01-01

    Since the introduction of large jet-powered transport aircraft, the majority of these vehicles have been designed by placing thrust-generating engines either under the wings or on the fuselage to minimize aerodynamic interactions on the vehicle operation. However, advances in computational and experimental tools along with new technologies in materials, structures, and aircraft controls are enabling a high degree of integration of the airframe and propulsion system in aircraft design. The National Aeronautics and Space Administration (NASA) has been investigating a number of revolutionary distributed propulsion vehicle concepts to increase aircraft performance. The concept of distributed propulsion is to fully integrate a propulsion system within an airframe such that the aircraft takes full synergistic benefit of the coupling of airframe aerodynamics and the propulsion thrust stream by distributing thrust using many propulsors on the airframe. Some of the concepts are based on the use of distributed jet flaps, distributed small multiple engines, gas-driven multi-fans, mechanically driven multi-fans, cross-flow fans, and electric fans driven by turboelectric generators. This paper describes some early concepts of distributed propulsion vehicles and the current turboelectric distributed propulsion (TeDP) vehicle concepts being studied under NASA's Subsonic Fixed Wing (SFW) Project to drastically reduce aircraft-related fuel burn, emissions, and noise by the year 2030 to 2035.

  19. Centralized versus distributed propulsion

    Science.gov (United States)

    Clark, J. P.

    1982-01-01

    The functions and requirements of auxiliary propulsion systems are reviewed. None of the three major tasks (attitude control, stationkeeping, and shape control) can be performed by a collection of thrusters at a single central location. If a centralized system is defined as a collection of separated clusters made up of the minimum number of propulsion units, then such a system can provide attitude control and stationkeeping for most vehicles. A distributed propulsion system is characterized by more numerous propulsion units in a regularly distributed arrangement. Various proposed large space systems are reviewed and it is concluded that centralized auxiliary propulsion is best suited to vehicles with a relatively rigid core. These vehicles may carry a number of flexible or movable appendages. A second group, consisting of one or more large flexible flat plates, may need distributed propulsion for shape control. There is a third group, consisting of vehicles built up from multiple shuttle launches, which may be forced into a distributed system because of the need to add propulsion units as the vehicles grow. The effects of distributed propulsion on a beam-like structure were examined. The deflection of the structure under both translational and rotational thrusts is shown as a function of the number of equally spaced thrusters. When only two thrusters are used, it is shown that location is an important parameter. The possibility of using distributed propulsion to achieve minimum overall system weight is also examined. Finally, active damping by distributed propulsion is examined.

  20. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOCs) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower
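
    The variable sampling step described above (oxygen tension drawn by trilinear interpolation over the three vessel-related variables) can be illustrated with a short sketch. The grid values and variable ranges below are invented placeholders, not the paper's fitted DOC data:

        # Sketch: sample pO2 by trilinear interpolation over three vasculature variables.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        rng = np.random.default_rng(42)

        # Hypothetical 3x3x3 grid of simulated mean pO2 (mmHg), indexed by
        # blood velocity, vessel proximity and inflowing pO2 (placeholder values).
        velocity = np.array([0.5, 1.0, 2.0])        # mm/s
        proximity = np.array([50.0, 100.0, 200.0])  # micrometers
        inflow_po2 = np.array([20.0, 40.0, 60.0])   # mmHg
        table = rng.uniform(1.0, 40.0, size=(3, 3, 3))

        # Default method is 'linear', i.e. trilinear on a 3-D grid.
        sample_po2 = RegularGridInterpolator((velocity, proximity, inflow_po2), table)

        # Draw independent variable triples and interpolate (correlations between
        # the variables, as hypothesized in the paper, are omitted here):
        points = np.column_stack([
            rng.uniform(0.5, 2.0, 1000),
            rng.uniform(50.0, 200.0, 1000),
            rng.uniform(20.0, 60.0, 1000),
        ])
        po2_samples = sample_po2(points)
        print(po2_samples.mean(), po2_samples.std())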