WorldWideScience

Sample records for two-parameter lognormal distribution

  1. Neuronal variability during handwriting: lognormal distribution.

    Directory of Open Access Journals (Sweden)

    Valery I Rupasov

    Full Text Available We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model in which the signal is normally distributed because it is composed of the summation of many random sources. We found that the variability of temporal parameters of handwriting--handwriting duration and response time--is also well described by a lognormal distribution. Although the exact mechanism of lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.
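    The normal-versus-lognormal comparison described here can be reproduced on any positive-valued sample. A minimal sketch in Python, with synthetic data standing in for the EMG amplitudes (which are not provided in this record); note the KS p-values are optimistic because the parameters are fitted to the same sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for trial-to-trial EMG amplitudes (positive-valued).
amplitudes = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Fit both candidate models, then compare Kolmogorov-Smirnov distances.
norm_params = stats.norm.fit(amplitudes)
lognorm_params = stats.lognorm.fit(amplitudes, floc=0)  # location pinned at 0

ks_norm = stats.kstest(amplitudes, 'norm', args=norm_params)
ks_lognorm = stats.kstest(amplitudes, 'lognorm', args=lognorm_params)
print(f"normal:    D={ks_norm.statistic:.3f}  p={ks_norm.pvalue:.3g}")
print(f"lognormal: D={ks_lognorm.statistic:.3f}  p={ks_lognorm.pvalue:.3g}")
```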

  2. A Lognormal Distribution of Metal Resources

    Institute of Scientific and Technical Information of China (English)

    Donald A. Singer

    2011-01-01

    For national or global resource estimation of frequencies of metals, a lognormal distribution has commonly been recommended but not adequately tested. Tests of frequencies of Cu, Zn, Pb, Ag, and Au contents of 1,984 well-explored mineral deposits display a poor fit to the lognormal distribution. When the same metals plus Mo, Co, Nb2O3, and REE2O3 are grouped into 19 geologically defined deposit types, only eight of the 73 tests fail to be fitted by the lognormal distribution, and most of those failures are concentrated in two deposit types, suggesting a problem with those types. Estimates of the mean and standard deviation of each of the metals in each of the deposit types are provided for modeling.

  3. Pareto tails and lognormal body of US cities size distribution

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen

    2017-01-01

    We consider a distribution, which consists of a lower tail Pareto, a lognormal body, and an upper tail Pareto, to estimate the size distribution of all US cities. This distribution fits the data more accurately than a distribution comprising only the lognormal body and the upper tail Pareto.

  4. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

    This study investigates the statistical properties of character sizes in animation, superhero series, and video games. Using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) follow a common lognormal distribution. As a theoretical mechanism for this lognormal behavior, a combination of the normal distribution and the Weber-Fechner law is proposed.

  5. Packing fraction of particles with lognormal size distribution.

    Science.gov (United States)

    Brouwers, H J H

    2014-05-01

    This paper addresses the packing and void fraction of polydisperse particles with a lognormal size distribution. It is demonstrated that a binomial particle size distribution can be transformed into a continuous particle-size distribution of the lognormal type. Furthermore, an original and exact expression is derived that predicts the packing fraction of mixtures of particles with a lognormal distribution, which is governed by the standard deviation, mode of packing, and particle shape only. For a number of particle shapes and their packing modes (close, loose) the applicable values are given. This closed-form analytical expression governing the packing fraction is thoroughly compared with empirical and computational data reported in the literature, and good agreement is found.

  7. On the Laplace transform of the Lognormal distribution

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    Integral transforms of the lognormal distribution are of great importance in statistics and probability, yet closed-form expressions do not exist. A wide variety of methods have been employed to provide approximations, both analytical and numerical. In this paper, we analyze a closed-form approxi...... to construct a reliable Monte Carlo estimator of L(θ) and prove it to be logarithmically efficient in the rare event sense as θ→∞....
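    The transform in question is L(θ) = E[exp(−θX)] with X lognormal. A crude Monte Carlo baseline is easy to sketch; the paper's estimator is a more sophisticated, logarithmically efficient one, and the plain average below degrades as θ grows:

```python
import numpy as np

def lognormal_laplace_mc(theta, mu=0.0, sigma=1.0, n=10**6, seed=1):
    """Crude Monte Carlo estimate of L(theta) = E[exp(-theta*X)],
    X ~ Lognormal(mu, sigma); returns the estimate and its standard error."""
    rng = np.random.default_rng(seed)
    x = rng.lognormal(mu, sigma, size=n)
    samples = np.exp(-theta * x)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n)

est, se = lognormal_laplace_mc(theta=2.0)
print(f"L(2) ~ {est:.6f} +/- {se:.1e}")
```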

  8. On the log-normal distribution of network traffic

    Science.gov (United States)

    Antoniou, I.; Ivanov, V. V.; Ivanov, Valery V.; Zrelov, P. V.

    2002-07-01

    A detailed analysis of traffic measurements shows that the aggregation of these measurements forms a statistical distribution, which is approximated with high accuracy by the log-normal distribution. The inter-arrival times and packet sizes, contributing to the formation of network traffic, can be considered as independent. Applying the wavelet transform to traffic measurements, we demonstrate the multiplicative character of traffic series. This result confirms that the scheme, developed by Kolmogorov [Dokl. Akad. Nauk SSSR 31 (1941) 99] for the homogeneous fragmentation of grains, applies also to network traffic.
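    The multiplicative (fragmentation) picture is easy to illustrate: a quantity built as a product of many independent positive factors has an approximately Gaussian logarithm, hence a roughly lognormal distribution. A sketch with arbitrary factor ranges:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Kolmogorov-style fragmentation: each final size is a product of many
# independent positive factors, so log-size is a sum and the CLT applies.
n_steps, n_grains = 50, 20000
factors = rng.uniform(0.5, 0.9, size=(n_grains, n_steps))
sizes = factors.prod(axis=1)

# log(sizes) should be consistent with normality if the lognormal model holds.
print(stats.normaltest(np.log(sizes)))
```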

  9. Interval Estimations of the Two-Parameter Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Lai Jiang

    2012-01-01

    Full Text Available In applied work, the two-parameter exponential distribution gives useful representations of many physical situations. Confidence intervals for the scale parameter and predictive intervals for a future independent observation have been studied by many, including Petropoulos (2011) and Lawless (1977), respectively. However, interval estimates for the threshold parameter have not been widely examined in the statistical literature. The aim of this paper is to, first, obtain the exact significance function of the scale parameter by renormalizing the p∗-formula. Then the approximate Studentization method is applied to obtain the significance function of the threshold parameter. Finally, a predictive density function of the two-parameter exponential distribution is derived. A real-life data set is used to show the implementation of the method. Simulation studies are then carried out to illustrate the accuracy of the proposed methods.
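    For reference, in the two-parameter exponential model f(x) = (1/β)exp(−(x−θ)/β), x ≥ θ, the standard point estimates are the sample minimum for the threshold and the mean excess over it for the scale; the paper's significance functions refine these into interval statements. A minimal sketch:

```python
import numpy as np

def two_param_exp_mle(x):
    """MLEs for f(x) = (1/b)exp(-(x - t)/b), x >= t: threshold = sample
    minimum, scale = mean excess over the minimum."""
    x = np.asarray(x, dtype=float)
    t_hat = x.min()
    b_hat = x.mean() - t_hat
    return t_hat, b_hat

rng = np.random.default_rng(3)
data = 5.0 + rng.exponential(scale=2.0, size=40)  # true threshold 5, scale 2
print(two_param_exp_mle(data))
```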

  10. Time Truncated Testing Strategy using Multiple Testers: Lognormal Distributed Lifetime

    Directory of Open Access Journals (Sweden)

    Itrat Batool Naqvi

    2014-06-01

    Full Text Available In this study, the group acceptance sampling plan proposed by Aslam et al. (2011) is reconsidered when the lifetime variate of the test item follows the lognormal distribution. The optimal plan parameters are obtained for various pre-specified parameters by non-linear optimization using the two-point approach. The advantage of the proposed plan over the existing plan based on the single-point approach is discussed; the proposed plan is found to be more efficient.

  11. Analysis of random laser scattering pulse signals with lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    Yan Zhen-Gang; Bian Bao-Min; Wang Shou-Yu; Lin Ying-Lu; Wang Chun-Yong; Li Zhen-Hua

    2013-01-01

    The statistical distribution of natural phenomena is of great significance in studying the laws of nature. In order to study the statistical characteristics of a random pulse signal, a random process model is proposed theoretically for better study of the random law of measured results. Moreover, a simple random pulse signal generation and testing system is designed for studying the counting distributions of three typical objects: particles suspended in the air, standard particles, and background noise. Both normal and lognormal distribution fittings are used for analyzing the experimental results, checked by the chi-square goodness-of-fit test and the correlation coefficient for comparison. In addition, the statistical laws of the three typical objects and the relations between them are discussed in detail; the relation is also the non-integral dimension fractal relation between the statistical distributions of different random laser scattering pulse signal groups.

  12. Galaxy rotation curves with log-normal density distribution

    CERN Document Server

    Marr, John H

    2015-01-01

    The log-normal distribution represents the probability of finding randomly distributed particles in a microcanonical ensemble with high entropy. To a first approximation, a modified form of this distribution with a truncated termination may represent an isolated galactic disk, and this disk density distribution model was therefore run to give the best fit to the observational rotation curves for 37 representative galaxies. The resultant curves closely matched the observational data for a wide range of velocity profiles and galaxy types with rising, flat or descending curves, in agreement with Verheijen's classification of 'R', 'F' and 'D' type curves, and the corresponding theoretical total disk masses could be fitted to a baryonic Tully-Fisher relation (bTFR). Nine of the galaxies were matched to galaxies with previously published masses, suggesting a mean excess dynamic disk mass of dex 0.61+/-0.26 over the baryonic masses. Although questionable with regard to other measurements of the shape of disk galaxy g...

  13. A method to dynamic stochastic multicriteria decision making with log-normally distributed random variables.

    Science.gov (United States)

    Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue

    2013-01-01

    We investigate dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes Shannon's entropy model to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.
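    The closure property behind such geometric operators is that a weighted geometric mean of independent lognormal variables is again lognormal. The sketch below aggregates criterion values and ranks by the expectation of the aggregate; the operator form and all weights are illustrative, not the paper's exact definitions:

```python
import numpy as np

def lndwg(mus, sigmas, weights):
    """Weighted geometric aggregation of independent lognormal criteria:
    prod X_i**w_i is lognormal with mu = sum(w_i*mu_i) and
    var = sum(w_i**2 * sigma_i**2); returns (mean, variance) of the aggregate."""
    mus, sigmas, w = map(np.asarray, (mus, sigmas, weights))
    mu = np.sum(w * mus)
    var = np.sum(w**2 * sigmas**2)
    mean = np.exp(mu + var / 2)
    variance = (np.exp(var) - 1.0) * np.exp(2 * mu + var)
    return mean, variance

# Two hypothetical alternatives scored on three criteria, weights 0.5/0.3/0.2.
w = [0.5, 0.3, 0.2]
for name, mus, sigmas in [("A", [0.2, 0.4, 0.1], [0.3, 0.2, 0.4]),
                          ("B", [0.3, 0.2, 0.2], [0.2, 0.3, 0.3])]:
    print(name, lndwg(mus, sigmas, w))   # rank by mean, tie-break by variance
```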

  15. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the former distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...

  16. Angular momentum of disc galaxies with a lognormal density distribution

    CERN Document Server

    Marr, John Herbert

    2015-01-01

    Whilst most galaxy properties scale with galaxy mass, similar scaling relations for angular momentum are harder to demonstrate. A lognormal (LN) density distribution for disc mass provides a good overall fit to the observational data for disc rotation curves for a wide variety of galaxy types and luminosities. In this paper, the total angular momentum J and energy |E| were computed for 38 disc galaxies from the published rotation curves and plotted against the derived disc masses, with best fit slopes of 1.683±0.018 and 1.643±0.038 respectively, using a theoretical model with a LN density profile. The derived mean disc spin parameter was λ = 0.423±0.014. Using the rotation curve parameters V_max and R_max as surrogates for the virial velocity and radius, the virial mass estimator M_disc ∝ R_max V_max^2 was also generated, with a log-log slope of 1.024±0.014 for the 38 galaxies, and a proportionality constant λ* = 1.47±0.20...

  17. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
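    The claim is easy to probe numerically: for heavy-tailed positive summands, the standardized logarithm of the sum can sit closer to a Gaussian than the standardized sum itself. A sketch using lognormal summands, one admissible choice for the class of positive variables discussed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps = 30, 20000                       # 30 summands per replicate
sums = rng.lognormal(0.0, 1.2, size=(reps, n)).sum(axis=1)

logs = np.log(sums)
z_log = (logs - logs.mean()) / logs.std(ddof=1)
z_lin = (sums - sums.mean()) / sums.std(ddof=1)
print("KS distance, lognormal model:", stats.kstest(z_log, 'norm').statistic)
print("KS distance, Gaussian model: ", stats.kstest(z_lin, 'norm').statistic)
```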

  18. The truncated lognormal distribution as a luminosity function for SWIFT-BAT gamma-ray bursts

    CERN Document Server

    Zaninetti, L

    2016-01-01

    The determination of the luminosity function (LF) in gamma-ray bursts (GRBs) depends on the adopted cosmology, each one characterized by its corresponding luminosity distance. Here we analyse three cosmologies: the standard cosmology, the plasma cosmology, and the pseudo-Euclidean universe. The LF of the GRBs is first modeled by the lognormal distribution and the four broken power law, and second by a truncated lognormal distribution. The truncated lognormal distribution acceptably fits the range in luminosity of GRBs as a function of the redshift.

  20. Discriminating between Weibull distributions and log-normal distributions emerging in branching processes

    Science.gov (United States)

    Goh, Segun; Kwon, H. W.; Choi, M. Y.

    2014-06-01

    We consider the Yule-type multiplicative growth and division process, and describe the ubiquitous emergence of Weibull and log-normal distributions in a single framework. With the help of the integral transform and series expansion, we show that both distributions serve as asymptotic solutions of the time evolution equation for the branching process. In particular, the maximum likelihood method is employed to discriminate between the emergence of the Weibull distribution and that of the log-normal distribution. Further, the detailed conditions for the distinguished emergence of the Weibull distribution are probed. It is observed that the emergence depends on the manner of the division process for the two different types of distribution. Numerical simulations are also carried out, confirming the results obtained analytically.
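    The maximum likelihood discrimination step reduces to fitting both families and keeping the one with the larger maximized log-likelihood. A sketch with scipy's fitters, locations pinned at zero for both families:

```python
import numpy as np
from scipy import stats

def discriminate(x):
    """Return the preferred family and the log-likelihood difference."""
    wb = stats.weibull_min.fit(x, floc=0)
    ln = stats.lognorm.fit(x, floc=0)
    ll_wb = stats.weibull_min.logpdf(x, *wb).sum()
    ll_ln = stats.lognorm.logpdf(x, *ln).sum()
    return ("Weibull" if ll_wb > ll_ln else "log-normal"), ll_wb - ll_ln

rng = np.random.default_rng(5)
print(discriminate(2.0 * rng.weibull(1.5, size=400)))   # Weibull data
print(discriminate(rng.lognormal(0.0, 0.8, size=400)))  # log-normal data
```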

  1. Handbook of tables for order statistics from lognormal distributions with applications

    CERN Document Server

    Balakrishnan, N

    1999-01-01

    Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...

  2. Analysis of a stochastic model for bacterial growth and the lognormality in the cell-size distribution

    CERN Document Server

    Yamamoto, Ken

    2016-01-01

    This paper theoretically analyzes a phenomenological stochastic model for bacterial growth. This model comprises cell divisions and linear growth of cells, where growth rates and cell cycles are drawn from lognormal distributions. We derive that the cell size is expressed as a sum of independent lognormal variables. We show numerically that the quality of the lognormal approximation greatly depends on the distributions of the growth rate and cell cycle. Furthermore, we show that actual parameters of the growth rate and cell cycle take values which give good lognormal approximation, so the experimental cell-size distribution is in good agreement with a lognormal distribution.

  3. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    Energy Technology Data Exchange (ETDEWEB)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D' Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2016-08-30

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
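    The "lognormal PDF convolved with Poisson noise" model for galaxy counts-in-cells can be written P(N) = E_δ[Poisson(N | N̄(1+δ))], with 1+δ lognormal of unit mean. A sketch that evaluates this mixture by Gauss-Hermite quadrature; the parameter values are illustrative, not the DES fits:

```python
import numpy as np
from scipy import stats

def cic_pmf(N, nbar, sigma, n_quad=80):
    """Counts-in-cells pmf for Poisson sampling of a lognormal density field.
    Uses probabilists' Gauss-Hermite nodes for g = ln(1 + delta) ~ N(mu, sigma^2),
    with mu = -sigma^2/2 so that E[1 + delta] = 1."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    g = -0.5 * sigma**2 + sigma * nodes
    lam = nbar * np.exp(g)
    pmf = stats.poisson.pmf(np.asarray(N)[..., None], lam)
    return pmf @ weights / np.sqrt(2.0 * np.pi)

N = np.arange(60)
p = cic_pmf(N, nbar=8.0, sigma=0.6)
print(p.sum())  # ~1, up to truncation of the count range
```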

  5. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    Science.gov (United States)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2017-04-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  6. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation.

  8. Models for Unsaturated Hydraulic Conductivity Based on Truncated Lognormal Pore-size Distributions

    CERN Document Server

    Malama, Bwalya

    2013-01-01

    We develop a closed-form three-parameter model for unsaturated hydraulic conductivity associated with a three-parameter lognormal model of moisture retention, which is based on lognormal grainsize distribution. The derivation of the model is made possible by a slight modification to the theory of Mualem. We extend the three-parameter lognormal distribution to a four-parameter model that also truncates the pore size distribution at a minimum pore radius. We then develop the corresponding four-parameter model for moisture retention and the associated closed-form expression for unsaturated hydraulic conductivity. The four-parameter model is fitted to experimental data, similar to the models of Kosugi and van Genuchten. The proposed four-parameter model retains the physical basis of Kosugi's model, while improving fit to observed data especially when simultaneously fitting pressure-saturation and pressure-conductivity data.

  9. Species Abundance in a Forest Community in South China: A Case of Poisson Lognormal Distribution

    Institute of Scientific and Technical Information of China (English)

    Zuo-Yun YIN; Hai REN; Qian-Mei ZHANG; Shao-Lin PENG; Qin-Feng GUO; Guo-Yi ZHOU

    2005-01-01

    Case studies on the Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m×20 m, 5 m×5 m, and 1 m×1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (σ and μ) for the shrub layer were closer to those for the herb layer than to those for the tree layer; and (iii) from the tree to the shrub to the herb layer, σ and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/σ should be an alternative measure of diversity.

  10. Asymptotic Results for the Two-parameter Poisson-Dirichlet Distribution

    CERN Document Server

    Feng, Shui

    2009-01-01

    The two-parameter Poisson-Dirichlet distribution is the law of a sequence of decreasing nonnegative random variables with total sum one. It can be constructed from stable and Gamma subordinators with the two parameters, α and θ, corresponding to the stable component and Gamma component respectively. The moderate deviation principles are established for the two-parameter Poisson-Dirichlet distribution and the corresponding homozygosity when θ approaches infinity, and the large deviation principle is established for the two-parameter Poisson-Dirichlet distribution when both α and θ approach zero.

  11. Estimation of expected value and coefficient of variation for lognormal and gamma distributions

    Energy Technology Data Exchange (ETDEWEB)

    White, G.C.

    1978-07-01

    Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small sample statistical properties of each of these estimators were compared for their own distributions, as well as for the other distribution, to check the robustness of the estimator. The arithmetic mean is known to provide an unbiased estimator of expected value when the underlying density of the sample is either lognormal or gamma, and results indicated the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of expected value, and the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two.
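    The reason minimum variance unbiased estimators matter here: the plug-in estimator exp(m + s²/2) of a lognormal mean is biased in small samples, whereas the arithmetic mean is unbiased for any n. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, n, reps = 0.0, 1.0, 10, 50000
true_mean = np.exp(mu + sigma**2 / 2)

x = rng.lognormal(mu, sigma, size=(reps, n))
arith = x.mean(axis=1)                      # unbiased for any sample size
logs = np.log(x)
plug_in = np.exp(logs.mean(axis=1) + logs.var(axis=1, ddof=1) / 2)  # biased

print("arithmetic-mean bias:", arith.mean() - true_mean)
print("plug-in bias:        ", plug_in.mean() - true_mean)
```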

  12. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

    Full Text Available This work evaluates the use of the multimodal lognormal function to describe Particle Size Distributions (PSD) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function can describe distinctive features of different types of PSDs with accuracy and consistency.
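    A multimodal lognormal PSD is a weighted sum of lognormal densities; with only two modes, ordinary least squares can stand in for the paper's PSO when a decent starting point is available. A sketch on synthetic data, with all parameter values illustrative:

```python
import numpy as np
from scipy import optimize, stats

def bimodal_lognormal(d, w, mu1, s1, mu2, s2):
    """Two-mode lognormal number density over particle diameter d."""
    return (w * stats.lognorm.pdf(d, s1, scale=np.exp(mu1)) +
            (1.0 - w) * stats.lognorm.pdf(d, s2, scale=np.exp(mu2)))

rng = np.random.default_rng(7)
d = np.logspace(-1, 1, 60)
truth = (0.4, np.log(0.5), 0.3, np.log(3.0), 0.4)
psd = bimodal_lognormal(d, *truth) + rng.normal(0.0, 0.002, d.size)  # "data"

fit, _ = optimize.curve_fit(bimodal_lognormal, d, psd,
                            p0=(0.5, np.log(0.4), 0.4, np.log(2.5), 0.5),
                            bounds=([0, -3, 0.01, -3, 0.01], [1, 3, 2, 3, 2]))
print(np.round(fit, 3))
```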

  13. Recovering the nonlinear density field from the galaxy distribution with a Poisson-Lognormal filter

    CERN Document Server

    Kitaura, Francisco S; Metcalf, R Benton

    2009-01-01

    We present a general expression for a lognormal filter given an arbitrary nonlinear galaxy bias. We derive this filter as the maximum a posteriori solution assuming a lognormal prior distribution for the matter field with a given mean field and modeling the observed galaxy distribution by a Poissonian process. We have performed a three-dimensional implementation of this filter with a very efficient Newton-Krylov inversion scheme. Furthermore, we have tested it with a dark matter N-body simulation assuming a unit galaxy bias relation and compared the results with previous density field estimators like the inverse weighting scheme and Wiener filtering. Our results show good agreement with the underlying dark matter field for overdensities even above delta~1000 which exceeds by one order of magnitude the regime in which the lognormal is expected to be valid. The reason is that for our filter the lognormal assumption enters as a prior distribution function, but the maximum a posteriori solution is also conditione...

  14. Lognormal Distribution of Some Physiological Responses in Young Healthy Indian Males

    Directory of Open Access Journals (Sweden)

    S. S. Verma

    1986-01-01

    Full Text Available Evaluation of the statistical distribution of physiological responses is of fundamental importance for better statistical interpretation of physiological phenomena. In this paper, the statistical distributions of three important physiological responses, viz., maximal aerobic power (VO2 max), maximal heart rate (HR max) and maximum voluntary ventilation (MVV), in young healthy Indian males of age ranging from 19 to 22 years have been worked out. It is concluded that these three important physiological responses follow the lognormal distribution.

  15. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  16. On Modelling Insurance Data by Using a Generalized Lognormal Distribution

    Directory of Open Access Journals (Sweden)

    García, Victoriano J.

    2014-12-01

    Full Text Available In this paper, a new heavy-tailed distribution is used to model data with a strong right tail, as often occurs in practical situations. The distribution proposed is derived from the lognormal distribution, by using the Marshall and Olkin procedure. Some basic properties of this new distribution are obtained and we present situations where this new distribution correctly reflects the sample behaviour for the right tail probability. An application of the model to dental insurance data is presented and analysed in depth. We conclude that the generalized lognormal distribution proposed is a distribution that should be taken into account among other possible distributions for insurance data in which the properties of a heavy-tailed distribution are present.

  17. Determination of the substrate log-normal distribution in the AZ91/SiCp composite

    Directory of Open Access Journals (Sweden)

    J. Lelito

    2015-01-01

    Full Text Available The aim of this work is to develop a log-normal distribution of heterogeneous nucleation substrates for a composite based on the AZ91 alloy reinforced by SiC particles. A computational algorithm allowing the restoration of the nucleation substrate distribution was used. The experiment was performed for the AZ91 alloy containing 1 wt.% of SiC particles. The grain density of the primary magnesium phase and the supercooling, obtained from the experiment, were used as input data for the algorithm.

  18. Radical tessellation of the packing of spheres with a log-normal size distribution

    Science.gov (United States)

    Yi, L. Y.; Dong, K. J.; Zou, R. P.; Yu, A. B.

    2015-09-01

    The packing of particles with a log-normal size distribution is studied by means of the discrete element method. The packing structures are analyzed in terms of the topological properties such as the number of faces per radical polyhedron and the number of edges per face, and the metric properties such as the perimeter and area per face and the perimeter, area, and volume per radical polyhedron, obtained from the radical tessellation. The effect of the geometric standard deviation in the log-normal distribution on these properties is quantified. It is shown that when the size distribution gets wider, the packing becomes denser; thus the radical tessellation of a particle has decreased topological and metric properties. The quantitative relationships obtained should be useful in the modeling and analysis of structural properties such as effective thermal conductivity and permeability.

  19. On the Bivariate Nakagami-Lognormal Distribution and Its Correlation Properties

    Directory of Open Access Journals (Sweden)

    Juan Reig

    2014-01-01

    Full Text Available The bivariate Nakagami-lognormal distribution used to model the composite fast fading and shadowing has been examined exhaustively. In particular, we have derived the joint probability density function, the cross-moments, and the correlation coefficient in power terms. Also, two procedures to generate two correlated Nakagami-lognormal random variables are described. These procedures can be used to evaluate the robustness of the sample correlation coefficient distribution in both macro- and microdiversity scenarios. It is shown that the bias and the standard deviation of this sample correlation coefficient are substantially high for large shadowing standard deviations found in wireless communication measurements, even if the number of observations is considerable.
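    The lognormal (shadowing) part of such generation procedures typically exponentiates a correlated bivariate Gaussian; note that the specified correlation applies to the underlying normals, not to the lognormal pair itself. A minimal sketch, with the Nakagami fast-fading factor omitted:

```python
import numpy as np

def correlated_lognormals(mu, sigma, rho, size, seed=8):
    """Draw correlated lognormal pairs by exponentiating a bivariate Gaussian;
    rho is the correlation of the underlying normal variables."""
    rng = np.random.default_rng(seed)
    cov = [[sigma[0]**2,               rho * sigma[0] * sigma[1]],
           [rho * sigma[0] * sigma[1], sigma[1]**2]]
    return np.exp(rng.multivariate_normal(mu, cov, size=size))

x = correlated_lognormals(mu=[0.0, 0.0], sigma=[0.5, 0.7], rho=0.6, size=10000)
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])  # generally smaller than 0.6
```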

  20. A Study of the Application of the Lognormal Distribution to Corrective Maintenance Repair Time

    Science.gov (United States)

    1979-06-01

    Naval Postgraduate School, Monterey, California. ... from the more usual procedure in which the test statistic is compared to a value which is such that the area under the distribution to its right is ... Most of the sets of data show that the lognormal distribution cannot be rejected as an adequate descriptor for corrective maintenance repair time.

  1. Approximation to Distribution of Product of Random Variables Using Orthogonal Polynomials for Lognormal Density

    CERN Document Server

    Zheng, Zhong; Hämäläinen, Jyri; Tirkkonen, Olav

    2012-01-01

    We derive a closed-form expression for the orthogonal polynomials associated with the general lognormal density. The result can be utilized to construct easily computable approximations for the probability density function of a product of random variables. As an example, we have calculated the approximate distribution for the product of correlated Nakagami-m variables. Simulations indicate that the accuracy of the proposed approximation is good.

  2. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    CERN Document Server

    Yamazaki, Dai G; Takahashi, Keitaro; 10.1103/PhysRevD.84.123006

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate the power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate the CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated by such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at k ≃ 10^{-2.5} Mpc^{-1}, with the upper limit B ≲ 3 nG.

  3. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    Science.gov (United States)

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data, including times for anesthesia providers to respond to messages, have moderate (n > 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained either by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
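    A generalized pivotal interval for a lognormal mean exp(μ + σ²/2) can be simulated directly from the sufficient statistics of the log data. The sketch below follows the standard Krishnamoorthy-Mathew construction, which is the kind of pivotal method the paper applies:

```python
import numpy as np

def lognormal_mean_ci(x, level=0.95, draws=100000, seed=9):
    """Generalized pivotal CI for exp(mu + sigma^2/2): sigma^2 pivot is
    (n-1)s^2/U with U ~ chi2(n-1); mu pivot is ybar - Z*sqrt(sig2/n)."""
    rng = np.random.default_rng(seed)
    y = np.log(np.asarray(x, dtype=float))
    n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
    z = rng.standard_normal(draws)
    u = rng.chisquare(n - 1, draws)
    sig2 = (n - 1) * s2 / u
    g = np.exp(ybar - z * np.sqrt(sig2 / n) + sig2 / 2)
    return tuple(np.quantile(g, [(1 - level) / 2, (1 + level) / 2]))

rng = np.random.default_rng(10)
print(lognormal_mean_ci(rng.lognormal(1.0, 0.8, size=25)))  # true mean ~3.74
```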

  4. On modeling of lifetime data using two-parameter Gamma and Weibull distributions

    NARCIS (Netherlands)

    Shanker, Rama; Shukla, Kamlesh Kumar; Shanker, Ravi; Leonida, Tekie Asehun

    2016-01-01

    The analysis and modeling of lifetime data are crucial in almost all applied sciences including medicine, insurance, engineering, behavioral sciences and finance, amongst others. The main objective of this paper is to have a comparative study of two-parameter gamma and Weibull distributions for mode

  5. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    Science.gov (United States)

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  6. Possible Lognormal Distribution of Fermi-LAT Data of OJ 287

    Indian Academy of Sciences (India)

    G. G. Deng; Y. Liu; J. H. Fan; H. G. Wang

    2014-09-01

    OJ 287 is a BL Lac object at redshift z = 0.306 that has shown double-peaked bursts at regular intervals of 12 yr during the last 40 yr, according to previous research. Some AGN γ-ray power densities show a white-noise process, while others show a red-noise process; some AGN fluxes present a normal or log-normal distribution. The two processes have an intrinsic relationship with the central black hole emission mechanism. We present the results of the analysis of the Fermi-LAT data and review some problems concerning the random process.

  7. Survey on Log-Normally Distributed Market-Technical Trend Data

    Directory of Open Access Journals (Sweden)

    René Brenner

    2016-07-01

    Full Text Available In this survey, a short introduction to the recent discovery of log-normally distributed market-technical trend data will be given. The results of the statistical evaluation of typical market-technical trend variables will be presented. It will be shown that the log-normal assumption fits empirical trend data better than daily returns of stock prices. This enables one to mathematically evaluate trading systems depending on such variables. In this manner, a basic approach to an anti-cyclic trading system will be given as an example.

  8. STANDARDIZED PRECIPITATION INDEX (SPI) CALCULATED WITH THE USE OF LOG-NORMAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Edward Gąsiorek

    2014-10-01

    Full Text Available The problem analyzed in this paper is a continuation of research conducted on data from the Wrocław-Swojec agro- and hydrometeorology observatory over the 1964–2009 period and published in "Infrastruktura i Ekologia Terenów Wiejskich" nr 3/III/2012, pp. 197–208. The paper concerns two methods of calculating the standardized precipitation index (SPI). The first one extracts SPI directly from the gamma distribution, since monthly precipitation sums in the 1964–2009 period in Wrocław are gamma distributed. The second method is based on transformations of the data leading to a normal distribution. The authors calculate SPI with the use of the log-normal distribution and confront it with values obtained from the gamma and normal distributions. The aim of this paper is to comparatively assess the SPI values obtained with these three different methods.
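    Under the log-normal variant, SPI is obtained by pushing each observation through the fitted CDF and then through the standard normal quantile function. A sketch; months with zero precipitation, which need separate handling, are ignored here:

```python
import numpy as np
from scipy import stats

def spi_lognormal(monthly_precip):
    """SPI = Phi^{-1}(F(x)) with F a lognormal CDF fitted to the record,
    i.e., a normal fit to the log precipitation sums."""
    p = np.asarray(monthly_precip, dtype=float)
    logs = np.log(p)
    cdf = stats.norm.cdf(logs, loc=logs.mean(), scale=logs.std(ddof=1))
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(11)
precip = rng.lognormal(3.5, 0.5, size=46)   # e.g., 46 Julys of sums in mm
print(np.round(spi_lognormal(precip), 2))
```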

  9. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study, a comparison is made of the performance of these estimators with respect to the bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution are found to be superior to the MLEs of R(t) and P.

  10. Piecewise log-normal approximation of size distributions for aerosol modelling

    Directory of Open Access Journals (Sweden)

    K. von Salzen

    2006-01-01

    Full Text Available An efficient and accurate method for the representation of particle size distributions in atmospheric models is proposed. The method can be applied, but is not necessarily restricted, to aerosol mass and number size distributions. A piecewise log-normal approximation of the number size distribution within sections of the particle size spectrum is used. Two of the free parameters of the log-normal approximation are obtained from the integrated number and mass concentration in each section. The remaining free parameter is prescribed. The method is efficient in the sense that only relatively few calculations are required for applications in atmospheric models. Applications of the method in simulations of particle growth by condensation, and simulations with a single-column model for nucleation, condensation, gravitational settling, wet deposition, and mixing are described. The results are compared with results from simulations employing single- and double-moment bin methods that are frequently used in aerosol modelling. According to these comparisons, the accuracy of the method is noticeably higher than the accuracy of the other methods.

  11. Retention for Stoploss reinsurance to minimize VaR in compound Poisson-Lognormal distribution

    Science.gov (United States)

    Soleh, Achmad Zanbar; Noviyanti, Lienda; Nurrahmawati, Irma

    2015-12-01

    Automobile insurance is one of the emerging general insurance products in Indonesia. Fluctuations in total premium revenues and total claim expenses lead to a risk that the insurance company may not be able to pay consumers' claims; thus reinsurance is needed. Reinsurance is a risk transfer mechanism from the insurance company to another company called the reinsurer; one type of reinsurance is stop-loss. Because the reinsurer charges a premium to the insurance company, it is important to determine the retention, i.e., the total claims to be retained solely by the insurance company. Thus, the retention is determined using Value at Risk (VaR), which minimizes the total risk of the insurance company in the presence of stop-loss reinsurance. The retention depends only on the distribution of total claims and the reinsurance loading factor. We use the compound Poisson distribution and the log-normal distribution to illustrate the retention value in a collective risk model.
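    The retention calculation rests on a quantile (VaR) of compound Poisson-lognormal aggregate claims, which is straightforward to estimate by simulation. A sketch with illustrative parameter values; the paper additionally optimizes over the reinsurance loading factor:

```python
import numpy as np

def aggregate_claims_var(lam, mu, sigma, level=0.995, reps=50000, seed=12):
    """Monte Carlo VaR of S = sum of N lognormal severities, N ~ Poisson(lam)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam, size=reps)
    claims = rng.lognormal(mu, sigma, size=n.sum())  # all severities, flat
    idx = np.repeat(np.arange(reps), n)              # scenario index per claim
    totals = np.bincount(idx, weights=claims, minlength=reps)
    return np.quantile(totals, level)

print(aggregate_claims_var(lam=50, mu=1.0, sigma=0.9))
```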

  12. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    Science.gov (United States)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.

  13. Thermal and log-normal distributions of plasma in laser driven Coulomb explosions of deuterium clusters

    Science.gov (United States)

    Barbarino, M.; Warrens, M.; Bonasera, A.; Lattuada, D.; Bang, W.; Quevedo, H. J.; Consoli, F.; de Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.

    2016-08-01

    In this work, we explore the possibility that the motion of the deuterium ions emitted from Coulomb cluster explosions is sufficiently disordered to resemble thermalization. We analyze the process of nuclear fusion reactions driven by laser-cluster interactions in experiments conducted at the Texas Petawatt laser facility using a mixture of D2+3He and CD4+3He cluster targets. When clusters explode by Coulomb repulsion, the emission of the energetic ions is “nearly” isotropic. In the framework of cluster Coulomb explosions, we analyze the energy distributions of the ions using a Maxwell-Boltzmann (MB) distribution, a shifted MB distribution (sMB), and the energy distribution derived from a log-normal (LN) size distribution of clusters. We show that the first two distributions reproduce well the experimentally measured ion energy distributions and the number of fusions from d-d and d-3He reactions. The LN distribution is a good representation of the ion kinetic energy distribution up to the high momenta where noise becomes dominant, but it overestimates both the neutron and the proton yields. If the parameters of the LN distribution are chosen to reproduce the fusion yields correctly, the experimentally measured high-energy ion spectrum is not well represented. We conclude that the ion kinetic energy distribution is highly disordered and practically indistinguishable from a thermalized one.

  14. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    Science.gov (United States)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but depends heavily on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export, and a recently introduced measure of countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries; b) a complete log-normal, with a wider range of volumes, for nations with an intermediate economy; and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all 148 countries through different tests, Kolmogorov-Smirnov and Cramér-von Mises, confirming that it cannot be rejected only for the countries with an intermediate economy.
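
    A brief sketch of the kind of log-normality check mentioned, using SciPy's Kolmogorov-Smirnov and Cramér-von Mises tests on synthetic stand-in data (a real analysis would use one country's export volumes). Note that estimating mu and sigma from the same sample makes the nominal p-values approximate (the Lilliefors caveat).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    volumes = rng.lognormal(mean=10.0, sigma=2.0, size=1131)  # stand-in data

    # Lognormality of x is equivalent to normality of log(x).
    log_v = np.log(volumes)
    mu, sd = log_v.mean(), log_v.std(ddof=1)

    ks = stats.kstest(log_v, "norm", args=(mu, sd))
    cvm = stats.cramervonmises(log_v, "norm", args=(mu, sd))
    print(f"KS:  D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    print(f"CvM: W = {cvm.statistic:.3f}, p = {cvm.pvalue:.3f}")
    ```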

  15. Reconstruction of probabilistic S-N curves under fatigue life following lognormal distribution with given confidence

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-xiang; YANG Bing; PENG Jia-chun

    2007-01-01

    When historical probabilistic S-N curves are given only for particular survival probability and confidence levels, and re-testing is not possible, fatigue reliability analysis cannot be performed at any other levels. More widely applicable curves are therefore desirable. Monte Carlo methods for reconstructing the test data and the curves are investigated under the assumption that fatigue life follows a lognormal distribution. To avoid the non-conservative assessments that result from the existing practice of artificially enlarging the sample size into the thousands, a simulation policy is employed that reflects actual testing practice: the sample size is kept below 20 for material specimens and below 10 for structural component specimens, and the errors in matching the statistical parameters are kept below 5 percent. The availability and feasibility of the present methods are demonstrated by reconstructing the test data and curves for 60Si2Mn high-strength spring steel used in the railway industry.

  16. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    Science.gov (United States)

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

    The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a lognormal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured M(n) and M(w) and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.

  17. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    CERN Document Server

    Burkhart, Blakesley; Murray, Claire; Stanimirovic, Snezana

    2015-01-01

    The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e. Av <1) PDF using dust tracers. In order to constrain the shape and properties of the low column density probability distribution function, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution, and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a powerlaw form. We find that the PDF of the atomic gas is narrow and at column densities larger than...

  18. Geomagnetic storms, the Dst ring-current myth and lognormal distributions

    Science.gov (United States)

    Campbell, W.H.

    1996-01-01

    The definition of geomagnetic storms dates back to the turn of the century, when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low-latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle- and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with

  19. Ultrahigh throughput plasma processing of free standing silicon nanocrystals with lognormal size distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dogan, Ilker; Kramer, Nicolaas J.; Westermann, Rene H. J.; Verheijen, Marcel A. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Dohnalova, Katerina; Gregorkiewicz, Tom [Van der Waals-Zeeman Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Smets, Arno H. M. [Photovoltaic Materials and Devices Laboratory, Delft University of Technology, P.O. Box 5031, 2600 GA Delft (Netherlands); Sanden, Mauritius C. M. van de [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Dutch Institute for Fundamental Energy Research (DIFFER), P.O. Box 1207, 3430 BE Nieuwegein (Netherlands)

    2013-04-07

    We demonstrate a method for synthesizing free standing silicon nanocrystals in an argon/silane gas mixture by using a remote expanding thermal plasma. Transmission electron microscopy and Raman spectroscopy measurements reveal that the distribution has a bimodal shape consisting of two distinct groups of small and large silicon nanocrystals with sizes in the ranges 2-10 nm and 50-120 nm, respectively. We also observe that both size distributions are lognormal, which is linked with the growth time and transport of nanocrystals in the plasma. Average size control is achieved by tuning the silane flow injected into the vessel. Analyses of morphological features show that the nanocrystals are monocrystalline and spherically shaped. These results imply that the formation of silicon nanocrystals is based on nucleation, i.e., the large nanocrystals are not the result of coalescence of small nanocrystals. Photoluminescence measurements show that the silicon nanocrystals exhibit a broad emission in the visible region peaked at 725 nm. The nanocrystals are produced with an ultrahigh throughput of about 100 mg/min and have state-of-the-art properties, such as a controlled size distribution, easy handling, and room-temperature visible photoluminescence.

  20. Remark about Transition Probabilities Calculation for Single Server Queues with Lognormal Inter-Arrival or Service Time Distributions

    Science.gov (United States)

    Lee, Moon Ho; Dudin, Alexander; Shaban, Alexy; Pokhrel, Subash Shree; Ma, Wen Ping

    Formulae required for accurate approximate calculation of transition probabilities of embedded Markov chain for single-server queues of the GI/M/1, GI/M/1/K, M/G/1, M/G/1/K type with heavy-tail lognormal distribution of inter-arrival or service time are given.
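    The record does not reproduce the paper's approximation formulae, but the standard GI/M/1 construction they would feed into can be sketched: the embedded chain seen by arrivals has a geometric stationary queue length with parameter sigma solving sigma = A*(mu(1 - sigma)), where A* is the Laplace-Stieltjes transform of the inter-arrival distribution. For lognormal inter-arrival times the transform has no closed form, so this hedged sketch evaluates it by quadrature; all parameter values are illustrative and this is a textbook construction, not the authors' formulae.

    ```python
    import numpy as np
    from scipy import integrate, optimize

    def lognormal_lst(s, mu, sigma):
        """Laplace-Stieltjes transform E[exp(-s*T)] of T ~ LogNormal(mu, sigma),
        evaluated by quadrature over the underlying normal variable
        (no closed form exists for the lognormal transform)."""
        f = lambda z: np.exp(-s * np.exp(mu + sigma * z) - 0.5 * z * z) / np.sqrt(2.0 * np.pi)
        val, _ = integrate.quad(f, -np.inf, np.inf)
        return val

    def gim1_geometric_parameter(mu_serv, mu_ln, sigma_ln):
        """For a stable GI/M/1 queue with exponential service rate mu_serv,
        the arrival-epoch queue length is Geometric(sigma), where sigma
        solves sigma = LST(mu_serv * (1 - sigma))."""
        g = lambda x: lognormal_lst(mu_serv * (1.0 - x), mu_ln, sigma_ln) - x
        return optimize.brentq(g, 1e-6, 1.0 - 1e-6)

    # Lognormal inter-arrivals with E[T] = exp(mu + sigma^2/2) = 1; service rate 1.2
    print(gim1_geometric_parameter(mu_serv=1.2, mu_ln=-0.5, sigma_ln=1.0))
    ```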

  1. The Hum: log-normal distribution and planetary-solar resonance

    Science.gov (United States)

    Tattersall, R.

    2013-12-01

    Observations of solar and planetary orbits, rotations, and diameters show that these attributes are related by simple ratios. The forces of gravity and magnetism, and the evident principles of energy conservation, entropy, power laws, and the log-normal distribution, are discussed in relation to the distribution of the planets with respect to time in the solar system. This discussion is informed by consideration of the periodicities of interactions, as well as the regularity and periodicity of fluctuations in proxy records which indicate solar variation. It is demonstrated that a simple model based on planetary interaction frequencies can replicate well the timing and general shape of solar variation over the period of the sunspot record. Finally, an explanation is offered for the high degree of stable organisation and the correlation with cyclic solar variability observed in the solar system. The interaction of the forces of gravity and magnetism along with the thermodynamic principles acting on the planets may be analogous to those generating the internal dynamics of the Sun. This possibility could help account for the existence of strong correlations between orbital dynamics and solar variation for which a sufficiently powerful physical mechanism has yet to be fully demonstrated.

  2. Inference of bioequivalence for log-normal distributed data with unspecified variances.

    Science.gov (United States)

    Xu, Siyan; Hua, Steven Y; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B

    2014-07-30

    Two drugs are bioequivalent if the ratio of a pharmacokinetic (PK) parameter of the two products falls within equivalence margins. The distribution of PK parameters is often assumed to be log-normal, so bioequivalence (BE) is usually assessed on the difference of logarithmically transformed PK parameters (δ). In the presence of unspecified variances, test procedures such as the two one-sided tests (TOST) use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit our knowledge of the extent to which inference about BE is affected by the variability of PK parameters. In this paper, we propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for δ. Demonstrated with published real-life data, the proposed method not only produces results that are the same as TOST and comparable with the Bayesian method but also helps identify ranges of variances which could make the determination of BE more achievable. Our findings manifest the advantages of the proposed method in making inference about the extent to which BE is affected by the unspecified variances, which cannot be accomplished by either TOST or the Bayesian method.
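
    For contrast with the likelihood-partition approach, here is a minimal sketch of the TOST procedure the abstract refers to, applied to log-transformed PK parameters in a two-group parallel design with pooled variance. The 0.80-1.25 margins are the conventional ones; the data, sample sizes, and design are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from scipy import stats

    def tost_lognormal(test, ref, lo=0.80, hi=1.25, alpha=0.05):
        """Two one-sided tests (TOST) on log-transformed PK parameters.
        BE is concluded when both one-sided p-values are < alpha,
        i.e. when the 90% CI for the geometric mean ratio lies in [lo, hi]."""
        x, y = np.log(np.asarray(test)), np.log(np.asarray(ref))
        nx, ny = len(x), len(y)
        df = nx + ny - 2
        sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / df
        se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        d = x.mean() - y.mean()                        # estimate of delta
        p_lo = stats.t.sf((d - np.log(lo)) / se, df)   # H0: ratio <= lo
        p_hi = stats.t.cdf((d - np.log(hi)) / se, df)  # H0: ratio >= hi
        ci = np.exp(d + se * stats.t.ppf([alpha, 1.0 - alpha], df))
        return np.exp(d), ci, max(p_lo, p_hi)

    rng = np.random.default_rng(1)
    gmr, ci, p = tost_lognormal(rng.lognormal(0.00, 0.25, 24),
                                rng.lognormal(0.05, 0.25, 24))
    print(f"GMR = {gmr:.3f}, 90% CI = ({ci[0]:.3f}, {ci[1]:.3f}), TOST p = {p:.4f}")
    ```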

  3. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q < 1) or large (when q > 1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  4. LogCauchy, log-sech and lognormal distributions of species abundances in forest communities

    Science.gov (United States)

    Yin, Z.-Y.; Peng, S.-L.; Ren, H.; Guo, Q.; Chen, Z.-H.

    2005-01-01

    Species-abundance (SA) pattern is one of the most fundamental aspects of biological community structure, providing important information regarding species richness, species-area relations and succession. To better describe the SA distribution (SAD) in a community, based on the widely used lognormal (LN) distribution model with exp(−x²) roll-off on Preston's octave scale, this study proposed two additional models, logCauchy (LC) and log-sech (LS), with roll-offs of simple x⁻² and e⁻ˣ, respectively. The estimation of the theoretical total number of species in the whole community, S*, including very rare species not yet collected in samples, was derived from the left-truncation of each distribution. We fitted these three models by Levenberg-Marquardt nonlinear regression and measured the fit of each model to the data using the coefficient of determination of the regression, t-tests of the parameters and the Kolmogorov-Smirnov (KS) test of the distribution. Examining the SA data from six forest communities (five in the lower subtropics and one in the tropics), we found that: (1) on a log scale, all three models, which are bell-shaped and left-truncated, statistically adequately fitted the observed SADs, and the LC and LS did better than the LN; (2) for each model and each community, the S* values estimated by the integral and summation methods were almost equal, allowing us to estimate S* using a simple integral formula and to estimate its asymptotic confidence intervals by regression of a transformed model containing it; (3) in the order LC, LS, LN, the fitted distributions became lower in the peak, less concave in the side, and shorter in the tail, and overall the LC tended to overestimate, the LN tended to underestimate, and the LS was intermediate but slightly tended to underestimate, the observed SADs (particularly the number of common species in the right tail); (4) the six communities had some similar structural properties such as following similar distribution models, having a common

  5. The Lognormal Probability Distribution Function of the Perseus Molecular Cloud: A Comparison of HI and Dust

    Science.gov (United States)

    Burkhart, Blakesley; Lee, Min-Young; Murray, Claire E.; Stanimirović, Snezana

    2015-10-01

    The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., {A}V\\lt 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density PDF, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow, and at column densities larger than the HI-H2 transition, the HI rapidly depletes, suggesting that the HI PDF may be used to find the HI-H2 transition column density. We also calculate the sonic Mach number of the atomic gas by using HI absorption line data, which yield a median value of Ms = 4.0 for the CNM, while the HI emission PDF, which traces both the WNM and CNM, has a width more consistent with transonic turbulence.

  6. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
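
    The hazard-rate-twisting estimator itself is not given in the record; as a baseline, here is the crude Monte Carlo CCDF estimator the paper improves upon, which makes the rare-event problem visible through its growing relative error as gamma increases. Parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ccdf_sum_lognormals_mc(mus, sigmas, gamma, n_sim=10**6):
        """Crude Monte Carlo estimate of P(sum_i X_i > gamma) for independent,
        non-identically distributed X_i ~ LogNormal(mu_i, sigma_i).
        This naive baseline is what importance sampling is designed to beat:
        its relative error blows up as the event becomes rarer."""
        mus, sigmas = np.asarray(mus), np.asarray(sigmas)
        z = rng.standard_normal((n_sim, len(mus)))
        s = np.exp(mus + sigmas * z).sum(axis=1)
        hits = s > gamma
        p_hat = hits.mean()
        rel_err = hits.std(ddof=1) / np.sqrt(n_sim) / p_hat if p_hat > 0 else np.inf
        return p_hat, rel_err

    print(ccdf_sum_lognormals_mc(mus=[0.0, 0.5, 1.0],
                                 sigmas=[1.0, 0.8, 1.2], gamma=50.0))
    ```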

  7. Upper Bound of the Generalized p Value for the Population Variances of Lognormal Distributions with Known Coefficients of Variation

    Directory of Open Access Journals (Sweden)

    Rada Somkhuean

    2017-01-01

    Full Text Available This paper presents an upper bound for each of the generalized p values for testing a single population variance, the difference between two population variances, and the ratio of population variances for lognormal distributions when the coefficients of variation are known. For each of the proposed generalized p values, we derive a closed-form expression for the upper bound of the generalized p value. Numerical computations illustrate the theoretical results.

  8. Lognormal distribution of firing time and rate from a single neuron?

    CERN Document Server

    Kish, Eszter A; Der, Andras; Kish, Laszlo B

    2014-01-01

    Even a single neuron may be able to produce significant lognormal features in its firing statistics due to noise in the charging ion current. A mathematical scheme introduced in advanced nanotechnology is relevant for the analysis of this mechanism in the simplest case, the integrate-and-fire model with white noise in the charging ion current.

  9. Are human interactivity times lognormal?

    CERN Document Server

    Blenn, Norbert

    2016-01-01

    In this paper, we are analyzing the interactivity time, defined as the duration between two consecutive tasks such as sending emails, collecting friends and followers and writing comments in online social networks (OSNs). The distributions of these times are heavy tailed and often described by a power-law distribution. However, power-law distributions usually only fit the heavy tail of empirical data and ignore the information in the smaller value range. Here, we argue that the durations between writing emails or comments, adding friends and receiving followers are likely to follow a lognormal distribution. We discuss the similarities between power-law and lognormal distributions, show that binning of data can deform a lognormal to a power-law distribution and propose an explanation for the appearance of lognormal interactivity times. The historical debate of similarities between lognormal and power-law distributions is reviewed by illustrating the resemblance of measurements in this paper with the historical...

  10. The Razor’s Edge of Collapse: The Transition Point from Lognormal to Power-Law Distributions in Molecular Clouds

    Science.gov (United States)

    Burkhart, Blakesley; Stalpes, Kye; Collins, David C.

    2017-01-01

    We derive an analytic expression for the transitional column density value (η_t) between the lognormal and power-law forms of the probability distribution function (PDF) in star-forming molecular clouds. Our expression for η_t depends on the mean column density, the variance of the lognormal portion of the PDF, and the slope of the power-law portion of the PDF. We show that η_t can be related to physical quantities such as the sonic Mach number of the flow and the power-law index for a self-gravitating isothermal sphere. This implies that the transition point between the lognormal and power-law density/column density PDF represents the critical density where turbulent and thermal pressure balance, the so-called “post-shock density.” We test our analytic prediction for the transition column density using dust PDF observations reported in the literature, as well as numerical MHD simulations of self-gravitating supersonic turbulence with the Enzo code. We find excellent agreement between the analytic η_t and the measured values from the numerical simulations and observations (to within 1.2 A_V). We discuss the utility of our expression for determining the properties of the PDF from unresolved low-density material in dust observations, for estimating the post-shock density, and for determining the HI–H2 transition in clouds.

  11. Modelling the Skinner Thesis: Consequences of a Lognormal or a Bimodal Resource Base Distribution

    NARCIS (Netherlands)

    Auping, W.L.

    2014-01-01

    The copper case is often used as an example in resource depletion studies. Despite these studies, several profound uncertainties remain in the system. One of these uncertainties is the distribution of copper grades in the lithosphere. The Skinner thesis promotes the idea that copper grades may be

  13. Log-normal spray drop distribution...analyzed by two new computer programs

    Science.gov (United States)

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  14. Lognormal Infection Times of Online Information Spread

    CERN Document Server

    Doerr, Christian; Van Mieghem, Piet

    2013-01-01

    The infection times of individuals in online information spread, such as the inter-arrival time of Twitter messages or the propagation time of news stories on a social media site, can be explained through a convolution of lognormally distributed observation and reaction times of the individual participants. Experimental measurements support the lognormal shape of the individual contributing processes and resemble previously reported lognormal distributions of human behavior and contagious processes.

  15. Study on damages constitutive model of rocks based on lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    LI Shu-chun; XU Jiang; TAO Yun-qi; TANG Xiao-jun

    2007-01-01

    The damage constitutive relation for the entire rock failure process was established using the theory of representative volume elements obeying the lognormal distribution law, and an integrated damage constitutive model of rock under triaxial compression was established. Comparison with triaxial compression test results shows that this model correctly reflects the stress-strain relationship. At the same time, based on the principle that rock fatigue failure conforms completely to the static whole-process curve, a new method of establishing a cyclic fatigue damage evolution equation was discussed; the method is simple in form, has a clear physical meaning, and connects well with the damage relations of the static whole-process curve of rock.

  16. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    Science.gov (United States)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
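
    The paper's modified moment matching with an asymptotic-series correction is not reproduced here; as a reference point, the classical Fenton-Wilkinson variant matches the exact mean and variance of the weighted sum to a single lognormal. A hedged sketch, with illustrative weights and parameters:

    ```python
    import numpy as np

    def fenton_wilkinson(weights, mus, sigmas):
        """Approximate S = sum_i w_i * LogNormal(mu_i, sigma_i) (independent)
        by a single LogNormal(mu_S, sigma_S) matching E[S] and Var[S]."""
        w, mu, sg = map(np.asarray, (weights, mus, sigmas))
        m1 = np.sum(w * np.exp(mu + 0.5 * sg**2))                        # E[S]
        v = np.sum(w**2 * np.exp(2 * mu + sg**2) * (np.exp(sg**2) - 1))  # Var[S]
        sigma_S = np.sqrt(np.log(1.0 + v / m1**2))
        mu_S = np.log(m1) - 0.5 * sigma_S**2
        return mu_S, sigma_S

    print(fenton_wilkinson([0.5, 0.3, 0.2], [0.0, 0.1, -0.2], [0.3, 0.4, 0.2]))
    ```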

  17. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  18. A Study of the Application of the Lognormal and Gamma Distributions to Corrective Maintenance Repair Time Data.

    Science.gov (United States)

    1982-10-01

    [OCR fragment from the report: a table of systems/equipments analyzed, including set no. 2, French Fessenheim pump repair times (sample size N = 43), and set no. 3, a condenser extraction pump. The legible text states that the plots support the lognormal assumption, while the gamma family does not represent the data sets well.]

  19. The reliability assessment of the electromagnetic valve of high-speed electric multiple units braking system based on two-parameter exponential distribution

    Directory of Open Access Journals (Sweden)

    Jianwei Yang

    2016-06-01

    Full Text Available In order to address the reliability assessment of a braking system component of high-speed electric multiple units, this article, based on the two-parameter exponential distribution, provides maximum likelihood and Bayes estimation under a type-I life test. First, we evaluate the failure probability value according to the classical estimation method and then obtain the maximum likelihood estimates of the parameters of the two-parameter exponential distribution by using the modified likelihood function. On the other hand, based on Bayesian theory, this article selects the beta and gamma distributions as the prior distributions, combines them with the modified maximum likelihood function, and applies a Markov chain Monte Carlo algorithm to parameter assessment based on the Bayes estimation method for the two-parameter exponential distribution, so that two reliability mathematical models of the electromagnetic valve are obtained. Finally, through the type-I life test, the failure rates according to maximum likelihood estimation and to the Bayes estimation method based on the Markov chain Monte Carlo algorithm are 2.650 × 10⁻⁵ and 3.037 × 10⁻⁵, respectively. Compared with the observed failure rate of the electromagnetic valve, 3.005 × 10⁻⁵, this shows that the Bayes method can use a Markov chain Monte Carlo algorithm to estimate reliability for the two-parameter exponential distribution, and that the Bayes estimate is closer to the value for the electromagnetic valve. Thus, by fully integrating multi-source information, the Bayes estimation method can better modify and more precisely estimate the parameters, which can provide a theoretical basis for the safe operation of high-speed electric multiple units.
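
    As a small illustration of the classical side of this comparison, here is a standard type-I-censored maximum likelihood estimator for the two-parameter exponential; the paper's modified likelihood and its MCMC-based Bayes estimator are not reproduced. Data values are illustrative.

    ```python
    import numpy as np

    def two_param_exp_mle_type1(failures, n_total, tau):
        """MLE for the two-parameter exponential
        f(x) = (1/theta) * exp(-(x - gamma)/theta), x >= gamma,
        under a type-I censored life test: n_total units run until time tau,
        with the r observed failure times in `failures` (all <= tau).

        gamma_hat = smallest observed failure time;
        theta_hat = [sum(x_i - gamma_hat) + (n - r)(tau - gamma_hat)] / r.
        """
        x = np.sort(np.asarray(failures, dtype=float))
        r = len(x)
        gamma_hat = x[0]
        theta_hat = (np.sum(x - gamma_hat) + (n_total - r) * (tau - gamma_hat)) / r
        return gamma_hat, theta_hat, 1.0 / theta_hat  # location, scale, failure rate

    fails = [120.0, 340.0, 610.0, 980.0]  # hours, illustrative
    print(two_param_exp_mle_type1(fails, n_total=20, tau=1000.0))
    ```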

  20. Beyond lognormal inequality: The Lorenz Flow Structure

    Science.gov (United States)

    Eliazar, Iddo

    2016-11-01

    Observed from a socioeconomic perspective, the intrinsic inequality of the lognormal law happens to manifest a flow generated by an underlying ordinary differential equation. In this paper we extend this feature of the lognormal law to a general "Lorenz Flow Structure" of Lorenz curves, objects that quantify socioeconomic inequality. The Lorenz Flow Structure establishes a general framework of size distributions that span continuous spectra of socioeconomic states ranging from the pure-communism extreme to the absolute-monarchy extreme. This study introduces and explores the Lorenz Flow Structure, analyzes its statistical and inequality properties, unveils the unique role of the lognormal law within this general structure, and presents various examples of this general structure. Beyond the lognormal law, the examples include the inverse-Pareto and Pareto laws, which often govern the tails of composite size distributions.
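
    The lognormal case that anchors this construction has a closed-form Lorenz curve, L(p) = Φ(Φ⁻¹(p) − σ), with Gini coefficient 2Φ(σ/√2) − 1, depending on σ only. A short sketch of this standard result (the Lorenz Flow Structure itself is not reproduced here):

    ```python
    import numpy as np
    from scipy.stats import norm

    def lorenz_lognormal(p, sigma):
        """Lorenz curve of a LogNormal(mu, sigma): L(p) = Phi(Phi^-1(p) - sigma).
        Inequality depends on sigma only; the scale parameter mu drops out."""
        return norm.cdf(norm.ppf(p) - sigma)

    def gini_lognormal(sigma):
        """Gini coefficient of a lognormal: G = 2 * Phi(sigma / sqrt(2)) - 1."""
        return 2.0 * norm.cdf(sigma / np.sqrt(2.0)) - 1.0

    p = np.linspace(0.01, 0.99, 5)
    print(lorenz_lognormal(p, sigma=1.0))
    print(gini_lognormal(1.0))  # ~0.52
    ```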

  1. Sequential Testing of Hypotheses Concerning the Reliability of a System Modeled by a Two-Parameter Weibull Distribution.

    Science.gov (United States)

    1981-12-01

    [OCR fragment from the thesis: formulae for the variance of point estimators, both biased and unbiased, are given by Mendenhall and Scheaffer (Ref 17:269); the fragment also cites an Air Force Institute of Technology thesis (Wright-Patterson AFB, Ohio, December 1980) on the Weibull distribution.]

  2. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    Science.gov (United States)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the world's two thousand leading or strongest publicly traded companies (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  3. Effects of Initial Values and Convergence Criterion in the Two-Parameter Logistic Model When Estimating the Latent Distribution in BILOG-MG 3.

    Directory of Open Access Journals (Sweden)

    Ingo W Nader

    Full Text Available Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT, but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.

  4. Zipf's law and log-normal distributions in measures of scientific output across fields and institutions: 40 years of Slovenia's research as an example

    CERN Document Server

    Perc, Matjaz

    2010-01-01

    Slovenia's Current Research Information System (SICRIS) currently hosts 86,443 publications with citation data from 8,359 researchers working on the whole plethora of social and natural sciences from 1970 till present. Using these data, we show that the citation distributions derived from individual publications have Zipfian properties in that they can be fitted by a power law $P(x) \\sim x^{-\\alpha}$, with $\\alpha$ between 2.4 and 3.1 depending on the institution and field of research. Distributions of indexes that quantify the success of researchers rather than individual publications, on the other hand, cannot be associated with a power law. We find that for Egghe's g-index and Hirsch's h-index the log-normal form $P(x) \\sim \\exp[-a\\ln x -b(\\ln x)^2]$ applies best, with $a$ and $b$ depending moderately on the underlying set of researchers. In special cases, particularly for institutions with a strongly hierarchical constitution and research fields with high self-citation rates, exponential distributions can...

  5. Improving lognormal models for cosmological fields

    CERN Document Server

    Xavier, Henrique S; Joachimi, Benjamin

    2016-01-01

    It is common practice in cosmology to model large-scale structure observables as lognormal random fields, and this approach has been successfully applied in the past to the matter density and weak lensing convergence fields separately. We argue that this approach has fundamental limitations which prevent its use for jointly modelling these two fields, since the shape of the lognormal distribution can prevent certain correlations from being attainable. Given the need of ongoing and future large-scale structure surveys for fast joint simulations of clustering and weak lensing, we propose two ways of overcoming these limitations. The first approach slightly distorts the power spectra of the fields using one of two algorithms that minimises either the absolute or the fractional distortions. The second one is by obtaining more accurate convergence marginal distributions, for which we provide a fitting function, by integrating the lognormal density along the line of sight. The latter approach also provides a way to determine ...

  6. Methane emission rates from the Arctic coastal tundra at Barrow are log-normally distributed: Is this a tail that wags climate?

    Science.gov (United States)

    von Fischer, J. C.; Rhew, R.

    2008-12-01

    Over the past two growing seasons, we have conducted >200 point measurements of methane emission and ecosystem respiration rates on the Arctic coastal tundra within the Barrow Environmental Observatory. These measures reveal that methane emission rates are log-normally distributed, but ecosystem respiration rates are normally distributed. The contrast in frequency distributions indicates that methane and carbon dioxide emission rates respond in qualitatively different ways to their environmental drivers: while ecosystem respiration rates rise linearly with increasing temperature and soil moisture, methane emissions increase exponentially. Thus, the long positive tail in methane emission rates generates a positive feedback on climate change that is strongly non-linear. To further evaluate this response, we examined the spatial statistics of our dataset and conducted additional measurements of carbon flux from points on the landscape that typically had the highest rates of methane emission. The spatial analysis showed that neither ecosystem respiration nor methane emission rates have spatial co-correlation beyond that predicted by macroscopic properties of vegetation (e.g., species composition, plant height) and soil (e.g., permafrost depth, temperature, water content), suggesting that our findings can be used to scale up. Our analysis of high-emission points focused on wet and flooded areas where Carex aquatilis growth was greatest. Here, we found variation in methane emission rates to be correlated with Carex aboveground biomass and rates of gross primary production, but not with ecosystem respiration. Given the sensitivity of Carex's phenotype to inundation, permafrost depth and soil temperature, we anticipate that the magnitude of the climate-methane feedback in the Arctic coastal plain will depend strongly on how permafrost thaw alters the ecology of Carex aquatilis.

  7. Lognormal Approximation of Complex Path-Dependent Pension Scheme Payoffs

    DEFF Research Database (Denmark)

    Jørgensen, Peter Løchte

    2007-01-01

    properties are accurately approximated by a suitably adapted lognormal distribution. The quality of the lognormal approximation is explored via a range of simulation-based numerical experiments, and we point to several other potential practical applications of the paper's theoretical results....

  8. Sequential compliance test method for lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    邓清; 袁宏杰

    2012-01-01

    Drawing on the sequential verification test programmes developed for the exponential distribution, a method for constructing sequential verification test programmes for the lognormal distribution, with mean life as the index, is discussed. The test procedure for the sequential test is provided, and the upper limits of the producer's and consumer's risks under censoring are studied. Based on the sampling methods used in engineering practice, a computer simulation method is proposed to evaluate the resulting test programmes. The evaluation results indicate that, provided the requirements on sample size and censoring number are met, the proposed sequential verification test method can control both parties' risks, with the consumer's risk lower than the expected value.

  9. A lognormal model for response times on test items

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    A lognormal model for the response times of a person on a set of test items is investigated. The model has a parameter structure analogous to the two-parameter logistic response models in item response theory, with a parameter for the speed of each person as well as parameters for the time intensity
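
    A hedged simulation sketch of this parameter structure, with a speed parameter per person and time-intensity and discrimination parameters per item, so that ln T_ij is normal with mean beta_i − tau_j and standard deviation 1/alpha_i. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_response_times(tau, alpha, beta):
        """Simulate the lognormal response-time model:
        ln T_ij = beta_i - tau_j + eps, with eps ~ N(0, 1/alpha_i^2).
        tau_j: speed of person j; beta_i: time intensity of item i;
        alpha_i: discrimination (inverse residual spread) of item i."""
        tau, alpha, beta = map(np.asarray, (tau, alpha, beta))
        eps = rng.standard_normal((len(beta), len(tau))) / alpha[:, None]
        return np.exp(beta[:, None] - tau[None, :] + eps)

    T = simulate_response_times(tau=[-0.5, 0.0, 0.5],  # three persons
                                alpha=[1.5, 2.0],      # two items
                                beta=[3.0, 3.5])
    print(T.shape, T.round(1))
    ```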

  10. Methodology for lognormal modelling of malignant pleural mesothelioma survival time distributions: a study of 5580 case histories from Europe and USA

    Science.gov (United States)

    Mould, Richard F.; Lahanas, Michael; Asselain, Bernard; Brewster, David; Burgers, Sjaak A.; Damhuis, Ronald A. M.; DeRycke, Yann; Gennaro, Valerio; Szeszenia-Dabrowska, Neonila

    2004-09-01

    A truncated left-censored and right-censored lognormal model has been validated for representing pleural mesothelioma survival times in the range 5-200 weeks for data subsets grouped by age for males, 40-49, 50-59, 60-69, 70-79 and 80+ years and for all ages combined for females. The cases available for study were from Europe and USA and totalled 5580. This is larger than any other pleural mesothelioma cohort accrued for study. The methodology describes the computation of reference baseline probabilities, 5-200 weeks, which can be used in clinical trials to assess results of future promising treatment methods. This study is an extension of previous lognormal modelling by Mould et al (2002 Phys. Med. Biol. 47 3893-924) to predict long-term cancer survival from short-term data where the proportion cured is denoted by C and the uncured proportion, which can be represented by a lognormal, by (1 - C). Pleural mesothelioma is a special case when C = 0.

  11. The Sum and Difference of Two Lognormal Random Variables

    Directory of Open Access Journals (Sweden)

    C. F. Lo

    2012-01-01

    Full Text Available We have presented a new unified approach to model the dynamics of both the sum and difference of two correlated lognormal stochastic variables. By the Lie-Trotter operator splitting method, both the sum and difference are shown to follow a shifted lognormal stochastic process, and approximate probability distributions are determined in closed form. Illustrative numerical examples are presented to demonstrate the validity and accuracy of these approximate distributions. In terms of the approximate probability distributions, we have also obtained an analytical series expansion of the exact solutions, which can allow us to improve the approximation in a systematic manner. Moreover, we believe that this new approach can be extended to study both (1) the algebraic sum of N lognormals, and (2) the sum and difference of other correlated stochastic processes, for example, two correlated CEV processes, two correlated CIR processes, and two correlated lognormal processes with mean-reversion.

  12. Power law behaviors in natural and social phenomena and the double Pareto lognormal distribution

    Institute of Scientific and Technical Information of China (English)

    方正; 王杰

    2011-01-01

    Power law behaviors are ubiquitous in natural and social phenomena. How to accurately describe such behaviors and provide reasonable explanations of why they occur, however, has long been an open problem. The double Pareto lognormal distribution offers, from the stochastic process point of view, a viable approach to this problem. This article elaborates the mathematical concept of the double Pareto lognormal distribution and provides an overview of natural and social phenomena that exhibit such a distribution. These include the number of friends in social networks, Internet file sizes, stock market returns, wealth possessions in human societies, human settlement sizes, oil field reserves, and areas burnt by forest wildfires.

  13. Lognormal Approximation of Complex Path-dependent Pension Scheme Payoffs

    DEFF Research Database (Denmark)

    Jørgensen, Peter Løchte

    This paper analyzes an explicit return smoothing mechanism which has recently been introduced as part of a new type of pension savings contract that has been offered by Danish life insurers. We establish the payoff function implied by the return smoothing mechanism and show that its probabilistic properties are accurately approximated by a suitably adapted lognormal distribution. The quality of the lognormal approximation is explored via a range of simulation-based numerical experiments, and we point to several other potential practical applications of the paper's theoretical results.

  14. Can Self Organized Critical Accretion Disks Generate a Log-normal Emission Variability in AGN?

    CERN Document Server

    Kunjaya, Chatief; Vierdayanti, Kiki; Herlie, Stefani

    2011-01-01

    Active Galactic Nuclei (AGN), such as Seyfert galaxies, quasars, etc., show light variations in all wavelength bands, with various amplitudes and on many time scales. The variations usually look erratic, neither periodic nor purely random. Many of these objects also show a lognormal flux distribution, an RMS-flux relation, and a power law frequency distribution. So far, the lognormal flux distribution of black hole objects has been only an observational fact without a satisfactory explanation of the physical mechanism producing such a distribution in the accretion disk. One of the most promising models, based on a cellular automaton mechanism, has been successful in reproducing the PSD (Power Spectral Density) of the observed objects but could not reproduce the lognormal flux distribution. Such a distribution requires the existence of an underlying multiplicative process, while the existing SOC models are based on additive processes. A modified SOC model based on a cellular automaton mechanism for producing a lognormal flux distribution is present...

  15. The Distribution of the Asymptotic Number of Citations to Sets of Publications by a Researcher or From an Academic Department Are Consistent With a Discrete Lognormal Model

    CERN Document Server

    Moreira, João A G; Amaral, Luís A Nunes

    2015-01-01

    How to quantify the impact of a researcher's or an institution's body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, have become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator to quantify the scientific impact of both individual researchers and research institutions grounded on the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as "very high research activity". Our approach has three distinct advantages. First, it accurately captures the overall scien...

  16. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    Science.gov (United States)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ]...

  17. Modelling diameter distributions of Quercus suber L. stands in “Los Alcornocales” Natural Park (Cádiz-Málaga, Spain) by using the two-parameter Weibull function

    Directory of Open Access Journals (Sweden)

    A. Calzado

    2013-04-01

    Full Text Available Aim of study: The aim of this work was to model the diameter distributions of Quercus suber stands. The ultimate goal was to construct models enabling the development of more affordable forest inventory methods. This is the first study of this type on cork oak forests in the area. Area of study: The area of study is “Los Alcornocales” Natural Park (Cádiz-Málaga, Spain). Material and methods: The diameter distributions of 100 permanent plots were modelled with the two-parameter Weibull function. Distribution parameters were fitted with the non-linear regression, maximum likelihood, moment and percentile-based methods. Goodness of fit with the different methods was compared in terms of the number of plots rejected by the Kolmogorov-Smirnov test, bias, mean square error and mean absolute error. The scale and shape parameters of the Weibull function were related to stand variables by using the parameter prediction model. Main results: The best fit was obtained with the non-linear regression approach, using as initial values those obtained by the maximum likelihood method; the percentage of rejections by the Kolmogorov-Smirnov test was 2% of the total number of cases. The scale parameter (b) was successfully modelled in terms of the quadratic mean diameter under cork (R² adj = 0.99). The shape parameter (c) was modelled by using maximum diameter, minimum diameter and plot elevation (R² adj = 0.40). Research highlights: The proposed diameter distribution model can be a highly useful tool for the inventorying and management of cork oak forests. Key words: maximum likelihood method; moment method; non-linear regression approach; parameter prediction model; percentile method; scale parameter; shape parameter.
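
    A minimal sketch of fitting the two-parameter Weibull to one plot's diameter sample by maximum likelihood and checking it with the Kolmogorov-Smirnov test, as the study does across 100 plots. The data here are synthetic stand-ins, the location is fixed at an assumed minimum inventory diameter, and the KS p-value is approximate because the parameters are estimated from the same sample.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Stand-in diameters (cm); a real plot would supply measured DBHs.
    d = rng.weibull(2.2, 150) * 25.0 + 7.5  # shifted by an assumed 7.5 cm threshold

    # Two-parameter Weibull: fix the location at the inventory threshold
    # and estimate the shape (c) and scale (b) by maximum likelihood.
    c, loc, b = stats.weibull_min.fit(d, floc=7.5)
    print(f"shape c = {c:.2f}, scale b = {b:.2f}")

    # Kolmogorov-Smirnov check of the fitted distribution, as in the paper.
    ks = stats.kstest(d, "weibull_min", args=(c, loc, b))
    print(f"KS D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```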

  18. Log-normality of indoor radon data in the Walloon region of Belgium.

    Science.gov (United States)

    Cinelli, Giorgia; Tondeur, François

    2015-05-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but this low-C trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data, when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail for which there is no interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a "fat tail". With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups.

  19. The Probability Density Functions to Diameter Distributions for Scots Pine Oriental Beech and Mixed Stands

    Directory of Open Access Journals (Sweden)

    Aydın Kahriman

    2011-11-01

    Full Text Available Determining the diameter distribution of a stand and its relations with stand age, site index, density and mixture percentage is very important both biologically and economically. The two-parameter Weibull, three-parameter Weibull, two-parameter Gamma, three-parameter Gamma, Beta, two-parameter Lognormal, three-parameter Lognormal, Normal and Johnson SB probability density functions were used to determine the diameter distributions. This study aimed to compare the performance of these functions in describing different diameter distributions and to identify the most successful function. The data were obtained from 162 temporary sample plots measured in Scots pine and Oriental beech mixed stands in the Black Sea Region. The results show that the four-parameter Johnson SB function is the most successful in describing the diameter distributions of both Scots pine and Oriental beech, based on error index values calculated from the differences between observed and predicted diameter distributions.

  20. The lognormal handwriter: learning, performing and declining.

    Directory of Open Access Journals (Sweden)

    Réjean ePlamondon

    2013-12-01

    Full Text Available The generation of handwriting is a complex neuromotor skill requiring the interaction of many cognitive processes. It aims at producing a message to be imprinted as an ink trace left on a writing medium. The generated trajectory of the pen tip is made up of strokes superimposed over time. The Kinematic Theory of rapid human movements and its family of lognormal models provide analytical representations of these strokes, often considered as the basic unit of handwriting. This paradigm has not only been experimentally confirmed in numerous predictive and physiologically significant tests but it has also been shown to be the ideal mathematical description for the impulse response of a neuromuscular system. This latter demonstration suggests that the lognormality of the velocity patterns can be interpreted as reflecting the behaviour of subjects who are in perfect control of their movements. To illustrate this interpretation, we present a short overview of the main concepts behind the Kinematic Theory and briefly describe how its models can be exploited, using various software tools, to investigate these ideal lognormal behaviors. We emphasize that the parameters extracted during various tasks can be used to analyze some underlying processes associated with their realization. To investigate the operational convergence hypothesis, we report on two original studies. First, we focus on the early steps of the motor learning process as seen as a converging behaviour toward the production of more precise lognormal patterns as young children practicing handwriting start to become more fluent writers. Second, we illustrate how aging affects handwriting by pointing out the increasing departure from the ideal lognormal behaviour as the control of the fine motricity begins to decline. Overall, the paper highlights this developmental process of merging toward a lognormal behaviour with learning, mastering this behaviour to succeed in performing a given task

  1. Can self-organized critical accretion disks generate a log-normal emission variability in AGN?

    Science.gov (United States)

    Kunjaya, C.; Mahasena, P.; Vierdayanti, K.; Herlie, S.

    2011-12-01

    Active Galactic Nuclei (AGN), such as Seyfert galaxies, quasars, etc., show light variations in all wavelength bands, with various amplitudes and on many time scales. The variations usually look erratic, neither periodic nor purely random. Many of these objects also show a lognormal flux distribution, an RMS-flux relation and a power-law frequency distribution. So far, the lognormal flux distribution of black hole objects has been an observational fact without a satisfactory explanation of the physical mechanism producing such a distribution in the accretion disk. One of the most promising models, based on a cellular automaton mechanism, has been successful in reproducing the power spectral density (PSD) of the observed objects but could not reproduce a lognormal flux distribution. Such a distribution requires the existence of an underlying multiplicative process, while the existing SOC models are based on additive processes. A modified SOC model based on a cellular automaton mechanism for producing a lognormal flux distribution is presented in this paper. The idea is that the energy released in the avalanche and diffusion in the accretion disk is not entirely emitted instantaneously as in the original cellular automaton model. Some part of the energy is kept in the disk, increasing its energy content, so that the next avalanche occurs in a higher-energy condition and releases more energy. The later an avalanche occurs, the more energy is emitted to the observer. This provides a multiplicative effect on the flux and produces a lognormal flux distribution.
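    The multiplicative argument itself is easy to illustrate: a product of many independent positive factors has an approximately lognormal distribution, by the central limit theorem applied to its logarithm. A toy sketch, not the authors' cellular automaton:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_series = 1000, 5000

# Toy multiplicative process: each step scales the emitted energy by an
# independent positive factor, mimicking avalanches that release more
# energy the later they occur.
factors = rng.uniform(0.9, 1.1, size=(n_steps, n_series))
flux = factors.prod(axis=0)

# If the flux is lognormal, its logarithm is normal (skewness near zero).
z = (np.log(flux) - np.log(flux).mean()) / np.log(flux).std()
print("skewness of log-flux (≈ 0 for a lognormal flux):", float((z ** 3).mean()))
```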

  2. Asymptotics of sums of lognormal random variables with Gaussian copula

    DEFF Research Database (Denmark)

    Asmussen, Søren; Rojas-Nandayapa, Leonardo

    2008-01-01

    Let (Y1, ..., Yn) have a joint n-dimensional Gaussian distribution with a general mean vector and a general covariance matrix, and let Xi = exp(Yi), Sn = X1 + ⋯ + Xn. The asymptotics of P(Sn > x) as x → ∞ are shown to be the same as for the independent case with the same lognormal marginals. In particular, for identical marginals it holds that P(Sn > x) ∼ n P(X1 > x) no matter what the correlation structure is. © 2008 Elsevier B.V. All rights reserved.
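    A crude Monte Carlo check of this tail equivalence, with hypothetical parameters (the asymptotics set in slowly, so at a finite threshold the ratio still exceeds 1):

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho, x = 4, 0.5, 50.0  # dimension, common correlation, tail threshold

# Correlated standard normals with common correlation rho; lognormals via exp.
cov = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
Y = rng.multivariate_normal(np.zeros(n), cov, size=2_000_000)
X = np.exp(Y)

p_sum = (X.sum(axis=1) > x).mean()  # P(Sn > x)
p_one = (X > x).mean()              # P(X1 > x), pooled over identical marginals
print(f"P(Sn > x) = {p_sum:.2e},  n * P(X1 > x) = {n * p_one:.2e}")
```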

  3. Multilevel quadrature of elliptic PDEs with log-normal diffusion

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We apply multilevel quadrature methods for the moment computation of the solution of elliptic PDEs with lognormally distributed diffusion coefficients. The computation of the moments is a difficult task since they appear as high-dimensional Bochner integrals over an unbounded domain. Each function evaluation corresponds to a deterministic elliptic boundary value problem which can be solved by finite elements on an appropriate level of refinement. The complexity is thus given by the number of quadrature points times the complexity of a single elliptic PDE solve. The multilevel idea is to reduce this complexity by combining quadrature methods of different accuracies with several spatial discretization levels in a sparse-grid-like fashion.

  4. Renormalizable two-parameter piecewise isometries.

    Science.gov (United States)

    Lowenstein, J H; Vivaldi, F

    2016-06-01

    We exhibit two distinct renormalization scenarios for two-parameter piecewise isometries, based on 2π/5 rotations of a rhombus and parameter-dependent translations. Both scenarios rely on the recently established renormalizability of a one-parameter triangle map, which takes place if and only if the parameter belongs to the algebraic number field K = Q(√5) associated with the rotation matrix. With two parameters, features emerge which have no counterpart in the single-parameter model. In the first scenario, we show that renormalizability is no longer rigid: whereas one of the two parameters is restricted to K, the second parameter can vary continuously over a real interval without destroying self-similarity. The mechanism involves neighbouring atoms which recombine after traversing distinct return paths. We show that this phenomenon also occurs in the simpler context of Rauzy-Veech renormalization of interval exchange transformations, here regarded as parametric piecewise isometries on a real interval. We explore this analogy in some detail. In the second scenario, which involves two-parameter deformations of a three-parameter rhombus map, we exhibit a weak form of rigidity. The phase space splits into several (non-convex) invariant components, on each of which the renormalization still has a free parameter. However, the foliations of the different components are transversal in parameter space; as a result, simultaneous self-similarity of the component maps requires that both of the original parameters belong to the field K.

  5. A Lognormal Recurrent Network Model for Burst Generation during Hippocampal Sharp Waves.

    Science.gov (United States)

    Omura, Yoshiyuki; Carvalho, Milena M; Inokuchi, Kaoru; Fukai, Tomoki

    2015-10-28

    The strength of cortical synapses distributes lognormally, with a long tail of strong synapses. Various properties of neuronal activity, such as the average firing rates of neurons, the rate and magnitude of spike bursts, the magnitude of population synchrony, and the correlations between presynaptic and postsynaptic spikes, also obey lognormal-like distributions, as reported in the rodent hippocampal CA1 and CA3 areas. Theoretical models have demonstrated how such a firing rate distribution emerges from neural network dynamics. However, how the other properties also come to display lognormal patterns remains unknown. Because these features are likely to originate from neural dynamics in CA3, we model a recurrent neural network with the weights of recurrent excitatory connections distributed lognormally to explore the underlying mechanisms and their functional implications. Using multi-timescale adaptive threshold neurons, we construct a low-frequency spontaneous firing state of bursty neurons. This state replicates well the observed statistical properties of population synchrony in hippocampal pyramidal cells. Our results show that the lognormal distribution of synaptic weights consistently accounts for the observed long-tailed features of hippocampal activity. Furthermore, our model demonstrates that bursts spread over the lognormal network much more effectively than single spikes, implying an advantage of spike bursts in information transfer. This efficiency in burst propagation is not found in neural network models with Gaussian-weighted recurrent excitatory synapses. Our model proposes a potential network mechanism to generate sharp waves in CA3 and associated ripples in CA1, because bursts occur in CA3 pyramidal neurons most frequently during sharp waves.

  6. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of supply prices. We prove that under standard assumptions imposed upon the comparison coefficients there exists a unique non-degenerate limit in distribution, and this limit has the lognormal law of distribution. We verify the accordance of empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint-Petersburg (Russia). For establishing this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean of a log-normal distribution exceeds its mode (the most probable value), prices appraised by the mathematical expectation are systematically overstated.
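    A sketch of the kind of lognormality check described above, assuming simulated asking prices (estimating μ and σ from the same sample makes the standard Kolmogorov-Smirnov p-value optimistic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical supply prices; lognormal with an arbitrary spread.
prices = rng.lognormal(mean=15.0, sigma=0.4, size=1000)

# Kolmogorov-Smirnov test of normality of log-prices.
mu, sigma = np.log(prices).mean(), np.log(prices).std(ddof=1)
stat, pvalue = stats.kstest(np.log(prices), "norm", args=(mu, sigma))
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}")

# The mode (most probable price) lies below the mean, hence the overstatement
# noted in the abstract when prices are valued by the mathematical expectation.
print(f"mode = {np.exp(mu - sigma**2):,.0f} < mean = {np.exp(mu + sigma**2 / 2):,.0f}")
```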

  7. Pareto-Lognormal Modeling of Known and Unknown Metal Resources. II. Method Refinement and Further Applications

    Energy Technology Data Exchange (ETDEWEB)

    Agterberg, Frits, E-mail: agterber@nrcan.gc.ca [Geological Survey of Canada (Canada)

    2017-07-01

    Pareto-lognormal modeling of worldwide metal deposit size–frequency distributions was proposed in an earlier paper (Agterberg in Nat Resour 26:3–20, 2017). In the current paper, the approach is applied to four metals (Cu, Zn, Au and Ag) and a number of model improvements are described and illustrated in detail for copper and gold. The new approach has become possible because of the very large inventory of worldwide metal deposit data recently published by Patiño Douce (Nat Resour 25:97–124, 2016c). Worldwide metal deposits for Cu, Zn and Ag follow basic lognormal size–frequency distributions that form straight lines on lognormal Q–Q plots. Au deposits show a departure from the straight-line model in the vicinity of their median size. Both largest and smallest deposits for the four metals taken as examples exhibit hyperbolic size–frequency relations and their Pareto coefficients are determined by fitting straight lines on log rank–log size plots. As originally pointed out by Patiño Douce (Nat Resour Res 25:365–387, 2016d), the upper Pareto tail cannot be distinguished clearly from the tail of what would be a secondary lognormal distribution. The method previously used in Agterberg (2017) for fitting the bridge function separating the largest deposit size–frequency Pareto tail from the basic lognormal is significantly improved in this paper. A new method is presented for estimating the approximate deposit size value at which the upper tail Pareto comes into effect. Although a theoretical explanation of the proposed Pareto-lognormal distribution model is not a required condition for its applicability, it is shown that existing double Pareto-lognormal models based on Brownian motion generalizations of the multiplicative central limit theorem are not applicable to worldwide metal deposits. Neither are various upper tail frequency amplification models in their present form. Although a physicochemical explanation remains possible, it is argued that
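    A minimal sketch of the Q-Q diagnostic underlying this kind of analysis: lognormal deposit sizes fall on a straight line on a lognormal Q-Q plot, while a Pareto upper tail bends away from it (synthetic sizes, not the Patiño Douce inventory):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic 'deposit sizes': a lognormal body plus a Pareto-like upper tail.
sizes = np.concatenate([
    rng.lognormal(mean=10.0, sigma=1.5, size=950),
    (rng.pareto(a=1.2, size=50) + 1.0) * 2.0e5,
])

# Lognormal Q-Q plot coordinates: the body falls on a straight line whose
# slope and intercept estimate sigma and mu; the upper tail bends away.
(osm, osr), _ = stats.probplot(np.log(sizes), dist="norm")
slope, intercept = np.polyfit(osm, osr, 1)
print(f"Q-Q line: slope ≈ {slope:.2f} (sigma), intercept ≈ {intercept:.2f} (mu)")
```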

  8. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  9. General collision branching processes with two parameters

    Institute of Scientific and Technical Information of China (English)

    CHEN AnYue; LI JunPing

    2009-01-01

    A new class of branching models, the general collision branching processes with two parameters, is considered in this paper. For such models, it is necessary to evaluate the absorbing probabilities and mean extinction times for both absorbing states. Regularity and uniqueness criteria are first established. Explicit expressions are then obtained for the extinction probability vector, the mean extinction times and the conditional mean extinction times. The explosion behavior of these models is investigated and an explicit expression for the mean explosion time is established. The mean global holding time is also obtained. It is revealed that these properties are substantially different between the super-explosive and sub-explosive cases.

  10. Note---The Mean-Coefficient-of-Variation Rule: The Lognormal Case

    OpenAIRE

    Haim Levy

    1991-01-01

    The mean-variance (M-V) rule may lead to paradoxical results which may be resolved by employing the mean coefficient of variation (M-C) rule. It is shown that the M-C rule constitutes an optimal decision rule for lognormal distributions.

  11. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    NARCIS (Netherlands)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-01-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ] of S with respect to a conditioning random variable Λ. The choice of Λ is crucial in order to

  13. Integral points in two-parameter orbits

    CERN Document Server

    Corvaja, Pietro; Tucker, Thomas J; Zannier, Umberto

    2012-01-01

    Let K be a number field, let f: P_1 --> P_1 be a nonconstant rational map of degree greater than 1, let S be a finite set of places of K, and suppose that u, w in P_1(K) are not preperiodic under f. We prove that the set of (m,n) in N^2 such that f^m(u) is S-integral relative to f^n(w) is finite and effectively computable. This may be thought of as a two-parameter analog of a result of Silverman on integral points in orbits of rational maps. The issue can be translated into a question about integral points on an open subset of P_1^2; one can then apply a modern version of the method of Runge, after increasing the number of components at infinity by iterating the rational map. Alternatively, an ineffective result comes from a well-known theorem of Vojta.

  14. Bayesian Prediction for the Two-Parameter Exponential Distribution Based on Type Ⅱ Doubly Censoring

    Institute of Scientific and Technical Information of China (English)

    李艳玲; 赵选民; 谢文贤

    2005-01-01

    The twoparameter exponential distribution is proposed to be an underlying mode l,and prediction bounds for future observations are obtained by using Bayesian a pproach.Prediction intervals are derived for unobserved lifetimes in onesample prediction and twosample prediction based on type Ⅱ doubly censored samples. A numerical example is given to illustrate the procedures,prediction intervals a re investigated via Monte Carlo method,and the accuracy of prediction intervals is presented.

  15. Application of Two-Parameter Weibull Distribution in Nuclear Power Plant Data Processing

    Institute of Scientific and Technical Information of China (English)

    刘方亮; 刘井泉; 刘伟

    2011-01-01

    Equipment reliability data processing is the basis of reliability-centered maintenance (RCM) and life cycle management (LCM) in nuclear power plants. In actual failure data processing, however, problems arise such as small samples and non-independent data caused by maintenance. To resolve these problems, a processing method is proposed that combines the two-parameter Weibull distribution as the life model with a Bayesian method for small failure-data samples, and it is validated using actual nuclear power plant operating data. The results show that the method has better applicability and accuracy when dealing with small samples and with the repair and aging problems of nuclear power plants.
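    A sketch of the small-sample Bayesian idea with a simple grid posterior over the two Weibull parameters, assuming flat priors and made-up failure times (the paper's actual priors and plant data are not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical small failure-time sample (hours); not plant data.
t = np.array([520.0, 980.0, 1210.0, 1660.0, 2150.0, 3030.0])

# Grid posterior over the Weibull shape and scale with flat priors.
shape = np.linspace(0.3, 5.0, 200)
scale = np.linspace(200.0, 8000.0, 300)
B, E = np.meshgrid(shape, scale, indexing="ij")

loglik = stats.weibull_min.logpdf(
    t[None, None, :], c=B[..., None], scale=E[..., None]
).sum(axis=-1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior mean shape:", float((post * B).sum()))
print("posterior mean scale:", float((post * E).sum()))
```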

  16. Inferences on the Difference and Ratio of the Means of Two Independent Two-Parameter Exponential Distributions

    Institute of Scientific and Technical Information of China (English)

    史建红; 林红梅

    2013-01-01

    Methods for interval estimation and hypothesis testing about the ratio of the expected lifetimes of two independently distributed two-parameter exponential distributions are proposed, based on the theory of generalized p-values and generalized confidence intervals. A generalized confidence interval for the ratio of mean lifetimes is given, and its coverage probability and interval length are studied by simulation and compared with the approximate confidence intervals from the existing literature; the generalized confidence interval outperforms the approximate ones in both coverage and length, especially for small samples, with coverage probabilities very close to the nominal level. The proposed approaches are conceptually simple and easy to use. Similar procedures are developed for constructing confidence intervals and testing hypotheses about the difference between the means of two independent two-parameter exponential distributions.

  17. Fitting Ranked Linguistic Data with Two-Parameter Functions

    Directory of Open Access Journals (Sweden)

    Wentian Li

    2010-07-01

    Full Text Available It is well known that many ranked linguistic data can be fitted well with one-parameter models such as Zipf's law for ranked word frequencies. However, in cases where discrepancies from the one-parameter model occur (these will come at the two extremes of the rank), it is natural to use one more parameter in the fitting model. In this paper, we compare several two-parameter models, including the Beta function, the Yule function and the Weibull function—all of which can be framed as a multiple regression in logarithmic scale—in their fitting performance on several ranked linguistic data sets, such as letter frequencies, word spacings, and word frequencies. We observed that the Beta function fits the ranked letter frequencies best, the Yule function fits the ranked word-spacing distribution best, and the Altmann, Beta and Yule functions all slightly outperform Zipf's power-law function for the ranked word-frequency distribution.
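    As an illustration of the log-scale regression framing, the two-parameter Beta rank function f(r) = C r^(−a) (N+1−r)^b can be fitted by ordinary least squares on log f(r); a sketch with synthetic ranked data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 26                       # e.g., ranked letter frequencies
r = np.arange(1, N + 1)
a_true, b_true = 0.4, 0.6
f = 1e4 * r**(-a_true) * (N + 1 - r)**b_true * rng.lognormal(0.0, 0.02, N)

# log f(r) = c - a*log(r) + b*log(N+1-r): ordinary least squares in log scale.
X = np.column_stack([np.ones(N), -np.log(r), np.log(N + 1 - r)])
(c_hat, a_hat, b_hat), *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
print(f"a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```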

  18. Lognormality of gradients of diffusive scalars in homogeneous, two-dimensional mixing systems

    Science.gov (United States)

    Kerstein, A. R.; Ashurst, W. T.

    1984-12-01

    Kolmogorov's third hypothesis, as extended by Gurvich and Yaglom, is found to be obeyed by a diffusive scalar for a class of homogeneous, two-dimensional mixing models. The mixing models all involve the advection of fluid by discrete vortices distributed in a square region with periodic boundary conditions. By computer simulation, it is found that the squared gradient of a diffusive scalar so advected is lognormally distributed, obeys the predicted scaling when a spatial smoothing is applied, and exhibits a power-law range in the spatial autocorrelation. In addition, it is found that the scaling property cuts off at the Batchelor length, as predicted by Gibson. Since the mixing models employed do not incorporate the dynamical features of high-Reynolds-number turbulence, these results suggest that scalar lognormality and associated scaling behavior may be more robust or persistent than the scaling laws of the flow field.

  19. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.

  20. Parameter estimation and forecasting for multiplicative log-normal cascades.

    Science.gov (United States)

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.

  1. Gaussian and Lognormal Models of Hurricane Gust Factors

    Science.gov (United States)

    Merceret, Frank

    2009-01-01

    A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak wind speeds, given the mean wind speed at heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel, based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides the probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold, and the tool produces the probability from each model that the given threshold will be exceeded. The application does have its limits. The models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change their statistical characteristics. The models were developed along the Central Florida seacoast, and their results may not extrapolate accurately to inland areas, or even to coastal sites different from those used to build the models. Although the tool cannot be generalized for use in different environments, its methodology could be applied to other locations to develop a similar tool tuned to local conditions.
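    The Gaussian branch of such a tool reduces to a normal exceedance probability for the gust factor; a sketch with hypothetical gust-factor moments (the tool's empirical height- and speed-dependent models are not given in this record):

```python
from math import erf, sqrt

def p_exceed_gaussian(mean_wind, threshold, gf_mean, gf_sd):
    """P(peak > threshold) when peak = mean_wind * GF and GF ~ Normal(gf_mean, gf_sd)."""
    z = (threshold / mean_wind - gf_mean) / gf_sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Hypothetical inputs: 40-kt mean wind, 60-kt threshold, and gust-factor
# moments (1.35, 0.10) standing in for the tool's empirical models.
print(f"P(exceed) = {p_exceed_gaussian(40.0, 60.0, 1.35, 0.10):.3f}")
```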

  3. The Razor's Edge of Collapse: The Transition Point from Lognormal to Powerlaw in Molecular Cloud PDFs

    CERN Document Server

    Burkhart, Blakesley; Collins, David

    2016-01-01

    We derive an analytic expression for the transitional column density value ($s_t$) between the lognormal and power-law form of the probability distribution function (PDF) in star-forming molecular clouds. Our expression for $s_t$ depends on the mean column density, the variance of the lognormal portion of the PDF, and the slope of the power-law portion of the PDF. We show that $s_t$ can be related to physical quantities such as the sonic Mach number of the flow and the power-law index for a self-gravitating isothermal sphere. This implies that the transition point between the lognormal and power-law density/column density PDF represents the critical density where turbulent and thermal pressure balance, the so-called "post-shock density." We test our analytic prediction for the transition column density using dust PDF observations reported in the literature as well as numerical MHD simulations of self-gravitating supersonic turbulence with the Enzo code. We find excellent agreement between the analytic $s_t$ a...
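    The flavor of the matching argument can be sketched as follows: write the PDF in s = ln(N/⟨N⟩), require the lognormal's log-slope to equal the tail slope −α at s_t, and use the mass-normalization mean s_0 = −σ_s²/2; this yields s_t = (2α − 1)σ_s²/2. A hedged sketch of that matching condition, not necessarily the paper's exact expression:

```python
import numpy as np

def transition_point(sigma_s, alpha):
    """s_t from matching log-slopes: a lognormal in s with mean -sigma_s**2/2
    meets an exponential tail exp(-alpha*s) where -(s_t - s_0)/sigma_s**2 = -alpha,
    giving s_t = (2*alpha - 1) * sigma_s**2 / 2."""
    return 0.5 * (2.0 * alpha - 1.0) * sigma_s**2

sigma_s = 1.0  # width of the lognormal portion (illustrative)
for alpha in (1.5, 2.0, 3.0):
    print(f"tail slope alpha = {alpha}: s_t = {transition_point(sigma_s, alpha):.2f}")
```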

  4. SAMPLING INSPECTION OF RELIABILITY IN (LOG)NORMAL CASE WITH TYPE I CENSORING

    Institute of Scientific and Technical Information of China (English)

    Wu Qiguang; Lu Jianhua

    2006-01-01

    This article proposes a statistical method for working out reliability sampling plans under Type Ⅰ censoring for items whose failure times have either normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotonous function of the unreliability. An approach for choosing a truncation time is recommended. The sample size and acceptability constant are approximately determined by using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the method given in this article is feasible.

  5. Right Skewed Distribution of Activity Times in PERT

    Directory of Open Access Journals (Sweden)

    N. Ravi Shankar

    2011-04-01

    Full Text Available A usual supposition in project management is that the distribution of activity times for most activities in a project network is right skewed. The prime objective of this paper is to find a new path float in the Program Evaluation and Review Technique (PERT) for right-skewed distributions of activity times in a project network. The new path float concept brings useful planning information to decision managers and planners in project construction. Our new path floats in PERT are compared with two-parameter normal and lognormal approximations, and also with three-parameter beta approximations. The comparison reveals that the three-parameter beta approximation performs better than the suggested normal and lognormal approximations.

  6. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep;

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient...

  7. Transformation of state space for two-parameter Markov processes

    Institute of Scientific and Technical Information of China (English)

    周健伟

    1996-01-01

    Let X = (X_r) be a two-parameter *-Markov process with a transition function (p1, p2, p), where X_r takes values in the state space (E_r, ℰ_r), r ∈ T = [0, ∞)². For each r ∈ T, let f_r be a measurable transformation of (E_r, ℰ_r) into the state space (E′_r, ℰ′_r). Set Y_r = f_r(X_r), r ∈ T. A sufficient condition is given for the process Y = (Y_r) still to be a two-parameter *-Markov process with a transition function, in terms of the transition function (p1, p2, p) and the f_r. For *-Markov families of two-parameter processes with a transition function, a similar problem is also discussed.

  8. Outage Analysis of Ultra-Wideband System in Lognormal Multipath Fading and Square-Shaped Cellular Configurations

    Directory of Open Access Journals (Sweden)

    Pirinen Pekka

    2006-01-01

    Full Text Available Generic ultra-wideband (UWB) spread-spectrum system performance is evaluated in centralized and distributed spatial topologies comprising square-shaped indoor cells. Statistical distributions for link distances in single-cell and multicell configurations are derived. Cochannel-interference-induced outage probability is used as a performance measure. The probability of outage varies depending on the spatial distribution statistics of users (link distances), propagation characteristics, user activities, and receiver settings. Lognormal fading on each channel path is incorporated in the model, with power sums of multiple lognormal signal components approximated by a Fenton-Wilkinson approach. Outage performance of different spatial configurations is outlined numerically. Numerical results show the strong dependence of outage probability on the link distance distributions, the number of rake fingers, and path losses.
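    The Fenton-Wilkinson step approximates a sum of lognormal components by a single lognormal via matching the first two moments; a sketch for independent components with natural-log parameters (correlation and user-activity factors, handled in the paper, are omitted):

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """(mu_Z, sigma_Z) of the lognormal matching the mean and variance of
    a sum of independent lognormals exp(N(mu_i, sigma_i**2))."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    m1 = np.sum(np.exp(mus + sigmas**2 / 2))                          # E[sum]
    var = np.sum(np.exp(2 * mus + sigmas**2) * np.expm1(sigmas**2))   # Var[sum]
    sigma_z2 = np.log1p(var / m1**2)
    return np.log(m1) - sigma_z2 / 2, np.sqrt(sigma_z2)

# Three multipath components with illustrative natural-log parameters.
mu_z, sigma_z = fenton_wilkinson([0.0, -0.5, -1.0], [1.0, 1.0, 1.0])
print(f"mu_Z = {mu_z:.3f}, sigma_Z = {sigma_z:.3f}")
```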

  9. Bayesian Estimation for the Two-Parameter Bathtub-Shaped Lifetime Distribution under Doubly Type-Ⅱ Censored Samples

    Institute of Scientific and Technical Information of China (English)

    田霆; 刘次华; 陈家清

    2012-01-01

    Under doubly Type-Ⅱ censoring, this article provides the maximum likelihood estimate of the parameter of a two-parameter lifetime distribution with bathtub-shaped failure rate, together with Bayes estimates under squared loss based on a non-informative prior and on conjugate prior information. Through extensive Monte Carlo simulation, these estimates are compared with the maximum likelihood estimate. When the sample size n is large, the larger r is and the smaller s is (i.e., the smaller the number of censored observations), the closer the Bayes estimate under the T prior comes to the maximum likelihood estimate and to the true value, and both are better than the Bayes estimate under the non-informative prior.

  10. Bubbling and bistability in two parameter discrete systems

    Indian Academy of Sciences (India)

    G Ambika; N V Sujatha

    2000-05-01

    We present a graphical analysis of the mechanisms underlying the occurrence of bubbling sequences and bistability regions in the bifurcation scenario of a special class of one-dimensional two-parameter maps. The main result of the analysis is that whether bubbling or bistability occurs is decided by the sign of the third derivative at the inflection point of the map function.

  11. Optimal Two Parameter Bounds for the Seiffert Mean

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2013-01-01

    Full Text Available We obtain sharp bounds for the Seiffert mean in terms of a two-parameter family of means. Our results generalize and extend the recent bounds presented in the Journal of Inequalities and Applications (2012) and Abstract and Applied Analysis (2012).

  12. The IMACS Cluster Building Survey: IV. The Log-normal Star Formation History of Galaxies

    CERN Document Server

    Gladders, Michael D; Dressler, Alan; Poggianti, Bianca; Vulcani, Benedetta; Abramson, Louis

    2013-01-01

    We present here a simple model for the star formation history of galaxies that is successful in describing both the star formation rate density over cosmic time and the distribution of specific star formation rates of galaxies at the current epoch, as well as the evolution of this quantity in galaxy populations to a redshift of z=1. We show first that the cosmic star formation rate density is remarkably well described by a simple log-normal in time. We next postulate that this functional form for the ensemble is also a reasonable description for the star formation histories of individual galaxies. Using the measured specific star formation rates for galaxies at z~0 from Paper III in this series, we then construct a realisation of a universe populated by such galaxies in which the parameters of the log-normal star formation history of each galaxy are adjusted to match the specific star formation rates at z~0 as well as fitting, in ensemble, the cosmic star formation rate density from z=0 to z=8. This model pr...
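    My reading of the model's building block is a star formation history that is lognormal in time, SFR(t) ∝ (1/t) exp(−(ln t − T0)²/(2τ²)); a small sketch with illustrative parameters:

```python
import numpy as np

def lognormal_sfr(t, t0, tau):
    """Star formation rate lognormal in time t (Gyr); t0, tau in log-time."""
    return np.exp(-(np.log(t) - t0) ** 2 / (2 * tau**2)) / (t * tau * np.sqrt(2 * np.pi))

t = np.linspace(0.1, 13.8, 1000)                  # cosmic time, Gyr
sfr = lognormal_sfr(t, t0=np.log(5.0), tau=0.8)   # illustrative parameters
print(f"SFH peaks at t ≈ {t[np.argmax(sfr)]:.2f} Gyr")
```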

  13. The Lognormal Race: A Cognitive-Process Model of Choice and Latency with Desirable Psychometric Properties.

    Science.gov (United States)

    Rouder, Jeffrey N; Province, Jordan M; Morey, Richard D; Gomez, Pablo; Heathcote, Andrew

    2015-06-01

    We present a cognitive process model of response choice and response time performance data that has excellent psychometric properties and may be used in a wide variety of contexts. In the model there is an accumulator associated with each response option. These accumulators have bounds, and the first accumulator to reach its bound determines the response time and response choice. The time at which each accumulator reaches its bound is assumed to be lognormally distributed; hence the model is a race, or minima, process among lognormal variables. A key property of the model is that it is relatively straightforward to place a wide variety of models on the logarithm of these finishing times, including linear models, structural equation models, autoregressive models, growth-curve models, etc. Consequently, the model has excellent statistical and psychometric properties and can be used in a wide range of contexts, from laboratory experiments to high-stakes testing, to assess performance. We provide a Bayesian hierarchical analysis of the model, and illustrate its flexibility with an application in testing and one in lexical decision making, a reading skill.
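    A minimal simulation sketch of such a race, with hypothetical log-scale means and a common shift for non-decision time:

```python
import numpy as np

rng = np.random.default_rng(11)

def lognormal_race(mus, sigma, psi=0.2, n_trials=10_000):
    """One accumulator per response option; finishing times are shifted lognormals."""
    finish = psi + rng.lognormal(mean=mus, sigma=sigma, size=(n_trials, len(mus)))
    choice = finish.argmin(axis=1)  # first accumulator to hit its bound wins
    rt = finish.min(axis=1)         # the winner's finishing time is the RT
    return choice, rt

choice, rt = lognormal_race(mus=np.array([-0.4, 0.0]), sigma=0.5)
print(f"P(option 0) = {(choice == 0).mean():.3f}, mean RT = {rt.mean():.3f} s")
```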

  14. Variational Bayes for Regime-Switching Log-Normal Models

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2014-07-01

    Full Text Available The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how it provides a computationally fast solution to quantify the uncertainty in the model specification. The results show that the method can recover the model structure exactly, gives reasonable point estimates and is computationally very efficient. The potential problems of the method in quantifying the parameter uncertainty are discussed.

  15. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    Let X be lognormal(μ,σ²) with density f(x), let θ>0, and define L(θ) = E e^(−θX). We study properties of the exponentially tilted density (Esscher transform) f_θ(x) = e^(−θx) f(x)/L(θ), in particular its moments, its asymptotic form as θ→∞ and asymptotics for the saddlepoint θ(x) determined by E[X e^(−θX)]/L(θ) = x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of the lognormal sum Sn = X1 + ⋯ + Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we demonstrate logarithmic efficiency. Numerical examples for the cdf Fn(x) and the pdf fn(x) of Sn are given in a range of values of σ², n, x motivated from portfolio Value-at-Risk calculations.

  16. Asymptotic Ergodic Capacity Analysis of Composite Lognormal Shadowed Channels

    KAUST Repository

    Ansari, Imran Shafique

    2015-05-01

    Capacity analysis of composite lognormal (LN) shadowed links, such as Rician-LN, Gamma-LN, and Weibull-LN, is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single composite link transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present asymptotically tight lower bounds for the ergodic capacity at high SNR. All the presented results are verified via computer-based Monte-Carlo simulations. © 2015 IEEE.

  17. Two-parameter Levy processes along decreasing paths

    CERN Document Server

    Covo, Shai

    2010-01-01

    Let {X_{t_1,t_2}: t_1,t_2 ≥ 0} be a two-parameter Lévy process on R^d. We study basic properties of the one-parameter process {X_{x(t),y(t)}: t ∈ T} where x and y are, respectively, nondecreasing and nonincreasing nonnegative continuous functions on the interval T. We focus on and characterize the case where the process has stationary increments.

  18. Beam Elements on Linear Variable Two-Parameter Elastic Foundation

    Directory of Open Access Journals (Sweden)

    Iancu-Bogdan Teodoru

    2008-01-01

    Full Text Available The traditional way to overcome the shortcomings of the Winkler foundation model is to incorporate spring coupling by assemblages of mechanical elements such as springs, flexural elements (beams in one dimension, 1-D; plates in 2-D), shear-only layers and deformed, pretensioned membranes. This is the class of two-parameter foundations, so named because they have a second parameter which introduces interactions between adjacent springs, in addition to the first parameter from the ordinary Winkler model. This class of models includes the Wieghardt, Filonenko-Borodich, Hetényi and Pasternak foundations. Mathematically, the equations describing the reaction of the two-parameter foundations are equilibrium equations, and the only difference is the definition of the parameters. In order to analyse the bending behavior of a Euler-Bernoulli beam resting on a linearly varying two-parameter elastic foundation, a (displacement) Finite Element (FE) formulation, based on the cubic displacement function of the governing differential equation, is introduced.

  19. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution in which one parameter is added to shift the distribution right or left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits on the left-hand and right-hand sides reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. © 1988 International Association for Mathematical Geology.

  20. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.

  2. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.

  3. Multivariate poisson-lognormal model for modeling related factors in crash frequency by severity

    Directory of Open Access Journals (Sweden)

    Mehdi Tazhibi

    2013-01-01

    Full Text Available Aims: Traditionally, roadway safety analyses have used univariate distributions to model crash data for each level of severity separately. This paper uses the multivariate Poisson lognormal (MVPLN) model to estimate the expected crash frequency at two levels of severity and then compares those estimates with the univariate Poisson-lognormal (UVPLN) and the univariate Poisson (UVP) models. Materials and Methods: Parameter estimation is done by a Bayesian method for six months of crash data at two levels of severity at intersections of Isfahan city. Results: The results showed that there was an over-dispersion issue in the data. The UVP model is not able to overcome this problem, while the MVPLN model can account for over-dispersion. Also, the estimates of the extra Poisson variation parameters in the MVPLN model were smaller than in the UVPLN model, improving the precision of the MVPLN model. Hence, the MVPLN model fits the data set better. Results also showed that the effect of total average annual daily traffic (AADT) on property-damage-only crashes was significant in all of the models, but the effect of total left-turn AADT on injury and fatality crashes was significant only in the UVP model. Hence, holding all other factors fixed, more property-damage-only crashes were expected with higher total AADT. For example, under the MVPLN model an increase of 1000 vehicles in average total AADT was predicted to result in 31% more property-damage-only crashes. Conclusion: Hence, reduction of total AADT was predicted to be highly cost-effective in terms of crash cost reductions over the long run.

  4. Two-parameter asymptotics in magnetic Weyl calculus

    Science.gov (United States)

    Lein, Max

    2010-12-01

    This paper is concerned with small parameter asymptotics of magnetic quantum systems. In addition to a semiclassical parameter ɛ, the case of small coupling λ to the magnetic vector potential naturally occurs in this context. Magnetic Weyl calculus is adapted to incorporate both parameters, at least one of which needs to be small. Of particular interest is the expansion of the Weyl product which can be used to expand the product of operators in a small parameter, a technique which is prominent to obtain perturbation expansions. Three asymptotic expansions for the magnetic Weyl product of two Hörmander class symbols are proven as (i) ɛ ≪ 1 and λ ≪ 1, (ii) ɛ ≪ 1 and λ = 1, as well as (iii) ɛ = 1 and λ ≪ 1. Expansions (i) and (iii) are impossible to obtain with ordinary Weyl calculus. Furthermore, I relate the results derived by ordinary Weyl calculus with those obtained with magnetic Weyl calculus by one- and two-parameter expansions. To show the power and versatility of magnetic Weyl calculus, I derive the semirelativistic Pauli equation as a scaling limit from the Dirac equation up to errors of fourth order in 1/c.

  5. Cosmology on all scales: a two-parameter perturbation expansion

    CERN Document Server

    Goldberg, Sophia R; Malik, Karim A

    2016-01-01

    We propose and construct a two-parameter perturbative expansion around a Friedmann-Lemaître-Robertson-Walker geometry that can be used to model high-order gravitational effects in the presence of non-linear structure. This framework reduces to the weak-field and slow-motion post-Newtonian treatment of gravity in the appropriate limits, but also includes the low-amplitude large-scale fluctuations that are important for cosmological modelling. We derive a set of field equations that can be applied to the late Universe, where non-linear structure exists on supercluster scales, and perform a detailed investigation of the associated gauge problem. This allows us to identify a consistent set of perturbed quantities in both the gravitational and matter sectors, and to construct a set of gauge-invariant quantities that correspond to each of them. The field equations, written in terms of these quantities, take on a relatively simple form, and allow the effects of small-scale structure on the large-scale properties...

  6. Mirror symmetry for two-parameter models, 1

    CERN Document Server

    Candelas, Philip; de la Ossa, Xenia; Font, Anamaria; Katz, Sheldon; Morrison, David R.

    1994-01-01

    We study, by means of mirror symmetry, the quantum geometry of the Kähler-class parameters of a number of Calabi-Yau manifolds that have $b_{11}=2$. Our main interest lies in the structure of the moduli space and in the loci corresponding to singular models. This structure is considerably richer when there are two parameters than in the various one-parameter models that have been studied hitherto. We describe the intrinsic structure of the point in the (compactification of the) moduli space that corresponds to the large complex structure or classical limit. The instanton expansions are of interest owing to the fact that some of the instantons belong to families with continuous parameters. We compute the Yukawa couplings and their expansions in terms of instantons of genus zero. By making use of recent results of Bershadsky et al. we compute also the instanton numbers for instantons of genus one. For particular values of the parameters the models become birational to certain models with one parameter. The co...

  7. High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels

    KAUST Repository

    Song, Xuegui

    2014-09-01

    Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes.

  8. THE INITIAL MASS FUNCTION MODELED BY A LEFT TRUNCATED BETA DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Zaninetti, Lorenzo, E-mail: zaninetti@ph.unito.it [Dipartimento di Fisica, Via Pietro Giuria 1, I-10125 Torino (Italy)

    2013-03-10

    The initial mass function for stars is usually fitted by three straight lines, which means it has seven parameters. The presence of brown dwarfs (BDs) increases the number of straight lines to four and the number of parameters to nine. Another common fitting function is the lognormal distribution, which is characterized by two parameters. This paper is devoted to demonstrating the advantage of introducing a left truncated beta probability density function, which is characterized by four parameters. The constant of normalization, the mean, the mode, and the distribution function are calculated for the left truncated beta distribution. The normal beta distribution that results from convolving independent normally distributed and beta distributed components is also derived. The chi-square test and the Kolmogorov-Smirnov test are performed on a first sample of stars and BDs that belongs to the massive young cluster NGC 6611, and on a second sample that represents the masses of the stars of the cluster NGC 2362.

  9. The initial mass function modeled by a left truncated beta distribution

    CERN Document Server

    Zaninetti, L

    2013-01-01

    The initial mass function (IMF) for stars is usually fitted by three straight lines, which means seven parameters. The presence of brown dwarfs (BDs) increases the number of straight lines to four and the number of parameters to nine. Another common fitting function is the lognormal distribution, which is characterized by two parameters. This paper is devoted to demonstrating the advantage of introducing a left truncated beta probability density function, which is characterized by four parameters. The constant of normalization, the mean, the mode and the distribution function are calculated for the left truncated beta distribution. The normal-beta (NB) distribution which results from convolving independent normally distributed and beta distributed components is also derived. The chi-square test and the K-S test are performed on a first sample of stars and BDs which belongs to the massive young cluster NGC 6611 and on a second sample which represents the stellar masses of the cluster NGC 2362.

  10. Efficient, uninformative sampling of limb darkening coefficients for two-parameter laws

    CERN Document Server

    Kipping, David M

    2013-01-01

    Stellar limb darkening affects a wide range of astronomical measurements and is frequently modeled with a parametric model using polynomials in the cosine of the angle between the line of sight and the emergent intensity. Two-parameter laws are particularly popular for cases where one wishes to fit freely for the limb darkening coefficients (i.e. an uninformative prior) due to the compact prior volume and the fact that more complex models rarely obtain unique solutions with present data. In such cases, we show that the two limb darkening coefficients are constrained by three physical boundary conditions, describing a triangular region in the two-dimensional parameter space. We show that uniformly distributed samples may be drawn from this region with optimal efficiency by a technique developed in computer graphics: triangular sampling. Alternatively, one can make draws using a uniform, bivariate Dirichlet distribution. We provide simple expressions for these parametrizations for both techniques ap...
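
    As a rough illustration of the triangular sampling technique mentioned above, the sketch below draws uniform samples from an arbitrary triangle using the standard square-root warp from computer graphics; the vertices are placeholders, not the physical limb-darkening boundary derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_triangle(A, B, C, n, rng):
    """Draw n points uniformly from the triangle with vertices A, B, C."""
    r1, r2 = rng.random(n), rng.random(n)
    s1 = np.sqrt(r1)  # the sqrt warp makes the area element uniform
    return (1 - s1)[:, None] * A + (s1 * (1 - r2))[:, None] * B + (s1 * r2)[:, None] * C

# Placeholder vertices -- the paper's physical boundary for the coefficients differs.
A, B, C = np.array([0.0, 0.0]), np.array([2.0, -1.0]), np.array([0.0, 1.0])
pts = sample_triangle(A, B, C, 10_000, rng)
```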

  11. A meta-analysis of estimates of the AIDS incubation distribution.

    Science.gov (United States)

    Cooley, P C; Myers, L E; Hamill, D N

    1996-06-01

    Information from 12 studies is combined to estimate the AIDS incubation distribution with greater precision than is possible from a single study. The analysis uses a hierarchy of parametric models based on a four-parameter generalized F distribution. This general model contains four standard two-parameter distributions as special cases: the Weibull, gamma, log-logistic, and lognormal distributions. These four special cases subsume three distinct asymptotic hazard behaviors. As time increases beyond the median of approximately 10 years, the hazard can increase to infinity (Weibull), can plateau at some constant level (gamma), or can decrease to zero (log-logistic and lognormal). The Weibull, gamma, and log-logistic distributions, which represent the three distinct asymptotic hazard behaviors, all fit the data as well as the generalized F distribution at the 25 percent significance level. Hence, we conclude that incubation data are still too limited to ascertain the specific hazard assumption that should be utilized in studies of the AIDS epidemic. Accordingly, efforts to model the AIDS epidemic (e.g., back-calculation approaches) should allow the incubation distribution to take several forms to adequately represent HIV estimation uncertainty. It is recommended that, at a minimum, the specific Weibull, gamma, and log-logistic distributions estimated in this meta-analysis should all be used in modeling the AIDS epidemic, to reflect this uncertainty.
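
    The three asymptotic hazard behaviors can be inspected numerically with SciPy, where the hazard is h(t) = f(t)/S(t) and `fisk` is SciPy's name for the log-logistic; the shape and scale values below are illustrative only, not the meta-analysis estimates.

```python
import numpy as np
from scipy import stats

# Evaluate the hazard h(t) = pdf/sf beyond the ~10-year median for the four
# special cases named above; parameter values are illustrative assumptions.
t = np.array([10.0, 20.0, 40.0, 80.0])
models = {
    "Weibull (h -> inf)":    stats.weibull_min(c=2.0, scale=10.0),
    "gamma (h -> const)":    stats.gamma(a=2.0, scale=5.0),
    "log-logistic (h -> 0)": stats.fisk(c=2.0, scale=10.0),
    "lognormal (h -> 0)":    stats.lognorm(s=0.5, scale=10.0),
}
for name, d in models.items():
    hazard = d.pdf(t) / d.sf(t)
    print(name, np.round(hazard, 4))
```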

  12. Discrete Lognormal Model as an Unbiased Quantitative Measure of Scientific Performance Based on Empirical Citation Data

    Science.gov (United States)

    Moreira, Joao; Zeng, Xiaohan; Amaral, Luis

    2013-03-01

    Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators like the h-index are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most-used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily ``gamed''. The authors acknowledge support from FCT Portugal, and NSF grants

  13. Scaling Relations of Lognormal Type Growth Process with an Extremal Principle of Entropy

    Directory of Open Access Journals (Sweden)

    Zi-Niu Wu

    2017-01-01

    Full Text Available The scale, inflexion point and maximum point are important scaling parameters for studying growth phenomena with a size following the lognormal function. The width of the size function and its entropy depend on the scale parameter (or the standard deviation) and measure the relative importance of production and dissipation involved in the growth process. The Shannon entropy increases monotonically with the scale parameter, but its slope has a minimum at √6/6. This value has been used previously to study the spreading of sprays and epidemics. In this paper, this approach of minimizing the entropy slope is discussed in a broader sense and applied to obtain the relationship between the inflexion point and the maximum point. It is shown that this relationship is determined by the base of the natural logarithm e ≈ 2.718 and exhibits some geometrical similarity to the minimal surface energy principle. Known data from a number of problems, including the swirling rate of the bathtub vortex, droplet splashing, population growth, the distribution of strokes in Chinese characters and the velocity profile of a turbulent jet, are used to assess to what extent the approach of minimizing the entropy slope can be regarded as useful.

  14. Strength and fracture toughness of heterogeneous blocks with joint lognormal modulus and failure strain

    Science.gov (United States)

    Dimas, Leon S.; Veneziano, Daniele; Buehler, Markus J.

    2016-07-01

    We obtain analytical approximations to the probability distribution of the fracture strengths of notched one-dimensional rods and two-dimensional plates in which the stiffness (Young's modulus) and strength (failure strain) of the material vary as jointly lognormal random fields. The fracture strength of the specimen is measured by the elongation, load, and toughness at two critical stages: when fracture initiates at the notch tip and, in the 2D case, when fracture propagates through the entire specimen. This is an extension of a previous study on the elastic and fracture properties of systems with random Young's modulus and deterministic material strength (Dimas et al., 2015a). For 1D rods our approach is analytical and builds upon the ANOVA decomposition technique of Dimas et al. (2015b). In 2D we use a semi-analytical model to derive the fracture initiation strengths and regressions fitted to simulation data for the effect of crack arrest during fracture propagation. Results are validated through Monte Carlo simulation. Randomness of the material strength affects the mean and median values of the initial strengths, their log-variances, and log-correlations in various ways. Under low spatial correlation, material strength variability can significantly increase the effect of crack arrest, causing ultimate failure to be a more predictable and less brittle failure mode than fracture initiation. These insights could be used to guide design of more fracture resistant composites, and add to the design features that enhance material performance.

  15. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random Field Coefficients

    Directory of Open Access Journals (Sweden)

    Michal Beres

    2017-01-01

    Full Text Available This article presents a study of the Stochastic Galerkin Method (SGM) applied to the Darcy flow problem with a log-normally distributed random material field given by a mean value and an autocovariance function. We divide the solution of the problem into two parts. The first one is the decomposition of a random field into a sum of products of a random vector and a function of spatial coordinates; this can be achieved using the Karhunen-Loeve expansion. The second part is the solution of the problem using SGM. SGM is a simple extension of the Galerkin method in which the random variables represent additional problem dimensions. For the discretization of the problem, we use a finite element basis for spatial variables and a polynomial chaos discretization for random variables. The results of SGM can be utilised for the analysis of the problem, such as the examination of the average flow, or as a tool for the Bayesian approach to inverse problems.
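
    A minimal sketch of the first step, a truncated Karhunen-Loeve expansion of a log-normal field on a 1D grid, is given below; the exponential covariance, its parameters, and the grid are assumptions for illustration, and the SGM solve itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated Karhunen-Loeve expansion of a log-normal random field on a 1D grid.
# Exponential covariance and all parameter values are illustrative assumptions.
x = np.linspace(0.0, 1.0, 200)
corr_len, sigma, mu = 0.2, 0.5, 0.0
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL: eigenpairs of the covariance matrix, dominant modes first.
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1][:20]           # keep 20 dominant modes
lam, phi = eigvals[idx], eigvecs[:, idx]

xi = rng.standard_normal(20)                   # the random vector of the expansion
gauss_field = mu + phi @ (np.sqrt(lam) * xi)   # one Gaussian field sample
k = np.exp(gauss_field)                        # log-normal material field
```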

  16. Unification of the Two-Parameter Equation of State and the Principle of Corresponding States

    DEFF Research Database (Denmark)

    Mollerup, Jørgen

    1998-01-01

    A two-parameter equation of state is a two-parameter corresponding states model. A two-parameter corresponding states model is composed of two scale factor correlations and a reference fluid equation of state. In a two-parameter equation of state the reference equation of state is the two-parameter equation of state itself. If we retain the scale factor correlations derived from a two-parameter equation of state, but replace the two-parameter equation of state with a more accurate pure component equation of state for the reference fluid, we can improve the existing models of equilibrium properties without refitting any model parameters, and without imposing other restrictions as regards species and mixing rules beyond those already imposed by the two-parameter equation of state. The theory and procedure are outlined in the paper.

  17. Background Noise Distribution before and after High-Resolution Processing in Ship-borne Radar

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhong

    2005-01-01

    When a high-resolution algorithm is applied in ship-borne radar, the algorithm's nonlinearity and the distributional characteristics of the background before high-resolution processing determine the distributional characteristics of the background clutter after high-resolution processing, and hence the subsequent detector design. Because the background noise before high-resolution processing has physical significance, a statistical model of the first-order Bragg lines and second-order components of sea clutter is put forward. Statistical verification using higher-order cumulants of measured data then shows that the background noise before high-resolution processing conforms to a normal distribution in ship-borne radar. The nonlinearity of the high-resolution algorithm determines that the background noise after high-resolution processing conforms to a non-normal distribution. Non-normally distributed clutter mainly includes Weibull, lognormal and K clutter; Rayleigh clutter can be seen as a special case of Weibull clutter. These clutter types have different statistical characteristics and can be discriminated by clutter characteristics recognition. The distribution after high-resolution processing is determined by an improved minimum-entropy clutter characteristics recognition method based on the AIC rule, namely the two-parameter domain scanning method. This identification method has a high recognition rate. It is verified that the background noise after high-resolution processing by pre-whitened MUSIC conforms to a lognormal distribution.

  18. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may overcome the problems of the SMR. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using the WinBUGS software. The study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method. The log-normal model can overcome the SMR problem when there are no observed bladder cancer cases in an area.
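
    The SMR itself is a one-line computation, which also exposes the problem noted in the conclusion: areas with no observed cases get a relative risk of exactly zero. The counts below are invented for illustration.

```python
import numpy as np

# SMR_i = O_i / E_i: observed vs. expected counts per area (made-up numbers).
# The instability for small E_i is what motivates log-normal smoothing.
observed = np.array([0, 3, 12, 1])
expected = np.array([0.8, 2.5, 10.0, 4.0])
smr = observed / expected
print(np.round(smr, 2))  # an area with zero observed cases gets SMR = 0
```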

  19. Thermodynamic geometry, condensation and Debye model of two-parameter deformed statistics

    Science.gov (United States)

    Mohammadzadeh, Hosein; Azizian-Kalandaragh, Yashar; Cheraghpour, Narges; Adli, Fereshteh

    2017-08-01

    We consider the statistical distribution function of a two-parameter deformed system, namely qp-deformed bosons and fermions. Using a thermodynamic geometry approach, we derive the thermodynamic curvature of an ideal gas with particles obeying qp-boson and qp-fermion statistics. We show that the intrinsic statistical interaction of qp-bosons is attractive in all physical ranges, while it is repulsive for qp-fermions. Also, the thermodynamic curvature of a qp-boson gas is singular at a specified value of the fugacity and therefore a phase transition such as Bose-Einstein condensation can take place. In the following, we compare the experimental and theoretical results of the temperature-dependent specific heat capacity of some metallic materials in the framework of q- and qp-deformed algebras.

  20. The lognormal perfusion model for disruption replenishment measurements of blood flow: in vivo validation.

    Science.gov (United States)

    Hudson, John M; Leung, Kogee; Burns, Peter N

    2011-10-01

    Dynamic contrast enhanced ultrasound (DCE-US) is evolving as a promising tool to noninvasively quantify relative tissue perfusion in organs and solid tumours. Quantification using the method of disruption replenishment is best performed using a model that accurately describes the replenishment of microbubble contrast agents through the ultrasound imaging plane. In this study, the lognormal perfusion model was validated using an exposed in vivo rabbit kidney model. Compared against an implanted transit time flow meter, longitudinal relative flow measurement was three times less variable and correlated better when quantification was performed with the lognormal perfusion model (Spearman r = 0.90, 95% confidence interval [CI] = 0.05) vs. the prevailing mono-exponential model (Spearman r = 0.54, 95% CI = 0.18). Disruption-replenishment measurements using the lognormal perfusion model were reproducible in vivo to within 12%.

  1. Outage Performance Analysis of Cooperative Diversity with MRC and SC in Correlated Lognormal Channels

    Directory of Open Access Journals (Sweden)

    Skraparlis D

    2009-01-01

    Full Text Available Abstract The study of relaying systems has found renewed interest in the context of cooperative diversity for communication channels suffering from fading. This paper provides analytical expressions for the end-to-end SNR and outage probability of cooperative diversity in correlated lognormal channels, typically found in indoor and specific outdoor environments. The system under consideration utilizes decode-and-forward relaying and Selection Combining or Maximum Ratio Combining at the destination node. The provided expressions are used to evaluate the gains of cooperative diversity compared to noncooperation in correlated lognormal channels, taking into account the spectral and energy efficiency of the protocols and the half-duplex or full-duplex capability of the relay. Our analysis demonstrates that correlation and lognormal variances play a significant role in the performance gain of cooperative diversity against noncooperation.

  2. Selection of statistical distributions for prediction of steam generator tube degradation

    Energy Technology Data Exchange (ETDEWEB)

    Stavropoulos, K.D.; Gorman, J.A. [Dominion Engr., Inc., McLean, VA (United States); Staehle, R.W. [Univ. of Minnesota, Minneapolis, MN (United States); Welty, C.S. Jr. [Electric Power Research Institute, Palo Alto, CA (United States)

    1992-12-31

    This paper presents the first part of a project directed at developing methods for characterizing and predicting the progression of degradation of PWR steam generator tubes. This first part covers the evaluation of statistical distributions for use in such analyses. The data used in the evaluation of statistical distributions included data for primary water stress corrosion cracking (PWSCC) at roll transitions and U-bends, and intergranular attack/stress corrosion cracking (IGA/SCC) at tube sheet and tube support plate crevices. Laboratory data for PWSCC of reverse U-bends were also used. The review of statistical distributions indicated that the Weibull distribution provides an easy-to-use and effective method. Another statistical function, the log-normal, was found to provide essentially equivalent results. Two-parameter fits, without an initiation time, were found to provide the most reliable predictions.

  3. Subcarrier MPSK/MDPSK modulated optical wireless communications in lognormal turbulence

    KAUST Repository

    Song, Xuegui

    2015-03-01

    Bit-error rate (BER) performance of subcarrier M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) is analyzed for optical wireless communications over lognormal turbulence channels. Both exact BER and approximate BER expressions are presented. We demonstrate that the approximate BER, which is obtained by dividing the symbol error rate by the number of bits per symbol, can be used to estimate the BER performance with acceptable accuracy. Through our asymptotic analysis, we derive a closed-form asymptotic BER performance loss expression for MDPSK with respect to MPSK in lognormal turbulence channels. © 2015 IEEE.
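
    A hedged sketch of the approximation discussed above: average an MPSK symbol error rate over lognormal fading by Monte Carlo and divide by log2(M) bits per symbol. The SEP formula 2Q(sqrt(2γ) sin(π/M)) is the classical high-SNR approximation, and the turbulence parameters are assumptions, not the paper's exact expressions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def approx_mpsk_ber(snr_db, M, sigma=0.3, n=200_000):
    """Monte Carlo BER of subcarrier MPSK over a lognormal turbulence channel.

    SEP uses the classical approximation 2*Q(sqrt(2*gamma)*sin(pi/M));
    BER ~= SEP / log2(M) as in the abstract. sigma (log-irradiance std) and
    the unit-mean normalization are assumptions, not the paper's values.
    """
    gamma_bar = 10 ** (snr_db / 10)
    I = np.exp(rng.normal(-sigma**2 / 2, sigma, n))   # unit-mean irradiance
    gamma = gamma_bar * I                             # instantaneous SNR
    sep = 2 * norm.sf(np.sqrt(2 * gamma) * np.sin(np.pi / M))
    return sep.mean() / np.log2(M)

print(f"8-PSK at 25 dB: BER ~ {approx_mpsk_ber(25, 8):.2e}")
```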

  4. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    Hirose, Hideo

    1998-01-01

    TYPES OF THE DISTRIBUTION: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended log-normal distribution (3-parameter); Generalized ...


  6. The effect of ignoring individual heterogeneity in Weibull log-normal sire frailty models.

    Science.gov (United States)

    Damgaard, L H; Korsgaard, I R; Simonsen, J; Dalsgaard, O; Andersen, A H

    2006-06-01

    The objective of this study was, by means of simulation, to quantify the effect of ignoring individual heterogeneity in Weibull sire frailty models on parameter estimates and to address the consequences for genetic inferences. Three simulation studies were evaluated, which included 3 levels of individual heterogeneity combined with 4 levels of censoring (0, 25, 50, or 75%). Data were simulated according to balanced half-sib designs using Weibull log-normal animal frailty models with a normally distributed residual effect on the log-frailty scale. The 12 data sets were analyzed with 2 models: the sire model, equivalent to the animal model used to generate the data (complete sire model), and a corresponding model in which individual heterogeneity in log-frailty was neglected (incomplete sire model). Parameter estimates were obtained from a Bayesian analysis using Gibbs sampling, and also from the software Survival Kit for the incomplete sire model. For the incomplete sire model, the Monte Carlo and Survival Kit parameter estimates were similar. This study established that when unobserved individual heterogeneity was ignored, the parameter estimates that included sire effects were biased toward zero by an amount that depended in magnitude on the level of censoring and the size of the ignored individual heterogeneity. Despite the biased parameter estimates, the ranking of sires, measured by the rank correlations between true and estimated sire effects, was unaffected. In comparison, parameter estimates obtained using complete sire models were consistent with the true values used to simulate the data. Thus, in this study, several issues of concern were demonstrated for the incomplete sire model.

  7. Verification of LHS distributions.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton

    2006-04-01

    This document provides verification test results for the normal, lognormal, and uniform distributions that are used in Sandia's Latin Hypercube Sampling (LHS) software. The purpose of this testing is to verify that the sample values being generated in LHS are distributed according to the desired distribution types. The testing of distribution correctness is done by examining summary statistics, graphical comparisons using quantile-quantile plots, and formal statistical tests such as the Chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test. The overall results from the testing indicate that the generation of normal, lognormal, and uniform distributions in LHS is acceptable.
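
    The same style of verification can be reproduced with SciPy's Latin Hypercube sampler (an assumed stand-in for Sandia's LHS code): generate stratified uniforms, push them through the target inverse CDF, and apply a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

# Verify that LHS samples follow a target lognormal distribution; SciPy's
# LatinHypercube stands in for the Sandia software, and the target parameters
# are illustrative assumptions.
sampler = qmc.LatinHypercube(d=1, seed=3)
u = sampler.random(n=1000).ravel()            # stratified uniforms on (0, 1)
target = stats.lognorm(s=0.5, scale=np.exp(1.0))
x = target.ppf(u)                             # inverse-CDF transform

ks_stat, p_value = stats.kstest(x, target.cdf)
print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.3f}")
```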

  8. Two-parameter deformed supersymmetric oscillators with SU_{q_1/q_2}(n|m)-covariance

    CERN Document Server

    Algin, Abdullah; Arik, Metin; Arikan, Ali Serdar

    2003-01-01

    A two-parameter deformed superoscillator system with SU_{q_1/q_2}(n|m)-covariance is presented and used to construct a two-parameter deformed N=2 SUSY algebra. The Fock space representation of the algebra is discussed and the deformed Hamiltonian for such generalized superoscillators is obtained.

  9. The bispectrum covariance beyond Gaussianity: A log-normal approach

    CERN Document Server

    Martin, Sandra; Simon, Patrick

    2011-01-01

    To investigate and specify the statistical properties of cosmological fields with particular attention to possible non-Gaussian features, accurate formulae for the bispectrum and the bispectrum covariance are required. The bispectrum is the lowest-order statistic providing an estimate for non-Gaussianities of a distribution, and the bispectrum covariance depicts the errors of the bispectrum measurement and their correlation on different scales. Currently, there do exist fitting formulae for the bispectrum and an analytical expression for the bispectrum covariance, but the former is not very accurate and the latter contains several intricate terms and only one of them can be readily evaluated from the power spectrum of the studied field. Neglecting all higher-order terms results in the Gaussian approximation of the bispectrum covariance. We study the range of validity of this Gaussian approximation for two-dimensional non-Gaussian random fields. For this purpose, we simulate Gaussian and non-Gaussian random fi...

  10. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    This paper examines the pricing of foreign exchange European call options in the Heston/CIR jump-diffusion model, combining a jump-diffusion model for the exchange rate with log-normal jump amplitudes and a volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  11. Log-normal censored regression model detecting prognostic factors in gastric cancer: A study of 3018 cases

    Institute of Scientific and Technical Information of China (English)

    Bin-Bin Wang; Cai-Gang Liu; Ping Lu; A Latengbaolide; Yang Lu

    2011-01-01

    AIM: To investigate the efficiency of the Cox proportional hazards model in detecting prognostic factors for gastric cancer. METHODS: We used the log-normal regression model to evaluate prognostic factors in gastric cancer and compared it with the Cox model. Three thousand and eighteen gastric cancer patients who received a gastrectomy between 1980 and 2004 were retrospectively evaluated. Clinicopathological factors were included in a log-normal model as well as the Cox model. The Akaike information criterion (AIC) was employed to compare the efficiency of both models. Univariate analysis indicated that age at diagnosis, past history, cancer location, distant metastasis status, surgical curative degree, combined other organ resection, Borrmann type, Lauren's classification, pT stage, total dissected nodes and pN stage were prognostic factors in both the log-normal and Cox models. RESULTS: In the final multivariate model, age at diagnosis, past history, surgical curative degree, Borrmann type, Lauren's classification, pT stage, and pN stage were significant prognostic factors in both the log-normal and Cox models. However, cancer location, distant metastasis status, and histology type were found to be significant prognostic factors in the log-normal results alone. According to the AIC, the log-normal model performed better than the Cox proportional hazards model (AIC: 1693.56 vs 2534.72). CONCLUSION: It is suggested that the log-normal regression model can be a useful statistical model to evaluate prognostic factors instead of the Cox proportional hazards model.

  12. Evidence for two Lognormal States in Multi-wavelength Flux Variation of FSRQ PKS 1510-089

    CERN Document Server

    Kushwaha, Pankaj; Misra, Ranjeev; Sahayanathan, S; Singh, K P; Baliyan, K S

    2016-01-01

    We present a systematic characterization of the multi-wavelength emission from the blazar PKS 1510-089 using well-sampled data at infrared (IR)-optical, X-ray and $\gamma$-ray energies. The resulting flux distributions, except at X-rays, show two distinct lognormal profiles corresponding to a high and a low flux level. The dispersions exhibit energy-dependent behavior except for the LAT $\gamma$-ray and optical B-band. During the low flux states, the dispersion is higher towards the peak of the spectral energy distribution, with $\gamma$-ray being intrinsically more variable followed by IR and then optical, consistent with mainly being a result of a varying bulk Lorentz factor. On the other hand, the dispersions during the high state are similar in all bands except the optical B-band, where thermal emission still dominates. The centers of the distributions are a factor of $\sim 4$ apart, consistent with expectations from studies of the extragalactic $\gamma$-ray background, with the high state showing a relatively harder mean spectral ind...

  13. QUASI-DIAGONALIZATION FOR A SINGULARLY PERTURBED DIFFERENTIAL SYSTEM WITH TWO PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    By two successive linear transformations, a singularly perturbed differential system with two parameters is quasi-diagonalized. The method of variation of constants and the contraction mapping principle are used to prove the existence of the transformations.

  14. Quasi-sure Product Variation of Two-parameter Smooth Martingales on the Wiener Space

    Institute of Scientific and Technical Information of China (English)

    Ji Cheng LIU; Jia Gang REN

    2006-01-01

    In this paper, we prove that the process of product variation of a two-parameter smooth martingale admits an $\infty$-modification, which can be constructed as the quasi-sure limit of sums of the corresponding product variations.

  15. Reconstruction of Gaussian and log-normal fields with spectral smoothness

    CERN Document Server

    Oppermann, Niels; Bell, Michael R; Enßlin, Torsten A

    2012-01-01

    We develop a method to infer log-normal random fields from measurement data affected by Gaussian noise. The log-normal model is well suited to describe strictly positive signals with fluctuations whose amplitude varies over several orders of magnitude. We use the formalism of minimum Gibbs free energy to derive an algorithm that uses the signal's correlation structure to regularize the reconstruction. The correlation structure, described by the signal's power spectrum, is thereby reconstructed from the same data set. We further introduce a prior for the power spectrum that enforces spectral smoothness. The appropriateness of this prior in different scenarios is discussed and its effects on the reconstruction's results are demonstrated. We validate the performance of our reconstruction algorithm in a series of one- and two-dimensional test cases with varying degrees of non-linearity and different noise levels.

  16. Two-parameter semigroups, evolutions and their applications to Markov and diffusion fields on the plane

    Directory of Open Access Journals (Sweden)

    Yu. Mishura

    1996-01-01

    Full Text Available We study two-parameter coordinate-wise C0-semigroups and their generators, as well as two-parameter evolutions and differential equations up to the second order for them. These results are applied to obtain the Hille-Yosida theorem for homogeneous Markov fields of the Feller type and to establish forward, backward, and mixed Kolmogorov equations for nonhomogeneous diffusion fields on the plane.

  17. On the Efficient Simulation of Outage Probability in a Log-normal Fading Environment

    KAUST Repository

    Rached, Nadhir Ben

    2017-02-15

    The outage probability (OP) of the signal-to-interference-plus-noise ratio (SINR) is an important metric that is used to evaluate the performance of wireless systems. One difficulty in assessing the OP is that, in realistic scenarios, closed-form expressions cannot be derived. This is for instance the case of the Log-normal environment, in which evaluating the OP of the SINR amounts to computing the probability that a sum of correlated Log-normal variates exceeds a given threshold. Since such a probability does not admit a closed-form expression, it has thus far been evaluated by several approximation techniques, the accuracies of which are not guaranteed in the region of small OPs. For these regions, simulation techniques based on variance reduction algorithms are a good alternative, being quick and highly accurate for estimating rare event probabilities. This constitutes the major motivation behind our work. More specifically, we propose a generalized hybrid importance sampling scheme, based on a combination of a mean shifting and a covariance matrix scaling, to evaluate the OP of the SINR in a Log-normal environment. We further our analysis by providing a detailed study of two particular cases. Finally, the performance of these techniques is assessed both theoretically and through various simulation results.
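
    A simplified sketch of the mean-shifting half of such a scheme (the covariance scaling and the SINR specifics are omitted, and all parameter values are invented): sample from a shifted Gaussian, exponentiate to get correlated lognormals, and reweight by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mean-shifting importance sampling for P(sum of correlated lognormals > T);
# a simplified sketch of one ingredient of the hybrid scheme described above.
mu = np.zeros(4)
Sigma = 0.5 * np.eye(4) + 0.5 * np.full((4, 4), 0.25)
T = 60.0
L = np.linalg.cholesky(Sigma)
Sigma_inv = np.linalg.inv(Sigma)
theta = 1.8 * np.ones(4)          # shift toward the rare event (hand-tuned)

n = 200_000
z = rng.standard_normal((n, 4))
x = mu + theta + z @ L.T          # proposal: N(mu + theta, Sigma)
s = np.exp(x).sum(axis=1)         # sum of correlated lognormals
# Likelihood ratio f(x)/g(x) for a pure mean shift:
lr = np.exp(-(x - mu) @ Sigma_inv @ theta + 0.5 * theta @ Sigma_inv @ theta)
est = np.mean((s > T) * lr)
print(f"P(S > {T}) ~ {est:.3e}")
```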

  18. Pareto's Law of Income Distribution: Evidence for Germany, the United Kingdom, and the United States

    CERN Document Server

    Clementi, F

    2005-01-01

    We analyze three sets of income data: the US Panel Study of Income Dynamics (PSID), the British Household Panel Survey (BHPS), and the German Socio-Economic Panel (GSOEP). It is shown that the empirical income distribution is consistent with a two-parameter lognormal function for the low-middle income group (97%-99% of the population), and with a Pareto or power law function for the high income group (1%-3% of the population). This mixture of two qualitatively different analytical distributions seems stable over the years covered by our data sets, although their parameters significantly change in time. It is also found that the probability density of income growth rates has approximately the form of an exponential function.

  19. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment.

  20. Analysis of rabbit doe longevity using a semiparametric log-Normal animal frailty model with time-dependent covariates

    Directory of Open Access Journals (Sweden)

    Damgaard Lars

    2006-04-01

    Full Text Available Abstract Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.

  1. Analysis of rabbit doe longevity using a semiparametric log-Normal animal frailty model with time-dependent covariates.

    Science.gov (United States)

    Sánchez, Juan Pablo; Korsgaard, Inge Riis; Damgaard, Lars Holm; Baselga, Manuel

    2006-01-01

    Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.

  2. Lognormal firing rate distribution reveals prominent fluctuation-driven regime in spinal motor networks

    DEFF Research Database (Denmark)

    Petersen, Peter C.; Berg, Rune W.

    2016-01-01

    When spinal circuits generate rhythmic movements it is important that the neuronal activity remains within stable bounds to avoid saturation and to preserve responsiveness. Here, we simultaneously record from hundreds of neurons in lumbar spinal circuits of turtles and establish the neuronal frac...

  3. Property insurance loss distributions

    Science.gov (United States)

    Burnecki, Krzysztof; Kukla, Grzegorz; Weron, Rafał

    2000-11-01

    Property Claim Services (PCS) provides indices for losses resulting from catastrophic events in the US. In this paper, we study these indices and take a closer look at the distributions underlying insurance claims. Surprisingly, the lognormal distribution seems to give a better fit than the Paretian one. Moreover, a lagged autocorrelation study reveals a mean-reverting structure of index returns.

  4. Theory of two-parameter Markov chain with an application in warranty study

    CERN Document Server

    Calvache, Álvaro

    2012-01-01

    In this paper we present the classical results of Kolmogorov's backward and forward equations for a two-parameter Markov process. These equations relate the infinitesimal transition matrix of the two-parameter Markov process. However, solving these equations analytically is not possible in general, so a numerical procedure is required. We give an alternative method that uses the double Laplace transform of the transition probability matrix and of the infinitesimal transition matrix of the process. An illustrative example is presented for the proposed method. In this example, we consider a two-parameter warranty model, in which a system can be in one of two states: working or failed. We calculate the transition density matrix of these states and also the cost of the warranty for the proposed model.

  5. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    Science.gov (United States)

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, with respect to waste stability problems, is of primary concern due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. The basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but still significant. Through use of a distribution fitting method it was found that the shear friction angle can be adjusted to a normal probability density function while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimate Method (APEM) stability analysis on a real case scenario of the Jakusevec landfill. The Jakusevec landfill is the disposal site of the capital of Croatia - Zagreb. The analysis shows that in the case of a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while in the case of a landfill with a significant leachate level the most significant factor influencing the safety factor was the cohesion of old, decomposed waste material. The
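
    The bivariate model above can be sketched with a Gaussian-copula construction: draw correlated standard normals and map one margin to a normal friction angle and the other to a lognormal cohesion. All numerical values below are placeholders, not the fitted MSW parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Normal-lognormal bivariate sampling via a Gaussian copula; every number
# here is a placeholder, not a value from the paper.
rho = 0.3
mu_phi, sd_phi = 25.0, 5.0            # friction angle, degrees
mu_lnc, sd_lnc = np.log(15.0), 0.4    # log-cohesion, kPa

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
phi = mu_phi + sd_phi * z[:, 0]        # normal marginal
c = np.exp(mu_lnc + sd_lnc * z[:, 1])  # lognormal marginal
print(np.corrcoef(phi, c)[0, 1])       # induced correlation, close to rho
```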

  6. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Biases in long-range transport also contribute. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which show good consistency with model estimates from other studies.

  7. On the low SNR capacity of log-normal turbulence channels with full CSI

    KAUST Repository

    Benkhelifa, Fatma

    2014-09-01

    In this paper, we characterize the low signal-To-noise ratio (SNR) capacity of wireless links undergoing the log-normal turbulence when the channel state information (CSI) is perfectly known at both the transmitter and the receiver. We derive a closed form asymptotic expression of the capacity and we show that it scales essentially as λ SNR where λ is the water-filling level satisfying the power constraint. An asymptotically closed-form expression of λ is also provided. Using this framework, we also propose an on-off power control scheme which is capacity-achieving in the low SNR regime.

  8. A Jacobi-Davidson type method for a right definite two-parameter eigenvalue problem

    NARCIS (Netherlands)

    Hochstenbach, M.; Plestenjak, B.

    2001-01-01

    We present a new numerical iterative method for computing selected eigenpairs of a right definite two-parameter eigenvalue problem. The method works even without good initial approximations and is able to tackle large problems that are too expensive for existing methods. The new method is similar

  9. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    Science.gov (United States)

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  10. Codimension-Two Bifurcation Analysis in Hindmarsh-Rose Model with Two Parameters

    Institute of Scientific and Technical Information of China (English)

    DUAN Li-Xia; LU Qi-Shao

    2005-01-01

    Bifurcation phenomena in a Hindmarsh-Rose neuron model are investigated. Special attention is paid to the bifurcation structures in two parameters, where codimension-two generalized-Hopf bifurcation and fold-Hopf bifurcation occur. The classification of firing patterns as well as the transition mechanisms in different regions of the parameter plane are obtained.

  11. EQUIDISTANT TOOTH GENERATION ON NONCYLINDRICAL SURFACES FOR TWO-PARAMETER GEARING

    Directory of Open Access Journals (Sweden)

    Yuriy GUTSALENKO

    2011-11-01

    Full Text Available The design of noncylindrical tooth surfaces for two-parameter gearing is investigated using the example of bevel gears with constant normal pitch for gearing variators. The engineering is based on a special applied development of the mathematical theory of multiparametric mappings of space.

  12. SINGULARLY PERTURBED SOLUTION FOR THIRD ORDER NONLINEAR EQUATIONS WITH TWO PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A class of singularly perturbed boundary value problems for nonlinear equations of the third order with two parameters is considered. Under suitable conditions, using the theory of differential inequalities, the existence and asymptotic behavior of the solution to the boundary value problem are studied.

  13. Application of a two-parameter quantum algebra to rotational spectroscopy of nuclei

    Science.gov (United States)

    Barbier, R.; Kibler, M.

    1996-10-01

    A two-parameter quantum algebra $U_{qp}(u_2)$ is briefly investigated in this paper. The basic ingredients of a model based on the $U_{qp}(u_2)$ symmetry, the qp-rotator model, are presented in detail. Some general tendencies arising from the application of this model to the description of rotational bands of various atomic nuclei are summarized.

  14. A generalization of the power law distribution with nonlinear exponent

    Science.gov (United States)

    Prieto, Faustino; Sarabia, José María

    2017-01-01

    The power law distribution is usually used to fit data in the upper tail of the distribution. However, it is commonly not valid for modeling data over the whole range. In this paper, we present a new family of distributions, the so-called Generalized Power Law (GPL), which can be useful for modeling data over the whole range and possesses power law tails. To do that, we model the exponent of the power law using a non-linear function which depends on the data and two parameters. Then, we provide some basic properties and some specific models of this new family of distributions. After that, we study a relevant model of the family, with special emphasis on the quantile and hazard functions, and the corresponding estimation and testing methods. Finally, as empirical evidence, we study how debt is distributed across municipalities in Spain. We check that the power law model is only valid in the upper tail; we show analytically and graphically the competence of the new model with municipal debt data over the whole range; and we compare the new distribution with other well-known distributions including the Lognormal, the Generalized Pareto, the Fisk, the Burr type XII and the Dagum models.

  15. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  16. A SINGULARLY PERTURBED PROBLEM OF THIRD ORDER EQUATION WITH TWO PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    A singularly perturbed problem for a third order equation with two parameters is studied. Using the singular perturbation method, the structure of the asymptotic solutions to the problem is discussed under three possible cases of the two related small parameters. The results obtained reveal the different structures and limit behaviors of the solutions in the three cases. In comparison with the exact solutions of the autonomous equation, the asymptotic solutions are relatively accurate.

  17. Two Different Bifurcation Scenarios in Neural Firing Rhythms Discovered in Biological Experiments by Adjusting Two Parameters

    Institute of Scientific and Technical Information of China (English)

    WU Xiao-Bo; MO Juan; YANG Ming-Hao; ZHENG Qiao-Hua; GU Hua-Guang; REN Wei

    2008-01-01

    Two different bifurcation scenarios, one novel and the other relatively simple, in the transition procedures of neural firing patterns are studied in biological experiments on a neural pacemaker by adjusting two parameters. The experimental observations are simulated with a relevant theoretical model neuron. The deterministic non-periodic firing pattern lying within the novel bifurcation scenario is suggested to be a new case of chaos, which has not been observed in previous neurodynamical experiments.

  18. Hardy-Hilbert's Inequalities with Two Parameters

    Institute of Scientific and Technical Information of China (English)

    徐景实

    2007-01-01

    In this paper, the author discusses Hardy-Hilbert's inequalities with two parameters and their variants. These are generalizations of Hardy-Hilbert's inequalities with one parameter in the recent literature.

  19. FINITE SINGULARITIES OF TOTAL MULTIPLICITY FOUR FOR A PARTICULAR SYSTEM WITH TWO PARAMETERS

    Directory of Open Access Journals (Sweden)

    Simona Cristina Nartea

    2011-07-01

    Full Text Available A particular Lotka-Volterra system with two parameters describing the dynamics of two competing species is analyzed from the algebraic viewpoint. This study involves the invariants and the comitants of the system determined by the application of the affine transformation group. First, the conditions for the existence of four (different or equal) finite singularities for the general system are proved; then the particular case is studied.

  20. The distribution of all French communes: A composite parametric approach

    Science.gov (United States)

    Calderín-Ojeda, Enrique

    2016-05-01

    The distribution of the size of all French settlements (communes) from 1962 to 2012 is examined by means of a three-parameter composite Lognormal-Pareto distribution. This model is based on a Lognormal density up to an unknown threshold value and a Pareto density thereafter. Recent findings have shown that the untruncated settlement size data is in excellent agreement with the Lognormal distribution in the lower and central parts of the empirical distribution, but it follows a power law in the upper tail. For that reason, this probabilistic family, that nests both models, seems appropriate to describe urban agglomeration in France. The outcomes of this paper reveal that for the early periods (1962-1975) the upper quartile of the commune size data adheres closely to a power law distribution, whereas for later periods (2006-2012) most of the city size dynamics is explained by a Lognormal model.
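
    A toy version of the composite density can be written directly: a lognormal body renormalized below a threshold and a Pareto tail above it. The paper's three-parameter calibration ties the weight and tail parameters together through continuity and differentiability conditions at the threshold, which this sketch leaves free; all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import lognorm, pareto

def composite_lognorm_pareto(x, theta, s, scale, alpha, r=0.7):
    """Toy composite density: lognormal body below theta, Pareto tail above.

    A sketch of the composite idea only; the paper's three-parameter version
    constrains r, s, scale, and alpha via continuity/differentiability at
    theta, which is omitted here. r is the body weight (assumed).
    """
    x = np.asarray(x, dtype=float)
    body = lognorm.pdf(x, s, scale=scale) / lognorm.cdf(theta, s, scale=scale)
    tail = pareto.pdf(x, alpha, scale=theta)   # Pareto support starts at theta
    return np.where(x <= theta, r * body, (1 - r) * tail)

x = np.linspace(1, 100, 5)
print(composite_lognorm_pareto(x, theta=20.0, s=0.8, scale=8.0, alpha=1.5))
```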

  1. Determining prescription durations based on the parametric waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-01-01

    Prescription durations are estimated with a two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. The method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively.

  2. Bayesian CMB foreground separation with a correlated log-normal model

    CERN Document Server

    Oppermann, Niels

    2014-01-01

    The extraction of foreground and CMB maps from multi-frequency observations relies mostly on the different frequency behavior of the different components. Existing Bayesian methods additionally make use of a Gaussian prior for the CMB whose correlation structure is described by an unknown angular power spectrum. We argue for the natural extension of this by using non-trivial priors also for the foreground components. Focusing on diffuse Galactic foregrounds, we propose a log-normal model including unknown spatial correlations within each component and cross-correlations between the different foreground components. We present case studies at low resolution that demonstrate the superior performance of this model when compared to an analysis with flat priors for all components.

  3. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-05-01

    Closed-form expressions for the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is of interest to investigate the behavior of this metric at high signal-to-noise ratio (SNR). In this work, we propose simple closed-form asymptotic expressions for the ergodic capacity of dual-branch correlated Log-Normal channels corresponding to selection combining and switch-and-stay combining. Furthermore, we capitalize on these new results to find new asymptotic ergodic capacities of a correlated dual-branch free-space optical communication system under the impact of pointing error with both heterodyne and intensity modulation/direct detection. © 2015 IEEE.

  4. Impact of microbial count distributions on human health risk estimates

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Nauta, Maarten

    2015-01-01

    The impact of microbial count distributions on risk estimates is assessed at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Using zero-inflated Poisson-lognormal distributed data and an existing QMRA model from retail to consumer level, it was possible to assess the difference between the expected risk and the risk estimated with a lognormal, a zero-inflated lognormal, a Poisson-gamma, a zero-inflated Poisson-gamma and a zero-inflated Poisson-lognormal distribution. We show that the impact of the choice of different probability distributions to describe concentrations at retail on risk estimates is dependent both on concentration and prevalence levels. We also show that the use of an LOQ should be done consciously, especially when zero-inflation is not used.
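
    A zero-inflated Poisson-lognormal generator of the kind used as the data-generating model can be sketched in a few lines; the zero-inflation probability and the lognormal parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def zip_lognormal(n, p_zero=0.4, mu=1.0, sigma=0.8, rng=rng):
    """Sample zero-inflated Poisson-lognormal counts (illustrative parameters).

    With probability p_zero the unit is a structural zero (no contamination);
    otherwise the count is Poisson with a lognormal random mean.
    """
    structural_zero = rng.random(n) < p_zero
    lam = np.exp(rng.normal(mu, sigma, n))   # lognormal cell concentrations
    counts = rng.poisson(lam)
    return np.where(structural_zero, 0, counts)

x = zip_lognormal(10_000)
print(f"fraction of zeros: {(x == 0).mean():.2f}")
```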

  5. Fitting of Finite Mixture Distributions to Motor Insurance Claims

    Directory of Open Access Journals (Sweden)

    P. Sattayatham

    2012-01-01

    Full Text Available Problem statement: The modeling of claims is an important task of actuaries. Our problem is in modeling an actual motor insurance claim data set. In this study, we show that the actual motor insurance claims can be fitted by a finite mixture model. Approach: First, we analyse the actual data set and then choose finite mixtures of Lognormal distributions as our model. The estimated parameters of the model are obtained from the EM algorithm. Then, we use the K-S and A-D tests to show how well the finite mixture of Lognormal distributions fits the actual data set. We also mention the bootstrap technique for estimating the parameters. Results: From the tests, we found that the finite mixture of Lognormal distributions fits the actual data set at the 0.10 significance level. Conclusion: Finite mixtures of Lognormal distributions can be fitted to motor insurance claims, and the fit improves as the number of components (k) increases.
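
    Because a finite mixture of lognormals is a Gaussian mixture on the log scale, the EM fit can be reproduced with scikit-learn (an assumed stand-in for the authors' implementation) and checked with a K-S test; the synthetic claims below replace the unavailable data set.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic claims standing in for the (unavailable) motor insurance data.
claims = np.concatenate([
    rng.lognormal(7.0, 0.5, 3000),   # small-claims component
    rng.lognormal(9.0, 0.7, 1000),   # large-claims component
])
# EM on the log scale: a lognormal mixture is a Gaussian mixture in log(x).
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(claims)[:, None])

# Kolmogorov-Smirnov test of the fitted mixture CDF against the data.
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())
cdf = lambda x: sum(wi * stats.norm.cdf(np.log(x), m, s) for wi, m, s in zip(w, mu, sd))
print(stats.kstest(claims, cdf))
```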

  6. Prestack migration velocity analysis based on simplified two-parameter moveout equation

    Institute of Scientific and Technical Information of China (English)

    Chen Hai-Feng; Li Xiang-Yang; Qian Zhong-Ping; Song Jian-Jun; Zhao Gui-Ling

    2016-01-01

    Stacking velocity VC2, vertical velocity ratio γ0, effective velocity ratio γef, and anisotropic parameter χef are correlated in the PS-converted-wave (PS-wave) anisotropic prestack Kirchhoff time migration (PKTM) velocity model and are thus difficult to determine independently. We extended the simplified two-parameter (stacking velocity VC2 and anisotropic parameter kef) moveout equation from stacking velocity analysis to PKTM velocity model updating, and formed a new four-parameter (stacking velocity VC2, vertical velocity ratio γ0, effective velocity ratio γef, and anisotropic parameter kef) PS-wave anisotropic PKTM velocity model updating and processing flow based on the simplified two-parameter moveout equation. In the proposed method, first, the PS-wave two-parameter stacking velocity is analyzed to obtain the initial anisotropic PKTM velocity and anisotropic parameters; then, the velocity and anisotropic parameters are corrected by analyzing the residual moveout on common imaging point gathers after prestack time migration. The vertical velocity ratio γ0 of the prestack time migration velocity model is obtained with an appropriate method utilizing the P- and PS-wave stacked sections after level calibration. The initial effective velocity ratio γef is calculated using the Thomsen (1999) equation in combination with the P-wave velocity analysis; ultimately, the final velocity model of the effective velocity ratio γef is obtained by percentage scanning migration. This method simplifies the PS-wave parameter estimation in high-quality imaging, reduces the uncertainty of multiparameter estimation, and obtains good imaging results in practice.

  7. A two parameter ratio-product-ratio estimator using auxiliary information

    CERN Document Server

    Chami, Peter S; Thomas, Doneal

    2012-01-01

    We propose a two parameter ratio-product-ratio estimator for a finite population mean in a simple random sample without replacement following the methodology in Ray and Sahai (1980), Sahai and Ray (1980), Sahai and Sahai (1985) and Singh and Ruiz Espejo (2003). The bias and mean square error of our proposed estimator are obtained to the first degree of approximation. We derive conditions for the parameters under which the proposed estimator has smaller mean square error than the sample mean, ratio and product estimators. We carry out an application showing that the proposed estimator outperforms the traditional estimators using groundwater data taken from a geological site in the state of Florida.
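
    The record does not reproduce the estimator's formula, so the sketch below compares only the classical ratio and product estimators (special cases of the proposed two-parameter family) against the sample mean under simple random sampling without replacement; the population and its correlation structure are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Finite population with auxiliary variable x positively correlated with y
N, n, reps = 2000, 50, 20_000
x = rng.lognormal(0.0, 0.4, N)
y = 2.0 * x + rng.normal(0.0, 0.5, N)
Xbar, Ybar = x.mean(), y.mean()
est = {"mean": [], "ratio": [], "product": []}
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)        # SRSWOR
    xb, yb = x[idx].mean(), y[idx].mean()
    est["mean"].append(yb)
    est["ratio"].append(yb * Xbar / xb)          # classical ratio estimator
    est["product"].append(yb * xb / Xbar)        # classical product estimator
for name, vals in est.items():
    vals = np.asarray(vals)
    print(f"{name:8s} MSE: {np.mean((vals - Ybar) ** 2):.5f}")
```

    With a positively correlated auxiliary variable, the ratio estimator should show the smallest MSE, which is the behavior the two-parameter family is designed to improve upon.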

  8. Two-parameters quasi-filled function algorithm for nonlinear integer programming

    Institute of Scientific and Technical Information of China (English)

    WANG Wei-xiang; SHANG You-lin; ZHANG Lian-sheng

    2006-01-01

    A quasi-filled function for the nonlinear integer programming problem is given in this paper. This function contains two parameters which are easy to choose. Theoretical properties of the proposed quasi-filled function are investigated. Moreover, we also propose a new solution algorithm using this quasi-filled function to solve nonlinear integer programming problems. Examples with 2 to 6 variables were tested, and the computational results indicate the efficiency and reliability of the proposed quasi-filled function algorithm.

  9. Two-Parameter Radon Transformation of the Wigner Operator and Its Inverse

    Institute of Scientific and Technical Information of China (English)

    范洪义; 程海凌

    2001-01-01

    Using the technique of integration within an ordered product of operators, we reveal that a new quantum mechanical representation |x, μ, ν⟩ exists, the eigenvector of the operator μQ + νP (a linear combination of coordinate Q and momentum P) with eigenvalue x, which is inherent to the two-parameter (μ, ν) Radon transformation of the Wigner operator. It turns out that the projection operator |x, μ, ν⟩⟨x, μ, ν| is just the two-parameter (μ, ν) Radon transformation of the Wigner operator.

  10. A Two-parameter bicovariant differential calculus on the (1 + 2)-dimensional q-superspace

    Science.gov (United States)

    Yasar, Ergün

    2016-01-01

    We construct a two-parameter bicovariant differential calculus on ℛq1/2 from the covariance point of view, using the Hopf algebra structure of ℛq1/2. To achieve this, we first use the consistency of the calculus and the R-matrix approach, where the R-matrix satisfies both the ungraded and graded Yang-Baxter equations. In particular, based on this differential calculus, we investigate the Cartan-Maurer forms for this q-superspace. Finally, we obtain the quantum Lie superalgebra corresponding to the Cartan-Maurer forms.

  11. Representations of coherent and squeezed states in an extended two-parameter Fock space

    Institute of Scientific and Technical Information of China (English)

    M. K. Tavassoly; M. H. Lake

    2012-01-01

    Recently an f-deformed Fock space spanned by |n⟩_λ was introduced. These bases are the eigenstates of a deformed non-Hermitian Hamiltonian. In this contribution, we use rather new nonorthogonal basis vectors for the construction of coherent and squeezed states, which in a special case lead to the earlier known states. For this purpose, we first generalize the previously introduced Fock space spanned by the |n⟩_λ bases to a new one, spanned by the extended two-parameter bases |n⟩_{λ1,λ2}. These bases are now the eigenstates of a non-Hermitian Hamiltonian H_{λ1,λ2} = a†_{λ1,λ2} a + 1/2, where a†_{λ1,λ2} = a† + λ1 a + λ2 and a are, respectively, the deformed creation and ordinary bosonic annihilation operators. The bases |n⟩_{λ1,λ2} are nonorthogonal (squeezed states), but normalizable. Then, we deduce the new representations of coherent and squeezed states in our two-parameter Fock space. Finally, we discuss the quantum statistical properties, as well as the non-classical properties, of the obtained states numerically.

  12. Chaos and stability in a two-parameter family of convex billiard tables

    CERN Document Server

    Bálint, Péter; Hernández-Tahuilán, Jorge; Sanders, David P

    2010-01-01

    We study, by numerical simulations and semi-rigorous arguments, a two-parameter family of convex, two-dimensional billiard tables, generalizing the one-parameter class of oval billiards of Benettin--Strelcyn [Phys. Rev. A 17, 773 (1978)]. We observe interesting dynamical phenomena when the billiard tables are continuously deformed from the integrable circular billiard to different versions of completely-chaotic stadia. In particular, we conjecture that a new class of ergodic billiard tables is obtained in certain regions of the two-dimensional parameter space, when the billiards are close to skewed stadia. We provide heuristic arguments supporting this conjecture, and give numerical confirmation using the powerful method of Lyapunov-weighted dynamics.

  13. A two-parameter nondiffusive heat conduction model for data analysis in pump-probe experiments

    Science.gov (United States)

    Ma, Yanbao

    2014-12-01

    Nondiffusive heat transfer has attracted intensive research interest in the last 50 years because of its importance in fundamental physics and engineering applications. It has unique features that cannot be described by the Fourier law. However, current studies of nondiffusive heat transfer still focus on the effective thermal conductivity within the framework of the Fourier law, due to the lack of a well-accepted replacement. Here, we show that nondiffusive heat conduction can be characterized by two inherent material properties: a diffusive thermal conductivity and a ballistic transport length. We also present a two-parameter heat conduction model and demonstrate its validity in different pump-probe experiments. This model not only offers new insights into nondiffusive heat conduction but also opens up new avenues for the study of nondiffusive heat transfer outside the framework of the Fourier law.

  14. Two-parameter non-linear spacetime perturbations gauge transformations and gauge invariance

    CERN Document Server

    Bruni, M; Sopuerta, C F; Bruni, Marco; Gualtieri, Leonardo; Sopuerta, Carlos F.

    2003-01-01

    An implicit fundamental assumption in relativistic perturbation theory is that there exists a parametric family of spacetimes that can be Taylor expanded around a background. The choice of the latter is crucial to obtain a manageable theory, so that it is sometimes convenient to construct a perturbative formalism based on two (or more) parameters. The study of perturbations of rotating stars is a good example: in this case one can treat the stationary axisymmetric star using a slow rotation approximation (an expansion in the angular velocity Omega), so that the background is spherical. Generic perturbations of the rotating star (say, parametrized by lambda) are then built on top of the axisymmetric perturbations in Omega. Clearly, any interesting physics requires non-linear perturbations, as at least terms of order lambda Omega need to be considered. In this paper we analyse the gauge dependence of non-linear perturbations depending on two parameters, derive explicit higher-order gauge transformation rules, and define gauge invariance.

  15. Quantum transport and two-parameter scaling at the surface of a weak topological insulator.

    Science.gov (United States)

    Mong, Roger S K; Bardarson, Jens H; Moore, Joel E

    2012-02-17

    Weak topological insulators have an even number of Dirac cones in their surface spectrum and are thought to be unstable to disorder, which leads to an insulating surface. Here we argue that the presence of disorder alone will not localize the surface states; rather, the presence of a time-reversal symmetric mass term is required for localization. Through numerical simulations, we show that in the absence of the mass term the surface always flows to a stable metallic phase and the conductivity obeys a one-parameter scaling relation, just as in the case of a strong topological insulator surface. With the inclusion of the mass, the transport properties of the surface of a weak topological insulator follow a two-parameter scaling form.

  16. On Calculating the Hougaard Measure of Skewness in a Nonlinear Regression Model with Two Parameters

    Directory of Open Access Journals (Sweden)

    S. A. EL-Shehawy

    2009-01-01

    Problem statement: This study presents an alternative computational algorithm for determining the values of the Hougaard measure of skewness as a nonlinearity measure in a Nonlinear Regression model (NLR-model) with two parameters. Approach: These values indicate the degree of nonlinear behavior in the estimator of the parameter in an NLR-model. Results: We applied the suggested algorithm to an example of an NLR-model in which there is a conditionally linear parameter. The algorithm is mainly based on many earlier studies of measures of nonlinearity. The algorithm is suited for implementation using computer algebra systems such as MAPLE, MATLAB and MATHEMATICA. Conclusion/Recommendations: The results, with the corresponding output for the same considered example, are compared with the results of some earlier studies.

  17. Matching the Evolution of the Stellar Mass Function Using Log-normal Star Formation Histories

    CERN Document Server

    Abramson, Louis E; Dressler, Alan; Oemler, Augustus; Poggianti, Bianca; Vulcani, Benedetta

    2014-01-01

    We show that a model consisting of individual, log-normal star formation histories for a volume-limited sample of $z\approx0$ galaxies reproduces the evolution of the total and quiescent stellar mass functions at $z\lesssim2.5$ and stellar masses $M_*\geq10^{10}\,{\rm M_\odot}$. This model has previously been shown to reproduce the star formation rate/stellar mass relation (${\rm SFR}$--$M_*$) over the same interval, is fully consistent with the observed evolution of the cosmic ${\rm SFR}$ density at $z\leq8$, and entails no explicit "quenching" prescription. We interpret these results/features in the context of other models demonstrating a similar ability to reproduce the evolution of (1) the cosmic ${\rm SFR}$ density, (2) the total/quiescent stellar mass functions, and (3) the ${\rm SFR}$--$M_*$ relation, proposing that the key difference between modeling approaches is the extent to which they stress/address diversity in the (star-forming) galaxy population. Finally, we suggest that observations revealing t...
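
    For reference, the log-normal star formation history used in this line of work assigns each galaxy ${\rm SFR}(t) \propto (1/t)\,\exp[-(\ln t - T_0)^2/(2\tau^2)]$, with a peak-time parameter $T_0$ and width $\tau$. A minimal sketch (the parameter values below are illustrative, not fitted):

```python
import numpy as np

# t in Gyr since the Big Bang; T0 and tau set the (log-space) peak time
# and width for one galaxy (values illustrative, not from the paper).
def lognormal_sfh(t, A=1.0, T0=1.5, tau=0.6):
    return A / t * np.exp(-(np.log(t) - T0) ** 2 / (2.0 * tau ** 2))

t = np.linspace(0.1, 13.8, 500)
sfr = lognormal_sfh(t)
print("peak SFR at t =", round(t[np.argmax(sfr)], 2), "Gyr")
```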

  18. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in uncertainty quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids perform similarly to the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to tackle problems with rough coefficients efficiently as well, significantly improving the performance of a standard Monte Carlo scheme.
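
    The control-variate idea is independent of the sparse grid machinery: if a cheap surrogate S of the expensive quantity Q has a known mean, sampling Q - beta*(S - E[S]) reduces variance. A toy sketch with a hypothetical Q and surrogate (nothing here is the paper's PDE):

```python
import numpy as np

rng = np.random.default_rng(5)

def Q(y):   # stand-in for an expensive model output (e.g. a PDE functional)
    return np.exp(0.5 * y) / (1.0 + y ** 2)

def S(y):   # cheap surrogate (e.g. a sparse-grid interpolant) with known mean
    return np.exp(0.5 * y)

# E[exp(0.5 Y)] for Y ~ N(0, 0.5^2) is exp((0.5 * 0.5)^2 / 2)
ES = np.exp((0.5 * 0.5) ** 2 / 2)
y = rng.normal(0.0, 0.5, 100_000)
q, s = Q(y), S(y)
beta = np.cov(q, s)[0, 1] / np.var(s)        # near-optimal CV coefficient
plain = q.mean()
cv = (q - beta * (s - ES)).mean()
print("plain MC:", plain, "control variate:", cv,
      "variance ratio:", np.var(q - beta * (s - ES)) / np.var(q))
```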

  19. Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-02-07

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and-forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based systems is important to optimize performance. It is also shown that the optimized PSR system has near-ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance.
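
    A quick way to reproduce the flavor of such results is a Monte Carlo evaluation of the TSR protocol's outage over log-normal hops: the relay harvests for a fraction alpha of the block, then decodes and forwards in two equal half-slots. All parameter values below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
mu_db, sig_db = 0.0, 3.0                 # log-normal fading of each hop (illustrative)
P, eta, alpha, R = 1.0, 0.7, 0.3, 1.0    # source power, EH efficiency, EH time, rate
h1 = 10 ** ((mu_db + sig_db * rng.standard_normal(n)) / 10)   # power gains
h2 = 10 ** ((mu_db + sig_db * rng.standard_normal(n)) / 10)
# TSR with DF relaying: harvest for alpha*T, then two equal half-slots.
Pr = 2 * eta * P * alpha * h1 / (1 - alpha)   # relay power from harvested energy
g1 = P * h1                                   # source->relay SNR (unit noise)
g2 = Pr * h2                                  # relay->destination SNR
rate = 0.5 * (1 - alpha) * np.log2(1 + np.minimum(g1, g2))
print("ergodic outage probability:", np.mean(rate < R))
```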

  20. Energy-harvesting in cooperative AF relaying networks over log-normal fading channels

    KAUST Repository

    Rabie, Khaled M.

    2016-07-26

    Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and of the power splitting factor in the PSR protocol, is the key to achieving the best system performance. © 2016 IEEE.

  1. T-stress estimation by the two-parameter approach for a specimen with a V-shaped notch

    Science.gov (United States)

    Bouledroua, O.; Elazzizi, A.; Hadj Meliani, M.; Pluvinage, G.; Matvienko, Y. G.

    2017-05-01

    In the present research, T-stress solutions are provided for a V-shaped notch in the case of surface defects in a pressurised pipeline. The V-shaped notch is analyzed with the use of the finite element method by the Castem2000 commercial software to determine the stress distribution ahead of the notch tip. The notch aspect ratio is varied. In contrast to a crack, it is found that the T-stress is not constant and depends on the distance from the notch tip. To estimate the T-stress in the case of a notch, a novel method is developed, inspired by the volumetric method approach proposed by Pluvinage. The method is based on averaging the T-stress over the effective distance ahead of the notch tip. The effective distance is determined by the point with the minimum stress gradient in the fracture process zone. This approach is successfully used to quantify the constraints of the notch-tip fields for various geometries and loading conditions. Moreover, the proposed T-stress estimation creates a basis for analyzing the crack path under mixed-mode loading from the viewpoint of the two-parameter fracture mechanics.

  2. A Polynomial Distribution Applied to Income and Wealth Distribution

    OpenAIRE

    Elvis Oltean; Kusmartsev, Fedor V.

    2013-01-01

    Income and wealth distribution affect the stability of a society to a large extent, and high inequality affects it negatively. Moreover, in the case of developed countries, it has recently been argued that inequality is closely related to a range of negative phenomena affecting society. So far, Econophysics papers have tried to analyse income and wealth distribution by employing distributions such as Fermi-Dirac, Bose-Einstein, Maxwell-Boltzmann, lognormal (Gibrat), and exponential. Generally, distributions desc...

  3. Sub-tangentially loaded and damped Beck's columns on two-parameter elastic foundation

    Science.gov (United States)

    Lee, Jun-Seok; Kim, Nam-Il; Kim, Moon-Young

    2007-10-01

    The dynamic stability of the damped Beck's column on a two-parameter elastic foundation is investigated by using Hermitian beam elements. For this purpose, based on the extended Hamilton's principle, a dimensionless finite element (FE) formulation using Hermitian interpolation functions is presented. First, the mass matrix, the external and internal damping matrices, the elastic and geometric stiffness matrices, the Winkler and Pasternak foundation matrices, and the load correction stiffness matrix due to the sub-tangential follower force are obtained. Then, the evaluation procedure for the flutter and divergence loads of the non-conservative system and the time history analysis using the Newmark-β method are briefly described. Finally, the influences of various parameters on the dynamic stability of non-conservative systems are newly addressed: (1) variation of the second flutter load due to sub-tangentiality, (2) influences of the external and internal damping on flutter loads through analysis of complex natural frequencies, (3) the effect of the growth rate of motion in a finite time interval using time history analysis, and (4) fluctuation of divergence and flutter loads due to Winkler and Pasternak foundations.

  4. Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem

    Science.gov (United States)

    Suzuki, Jun

    2016-04-01

    The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD) Fisher information, the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for two important classes of quantum statistical models: those for which the Holevo bound coincides with the SLD Cramér-Rao bound, and those for which it coincides with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly when the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.

  5. Representations of Coherent and Squeezed States in an Extended Two-parameters Fock Space

    CERN Document Server

    Tavassoly, M K

    2012-01-01

    Recently an $f$-deformed Fock space which is spanned by $|n>_{\lambda}$ has been introduced. These bases are indeed the eigen-states of a deformed non-Hermitian Hamiltonian. In this contribution, we will use rather new non-orthogonal basis vectors for the construction of coherent and squeezed states, which in a special case lead to the earlier known states. For this purpose, we first generalize the previously introduced Fock space spanned by the $|n>_{\lambda}$ bases to a new one, spanned by the extended two-parameter bases $|n>_{\lambda_{1},\lambda_{2}}$. These bases are now the eigen-states of a non-Hermitian Hamiltonian $H_{\lambda_{1},\lambda_{2}}=a^{\dagger}_{\lambda_{1},\lambda_{2}}a+1/2$, where $a^{\dagger}_{\lambda_{1},\lambda_{2}}=a^{\dagger}+\lambda_{1}a + \lambda_{2}$ and $a$ are, respectively, the deformed creation and ordinary bosonic annihilation operators. The bases $|n>_{\lambda_{1},\lambda_{2}}$ are non-orthogonal (squeezed states), but normalizable. Then, we deduce the new representations of cohe...

  6. Regional parent flood frequency distributions in Europe - Part 2: Climate and scale controls

    Science.gov (United States)

    Salinas, J. L.; Castellarin, A.; Kohnová, S.; Kjeldsen, T. R.

    2014-11-01

    This study aims to better understand the effect of catchment scale and climate on the statistical properties of regional flood frequency distributions. A database of L-moment ratios of annual maximum series (AMS) of peak discharges from Austria, Italy and Slovakia, involving a total of 813 catchments with more than 25 yr of record length, is presented, together with mean annual precipitation (MAP) and basin area as catchment descriptors acting as surrogates for climate and scale controls. A purely data-based investigation performed on the database shows that the generalized extreme value (GEV) distribution provides a better representation of the averaged sample L-moment ratios than the other distributions considered for catchments with medium to higher values of MAP, independently of catchment area, while the three-parameter lognormal distribution is probably a more appropriate choice for drier (lower MAP) intermediate-sized catchments, which presented higher skewness values. Sample L-moment ratios do not systematically follow any of the theoretical two-parameter distributions. In particular, the averaged values of the L-coefficient of skewness (L-Cs) are always larger than Gumbel's fixed L-Cs. The results presented in this paper contribute to the progress in defining a set of "process-driven" pan-European flood frequency distributions and to assessing possible effects of environmental change on their properties.
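
    Sample L-moment ratios of an AMS record are straightforward to compute from probability-weighted moments; a minimal sketch is given below. The synthetic Gumbel record is an assumption for demonstration; the Gumbel distribution's theoretical L-skewness is ln(9/8)/ln 2, approximately 0.17, the fixed value referenced in the abstract.

```python
import numpy as np

def lmoment_ratios(x):
    """Sample L-CV, L-skewness, L-kurtosis via probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l2 / l1, l3 / l2, l4 / l2

rng = np.random.default_rng(7)
ams = rng.gumbel(100.0, 30.0, 60)        # a synthetic 60-year AMS record
lcv, lcs, lck = lmoment_ratios(ams)
print(f"L-CV={lcv:.3f}  L-Cs={lcs:.3f}  (Gumbel's fixed L-Cs is ~0.170)")
```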

  7. Regional parent flood frequency distributions in Europe – Part 2: Climate and scale controls

    Directory of Open Access Journals (Sweden)

    J. L. Salinas

    2014-11-01

    This study aims to better understand the effect of catchment scale and climate on the statistical properties of regional flood frequency distributions. A database of L-moment ratios of annual maximum series (AMS) of peak discharges from Austria, Italy and Slovakia, involving a total of 813 catchments with more than 25 yr of record length, is presented, together with mean annual precipitation (MAP) and basin area as catchment descriptors acting as surrogates for climate and scale controls. A purely data-based investigation performed on the database shows that the generalized extreme value (GEV) distribution provides a better representation of the averaged sample L-moment ratios than the other distributions considered for catchments with medium to higher values of MAP, independently of catchment area, while the three-parameter lognormal distribution is probably a more appropriate choice for drier (lower MAP) intermediate-sized catchments, which presented higher skewness values. Sample L-moment ratios do not systematically follow any of the theoretical two-parameter distributions. In particular, the averaged values of the L-coefficient of skewness (L-Cs) are always larger than Gumbel's fixed L-Cs. The results presented in this paper contribute to the progress in defining a set of "process-driven" pan-European flood frequency distributions and to assessing possible effects of environmental change on their properties.

  8. Two-parameter quantum affine algebra of type G_2^{(1)}, Drinfeld realization and vertex representation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Yun, E-mail: ygao@yorku.ca [Department of Mathematics and Statistics, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3 (Canada); Hu, Naihong, E-mail: nhhu@math.ecnu.edu.cn [Department of Mathematics, East China Normal University, Shanghai 200241 (China); Zhang, Honglian, E-mail: hlzhangmath@shu.edu.cn [Department of Mathematics, Shanghai University, Shanghai 200444 (China)

    2015-01-15

    In this paper, we define the two-parameter quantum affine algebra for type G_2^{(1)} and give the (r, s)-Drinfeld realization of U_{r,s}(G_2^{(1)}), as well as establish and prove its Drinfeld isomorphism. We construct and verify explicitly the level-one vertex representation of the two-parameter quantum affine algebra U_{r,s}(G_2^{(1)}), which also provides evidence in nontwisted type G_2^{(1)} for the uniform defining approach via the two-parameter τ-invariant generating functions proposed in Hu and Zhang [Generating functions with τ-invariance and vertex representations of two-parameter quantum affine algebras U_{r,s}(ĝ): Simply laced cases, e-print http://arxiv.org/abs/1401.4925].

  9. On two-parameter models of photon cross sections: application to dual-energy CT imaging.

    Science.gov (United States)

    Williamson, Jeffrey F; Li, Sicong; Devic, Slobodan; Whiting, Bruce R; Lerma, Fritz A

    2006-11-01

    The goal of this study is to evaluate the theoretically achievable accuracy in estimating photon cross sections at low energies (20-1000 keV) from idealized dual-energy x-ray computed tomography (CT) images. Cross-section estimation from dual-energy measurements requires a model that can accurately represent photon cross sections of any biological material as a function of energy by specifying only two characteristic parameters of the underlying material, e.g., effective atomic number and density. This paper evaluates the accuracy of two commonly used two-parameter cross-section models for postprocessing idealized measurements derived from dual-energy CT images. The parametric fit model (PFM) accounts for electron-binding effects and photoelectric absorption by power functions in atomic number and energy, and for scattering by the Klein-Nishina cross section. The basis-vector model (BVM) assumes that attenuation coefficients of any biological substance can be approximated by a linear combination of mass attenuation coefficients of two dissimilar basis substances. Both PFM and BVM were fit to a modern cross-section library for a range of elements and mixtures representative of naturally occurring biological materials (Z = 2-20). The PFM, in conjunction with the effective atomic number approximation, yields total linear cross-section estimates with mean absolute and maximum error ranges of 0.6%-2.2% and 1%-6%, respectively. The corresponding error ranges for BVM estimates were 0.02%-0.15% and 0.1%-0.5%. However, for the photoelectric absorption frequency, the PFM absolute mean and maximum errors were 10.8%-22.4% and 29%-50%, compared with corresponding BVM errors of 0.4%-11.3% and 0.5%-17.0%, respectively. Both models were found to exhibit similar sensitivities to image-intensity measurement uncertainties. Of the two models, BVM is the most promising approach for realizing dual-energy CT cross-section measurement.

  10. Constraints on the multi-lognormal magnetic fields from the observations of the cosmic microwave background and the matter power spectrum

    CERN Document Server

    Yamazaki, Dai G; Takahashi, Keitaro

    2013-01-01

    Primordial magnetic fields (PMFs), which were generated in the early universe before recombination, affect the motion of the plasma and thereby the cosmic microwave background (CMB) and the matter power spectrum (MPS). We consider constraints on PMFs with a characteristic correlation length from observations of the CMB anisotropies (WMAP, QUAD, ACT, SPT, and ACBAR) and the MPS. The spectrum of PMFs is modeled with multi-lognormal distributions (MLND), rather than a power-law distribution, and we derive constraints on the strength $|\mathbf{B}_k|$ at each wavenumber $k$ along with the standard cosmological parameters in the flat Universe and the foreground sources. We obtain upper bounds on the field strengths at $k=10^{-1}, 10^{-2},10^{-4}$ and $10^{-5}$ Mpc$^{-1}$ of 4.7 nG, 2.1 nG, 5.3 nG and 10.9 nG ($2\sigma$ C.L.), respectively, while the field strength at $k=10^{-3}$ Mpc$^{-1}$ turns out to have a finite value, $|\mathbf{B}_{k = 10^{-3}}| = 6.2 \pm 1.3$ nG ($1\sigma$ C.L.). This finite value is attributed t...

  11. Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 1-Model formulation and biological data assimilation twin experiments

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    A quadratic formulation for an incremental lognormal 4-dimensional variational assimilation method (incremental L4DVar) is introduced for the assimilation of biogeochemical observations into a 3-dimensional ocean circulation model. L4DVar assumes that errors in the model state are lognormally rather than Gaussian distributed, and implicitly ensures that state estimates are positive definite, making this approach attractive for biogeochemical variables. The method is made practical for a realistic implementation with a large state vector through linear assumptions that render the cost function quadratic and allow application of existing minimization techniques. A simple nutrient-phytoplankton-zooplankton-detritus (NPZD) model is coupled to the Regional Ocean Modeling System (ROMS) and configured for the California Current System. Quadratic incremental L4DVar is evaluated in a twin-model framework in which only biological fields are in error, and compared to G4DVar, which assumes Gaussian distributed errors. Five-day assimilation cycles are used and statistics from four years of model integration are analyzed. The quadratic incremental L4DVar results in smaller root-mean-squared errors and better statistical agreement with reference states than G4DVar, while maintaining a positive state vector. The additional computational cost and implementation effort are trivial compared to the G4DVar system, making quadratic incremental L4DVar a practical and beneficial option for realistic biogeochemical state estimation in the ocean.

  12. Sales Distribution of Consumer Electronics

    CERN Document Server

    Hisano, Ryohei

    2010-01-01

    Using the uniformly most powerful unbiased test, we observed the sales distribution of consumer electronics in Japan on a daily basis and report that it follows either a lognormal distribution or a power-law distribution, depending on the state of the market. We show that these switches occur quite often. The underlying sales dynamics found in both periods nicely matched a multiplicative process. However, even though the multiplicative term in the process displays a size-dependent relationship when a steady lognormal distribution holds, it shows a size-independent relationship when the power-law distribution holds. This difference in the underlying dynamics is responsible for the difference in the two observed distributions.
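
    The multiplicative process mentioned here is easy to reproduce: if log-sales performs a random walk with i.i.d. shocks, the cross-sectional distribution of sales drifts toward a lognormal shape (Gibrat's law). A minimal sketch with assumed shock sizes:

```python
import numpy as np

rng = np.random.default_rng(8)
n_firms, n_steps = 20_000, 200
s = np.ones(n_firms)
for _ in range(n_steps):
    # Multiplicative growth with i.i.d. shocks: log(s) performs a random
    # walk, so s itself approaches a log-normal shape (Gibrat's law).
    s *= np.exp(rng.normal(0.0, 0.05, n_firms))
logs = np.log(s)
print("mean of log-sales:", round(logs.mean(), 3), "std:", round(logs.std(), 3))
# A size-dependent (damped) multiplicative term instead stabilizes the
# distribution; removing that dependence pushes the tail toward power-law
# behavior, the contrast drawn in the abstract.
```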

  13. On two-parameter equations of state and the limitations of a hard sphere Peng-Robinson equation

    Science.gov (United States)

    Harmens, A.; Jeremiah, Dawn E.

    Simple two-parameter equations of state are exceptionally effective for calculations on systems of small, uncomplicated molecules. They are therefore extremely useful for vapour-liquid equilibrium calculations in cryogenic and light hydrocarbon process design. In a search for further improvement three two-parameter equations of state with a co-volume repulsion term and three with a hard sphere repulsion term have been investigated. Their characteristic constants at the critical point have been compared. The procedure for fitting the two parameters to empirical data in the subcritical region was analysed. A perturbed hard sphere equation with a Peng-Robinson attraction term was shown to be unsuitable for application over a wide range of p, T conditions. A similar equation with a Redlich-Kwong attraction term gives good service in the cryogenic range.

  14. Entanglement for a two-parameter class of states in a high-dimension bipartite quantum system

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The entanglement of a two-parameter class of states in a high-dimensional (m ⊗ n, n ≥ m ≥ 3) bipartite quantum system is discussed. The negativity (N) and the relative entropy of entanglement (Er) are calculated, and analytic expressions are obtained. Finally, the relation between the negativity and the relative entropy of entanglement is discussed. The result demonstrates that all PPT states of the two-parameter class of states are separable, and all entangled states are NPT states. Unlike in the 2 ⊗ n quantum system, the negativity of a two-parameter class of states in high dimension is not always greater than or equal to the relative entropy of entanglement; the more general relation is mN/2 ≥ Er.

  15. Performance Analysis of a DS-CDMA Cellular System with Effects of Soft Handoff in Log-Normal Shadowing Channels

    Institute of Scientific and Technical Information of China (English)

    YANG Feng-rui; LUO Hong; ZHOU Jie; HISAKAZU Kikuchi

    2004-01-01

    Next generation wireless communication is based on a global system of fixed and wireless mobile services that are transportable across different network backbones, network service providers and network geographical boundaries. This paper presents an approach to investigate the effects of soft handover and perfect power control on the forward link in a DS-CDMA cellular system. In particular, the relationship between the size of the handover zone and the capacity gain is evaluated under log-normal shadowing. The maximum forward-link capacity is then optimized with respect to the size of the soft handover zone for various system characteristics.

  16. A two-parameter family of exact asymptotically flat solutions to the Einstein-scalar field equations

    Energy Technology Data Exchange (ETDEWEB)

    Nikonov, V V; Tchemarina, Ju V; Tsirulev, A N [Department of Mathematics, Tver State University, Sadovyi per. 35, Tver 170002 (Russian Federation)], E-mail: tsirulev@tversu.ru

    2008-07-07

    We consider a static spherically symmetric real scalar field, minimally coupled to Einstein gravity. A two-parameter family of exact asymptotically flat solutions is obtained by using the inverse problem method. This family includes non-singular solutions, black holes and naked singularities. For each of these solutions the respective potential is partially negative but positive near spatial infinity.

  17. On Sharp Markov Property of Two-Parameter Markov Processes

    Institute of Scientific and Technical Information of China (English)

    陈雄

    2011-01-01

    It is proven that two-parameter Markov processes possess the sharp Markov property only on domains that are finite or countable unions of rectangles.

  18. Two-parameter sample path large deviations for infinite-server queues

    Directory of Open Access Journals (Sweden)

    Jose H. Blanchet

    2014-09-01

    Let Qλ(t, y) be the number of people present at time t with y units of remaining service time in an infinite server system with arrival rate equal to λ > 0. In the presence of a non-lattice renewal arrival process, and assuming that the service times have a continuous distribution, we obtain a large deviations principle for Qλ/λ under the topology of uniform convergence on [0,T] × [0,∞). We illustrate our results by obtaining the most likely path, represented as a surface, to ruin in life insurance portfolios, and also obtain the most likely surfaces to overflow in the setting of loss queues.

  19. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    Science.gov (United States)

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT.
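
    The extrapolation step can be sketched as follows: fit a log-normal by maximum likelihood to the storm-time maxima, then multiply the fitted exceedance probability by the storm occurrence rate. The record below is synthetic (the real analysis uses observed 1957-2012 −Dst maxima), so the printed rate will not match the paper's 1.13 per century.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Synthetic stand-in for 56 years of -Dst storm maxima (nT); the real
# analysis uses the observed 1957-2012 values.
maxima = rng.lognormal(mean=4.6, sigma=0.55, size=350)
shape, loc, scale = stats.lognorm.fit(maxima, floc=0.0)   # ML fit
p_carrington = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)
storms_per_century = maxima.size / 56.0 * 100.0
print("expected Carrington-class storms per century:",
      storms_per_century * p_carrington)
```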

  20. Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading

    KAUST Repository

    Rabie, Khaled M.

    2017-08-15

    Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic, however, focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyzing the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work, with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and the ideal relaying receiver, which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability, for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying, which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel is shown to deteriorate the performance of all the relaying and EH protocols considered.

  1. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    Science.gov (United States)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.

  2. A study of the personal income distribution in Australia

    CERN Document Server

    Banerjee, A; Matteo, T D; Banerjee, Anand; Yakovenko, Victor M.

    2006-01-01

    We analyze the data on personal income distribution from the Australian Bureau of Statistics. We compare fits of the data to the exponential, log-normal, and gamma distributions. The exponential function gives a good (albeit not perfect) description of 98% of the population in the lower part of the distribution. The log-normal and gamma functions do not improve the fit significantly, despite having more parameters, and mimic the exponential function. We find that the probability density at zero income is not zero, which contradicts the log-normal and gamma distributions, but is consistent with the exponential one. The high-resolution histogram of the probability density shows a very sharp and narrow peak at low incomes, which we interpret as the result of a government policy on income redistribution.

  3. Can Data Recognize Its Parent Distribution?

    Energy Technology Data Exchange (ETDEWEB)

    A.W.Marshall; J.C.Meza; and I. Olkin

    1999-05-01

    This study is concerned with model selection of lifetime and survival distributions arising in engineering reliability or in the medical sciences. We compare various distributions, including the gamma, Weibull and lognormal, with a new distribution called geometric extreme exponential. Except for the lognormal distribution, the other three distributions all have the exponential distribution as a special case. A Monte Carlo simulation was performed to determine sample sizes for which survival distributions can distinguish data generated by their own families. Two decision methods are used: maximum likelihood and Kolmogorov distance. Neither method is uniformly best. The probability of correct selection with more than one alternative shows some surprising results when the choices are close to the exponential distribution.
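
    Model selection by Kolmogorov distance, as used here, amounts to fitting each candidate family by maximum likelihood and picking the one whose fitted CDF is closest to the empirical CDF. A minimal sketch with synthetic lifetimes (the candidate set is abbreviated and the data are assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
data = rng.gamma(shape=2.0, scale=3.0, size=200)   # "unknown" lifetimes
candidates = {
    "gamma":     stats.gamma,
    "weibull":   stats.weibull_min,
    "lognormal": stats.lognorm,
}
# Fit each family by maximum likelihood, then rank families by the
# Kolmogorov distance between empirical and fitted CDFs.
for name, dist in candidates.items():
    params = dist.fit(data, floc=0.0)
    ks = stats.kstest(data, dist(*params).cdf).statistic
    print(f"{name:10s} K-distance: {ks:.4f}")
```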

  4. A Study of Cirrus Ice Particle Size Distribution Using TC4 Observations

    Science.gov (United States)

    Tian, Lin; Heymsfield, Gerald M.; Li, Lihua; Heymsfield, Andrew J.; Bansemer, Aaron; Twohy, Cynthia H.; Srivastava, Ramesh C.

    2010-01-01

    An analysis of two days of in situ observations of ice particle size spectra, in convectively generated cirrus, obtained during NASA's Tropical Composition, Cloud, and Climate Coupling (TC4) mission is presented. The observed spectra are examined for their fit to the exponential, gamma, and lognormal function distributions. Characteristic particle size and concentration density scales are determined using two (for the exponential) or three (for the gamma and lognormal functions) moments of the spectra. It is shown that transformed exponential, gamma, and lognormal distributions should collapse onto standard curves. An examination of the transformed spectra, and of deviations of the transformed spectra from the standard curves, shows that the lognormal function provides a better fit to the observed spectra.

  5. Bulk stress distributions in the pore space of sphere-packed beds under Darcy flow conditions

    Science.gov (United States)

    Pham, Ngoc H.; Voronov, Roman S.; Tummala, Naga Rajesh; Papavassiliou, Dimitrios V.

    2014-03-01

    In this paper, bulk stress distributions in the pore space of columns packed with spheres are numerically computed with lattice Boltzmann simulations. Three different ideally packed and one randomly packed configuration of the columns are considered under Darcy flow conditions. The stress distributions change when the packing type changes. In the Darcy regime, the normalized stress distribution for a particular packing type is independent of the pressure difference that drives the flow and presents a common pattern. The three parameter (3P) log-normal distribution is found to describe the stress distributions in the randomly packed beds within statistical accuracy. In addition, the 3P log-normal distribution is still valid when highly porous scaffold geometries rather than sphere beds are examined. It is also shown that the 3P log-normal distribution can describe the bulk stress distribution in consolidated reservoir rocks like Berea sandstone.
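
    Fitting the three-parameter (shifted) log-normal named here can be done directly with scipy, since `lognorm` carries shape, location and scale parameters; leaving `loc` free makes it a 3P fit. A sketch on synthetic "stress" data (all values assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic "stress" sample with a shifted log-normal shape
true_shift = 5.0
stress = true_shift + rng.lognormal(mean=0.0, sigma=0.8, size=5000)
# scipy's lognorm.fit with a free `loc` is a 3-parameter (3P) log-normal
# fit: shape (sigma), location (the shift), and scale (exp(mu)).
sigma, loc, scale = stats.lognorm.fit(stress)
print(f"sigma={sigma:.3f}  shift={loc:.3f}  scale={scale:.3f}")
```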

  6. Bulk stress distributions in the pore space of sphere-packed beds under Darcy flow conditions.

    Science.gov (United States)

    Pham, Ngoc H; Voronov, Roman S; Tummala, Naga Rajesh; Papavassiliou, Dimitrios V

    2014-03-01

    In this paper, bulk stress distributions in the pore space of columns packed with spheres are numerically computed with lattice Boltzmann simulations. Three different ideally packed and one randomly packed configuration of the columns are considered under Darcy flow conditions. The stress distributions change when the packing type changes. In the Darcy regime, the normalized stress distribution for a particular packing type is independent of the pressure difference that drives the flow and presents a common pattern. The three parameter (3P) log-normal distribution is found to describe the stress distributions in the randomly packed beds within statistical accuracy. In addition, the 3P log-normal distribution is still valid when highly porous scaffold geometries rather than sphere beds are examined. It is also shown that the 3P log-normal distribution can describe the bulk stress distribution in consolidated reservoir rocks like Berea sandstone.

  7. Global evidence on the distribution of economic profit rates

    Science.gov (United States)

    Williams, Michael A.; Baek, Grace; Park, Leslie Y.; Zhao, Wei

    2016-09-01

    Gibrat (1931) initiated the study of the distribution of firms' profit rates, suggesting the distribution was log-normal. Although initial empirical work supported that finding, a consensus has developed in the literature that the distribution of firm profit rates is best approximated by the Laplace distribution. Using a richer database than prior studies and testing for more theoretical distributions, we find that the distribution of firm profit rates is best approximated by the heavier-tailed Cauchy distribution.

  8. An alternative factorization of the quantum harmonic oscillator and two-parameter family of self-adjoint operators

    Energy Technology Data Exchange (ETDEWEB)

    Arcos-Olalla, Rafael, E-mail: olalla@fisica.ugto.mx [Departamento de Física, DCI Campus León, Universidad de Guanajuato, Apdo. Postal E143, 37150 León, Gto. (Mexico); Reyes, Marco A., E-mail: marco@fisica.ugto.mx [Departamento de Física, DCI Campus León, Universidad de Guanajuato, Apdo. Postal E143, 37150 León, Gto. (Mexico); Rosu, Haret C., E-mail: hcr@ipicyt.edu.mx [IPICYT, Instituto Potosino de Investigacion Cientifica y Tecnologica, Apdo. Postal 3-74 Tangamanga, 78231 San Luis Potosí, S.L.P. (Mexico)

    2012-10-01

    We introduce an alternative factorization of the Hamiltonian of the quantum harmonic oscillator which leads to a two-parameter self-adjoint operator from which the standard harmonic oscillator, the one-parameter oscillators introduced by Mielnik, and the Hermite operator are obtained in certain limits of the parameters. In addition, a single Bernoulli-type parameter factorization, which is different from the one introduced by M.A. Reyes, H.C. Rosu, and M.R. Gutiérrez [Phys. Lett. A 375 (2011) 2145], is briefly discussed in the final part of this work. -- Highlights: ► Factorizations with operators which are not mutually adjoint are presented. ► New two-parameter and one-parameter self-adjoint oscillator operators are introduced. ► Their eigenfunctions are two- and one-parameter deformed Hermite functions.

  9. A New Hilbert-type Integral Inequality with Two Parameters

    Institute of Scientific and Technical Information of China (English)

    付向红

    2011-01-01

    By introducing a weight function and applying Hölder's inequality with weights, we obtain a new Hilbert-type integral inequality with two parameters and a best constant factor, together with its equivalent form.

  10. Two-Parameter Stochastic Resonance in a Model of Electrodissolution of Fe in H2SO4

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Stochastic resonance (SR) is shown in a two-parameter system, a model of the electrodissolution of Fe in H2SO4. Modulation of two different parameters, with a periodic signal in one parameter and noise in the other, is found to give rise to SR. The result indicates that the noise can amplify a weak periodic signal and lead the system to order. The scenario and novel aspects of SR in this system are discussed.

  11. On the generation of log-Levy distributions and extreme randomness

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo [Department of Technology Management, Holon Institute of Technology, PO Box 305, Holon 58102 (Israel); Klafter, Joseph, E-mail: eliazar@post.tau.ac.il, E-mail: klafter@post.tau.ac.il [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel)

    2011-10-14

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Levy distributions. The log-Levy distributions are the Levy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Levy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness.

  12. Application of the log-normal model for the prediction of Banco Sabadell assets

    OpenAIRE

    Debón Aucejo, Ana María; Cortés López, Juan Carlos; Moreno Navarro, Carla

    2008-01-01

    The aim of this work is to predict the value of a share. To this end, intraday quotations of Banco Sabadell (BS) shares during the first quarter of 2007 are used. First, the log-normal model is fitted using the asset's history from 28 December to 20 March 2007. Second, the price that the BS share will reach on 21 March 2007 is computed with the model. Then, a prediction for that date is obtained by int...

  13. On the origin of logarithmic-normal distributions: An analytical derivation, and its application to nucleation and growth processes

    Science.gov (United States)

    Bergmann, Ralf B.; Bill, Andreas

    2008-06-01

    The logarithmic-normal (lognormal) distribution is one of the most frequently observed distributions in nature and describes a large number of physical, biological and even sociological phenomena. The origin of this distribution is therefore of broad interest, but a general derivation from basic principles has been lacking. Using random nucleation and growth to describe crystallization processes, we derive the time development of grain-size distributions. Our derivation provides, for the first time, an analytical expression for the size distribution in the form of a lognormal-type distribution. We apply our results to the grain-size distribution of solid-phase crystallized Si films.

  14. Development of a Compound Distribution Markov Chain Model for Stochastic Generation of Rainfall with Long Term Persistence

    Science.gov (United States)

    Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George

    2015-04-01

    One of the overriding issues in rainfall simulation is the underestimation of observed rainfall variability at longer timescales (e.g. monthly, annual and multi-year), which usually results in underestimation of reservoir reliability in urban water planning. This study developed a Compound Distribution Markov Chain (CDMC) model for the stochastic generation of daily rainfall. We used two parameters of the Markov chain process (transition probabilities of wet-to-wet and dry-to-dry days) for simulating rainfall occurrence, and two parameters of a gamma distribution (calculated from the mean and standard deviation of wet-day rainfall) for simulating wet-day rainfall amounts. While two models with deterministic parameters underestimated long-term variability, our investigation found that the long-term variability of rainfall in the model is predominantly governed by the long-term variability of the gamma parameters rather than by the variability of the Markov chain parameters. Therefore, in the third approach, we developed the CDMC model with deterministic parameters for the Markov chain process but stochastic parameters for the gamma distribution, sampling the mean and standard deviation of wet-day rainfall from their log-normal and bivariate-normal distributions. We have found that the CDMC is able to replicate both short-term and long-term rainfall variability when we calibrated the model at two sites on the east coast of Australia using three types of daily rainfall data: (1) dynamically downscaled, 10 km resolution gridded data produced by the NSW/ACT Regional Climate Modelling project, (2) 5 km resolution gridded data from the Australian Water Availability Project and (3) point-scale rain gauge station data from the Bureau of Meteorology, Australia. We also examined the spatial variability of the parameters and their link with local orography at our field site. The suitability of the model for runoff generation and urban reservoir-water simulation will be discussed.
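
    The core of the third approach can be sketched compactly: occurrence from a two-state Markov chain, wet-day depths from a gamma distribution whose mean and standard deviation are themselves redrawn periodically to inject long-term variability. The version below regenerates the gamma parameters once for a single month; all values are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
# One month of daily rainfall from a compound-distribution Markov chain.
p_ww, p_dd = 0.6, 0.8                          # wet->wet and dry->dry probabilities
mean_wet = rng.lognormal(np.log(8.0), 0.3)     # stochastic gamma mean (mm)
sd_wet = rng.lognormal(np.log(10.0), 0.3)      # stochastic gamma sd (mm)
shape = (mean_wet / sd_wet) ** 2               # gamma shape from (mean, sd)
scale = sd_wet ** 2 / mean_wet                 # gamma scale from (mean, sd)
wet, rain = False, []
for _ in range(30):
    if wet:
        wet = rng.random() < p_ww              # stay wet with prob p_ww
    else:
        wet = rng.random() > p_dd              # leave dry state with prob 1 - p_dd
    rain.append(rng.gamma(shape, scale) if wet else 0.0)
print("monthly total (mm):", round(sum(rain), 1))
```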

  15. New Statistical Perspective to The Cosmic Void Distribution

    CERN Document Server

    Pycke, Jean-Renaud

    2016-01-01

    In this study, we obtain the size distribution of voids as a 3-parameter, redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on counts in randomly placed cells, the log-normal VPF we obtain here is independent of the shape of the voids due to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC, generated by Halo Occupation Distribution (HOD) Mocks tuned to three mock SDSS samples, to investigate the void distribution statistically and the effects of environment on the size distribution. As a result, it is shown that the void size distributions obtained from the HOD Mock samples are well fitted by the 3-parameter log-normal distribution. In addition, we find that there may be a relation between hierarchical formation, skewness and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the 3-paramet...

  16. A Finite Element Study of the Bending Behavior of Beams Resting on Two-Parameter Elastic Foundation

    Directory of Open Access Journals (Sweden)

    Iancu-Bogdan Teodoru

    2006-01-01

    Full Text Available Although Winkler's model is a poor representation of many practical subgrade or subbase materials, it has been widely used in soil-structure problems for almost a century and a half. Foundations represented by the Winkler model cannot sustain shear stresses, and hence discontinuity of adjacent spring displacements can occur. This is the prime shortcoming of this foundation model, which in practical applications may result in significant inaccuracies in the evaluated structural response. In order to overcome this problem, many researchers have proposed various mechanical foundation models considering interaction with the surroundings. Among them we mention the class of two-parameter foundations -- named like this because they have a second parameter which introduces interactions between adjacent springs, in addition to the first parameter from the ordinary Winkler model. This class of models includes the Filonenko-Borodich, Pasternak, generalized, and Vlasov foundations. Mathematically, the equations describing the reaction of the two-parameter foundations are equilibrium ones, and the only difference is the definition of the parameters. For convenience of discussion, the Pasternak foundation is adopted in the present paper. In order to analyse the bending behavior of a Euler-Bernoulli beam resting on a two-parameter elastic foundation, a (displacement) Finite Element (FE) formulation, based on the cubic displacement function of the governing differential equation, is introduced. The resulting effects of the shear stiffness of the Pasternak model on the mechanical quantities are discussed in comparison with those of the Winkler model. Some numerical case studies illustrate the accuracy of the formulation and the importance of the soil shearing effect in the vertical direction, associated with a continuous elastic foundation.
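    As a rough illustration of what such a displacement-based FE formulation produces, the sketch below assembles the element matrices for an Euler-Bernoulli beam on a Pasternak foundation from the standard closed forms for cubic Hermite shape functions. The function and variable names are ours, and readers should verify the matrices against the paper before use.

```python
import numpy as np

def pasternak_beam_element(EI, k, Gp, L):
    """Element matrices for an Euler-Bernoulli beam on a two-parameter
    (Pasternak) foundation, using cubic Hermite shape functions.
    DOFs per element: [w1, theta1, w2, theta2].
    """
    # Bending stiffness of the beam itself
    Kb = (EI / L**3) * np.array([[ 12,   6*L,    -12,   6*L   ],
                                 [ 6*L,  4*L**2, -6*L,  2*L**2],
                                 [-12,  -6*L,     12,  -6*L   ],
                                 [ 6*L,  2*L**2, -6*L,  4*L**2]])
    # First (Winkler) parameter: distributed springs of modulus k
    Kw = (k * L / 420) * np.array([[ 156,   22*L,    54,   -13*L  ],
                                   [ 22*L,  4*L**2,  13*L,  -3*L**2],
                                   [ 54,    13*L,    156,  -22*L  ],
                                   [-13*L, -3*L**2, -22*L,  4*L**2]])
    # Second (Pasternak) parameter: shear interaction Gp between springs
    Ks = (Gp / (30 * L)) * np.array([[ 36,   3*L,    -36,   3*L  ],
                                     [ 3*L,  4*L**2, -3*L, -L**2 ],
                                     [-36,  -3*L,     36,  -3*L  ],
                                     [ 3*L, -L**2,   -3*L,  4*L**2]])
    return Kb + Kw + Ks
```

    Setting Gp = 0 recovers the ordinary Winkler element, which makes the role of the second parameter easy to isolate in numerical experiments.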

  17. The Two-Parameter Brane Sigma-Model: M*, M' solutions and M-theory solutions dependent on exotic coordinates

    CERN Document Server

    Cook, Paul P

    2016-01-01

    We investigate two-parameter solutions of sigma-models on two dimensional symmetric spaces contained in E11. Embedding such sigma-model solutions in space-time gives solutions of M* and M'-theory where the metric depends on general travelling wave functions, as opposed to harmonic functions typical in general relativity, supergravity and M-theory. Weyl reflection allows such solutions to be mapped to M-theory solutions where the wave functions depend explicitly on extra coordinates contained in the fundamental representation of E11.

  18. Multiplexing technique using amplitude-modulated chirped fibre Bragg gratings with applications in two-parameter sensing

    Science.gov (United States)

    Wong, Allan C. L.; Childs, Paul A.; Peng, Gang-Ding

    2007-11-01

    A multiplexing technique using amplitude-modulated chirped fibre Bragg gratings (AMCFBGs) is presented. This technique realises the multiplexing of spectrally overlapped AMCFBGs with identical centre Bragg wavelength and bandwidth. Since it is fully compatible with the wavelength division multiplexing scheme, the number of gratings that can be multiplexed can be increased severalfold. The discrete wavelet transform is used to demodulate such multiplexed signals. A wavelet denoising technique is applied to the multiplexed signal in conjunction with the demodulation. Strain measurements are performed to experimentally demonstrate the feasibility of this multiplexing technique. The absolute error and crosstalk are measured. An application to simultaneous two-parameter sensing is also demonstrated.

  19. A two-parameter criterion for classifying the explodability of massive stars by the neutrino-driven mechanism

    CERN Document Server

    Ertl, T; Woosley, S E; Sukhbold, T; Ugliano, M

    2015-01-01

    Thus far, judging the fate of a massive star (either a neutron star (NS) or a black hole) solely by its structure prior to core collapse has been ambiguous. Our work and previous attempts find a non-monotonic variation of successful and failed supernovae with zero-age main-sequence mass, for which no single structural parameter can serve as a good predictive measure. However, we identify two parameters computed from the pre-collapse structure of the progenitor which, in combination, allow for a clear separation of exploding and non-exploding cases with only a few exceptions (~1--2.5%) in our set of 621 investigated stellar models. One parameter is M4, defining the enclosed mass for a dimensionless entropy per nucleon of s = 4, and the other is mu4 = dm/dr|_{s=4}, the mass-derivative at this location. The two parameters mu4 and M4*mu4 can be directly linked to the mass-infall rate, Mdot, of the collapsing star and the electron-type neutrino luminosity of the accreting proto-NS, L_nue ~ M_ns*Mdot, which play...

  20. The analysis and application of simplified two-parameter moveout equation for C-waves in VTI anisotropy media

    Institute of Scientific and Technical Information of China (English)

    Li Xiao-Ming; Chen Shuang-Quan; Li Xiang-Yang

    2013-01-01

    Several parameters are needed to describe the converted-wave (C-wave) moveout in processing multi-component seismic data, because of asymmetric raypaths and anisotropy. As the number of parameters increases, converted-wave data processing and analysis becomes more complex. This paper develops a new moveout equation with two parameters for C-waves in vertical transverse isotropy (VTI) media. The two parameters are the C-wave stacking velocity (VC2) and the squared velocity ratio (γvti) between the horizontal P-wave velocity and the C-wave stacking velocity. The new equation has fewer parameters, but retains the same applicability as previous ones. The applicability of the new equation and the accuracy of the parameter estimation are checked using model and real data. The form of the new equation is the same as that for layered isotropic media. The new equation can simplify the procedure for C-wave processing and parameter estimation in VTI media, and can be applied to real C-wave processing and interpretation. Accurate VC2 and γvti can be deduced from C-wave data alone using the double-scanning method, and the velocity ratio model is suitable for event matching between P- and C-wave data.

  1. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    Energy Technology Data Exchange (ETDEWEB)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H., E-mail: B.H.Erne@uu.nl

    2015-04-15

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. - Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment.

  2. Application of Two-Parameter Extrapolation for Solution of Boundary-Value Problem on Semi-Axis

    CERN Document Server

    Zhidkov, E P

    2000-01-01

    A method for refining approximate eigenvalues and eigenfunctions for a boundary-value problem on a half-axis is suggested. To solve the problem numerically, one has to solve a problem on a finite segment [0,R] instead of the original problem on the interval [0,\infty). This replacement introduces errors in the eigenvalues and eigenfunctions. It is often impossible to choose R beforehand so as to obtain the required accuracy; thus, one has to re-solve the problem on [0,R] with a larger R. If two eigenvalues or two eigenfunctions corresponding to different segments are available, the suggested method allows one to improve the accuracy of the eigenvalue and the eigenfunction for the original problem by means of extrapolation along the segment. This approach is similar to Richardson's method. Moreover, a two-parameter extrapolation is described: it is a combination of the extrapolation along the segment and Richardson's extrapolation along a discretization step.
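    The segment- and step-extrapolation the abstract describes follows the familiar Richardson pattern. Below is a minimal Python sketch, under the assumption that the error expands in powers h^p, h^{2p}, ...; the tableau and the toy derivative example are illustrative, not taken from the paper.

```python
import numpy as np

def richardson(f, h, p=2, levels=4):
    """Richardson extrapolation of a quantity f(h) with leading error O(h^p).

    Builds the standard triangular tableau; T[-1][-1] is the most
    extrapolated estimate. Here f is any function of the step/segment
    parameter h (e.g. an eigenvalue computed on [0, R] as a function of 1/R).
    """
    T = [[f(h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            factor = 2**(p * j)
            T[i].append((factor * T[i][j - 1] - T[i - 1][j - 1]) / (factor - 1))
    return T[-1][-1]

# Toy example: derivative of sin at 0 from forward differences, whose
# error is O(h); extrapolation recovers near machine-precision accuracy.
fd = lambda h: (np.sin(h) - np.sin(0)) / h
print(richardson(fd, 0.1, p=1))  # -> 1.0000...
```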

  3. Combining sigma-lognormal modeling and classical features for analyzing graphomotor performances in kindergarten children.

    Science.gov (United States)

    Duval, Thérésa; Rémi, Céline; Plamondon, Réjean; Vaillant, Jean; O'Reilly, Christian

    2015-10-01

    This paper investigates the advantage of using the kinematic theory of rapid human movements as a complementary approach to those based on classical dynamical features to characterize and analyze kindergarten children's ability to engage in graphomotor activities as a preparation for handwriting learning. This study analyzes nine different movements taken from 48 children evenly distributed among three different school grades corresponding to pupils aged 3, 4, and 5 years. On the one hand, our results show that the ability to perform graphomotor activities depends on kindergarten grade. More importantly, this study shows which performance criteria, from sophisticated neuromotor modeling as well as more classical kinematic parameters, can differentiate children of different school grades. These criteria provide a valuable tool for studying children's graphomotor control learning strategies. On the other hand, from a practical point of view, it is observed that school grades do not clearly reflect pupils' graphomotor performances. This calls for a large-scale investigation, using a more efficient experimental design based on the various observations made throughout this study regarding the choice of the graphic shapes, the number of repetitions and the features to analyze.

  4. A Theoretical Study of Distribution of First Passage Times of Biomolecular Folding and Reactions with Application to Single Molecules

    Science.gov (United States)

    Wang, Jin; Leite, Vitor; Stell, George; Lee, Chi-Lun

    2002-03-01

    We study the distribution of first passage times of biomolecular folding and reactions through the general framework of energy landscape theory. Both the analytical and lattice simulation results show that above a certain specific temperature the distribution of first passage times is log-normal, while below that temperature the distribution starts to develop fat tails and deviates from the log-normal distribution, indicating intermittency, where rare events may dominate the whole process. A power-law distribution of first passage times was found analytically in this situation. Applications and connections to experiments on single-molecule reaction dynamics are studied.

  5. Simulation of mineral dust aerosol with piecewise log-normal approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2011-09-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Module (CanAM4-PAM). The total simulated annual mean dust burden is 37.8 mg m−2 for year 2000, which is consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with several satellite observations and shows good agreement. The model yields a dust AOD of 0.042 and a total AOD of 0.126 for the year 2000. The simulated aerosol direct radiative forcings (ADRF) of dust and total aerosol over ocean are −1.24 W m−2 and −4.76 W m−2 respectively, which show good consistency with satellite estimates for the year 2001.

  6. Comment on "Particle size distribution effects on sintering rates" [J. Appl. Phys. 60, 383 (1986)]

    Science.gov (United States)

    Harrett, T.

    1987-06-01

    A recent paper in the Journal of Applied Physics analyzed the dependence of sintering rate on powder particle-size distribution to derive a basic relative rate constant for the process. The derivation involved rather cumbersome numerical quadratures of the lognormal functions concerned. This procedure is unnecessary, since an exact closed-form result, as given in this Comment, is easily obtained. Some fairly obvious incidental errors in the original presentation are also corrected. Several other lognormal distribution integrals, apparently unlisted in previous literature and which might prove similarly useful in connection with distribution problems, are also presented.
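    The closed-form result in question rests on the fact that moments of the lognormal density are available analytically, so numerical quadrature is unnecessary. As a reminder, this is the textbook identity, not the Comment's specific rate expression:

```latex
% Moments of a lognormal density: no numerical quadrature is needed,
% since for X ~ Lognormal(\mu, \sigma^2),
\int_0^\infty x^k \,
  \frac{1}{x\sigma\sqrt{2\pi}}
  \exp\!\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right) dx
  \;=\; \exp\!\left(k\mu + \tfrac{1}{2}k^2\sigma^2\right),
\qquad k \in \mathbb{R}.
```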

  7. Forms and genesis of species abundance distributions

    Directory of Open Access Journals (Sweden)

    Evans O. Ochiaga

    2015-12-01

    Full Text Available Species abundance distribution (SAD) is one of the most important metrics in community ecology. SAD curves take a hollow or hyperbolic shape in a histogram plot, with many rare species and only a few common species. In general, the shape of the SAD is close to log-normal, although the mechanism behind this particular shape remains elusive. Here, we review four major parametric forms of the SAD and three contending mechanisms that could potentially explain this highly skewed form. The parametric forms reviewed here include the log series, negative binomial, lognormal and geometric distributions. The mechanisms reviewed here include the maximum entropy theory of ecology, neutral theory and the theory of proportionate effect.

  8. Inertial particles distribute in turbulence as Poissonian points with random intensity inducing clustering and supervoiding

    CERN Document Server

    Schmidt, Lukas; Holzner, Markus

    2016-01-01

    This work considers the distribution of inertial particles in turbulence using the point-particle approximation. We demonstrate that the random point process formed by the positions of particles in space is a Poisson point process with log-normal random intensity (a "log-Gaussian Cox process" or LGCP). The probability of having a finite number of particles in a small volume is given in terms of the characteristic function of a log-normal distribution. Corrections to the previously derived continuum-limit statistics of particle concentration, due to the discreteness of the number of particles, are provided; these are relevant when dealing with experimental or numerical data. The probability of having regions without particles, i.e. voids, is larger for inertial particles than for tracer particles, whose voids are distributed according to Poisson processes. Further, the probability of having large voids decays only log-normally with size. This shows that particles cluster, leaving voids behind. At scales where the...
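    The contrast between a plain Poisson process and a Cox process with lognormal intensity is easy to reproduce numerically. A minimal sketch with illustrative parameters (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def lgcp_counts(n_cells, mean_log, sigma_log, volume=1.0):
    """Counts of particles in n_cells small volumes under a log-Gaussian
    Cox process: the Poisson intensity itself is lognormal-distributed."""
    intensity = rng.lognormal(mean_log, sigma_log, n_cells)  # random rate per cell
    return rng.poisson(intensity * volume)

counts_lgcp = lgcp_counts(100_000, mean_log=0.0, sigma_log=1.5)
counts_poisson = rng.poisson(np.exp(0.0 + 1.5**2 / 2), 100_000)  # same mean rate

# Voids (empty cells) are more probable under the LGCP than under a
# plain Poisson process with the same mean intensity.
print((counts_lgcp == 0).mean(), (counts_poisson == 0).mean())
```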

  9. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    Science.gov (United States)

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  10. Modeling light-tailed and right-skewed data with a new asymmetric distribution

    OpenAIRE

    Cadena, Meitner

    2016-01-01

    A new three-parameter cumulative distribution function defined on (α,∞), for some α ≥ 0, with an asymmetric probability density function showing exponential decay in both tails, is introduced. The new distribution is close to familiar distributions like the gamma and log-normal distributions, but it exhibits its own distinct features and generalizes neither of them. Hence, the new distribution constitutes a new alternative for fitting light-tailed behaviors of high extreme...

  11. Vapor intrusion in soils with multimodal pore-size distribution

    OpenAIRE

    Alfaro Soto Miguel; Hung Kiang Chang

    2016-01-01

    The Johnson and Ettinger [1] model and its extensions are at this time the most widely used algorithms for estimating subsurface vapor intrusion into buildings (API [2]). The functions which describe capillary pressure curves are utilized in quantitative analyses, although these are applicable for porous media with a unimodal or lognormal pore-size distribution. However, unaltered soils may have a heterogeneous pore distribution and consequently a multimodal pore-size distribution [3], which ...

  12. Power-like Tail Observed in Weight Distributions of Schoolchildren

    CERN Document Server

    Kuninaka, Hiroto

    2015-01-01

    We investigated the statistical properties of the weight distributions of Japanese children who were born in 1996, from recent data. The weights of 16- and 17-year-old male children have a lognormal distribution with a power-like tail, which is best modeled by the double Pareto distribution. The emergence of the power-like tail may be attributed to the low probability that an obese person will attain a normal weight.

  13. Construction of a two-parameter empirical model of left ventricle wall motion using cardiac tagged magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Shi Jack J

    2012-10-01

    the spatial and temporal evolution of the LV wall motion using a two-parameter formulation in conjunction with tMRI-based visualization of the LV wall in the transverse planes of the apex, mid-ventricle and base. In healthy hearts, the analytical model will potentially allow deriving biomechanical entities, such as strain, strain rate or torsion, which are typically used as diagnostic, prognostic or predictive markers of cardiovascular diseases including diabetes.

  14. The mathematical formula of the intravaginal ejaculation latency time (IELT) distribution of lifelong premature ejaculation differs from the IELT distribution formula of men in the general male population

    Science.gov (United States)

    Janssen, Paddy K.C.

    2016-01-01

    Purpose To find the most accurate mathematical description of the intravaginal ejaculation latency time (IELT) distribution in the general male population. Materials and Methods We compared the fitness of various well-known mathematical distributions with the IELT distribution of two previously published stopwatch studies of the Caucasian general male population and a stopwatch study of Dutch Caucasian men with lifelong premature ejaculation (PE). The accuracy of fitness is expressed by the Goodness of Fit (GOF); the smaller the GOF, the more accurate the fitness. Results The 3 IELT distributions are gamma distributions, but the IELT distribution of lifelong PE is a different gamma distribution from that of men in the general male population. The Lognormal distribution most accurately fits the IELT distribution of 965 men in the general population, with a GOF of 0.057. The Gumbel Max distribution most accurately fits the IELT distribution of 110 men with lifelong PE, with a GOF of 0.179. There are more men with lifelong PE ejaculating within 30 and 60 seconds than can be extrapolated from the probability density curve of the Lognormal IELT distribution of men in the general population. Conclusions Men with lifelong PE have a distinct IELT distribution, i.e., a Gumbel Max IELT distribution, that could only be retrieved from the general male population Lognormal IELT distribution if thousands of men were to participate in an IELT stopwatch study. The mathematical formula of the Lognormal IELT distribution is useful for epidemiological research on the IELT. PMID:26981594

  15. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    Energy Technology Data Exchange (ETDEWEB)

    Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F. [Medical Physics Graduate Program, Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States); Siebers, Jeffrey V. [Department of Radiation Oncology, University of Virginia, Charlottesville, Virginia 22908 (United States)

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and the VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on

  16. Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 2-Joint physical and biological data assimilation twin experiments

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    Coupled physical and biological data assimilation is performed within the California Current System using model twin experiments. The initial condition of physical and biological variables is estimated using the four-dimensional variational (4DVar) method under the Gaussian and lognormal error distribution assumptions, respectively. Errors are assumed to be independent, yet variables are coupled by assimilation through model dynamics. Using a nutrient-phytoplankton-zooplankton-detritus (NPZD) model coupled to an ocean circulation model (the Regional Ocean Modeling System, ROMS), the coupled data assimilation procedure is evaluated by comparing results to experiments with no assimilation and with assimilation of physical data and biological data separately. Independent assimilation of physical (biological) data reduces the root-mean-squared error (RMSE) of physical (biological) state variables by more than 56% (43%). However, the improvement in biological (physical) state variables is less than 7% (13%). In contrast, coupled data assimilation improves both physical and biological components, by 57% and 49%, respectively. Coupled data assimilation shows robust performance with varied observational errors, resulting in significantly smaller RMSEs compared to the free run. Even in the presence of physical and biological model error it still estimates the observed variables better than the free run, though it leads to higher RMSEs for unobserved variables. A series of twin experiments illustrates that coupled physical and biological 4DVar assimilation is computationally efficient and practical, capable of providing reliable estimates of the coupled system, and is ready to be examined in a realistic configuration.

  17. Growth models and the expected distribution of fluctuating asymmetry

    Science.gov (United States)

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
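    Both claims — that the difference of correlated lognormal variates is leptokurtic while the difference of their logarithms is normal — can be checked with a few lines of simulation. A sketch with illustrative parameter values (per-individual log differences stand in for the expected-value version the abstract recommends):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Correlated lognormal left/right sides: multiplicative growth error
n, rho, sigma = 200_000, 0.8, 0.3
cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
log_lr = rng.multivariate_normal([0.0, 0.0], cov, size=n)
l, r = np.exp(log_lr).T

d = l - r
print("excess kurtosis of l - r:", stats.kurtosis(d))  # > 0: leptokurtic
print("excess kurtosis of log l - log r:",
      stats.kurtosis(np.log(l) - np.log(r)))           # ~ 0: normal
```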

  18. Probability distributions of the electroencephalogram envelope of preterm infants.

    Science.gov (United States)

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes.

  19. Size distribution of Portuguese firms between 2006 and 2012

    Science.gov (United States)

    Pascoal, Rui; Augusto, Mário; Monteiro, A. M.

    2016-09-01

    This study aims to describe the size distribution of Portuguese firms, as measured by annual sales and total assets, between 2006 and 2012, giving an economic interpretation for the evolution of the distribution over time. Three distributions are fitted to the data: the lognormal, the Pareto (and, as a particular case, Zipf) and the Simplified Canonical Law (SCL). We present the main arguments found in the literature to justify the use of these distributions and emphasize the interpretation of the SCL coefficients. Methods of estimation include Maximum Likelihood, modified Ordinary Least Squares in log-log scale, and Nonlinear Least Squares using the Levenberg-Marquardt algorithm. Applying these approaches to the Portuguese firm data, we analyze whether the evolution of the estimated parameters of the lognormal, Pareto and SCL distributions is in accordance with the known recession period after 2008. This is confirmed for sales but not for assets, leading to the conclusion that the first variable is a better proxy for firm size.

  20. Statistical Distributions of Ambient Air Pollutants in Shanghai,China

    Institute of Scientific and Technical Information of China (English)

    HAI-DONG KAN; BING-HENG CHEN

    2004-01-01

    To determine the best statistical distribution of concentration data for major air pollutants in Shanghai. Methods Four types of theoretical distributions (lognormal, gamma, Pearson V and extreme value) were fitted to daily average concentration data for PM10, SO2 and NO2 from June 1, 2000 to May 31, 2003 in Shanghai using the maximum likelihood method. The fit results were evaluated by the Chi-square test. Results The best-fit distributions for PM10, SO2 and NO2 concentrations in Shanghai were the lognormal, Pearson V, and extreme value distributions, respectively. Conclusion The results can be further applied to local air pollution prediction and control, e.g., estimating the probability of exceeding the air quality standard and the emission source reduction needed for air pollutant concentrations to meet the standard.
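    The fit-then-test workflow the Methods describe can be reproduced outside any specific package. A minimal Python sketch using synthetic stand-in data (the real concentrations are not reproduced here), with maximum likelihood fits evaluated by a chi-square test on equal-count bins:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pm10 = rng.lognormal(4.3, 0.5, 1096)  # placeholder for ~3 years of daily PM10

# Fit candidates by maximum likelihood, then chi-square test on binned counts
for dist in (stats.lognorm, stats.gamma):
    params = dist.fit(pm10)
    edges = np.quantile(pm10, np.linspace(0, 1, 11))  # 10 equal-count bins
    observed, _ = np.histogram(pm10, edges)
    cdf = dist.cdf(edges, *params)
    expected = len(pm10) * np.diff(cdf)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = len(observed) - 1 - len(params)
    print(dist.name, "chi2 =", round(chi2, 1),
          "p =", round(stats.chi2.sf(chi2, dof), 3))
```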

  1. Area and Flux Distributions of Active Regions, Sunspot Groups, and Sunspots: A Multi-Database Study

    CERN Document Server

    Muñoz-Jaramillo, Andrés; Windmueller, John C; Amouzou, Ernest C; Longcope, Dana W; Tlatov, Andrey G; Nagovitsyn, Yury A; Pevtsov, Alexei A; Chapman, Gary A; Cookson, Angela M; Yeates, Anthony R; Watson, Fraser T; Balmaceda, Laura A; DeLuca, Edward E; Martens, Petrus C H

    2014-01-01

    In this work we take advantage of eleven different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions -- where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) $10^{21}$Mx ($10^{22}$Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behaviour of a power-law distribution (when extended into smaller fluxes), making our results compatible with the results of Parnell et al. (200...

  2. Business size distributions

    Science.gov (United States)

    D'Hulst, R.; Rodgers, G. J.

    2001-10-01

    In a recent work, we introduced two models for the dynamics of customers trying to find the business that best corresponds to their expectation for the price of a commodity. In agreement with the empirical data, a power-law distribution for the business sizes was obtained, taking the number of customers of a business as a proxy for its size. Here, we extend one of our previous models in two different ways. First, we introduce a business aggregation rate that is fitness dependent, which allows us to reproduce a spread in empirical data from one country to another. Second, we allow the bankruptcy rate to take a different functional form, to be able to obtain a log-normal distribution with power-law tails for the size of the businesses.

  3. Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: Basic theory and applications.

    Science.gov (United States)

    Doungmo Goufo, Emile Franc

    2016-08-01

    After having the issues of singularity and locality addressed recently in mathematical modelling, another question regarding the description of natural phenomena was raised: How influential is the second parameter β of the two-parameter Mittag-Leffler function Eα,β(z), z∈ℂ? To answer this question, we generalize the newly introduced one-parameter derivative with non-singular and non-local kernel [A. Atangana and I. Koca, Chaos, Solitons Fractals 89, 447 (2016); A. Atangana and D. Bealeanu (e-print)] by developing a similar two-parameter derivative with non-singular and non-local kernel based on Eα,β(z). We exploit the Agarwal/Erdelyi higher transcendental functions together with their Laplace transforms to explicitly establish the Laplace transforms of the two-parameter derivatives, necessary for solving related fractional differential equations. An explicit expression for the associated two-parameter fractional integral is also established. Concrete applications are made to the atmospheric convection process using the Lorenz nonlinear simple system. An existence result for the model is provided and a numerical scheme established. As expected, solutions exhibit chaotic behavior for α less than 0.55, and this chaos is not interrupted by the impact of β. Rather, this second parameter seems to indirectly squeeze and rotate the solutions, giving an impression of twisting; the whole picture appears to have changed its orientation toward a particular direction. This clearly shows the substantial impact of the second parameter of Eα,β(z), certainly opening new doors to modeling with two-parameter derivatives.

  4. INPUT MODELLING USING STATISTICAL DISTRIBUTIONS AND ARENA SOFTWARE

    Directory of Open Access Journals (Sweden)

    Elena Iuliana GINGU (BOTEANU

    2015-05-01

    Full Text Available The paper presents a method for properly choosing the probability distributions of failure times in a flexible manufacturing system. Several well-known distributions often provide good approximations in practice; the commonly used continuous distributions are the Uniform, Triangular, Beta, Normal, Lognormal, Weibull, and Exponential. This article studies how to use the Input Analyzer of the simulation language Arena to fit probability distributions to data, and to evaluate how well a particular distribution fits the data. The objective was to support the selection of the most appropriate statistical distributions and to estimate the parameter values of failure times for each machine of a real manufacturing line.
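    The same input-modelling step — fit several candidate distributions to failure-time data and rank them — can be sketched outside Arena's Input Analyzer. A minimal illustration with synthetic data; the AIC ranking here is one reasonable criterion, not necessarily the one the Input Analyzer uses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
failure_times = rng.weibull(1.5, 400) * 120.0  # placeholder machine failure data

candidates = [stats.expon, stats.weibull_min, stats.lognorm, stats.gamma]
for dist in candidates:
    params = dist.fit(failure_times)           # maximum likelihood fit
    loglik = dist.logpdf(failure_times, *params).sum()
    aic = 2 * len(params) - 2 * loglik
    print(f"{dist.name:12s} AIC = {aic:.1f}")
# The lowest-AIC candidate is the one to carry into the simulation model.
```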

  5. The Czech Wage Distribution and the Minimum Wage Impacts: the Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Kateřina Duspivová

    2013-06-01

    Full Text Available A well-fitting wage distribution is a crucial precondition for economic modeling of labour market processes. In the first part, this paper provides evidence that -- as for wages in the Czech Republic -- the most often used log-normal distribution failed and the best-fitting one is the Dagum distribution. Then we investigate the role of the wage distribution in the process of economic modeling. By way of an example of the minimum wage impacts on the Czech labour market, we examine the response of Meyer and Wise's (1983) model to the Dagum and log-normal distributions. The results suggest that the wage distribution has important implications for the effects of the minimum wage on the shape of the lower tail of the measured wage distribution and is thus an important feature for interpreting the effects of minimum wages.

  6. Revisiting the thermal and superthermal two-class distribution of incomes. A critical perspective

    Science.gov (United States)

    Schneider, Markus P. A.

    2015-01-01

    This paper offers a two-pronged critique of the empirical investigation of the income distribution performed by physicists over the past decade. Their findings rely on graphical analysis of the observed distribution of normalized incomes. Two central observations lead to the conclusion that the majority of incomes are exponentially distributed, but neither individual piece of evidence nor their joint observation robustly proves that the thermal and superthermal mixture fits the observed distribution of incomes better than reasonable alternatives. A formal analysis using popular measures of fit shows that while an exponential distribution with a power-law tail provides a better fit to the IRS income data than the log-normal distribution (often assumed by economists), the fit of the thermal and superthermal mixture can be improved further by adding a log-normal component. The economic implications of the thermal and superthermal distribution of incomes, and of the expanded mixture, are explored in the paper.

  7. THE DENSITY DISTRIBUTION IN TURBULENT BISTABLE FLOWS

    Energy Technology Data Exchange (ETDEWEB)

    Gazol, Adriana [Centro de Radioastronomia y Astrofisica, UNAM, A. P. 3-72, c.p. 58089 Morelia, Michoacan (Mexico); Kim, Jongsoo, E-mail: a.gazol@crya.unam.mx, E-mail: jskim@kasi.re.kr [Korea Astronomy and Space Science Institute, 61-1, Hwaam-Dong, Yuseong-Ku, Daejeon 305-348 (Korea, Republic of)

    2013-03-01

    We numerically study the volume density probability distribution function (n-PDF) and the column density probability distribution function (Σ-PDF) resulting from thermally bistable turbulent flows. We analyze three-dimensional hydrodynamic models in periodic boxes of 100 pc per side, where turbulence is driven in Fourier space at a wavenumber corresponding to 50 pc. At low densities (n ≲ 0.6 cm⁻³), the n-PDF is well described by a lognormal distribution for an average local Mach number ranging from ~0.2 to ~5.5. As a consequence of the nonlinear development of thermal instability (TI), the logarithmic variance of the distribution of the diffuse gas increases with M faster than in the well-known isothermal case. The average local Mach number for the dense gas (n ≳ 7.1 cm⁻³) goes from ~1.1 to ~16.9 and the shape of the high-density zone of the n-PDF changes from a power law at low Mach numbers to a lognormal at high M values. In the latter case, the width of the distribution is smaller than in the isothermal case and grows more slowly with M. At high column densities, the Σ-PDF is well described by a lognormal for all of the Mach numbers we consider and, due to the presence of TI, the width of the distribution is systematically larger than in the isothermal case but follows a qualitatively similar behavior as M increases. Although a relationship between the width of the distribution and M can be found for each of the cases mentioned above, these relations are different from those of the isothermal case.

  8. From Asymmetrical Growth Rate Distributions to Multiple Normal Distributions

    Science.gov (United States)

    Žekić, A. A.; Mitrović, M. M.

    2007-04-01

    Growth rate dispersion (GRD) represents the variations in the growth rates of different crystals of the same material grown under the same conditions. Mostly, these dispersions are described by asymmetrical distributions with one maximum, skewed to the right (log-normal and gamma). Recently, it was shown that the GRD of crystals can be described by distributions with more maxima (multiple normal distributions). An analysis of the number and height of the growth rate dispersion maxima has been performed. This analysis shows that the first or the second maximum has the maximal height, which results in the asymmetry of the distributions. This is the reason for the right skewness of the GRD when small numbers of crystal growth rates are analysed. The results are discussed in accordance with crystal growth theories.

  9. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    Missov and Finkelstein (2011) prove an Abelian and its corresponding Tauberian theorem regarding distributions for modeling unobserved heterogeneity in fixed-frailty mixture models. The main property of such distributions is the regular variation at zero of their densities. According to this criterion admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian...

  10. Tamaño de muestra requerido para estimar la media aritmética de una distribución lognormal

    OpenAIRE

    2012-01-01

    Closed-form formulas are presented for calculating the sample size required to estimate the arithmetic mean of a lognormal distribution, for censored and uncensored data. The formulas result from fitting nonlinear models to the exact sample sizes reported by Pérez (1995) as a function of the geometric standard deviation, the percentage difference from the true arithmetic mean, and confidence levels of 90%, 95% and 99%. The formulas presented...

  11. A decade plus of snow distribution observations in a mountain catchment: assessing variability, self-similarity, and the representativeness of an index site

    Science.gov (United States)

    Winstral, A. H.; Marks, D. G.

    2012-12-01

    This study presents an analysis of eleven years of manually sampled snow depth and SWE data at the drift-dominated Reynolds Mountain East catchment (0.36 km^2) in southwestern Idaho, U.S.A. The dataset includes eleven mid-winter surveys and ten surveys that targeted peak accumulation in the early spring. Depths were sampled on the same 30-meter grid covering the entire catchment in each survey. Densities were sampled at a coarser resolution using a depth-stratified random sampling scheme. In 19 of the 21 surveys, snow density increased with increasing depth until an upper limit was attained in the drifts. The coefficient of variation (CV) for mid-winter snow depths varied from 0.46 to 0.75 and was significantly related to seasonal wind speeds (p = 0.02). Energy inputs, correlated inversely with accumulation rates in this catchment, caused variability to increase as melt progressed through the season. The CV for all three surveys that took place after peak accumulation exceeded 1.0. Inter-seasonal distributions were strongly correlated: correlation coefficients ranged from 0.70 to 0.97 with a mean of 0.84. An index site with site characteristics similar to NRCS Snotel sites gave reasonable approximations of average catchment SWE in drier years; however, as snowfall increased, this site increasingly over-estimated basin-wide SWE. Though others have found snow distributions to be reasonably approximated by two-parameter lognormal distributions, Kolmogorov-Smirnov goodness-of-fit tests rejected this hypothesis (p < 0.01) in 20 of the 21 observed distributions.
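    A Kolmogorov-Smirnov test of a two-parameter lognormal fit, of the kind reported here, can be sketched as follows; the data are synthetic stand-ins, and the caveat in the comments applies to any test whose parameters are estimated from the same sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
depths = rng.lognormal(0.0, 0.6, 400)  # stand-in for one survey's snow depths

# Fit a two-parameter lognormal (location fixed at zero), then run a KS test.
shape, loc, scale = stats.lognorm.fit(depths, floc=0)
stat, p = stats.kstest(depths, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# Caveat: using parameters estimated from the same sample makes this p-value
# optimistic; a Lilliefors-style correction or parametric bootstrap is stricter.
```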

  12. Explaining the Power-law Distribution of Human Mobility Through Transportation Modality Decomposition

    CERN Document Server

    Zhao, Kai; Hui, Pan; Rao, Weixiong; Tarkoma, Sasu

    2014-01-01

    Human mobility has been empirically observed to exhibit Levy flight characteristics, with power-law distributed jump sizes. The fundamental mechanisms behind this behaviour have not yet been fully explained. In this paper, we analyze urban human mobility and propose to explain the Levy walk behaviour observed in human mobility patterns by decomposing trajectories into different classes according to transportation mode, such as Walk/Run, Bicycle, Train/Subway or Car/Taxi/Bus. Our analysis is based on two real-life GPS datasets containing approximately 10 and 20 million GPS samples with transportation mode information. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an e...
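    The decomposition argument — single-mode jump sizes are lognormal, but their mixture has a power-law-looking tail — can be illustrated with a toy simulation. All mode parameters and weights below are made up for demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Jump sizes per transport mode, each lognormal with its own scale (illustrative)
modes = [(0.0, 0.5, 0.5),   # walk:  (mu, sigma, weight)
         (1.5, 0.6, 0.3),   # bike
         (3.0, 0.7, 0.2)]   # car/bus
n = 500_000
samples = np.concatenate([
    rng.lognormal(mu, sigma, int(w * n)) for mu, sigma, w in modes])

# The empirical tail P(X > x) on log-log axes looks close to a straight line
# (power law) over a wide range, even though each component is lognormal.
xs = np.logspace(0, 2, 12)
for x in xs:
    print(f"x = {x:7.2f}  P(X > x) = {(samples > x).mean():.4f}")
```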

  13. London house prices are power-law distributed

    CERN Document Server

    MacKay, Niall

    2010-01-01

    In this pilot study we explore the house price distributions for London, Manchester, Bristol, Newcastle, Birmingham and Leeds. We find Pareto (power law) behaviour in their upper tails, which is clearly distinct from lognormal behaviour in the cases of London and Manchester. We propose an index of Housing Wealth Inequality based on the Pareto exponent and analogous to the Gini coefficient, and comment on its possible uses.

  14. Modeling the distribution of co-authorships by paper

    Directory of Open Access Journals (Sweden)

    Rubén Urbizagástegui Alvarado

    2011-04-01

    Full Text Available In the literature produced on Lotka's law from 1922 to June 2010, the geometric distribution, truncated Poisson distribution, Poisson lognormal distribution, and generalized inverse Gaussian Poisson distribution are studied to statistically model the number of authors who collaborate in the publication of an article. It was found that the Poisson lognormal distribution most closely estimated the total number of observed documents, followed by the truncated Poisson model, the geometric distribution, and finally the generalized inverse Gaussian Poisson model.

  15. Relation between Fresnel transform of input light field and the two-parameter Radon transform of Wigner function of the field

    Institute of Scientific and Technical Information of China (English)

    Fan Hong-Yi; Hu Li-Yun

    2009-01-01

    This paper proves a new theorem on the relationship between the two-parameter Radon transform of an optical field's Wigner function and the optical Fresnel transform of the field: when an input field ψ(x') propagates through an optical [D (-B) (-C) A] system, the energy density of the output field is equal to the Radon transform of the Wigner function of the input field, where the Radon transform parameters are D, B. The theorem is proved in both the spatial domain and the frequency domain; in the latter case the Radon transform parameters are A, C.
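    For orientation, the standard one-dimensional Collins (Fresnel) integral for an [A, B; C, D] system and the Radon-type projection it relates to can be written as follows; the normalization and units (ħ = 1) are conventional assumptions, not taken from the paper:

```latex
% Fresnel transform of the input field through an ABCD system (Collins form):
\psi_{\mathrm{out}}(x) = \frac{1}{\sqrt{2\pi i B}}
  \int_{-\infty}^{\infty}
  \exp\!\left[\frac{i}{2B}\left(A x'^2 - 2 x x' + D x^2\right)\right]
  \psi(x')\, dx'
% The output energy density is then a two-parameter Radon projection of the
% input Wigner function W(x', p'):
\left|\psi_{\mathrm{out}}(x)\right|^2
  = \int W(x',p')\, \delta\!\left(x - D x' - B p'\right) dx'\, dp'
```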

  16. Reliability Analysis for Two-parameter Exponential Units under Constant-stress Accelerated Life Testing

    Institute of Scientific and Technical Information of China (English)

    谭伟; 师义民; 孙玉东

    2012-01-01

    For two-parameter exponential units, under grouped life testing with binomial removals (i.e., the number of units removed at each observation time follows a binomial distribution), the method for determining the grouping times is studied, and the maximum likelihood estimates of the threshold parameter, the lifetime parameter and the removal probability are derived. The reliability analysis of two-parameter exponential units under constant-stress accelerated grouped life testing with binomial removals is then discussed. With the aid of the accelerated life equation, the reliability estimate of the two-parameter exponential unit is given. Finally, a simulation example verifies the correctness of the results.

  17. A Polynomial Distribution Applied to Income and Wealth Distribution

    Directory of Open Access Journals (Sweden)

    Elvis Oltean

    2013-08-01

    Full Text Available Income and wealth distribution affect the stability of a society to a large extent, and high inequality affects it negatively. Moreover, in the case of developed countries, it has recently been shown that inequality is closely related to all negative phenomena affecting society. So far, Econophysics papers have tried to analyse income and wealth distribution by employing distributions such as Fermi-Dirac, Bose-Einstein, Maxwell-Boltzmann, lognormal (Gibrat), and exponential. Generally, these distributions describe mostly income, and less so wealth, for the low and middle income segment of the population, which accounts for about 90% of the population. Our approach is based on a totally new distribution, not used so far in the literature on income and wealth distribution. Using the cumulative distribution method, we find that polynomial functions, regardless of their degree (first, second, or higher), can describe both income and wealth distribution with very high accuracy. Moreover, we find that polynomial functions describe income and wealth distribution for the entire population, including the upper income segment for which the Pareto distribution is traditionally used.

  18. Development of a two-parameter slit-scan flow cytometer for screening of normal and aberrant chromosomes: application to a karyotype of Sus scrofa domestica (pig)

    Science.gov (United States)

    Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph

    1992-07-01

    Laser fluorescence activated slit-scan flow cytometry offers an approach to a fast, quantitative characterization of chromosomes based on morphological features. It can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. The time-resolved fluorescence intensity along the chromosome axis can be registered simultaneously for two parameters when the chromosome passes perpendicularly through a narrowly focused laser beam combined with a detection slit in the image plane. So far automated data analysis has been performed off-line on a PC. In its final form, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows electro-acoustical sorting of chromosomes of interest. Interest is high in agriculture in studying chromosome aberrations that influence the size of litters in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize chromosomes of pigs; we present results for chromosome 1 and a translocation chromosome 6/15.

  19. Two-parameter logistic and Weibull equations provide better fits to survival data from isogenic populations of Caenorhabditis elegans in axenic culture than does the Gompertz model.

    Science.gov (United States)

    Vanfleteren, J R; De Vreese, A; Braeckman, B P

    1998-11-01

    We have fitted Gompertz, Weibull, and two- and three-parameter logistic equations to survival data obtained from 77 cohorts of Caenorhabditis elegans in axenic culture. Statistical analysis showed that the fitting ability was in the order: three-parameter logistic > two-parameter logistic = Weibull > Gompertz. Pooled data were better fitted by the logistic equations, which tended to perform equally well as population size increased, suggesting that the third parameter is likely to be biologically irrelevant. Considering the constraints imposed by the small population sizes used, we simply conclude that the two-parameter logistic and Weibull mortality models for axenically grown C. elegans generally provided good fits to the data, whereas the Gompertz model was inappropriate in many cases. The survival curves of several short- and long-lived mutant strains could be predicted by adjusting only the logistic curve parameter that defines mean life span. We conclude that life expectancy is genetically determined; the life span-altering mutations reported in this study define a novel mean life span, but do not appear to fundamentally alter the aging process.
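    Fitting competing two-parameter survival models to cohort data, as done here, is straightforward with standard least-squares tools. A minimal sketch with made-up survival fractions; the logistic parameterization below is one common two-parameter form and may differ from the authors':

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate survival models S(t), each with two parameters
def weibull(t, lam, k):
    return np.exp(-(t / lam) ** k)

def logistic2(t, t50, k):
    return 1.0 / (1.0 + np.exp(k * (t - t50)))

def gompertz(t, a, b):
    return np.exp(-(a / b) * (np.exp(b * t) - 1.0))

# Illustrative cohort data: days vs. fraction surviving (not from the paper)
t = np.array([0, 4, 8, 12, 16, 20, 24, 28])
s = np.array([1.0, 0.98, 0.93, 0.80, 0.55, 0.25, 0.08, 0.01])

for name, model, p0 in [("Weibull", weibull, (15, 3)),
                        ("logistic", logistic2, (15, 0.5)),
                        ("Gompertz", gompertz, (0.01, 0.2))]:
    popt, _ = curve_fit(model, t, s, p0=p0, maxfev=10_000)
    sse = np.sum((s - model(t, *popt)) ** 2)
    print(f"{name:9s} params = {np.round(popt, 3)}  SSE = {sse:.4f}")
```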

  20. Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function

    CERN Document Server

    Ward, Rachel L; Sills, Alison

    2014-01-01

    The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...

  1. Outage and Capacity Performance Evaluation of Distributed MIMO Systems over a Composite Fading Channel

    Directory of Open Access Journals (Sweden)

    Wenjie Peng

    2014-01-01

    Full Text Available The exact closed-form expressions regarding the outage probability and capacity of distributed MIMO (DMIMO) systems over a composite fading channel are derived. This is achieved firstly by using a lognormal approximation to a gamma-lognormal distribution when a mobile station (MS) in the cell is in a fixed position, and the so-called maximum ratio transmission/selected combining (MRT-SC) and selected transmission/maximum ratio combining (ST-MRC) schemes are adopted in uplink and downlink, respectively. Then, based on a newly proposed nonuniform MS cell distribution model, which is more consistent with the MS cell hotspot distribution in an actual communication environment, the average outage probability and capacity formulas are further derived. Finally, the accuracy of the approximation method and the rationality of the corresponding theoretical analysis regarding the system performance are verified and illustrated by computer simulations.
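
    The lognormal approximation to a composite gamma-lognormal channel can be sketched by moment matching in the log domain; the parameters below are toy values, and the moment-matching step is one standard approach rather than necessarily the derivation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composite channel: gamma multipath (shape m) whose local mean is
# modulated by lognormal shadowing with standard deviation sigma_dB.
m, sigma_dB = 2.0, 6.0
shadow = rng.lognormal(mean=0.0, sigma=sigma_dB * np.log(10) / 10,
                       size=200_000)
power = rng.gamma(shape=m, scale=shadow / m)

# Lognormal approximation: match the first two moments of ln(power).
mu_hat, sigma_hat = np.log(power).mean(), np.log(power).std()
print(f"approximating lognormal: mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```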

  2. The Intrinsic Eddington Ratio Distribution of Active Galactic Nuclei in Star-forming Galaxies from the Sloan Digital Sky Survey

    CERN Document Server

    Jones, M L; Black, C S; Hainline, K N; DiPompeo, M A; Goulding, A D

    2016-01-01

    An important question in extragalactic astronomy concerns the distribution of black hole accretion rates of active galactic nuclei (AGN). Based on observations at X-ray wavelengths, the observed Eddington ratio distribution appears as a power law, while optical studies have often yielded a lognormal distribution. There is increasing evidence that these observed discrepancies may be due to contamination by star formation and other selection effects. Using a sample of galaxies from the Sloan Digital Sky Survey Data Release 7, we test if an intrinsic Eddington ratio distribution that takes the form of a Schechter function is consistent with previous work that suggests that young galaxies in optical surveys have an observed lognormal Eddington ratio distribution. We simulate the optical emission line properties of a population of galaxies and AGN using a broad instantaneous luminosity distribution described by a Schechter function near the Eddington limit. This simulated AGN population is then compared to observe...

  3. Fréchet Distribution Applied to Salary Incomes in Spain from 1999 to 2014. An Engineering Approach to Changes in Salaries’ Distribution

    Directory of Open Access Journals (Sweden)

    Santiago Pindado

    2017-05-01

    Full Text Available The official data on salaries paid in Spain from 1999 to 2014 have been analyzed. The format of the published data is inadequate, as it does not reflect the whole salary distribution. Fréchet distributions have been fitted to the data. This simple distribution fits the data with accuracy similar to that of other distributions (Log-Normal, Gamma, Dagum, GB2). Analysis of the data through the fitted Fréchet distributions reveals a tendency towards more balanced (i.e., less skewed) salary distributions from 2002 to 2014 in Spain.
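
    A minimal sketch of fitting a Fréchet distribution to salary data with SciPy, where the Fréchet is exposed as invweibull; the sample here is synthetic, standing in for the official tables.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for annual salaries (EUR).
salaries = stats.invweibull.rvs(c=3.0, scale=18_000, size=5_000,
                                random_state=np.random.default_rng(1))

# SciPy's invweibull is the Fréchet distribution; fix location at 0.
c, loc, scale = stats.invweibull.fit(salaries, floc=0)
print(f"Fréchet fit: shape={c:.2f}, scale={scale:.0f}")

# Falling skewness over the years would signal a more balanced distribution.
print(f"sample skewness: {stats.skew(salaries):.2f}")
```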

  4. Theoretical Grain Size Distribution of Permafrost Soils as a Generalized Consequence of Hypergene Processes

    Institute of Scientific and Technical Information of China (English)

    Igor E. Guryanov

    2004-01-01

    The paper discusses the distinctive features of grain size distribution of permafrost soils formed under conditions of continental lithogenesis and cryogenic weathering of rocks. As a functional consequence of surface erosion of mineral particles, the log-normal distribution of the density function of grain size is derived and confirmed for all conditions and sediment types.

  5. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    Energy Technology Data Exchange (ETDEWEB)

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; Martinez-Inesta, Maria

    2017-04-13

    The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.

  6. Inferring local competition intensity from patch size distributions: a test using biological soil crusts

    Science.gov (United States)

    Bowker, Matthew A.; Maestre, Fernando T.

    2012-01-01

    Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. These findings broaden the emerging theory surrounding dryland patch size distributions.
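
    The lognormal-versus-power-law comparison can be sketched with maximum likelihood and AIC as below; the patch sizes are simulated stand-ins, and the power-law fit fixes xmin at the smallest patch, a simplification of the full procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
patches = rng.lognormal(mean=1.0, sigma=0.8, size=500)  # stand-in areas

# Lognormal MLE (location fixed at zero) and its log-likelihood.
shape, loc, scale = stats.lognorm.fit(patches, floc=0)
ll_ln = stats.lognorm.logpdf(patches, shape, loc, scale).sum()

# Continuous power law above the smallest patch (Hill/MLE estimator).
xmin = patches.min()
alpha = 1.0 + patches.size / np.log(patches / xmin).sum()
ll_pl = (np.log((alpha - 1) / xmin) - alpha * np.log(patches / xmin)).sum()

# AIC with 2 free parameters (lognormal) vs 1 (power law, xmin fixed).
print(f"AIC lognormal = {4 - 2 * ll_ln:.1f}, power law = {2 - 2 * ll_pl:.1f}")
```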

  7. Understanding star formation in molecular clouds III. Probability distribution functions of molecular lines in Cygnus X

    CERN Document Server

    Schneider, N; Motte, F; Ossenkopf, V; Klessen, R S; Simon, R; Fechtenbaum, S; Herpin, F; Tremblin, P; Csengeri, T; Myers, P C; Hill, T; Cunningham, M; Federrath, C

    2015-01-01

    Column density (N) PDFs serve as a powerful tool to characterize the physical processes that influence the structure of molecular clouds. Star-forming clouds can best be characterized by lognormal PDFs for the lower N range and a power-law tail for higher N, commonly attributed to turbulence and self-gravity and/or pressure, respectively. We report here on PDFs obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region and compare to a PDF derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av~1-30, but is cut off at higher Av due to optical depth effects. The PDFs of C18O and 13CO are mostly lognormal for Av~1-15, followed by excess up to Av~40. Above that value, all CO PDFs drop, most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power law distribution between Av~15 and 400, respectively. The PDF from dust is lognormal for Av~2-15 and has a power-law tail up to Av~500. Absolute values for the molecular lin...

  8. Distribution of lanthanoids, Be, Bi, Ga, Te, Tl, Th and U on the territory of Bulgaria using Populus nigra 'Italica' as an indicator

    Energy Technology Data Exchange (ETDEWEB)

    Djingova, R.; Ivanova, Ju. [Department of Analytical Chemistry, Faculty of Chemistry, University of Sofia, 1, J. Bouchier Blvd, 1126 Sofia (Bulgaria); Wagner, G. [University of Saarland, Center of Environmental Research, Institute of Biogeography, D-66041 Saarbruecken (Germany); Korhammer, S.; Markert, B. [International Graduate school IHI-Zittau, Chair of Environmental High Technology, Markt 23, D-02763 Zittau (Germany)

    2001-12-03

    The concentrations of lanthanoids, Be, Bi, Ga, Te, Tl, Th and U have been determined using ICP-MS for 100 standardized samples of poplar leaves collected from the territory of Bulgaria. The investigated elements are log-normally distributed over the territory. Using cluster analysis of the analytical data, the samples were grouped according to the soil type on which the plants are growing.

  9. Abundant Symmetries and Exact Compacton-Like Structures in the Two-Parameter Family of the Estevez-Mansfield-Clarkson Equations

    Institute of Scientific and Technical Information of China (English)

    YAN Zhen-Ya

    2002-01-01

    The two-parameter family of Estevez-Mansfield-Clarkson equations with fully nonlinear dispersion (called E(m, n) equations), (u_z^m)_{zzτ} + γ(u_z^n u_τ)_z + u_{ττ} = 0, which is a generalized model of the integrable Estevez-Mansfield-Clarkson equation u_{zzzτ} + γ(u_z u_{zτ} + u_{zz} u_τ) + u_{ττ} = 0, is presented. Five types of symmetries of the E(m, n) equation are obtained by making use of the direct reduction method. Using these reductions and some simple transformations, we obtain the solitary-like wave solutions of the E(1, n) equation. In addition, we also find the compacton solutions (which are solitary waves with the property that after colliding with other compacton solutions, they reemerge with the same coherent shape) of the E(3, 2) equation and of the E(m, m-1) equations for their potentials, say, u_z, and compacton-like solutions of the E(m, m-1) equations, respectively. Whether there exist compacton-like solutions of the other E(m, n) equations with m ≠ n + 1 is still an open problem.

  10. New Two-Parameter Estimation for the Linear Model with Linear Restrictions

    Institute of Scientific and Technical Information of China (English)

    郭淑妹; 顾勇为; 郭杰

    2013-01-01

    To overcome the inability of restricted least squares estimation to cope with multicollinearity in parameter estimation, stochastic linear restrictions are introduced and a new restricted two-parameter estimator is proposed. In the mean squared error sense, the proposed estimator is shown to be superior to the restricted least squares estimator, the restricted ridge estimator, and the restricted Liu estimator.

  11. Ultrasonic backscattering in tissue: characterization through Nakagami-generalized inverse Gaussian distribution.

    Science.gov (United States)

    Agrawal, Rajeev; Karmeshu

    2007-02-01

    Ultrasonic tissue characterization through composite probability distributions such as the Nakagami-lognormal, Nakagami-gamma, and Nakagami-inverse Gaussian has been found to be useful. Such a probabilistic description also depicts heavy tails which arise from multiple scattering in tissue besides local and global variations in scattering cross-sections. A new composite probability distribution, viz. the Nakagami-generalized inverse Gaussian distribution (NGIGD) with four parameters, is proposed which under different limiting conditions results in approximating the known distributions. A salient aspect of the new distribution is that the probability density function (pdf) of the NGIGD variate is available in closed form and is analytically tractable.

  12. Quasispecies distribution of Eigen model

    Institute of Scientific and Technical Information of China (English)

    Chen Jia; Li Sheng; Ma Hong-Ru

    2007-01-01

    We have studied sharp peak landscapes of the Eigen model from a new perspective about how the quasispecies are distributed in the sequence space. To analyse the distribution more carefully, we bring in two tools. One tool is the variance of Hamming distance of the sequences at a given generation. It not only offers us a different avenue for accurately locating the error threshold and illustrates how the configuration of the distribution varies with copying fidelity q in the sequence space, but also divides the copying fidelity into three distinct regimes. The other tool is the similarity network of a certain Hamming distance d0, by which we can gain a visual and in-depth result about how the sequences are distributed. We find that there are several local similarity optima around the centre (global similarity optimum) in the distribution of the sequences reproduced near the threshold. Furthermore, it is interesting that the distribution of clustering coefficient C(k) follows lognormal distribution and the curve of clustering coefficient C of the network versus d0 appears to be linear near the threshold.

  14. The duration distribution of Swift Gamma-Ray Bursts

    CERN Document Server

    Horvath, I

    2016-01-01

    Decades ago two classes of gamma-ray bursts were identified and delineated as having durations shorter and longer than about 2 s. Subsequently indications also supported the existence of a third class. Using maximum likelihood estimation we analyze the duration distribution of 888 Swift BAT bursts observed before October 2015. Fitting three log-normal functions to the duration distribution of the bursts provides a better fit than two log-normal distributions, with 99.9999% significance. Similarly to earlier results, we found that a fourth component is not needed. The relative frequencies of the distribution of the groups are 8% for short, 35% for intermediate and 57% for long bursts which correspond to our previous results. We analyse the redshift distribution for the 269 GRBs of the 888 GRBs with known redshift. We find no evidence for the previously suggested difference between the long and intermediate GRBs' redshift distribution. The observed redshift distribution of the 20 short GRBs differs with high si...
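
    A rough analogue of this mixture fitting: Gaussian mixtures in log duration (i.e., log-normal components) compared by BIC on synthetic durations; the actual analysis in the record uses maximum likelihood significance tests on the Swift sample.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic log10(T90/s): short, intermediate and long components.
logT = np.concatenate([rng.normal(-0.5, 0.40, 70),
                       rng.normal(0.6, 0.40, 300),
                       rng.normal(1.5, 0.45, 520)]).reshape(-1, 1)

for k in (2, 3, 4):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(logT)
    print(f"k={k}: BIC={gm.bic(logT):8.1f}  weights={np.round(gm.weights_, 2)}")
```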

  15. A New Lifetime Distribution and Its Power Transformation

    Directory of Open Access Journals (Sweden)

    Ammar M. Sarhan

    2014-01-01

    Full Text Available New one-parameter and two-parameter distributions are introduced in this paper. The failure rate of the one-parameter distribution is unimodal (upside-down bathtub), while the failure rate of the two-parameter distribution can be decreasing, increasing, unimodal, increasing-decreasing-increasing, or decreasing-increasing-decreasing, depending on the values of its two parameters. The two-parameter distribution is derived from the one-parameter distribution by using a power transformation. We discuss some properties of these two distributions, such as the behavior of the failure rate function, the probability density function, the moments, skewness, and kurtosis, and limiting distributions of order statistics. Maximum likelihood estimation for the two-parameter model using complete samples is investigated. Different algorithms for generating random samples from the two new models are given. Applications to real data are discussed and compared with the fit attained by some one- and two-parameter distributions. Finally, a simulation study is carried out to investigate the mean square error of the maximum likelihood estimators, the coverage probability, and the width of the confidence intervals of the unknown parameters.

  16. Sensitivity Weaknesses in Application of some Statistical Distribution in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

    a stochastic variable is modelled by an asymmetrical density function. For lognormal, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical application the behaviour...... is probably rather infrequent. A simple example is shown as illustration and to exemplify that for second order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent....

  17. Packing fraction of particles with a Weibull size distribution

    Science.gov (United States)

    Brouwers, H. J. H.

    2016-07-01

    This paper addresses the void fraction of polydisperse particles with a Weibull (or Rosin-Rammler) size distribution. It is demonstrated that the governing parameters of this distribution can be uniquely related to those of the lognormal distribution. Hence, an existing closed-form expression that predicts the void fraction of particles with a lognormal size distribution can be transformed into an expression for Weibull distributions. Both expressions contain the contraction coefficient β. Like the monosized void fraction φ1, it is a physical parameter which depends on the particles' shape and their state of compaction only. Based on a consideration of the scaled binary void contraction, a linear relation for (1 - φ1)β as function of φ1 is proposed, with proportionality constant B, depending on the state of compaction only. This is validated using computational and experimental packing data concerning random close and random loose packing arrangements. Finally, using this β, the closed-form analytical expression governing the void fraction of Weibull distributions is thoroughly compared with empirical data reported in the literature, and good agreement is found. Furthermore, the present analysis yields an algebraic equation relating the void fraction of monosized particles at different compaction states. This expression appears to be in good agreement with a broad collection of random close and random loose packing data.
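
    One natural one-to-one mapping between the two families, assumed here for illustration (not necessarily the paper's derivation), matches the mean and variance of ln x: a Weibull with shape k and scale λ gives μ = ln λ − γ/k and σ = π/(√6·k).

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def weibull_to_lognormal(k, lam):
    """Lognormal (mu, sigma) with the same mean and variance of ln(x)
    as a Weibull with shape k and scale lam."""
    mu = np.log(lam) - EULER_GAMMA / k
    sigma = np.pi / (np.sqrt(6.0) * k)
    return mu, sigma

print(weibull_to_lognormal(k=2.5, lam=1.0))
```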

  18. Runaway Events Dominate the Heavy Tail of Citation Distributions

    CERN Document Server

    Golosovsky, Michael

    2013-01-01

    Statistical distributions with heavy tails are ubiquitous in natural and social phenomena. Since the entries in the heavy tail have disproportionate significance, knowledge of its exact shape is very important. Citations of scientific papers form one of the best-known heavy-tailed distributions. Even in this case there is considerable debate as to whether the citation distribution follows a log-normal or a power-law fit. The goal of our study is to settle this debate by measuring the citation distribution for a very large and homogeneous data set. We measured the citation distribution for 418,438 Physics papers published in 1980-1989 and cited by 2008. While the log-normal fit deviates too strongly from the data, the discrete power-law function with the exponent γ=3.15 does better and fits 99.955% of the data. However, the extreme tail of the distribution deviates upward even from the power-law fit and exhibits a dramatic "runaway" behavior. The onset of the runaway regime is revealed macroscopically as the paper garners 1000-1...

  19. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share prices data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data which will provide a clue on the likely candidates for the best fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO) and Generalized Pareto (GPA), the Lognormal (GNO) and the Pearson (PE3) distributions are evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitted distribution to represent the weekly and monthly maximum share returns in Malaysia stock market during the studied period, respectively.
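
    A compact sketch of the block-maxima step with a GEV fit in SciPy; the returns are simulated, and SciPy's MLE stands in for the L-moment estimation used in the study (note SciPy's sign convention for the shape parameter).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated daily returns, grouped into weeks; keep each week's maximum.
weekly_max = (0.01 * rng.standard_t(df=4, size=(260, 5))).max(axis=1)

# MLE fit of the GEV (SciPy's shape c has the opposite sign to some texts).
c, loc, scale = stats.genextreme.fit(weekly_max)
print(f"GEV: shape={c:.3f}, loc={loc:.4f}, scale={scale:.4f}")
print(f"99th percentile of weekly max return: "
      f"{stats.genextreme.ppf(0.99, c, loc, scale):.4f}")
```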

  20. Some properties of generalized gamma distribution

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-03-01

    Full Text Available In this paper, the generalized gamma (GG) distribution, a flexible distribution in the statistical literature that has the exponential, gamma, and Weibull as subfamilies and the lognormal as a limiting distribution, is introduced. The power and logarithmic moments of this family are defined. A new moment estimation method for the parameters of the GG family, based on its characterization, is presented; this method is compared with the MLE method in the gamma subfamily for small and large sample sizes. We also study the GG entropy representation and its estimation. In addition, Kullback-Leibler discrimination and the Akaike and Bayesian information criteria are discussed. In brief, this paper presents a general review of important properties of the GG family.
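
    A minimal sketch of fitting the GG family with SciPy's gengamma, which recovers the gamma (c = 1), Weibull (a = 1) and exponential (a = c = 1) subfamilies; the data are simulated Weibull draws.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = 3.0 * rng.weibull(1.8, size=2_000)  # Weibull draws, a GG subfamily

# gengamma(a, c): gamma at c=1, Weibull at a=1, exponential at a=c=1.
a, c, loc, scale = stats.gengamma.fit(data, floc=0)
print(f"GG fit: a={a:.2f}, c={c:.2f}, scale={scale:.2f}")
```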

  1. ASSESSMENT OF ACCURACY OF PRECIPITATION INDEX (SPI) DETERMINED BY DIFFERENT PROBABILITY DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Edward Gąsiorek

    2014-11-01

    Full Text Available The use of different calculation methods to compute the standardized precipitation index (SPI) results in various approximations. Methods based on the normal distribution and its transformations, as well as on the gamma distribution, give similar results and may be used interchangeably, whereas the lognormal distribution fitting method is significantly discrepant, especially for extreme values of SPI. It is therefore unclear which method gives the distribution best fitted to empirical data. The aim of this study is to rank the above-mentioned methods according to their degree of approximation to empirical data from the Observatory of Agro- and Hydrometeorology in Wrocław-Swojec for the years 1964–2009.

  2. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data while the width predicted by HERWIG is in significant disagreement.

  3. Single-peak distribution model of particulate size for welding aerosols

    Institute of Scientific and Technical Information of China (English)

    施雨湘; 李爱农

    2003-01-01

    A large number of particulate size distributions of welding aerosols were measured by means of the DMPS method, and several distribution types are presented. Among them the single-peak distribution is the basic composing unit of particulate size. Research on the mathematical models and distribution functions shows that the single-peak distribution follows the log-normal distribution. The diagram-estimating method (DEM) is a concise approach to identifying distribution types and obtaining distribution functions for the particulate sizes of welding aerosols. It is shown that the distribution function of particulate size possesses an extending property, from the number distribution to the volume distribution as well as to higher-order moment distributions, with the K-S method verifying the applicability of the single-peak distribution and of DEM.

  4. New Fitting Formula for Cosmic Nonlinear Density Distribution

    Science.gov (United States)

    Shin, Jihye; Kim, Juhan; Pichon, Christophe; Jeong, Donghui; Park, Changbom

    2017-07-01

    We have measured the probability distribution function (PDF) of a cosmic matter density field from a suite of N-body simulations. We propose the generalized normal distribution of version 2 (N_v2) as an alternative fitting formula to the well-known log-normal distribution. We find that N_v2 provides a significantly better fit than that of the log-normal distribution for all smoothing radii (2, 5, 10, 25 Mpc h^-1) that we studied. The improvement is substantial in the underdense regions. The development of non-Gaussianities in the cosmic matter density field is captured by continuous evolution of the skewness and shift parameters of the N_v2 distribution. We present the redshift evolution of these parameters for the aforementioned smoothing radii and various background cosmology models. All the PDFs measured from large and high-resolution N-body simulations that we use in this study can be obtained from the web site https://astro.kias.re.kr/jhshin.

  5. Income distribution dependence of poverty measure: A theoretical analysis

    Science.gov (United States)

    Chattopadhyay, Amit K.; Mallick, Sushanta K.

    2007-04-01

    Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.

  6. Modelling income data using two extensions of the exponential distribution

    Science.gov (United States)

    Calderín-Ojeda, Enrique; Azpitarte, Francisco; Gómez-Déniz, Emilio

    2016-11-01

    In this paper we propose two extensions of the Exponential model to describe income distributions. The Exponential ArcTan (EAT) and the composite EAT-Lognormal models discussed in this paper preserve key properties of the Exponential model including its capacity to model distributions with zero incomes. This is an important feature as the presence of zeros conditions the modelling of income distributions as it rules out the possibility of using many parametric models commonly used in the literature. Many researchers opt for excluding the zeros from the analysis, however, this may not be a sensible approach especially when the number of zeros is large or if one is interested in accurately describing the lower part of the distribution. We apply the EAT and the EAT-Lognormal models to study the distribution of incomes in Australia for the period 2001-2012. We find that these models in general outperform the Gamma and Exponential models while preserving the capacity of the latter to model zeros.
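
    The zero-income point can be made concrete: an exponential density is finite at zero, whereas a lognormal cannot be evaluated there at all. A toy illustration on synthetic incomes (not the Australian data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
incomes = np.concatenate([np.zeros(50), rng.exponential(40_000, 950)])

# The exponential density is finite at zero income...
ll_exp = stats.expon.logpdf(incomes, scale=incomes.mean()).sum()
print(f"exponential log-likelihood: {ll_exp:.1f}")

# ...whereas a lognormal assigns zero density (log-density -inf) there.
print(stats.lognorm.logpdf(0.0, 0.8, 0, 30_000))
```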

  7. On the Power-Law Tails of Vote Distributions in Proportional Elections

    CERN Document Server

    Palombi, Filippo

    2016-01-01

    In proportional elections with open lists the excess of preferences received by candidates with respect to the list average is known to follow a universal lognormal distribution. We show that lognormality is broken provided preferences are conditioned to lists with many candidates. In this limit power-law tails emerge. We study the large-list limit in the framework of a quenched approximation of the word-of-mouth model introduced by Fortunato and Castellano (Phys. Rev. Lett. 99(13):138701, 2007), where the activism of the agents is mitigated and the noise of the agent-agent interactions is averaged out. Then we argue that our analysis applies mutatis mutandis to the original model as well.

  8. The stochastic distribution of available coefficient of friction on quarry tiles for human locomotion.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    The available coefficient of friction (ACOF) for human locomotion is the maximum coefficient of friction that can be supported without a slip at the shoe and floor interface. A statistical model was introduced to estimate the probability of slip by comparing the ACOF with the required coefficient of friction, assuming that both coefficients have stochastic distributions. This paper presents an investigation of the stochastic distributions of the ACOF of quarry tiles under dry, water and glycerol conditions. One hundred friction measurements were performed on a walkway under the surface conditions of dry, water and 45% glycerol concentration. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF appears to fit the normal and log-normal distributions better than the Weibull distribution for the water and glycerol conditions. However, no match was found between the distribution of ACOF under the dry condition and any of the three continuous distributions evaluated. Based on limited data, a normal distribution might be more appropriate due to its simplicity, practicality and familiarity among the three distributions evaluated.
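
    A sketch of the goodness-of-fit step with the Kolmogorov-Smirnov test in SciPy, on synthetic ACOF values; note that using parameters estimated from the same sample makes the standard KS p-values optimistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
acof = rng.normal(0.45, 0.05, 100).clip(0.01)  # synthetic measurements

for name, dist in [("normal", stats.norm),
                   ("lognormal", stats.lognorm),
                   ("Weibull", stats.weibull_min)]:
    params = dist.fit(acof)
    # Caveat: fitted parameters make the KS p-value approximate
    # (the Lilliefors effect).
    d, p = stats.kstest(acof, dist.cdf, args=params)
    print(f"{name}: D={d:.3f}, p={p:.3f}")
```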

  9. Non-Spatial Analysis of Relative Risk of Dengue Disease in Bandung Using Poisson-gamma and Log-normal Models: A Case Study of Dengue Data from Santo Borromeus Hospital in 2013

    Science.gov (United States)

    Irawan, R.; Yong, B.; Kristiani, F.

    2017-02-01

    Bandung, one of the cities in Indonesia, is vulnerable to dengue disease for both early-stage (Dengue Fever) and severe-stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung and 2,032 of the patients were hospitalized in Santo Borromeus Hospital. In this paper, there are two models, Poisson-gamma and Log-normal models, that use Bayesian inference to estimate the value of the relative risk. The calculation is done by the Markov Chain Monte Carlo method, i.e. simulation using the Gibbs Sampling algorithm in WinBUGS 1.4.3 software. The analysis results for dengue disease in 30 sub-districts of Bandung in 2013, based on Santo Borromeus Hospital's data, are that the Coblong and Bandung Wetan sub-districts had the highest relative risk using both models for the early-stage, severe-stage, and all stages. Meanwhile, the Cinambo sub-district had the lowest relative risk using both models for the severe-stage and all stages, and the Bojongloa Kaler sub-district had the lowest relative risk using both models for the early-stage. For the model comparison using the DIC (Deviance Information Criterion) method, the Log-normal model is a better model for the early-stage and severe-stage, but for all stages, the Poisson-gamma model is a better model which fits the data.
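
    For intuition, the Poisson-gamma relative-risk model has a closed-form empirical-Bayes version, sketched below with invented counts; the record itself fits both models by Gibbs sampling in WinBUGS rather than this shortcut.

```python
import numpy as np

rng = np.random.default_rng(8)
E = rng.uniform(20, 120, size=30)                  # expected cases
y = rng.poisson(E * rng.gamma(2.0, 0.5, size=30))  # observed cases

# y_i ~ Poisson(E_i * theta_i), theta_i ~ Gamma(a, rate b); the posterior
# mean relative risk is (y_i + a) / (E_i + b).  Rough moment matching
# of the raw risks y/E supplies a and b.
r = y / E
a, b = r.mean() ** 2 / r.var(), r.mean() / r.var()
rr_post = (y + a) / (E + b)
print(np.round(rr_post, 2))
```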

  10. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    Science.gov (United States)

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
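
    The headline statistic is easy to reproduce on any leak-size sample; below, on synthetic heavy-tailed data:

```python
import numpy as np

rng = np.random.default_rng(9)
leaks = rng.pareto(a=1.6, size=10_000) + 0.01  # synthetic leak sizes

top5 = np.sort(leaks)[-leaks.size // 20:]      # the largest 5%
print(f"largest 5% of leaks carry {top5.sum() / leaks.sum():.0%} of volume")
```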

  11. Log-concavity property for some well-known distributions

    Directory of Open Access Journals (Sweden)

    G. R. Mohtashami Borzadaran

    2011-12-01

    Full Text Available Interesting properties and propositions in many branches of science, such as economics, have been obtained based on the property that the cumulative distribution function of a random variable is a concave function. Caplin and Nalebuff (1988, 1989), Bagnoli and Khanna (1989) and Bagnoli and Bergstrom (1989, 2005) have discussed the log-concavity property of probability distributions and their applications, especially in economics. Log-concavity concerns a twice differentiable real-valued function g whose domain is an interval on the extended real line. The function g is said to be log-concave on the interval (a,b) if ln(g) is a concave function on (a,b). Log-concavity of g on (a,b) is equivalent to g'/g being monotone decreasing on (a,b), or to (ln(g))'' being non-positive there. Earlier authors have obtained log-concavity for distributions such as the normal, logistic, extreme-value, exponential, Laplace, Weibull, power function, uniform, gamma, beta, Pareto, log-normal, Student's t, Cauchy and F distributions. We have discussed and introduced the continuous versions of the Pearson family, found the log-concavity property for this family in general cases, and then obtained the log-concavity property for each distribution that is a member of the Pearson family. For the Burr family these cases have been calculated, for each distribution that belongs to the Burr family. Also, log-concavity results for distributions such as generalized gamma distributions, Feller-Pareto distributions, generalized inverse Gaussian distributions and generalized log-normal distributions have been obtained.

  12. GROWTH RATE DISTRIBUTION OF BORAX SINGLE CRYSTALS ON THE (001) FACE UNDER VARIOUS FLOW RATES

    Directory of Open Access Journals (Sweden)

    Suharso Suharso

    2010-06-01

    Full Text Available The growth rates of borax single crystals from aqueous solutions at various flow rates in the (001) direction were measured using the in situ cell method. From the growth rate data obtained, the growth rate distribution of borax crystals was investigated using Minitab Software and SPSS Software at a relative supersaturation of 0.807 and a temperature of 25 °C. The result shows that the normal, gamma, and log-normal distributions give a reasonably good fit to the GRD. However, there is no correlation between the growth rate distribution and the flow rate of the solution. Keywords: growth rate dispersion (GRD), borax, flow rate

  13. Vapor intrusion in soils with multimodal pore-size distribution

    Directory of Open Access Journals (Sweden)

    Alfaro Soto Miguel

    2016-01-01

    Full Text Available The Johnson and Ettinger [1] model and its extensions are at this time the most widely used algorithms for estimating subsurface vapor intrusion into buildings (API [2]). The functions which describe capillary pressure curves are utilized in quantitative analyses, although these are applicable only to porous media with a unimodal or lognormal pore-size distribution. However, unaltered soils may have a heterogeneous pore distribution and consequently a multimodal pore-size distribution [3], which may be the result of specific granulometry or the formation of secondary porosity related to genetic processes. The present paper was designed to present the application of the Vapor Intrusion Model (SVI_Model) to unsaturated soils with multimodal pore-size distribution. Simulations with data from the literature show that the use of a multimodal model in soils with such pore distribution characteristics could provide more reliable results for indoor air concentration than conventional models.
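
    A bimodal pore-size density can be represented as a two-component lognormal mixture, sketched below with invented component parameters (micro- and macro-porosity modes):

```python
import numpy as np
from scipy import stats

# Two-component lognormal density over pore radius r (micrometres);
# medians and widths are invented for illustration.
def pore_pdf(r, w=0.6, med1=0.5, s1=0.5, med2=20.0, s2=0.6):
    return (w * stats.lognorm.pdf(r, s1, scale=med1)
            + (1 - w) * stats.lognorm.pdf(r, s2, scale=med2))

r = np.logspace(-3, 4, 4000)
f = pore_pdf(r)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid check
print(f"density integrates to {mass:.4f}")
```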

  14. Changes of firm size distribution: The case of Korea

    Science.gov (United States)

    Kang, Sang Hoon; Jiang, Zhuhua; Cheong, Chongcheul; Yoon, Seong-Min

    2011-01-01

    In this paper, the distribution and inequality of firm sizes is evaluated for the Korean firms listed on the stock markets. Using the amount of sales, total assets, capital, and the number of employees, respectively, as a proxy for firm sizes, we find that the upper tail of the Korean firm size distribution can be described by power-law distributions rather than lognormal distributions. Then, we estimate the Zipf parameters of the firm sizes and assess the changes in the magnitude of the exponents. The results show that the calculated Zipf exponents over time increased prior to the financial crisis, but decreased after the crisis. This pattern implies that the degree of inequality in Korean firm sizes had severely deepened prior to the crisis, but lessened after the crisis. Overall, the distribution of Korean firm sizes changes over time, and Zipf’s law is not universal but does hold as a special case.
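
    A sketch of Zipf-exponent estimation from the upper tail of a firm-size sample (synthetic data), using the rank-minus-one-half regression of Gabaix and Ibragimov to reduce small-sample bias:

```python
import numpy as np

rng = np.random.default_rng(10)
sizes = (rng.pareto(a=1.0, size=2_000) + 1.0) * 1e6  # synthetic firm sizes

tail = np.sort(sizes)[::-1][:500]          # top 500 firms
rank = np.arange(1, tail.size + 1)
# Gabaix-Ibragimov: regress log(rank - 1/2) on log size.
slope, _ = np.polyfit(np.log(tail), np.log(rank - 0.5), 1)
print(f"Zipf exponent estimate: {-slope:.2f}")
```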

  15. The shape of terrestrial abundance distributions.

    Science.gov (United States)

    Alroy, John

    2015-09-01

    Ecologists widely accept that the distribution of abundances in most communities is fairly flat but heavily dominated by a few species. The reason for this is that species abundances are thought to follow certain theoretical distributions that predict such a pattern. However, previous studies have focused on either a few theoretical distributions or a few empirical distributions. I illustrate abundance patterns in 1055 samples of trees, bats, small terrestrial mammals, birds, lizards, frogs, ants, dung beetles, butterflies, and odonates. Five existing theoretical distributions make inaccurate predictions about the frequencies of the most common species and of the average species, and most of them fit the overall patterns poorly, according to the maximum likelihood-related Kullback-Leibler divergence statistic. Instead, the data support a low-dominance distribution here called the "double geometric." Depending on the value of its two governing parameters, it may resemble either the geometric series distribution or the lognormal series distribution. However, unlike any other model, it assumes both that richness is finite and that species compete unequally for resources in a two-dimensional niche landscape, which implies that niche breadths are variable and that trait distributions are neither arrayed along a single dimension nor randomly associated. The hypothesis that niche space is multidimensional helps to explain how numerous species can coexist despite interacting strongly.

  16. Simulation of energy barrier distributions using real particle parameters and comparison with experimental obtained results

    Energy Technology Data Exchange (ETDEWEB)

    Büttner, M., E-mail: Markus.Buettner@uni-jena.de [Institut für Festkörperphysik, Friedrich-Schiller-Universität Jena, Helmholtzweg 5, 07743 Jena (Germany); Schiffler, M. [Institut für Geowissenschaften, Friedrich-Schiller-Universität Jena, Burgweg 11, 07749 Jena (Germany); Weber, P.; Seidel, P. [Institut für Festkörperphysik, Friedrich-Schiller-Universität Jena, Helmholtzweg 5, 07743 Jena (Germany)

    2013-11-15

    Distributions of energy barriers in systems of magnetic nanoparticles have been calculated by means of the path integral method, and the results have been compared with distributions previously obtained in our experiments by means of the temperature dependent magnetorelaxation method. The path integral method allowed us to obtain the energies of the interactions of the magnetic moments of nanoparticles with their axes of easy magnetisation, as well as the energies of mutual interactions of the magnetic moments. The calculated distributions of energy barriers are described satisfactorily by lognormal distribution curves. We found agreement between theory and experiment at temperatures above approximately 100 K. The influence of the volume concentration of nanoparticles and of agglomeration on the energy barrier distribution has been investigated. - Highlights: • The path integral method of calculation satisfactorily reproduces the quantitative experimental results. • The simulated energy barrier distributions reflect the lognormal distribution of the MNP found in real experiments. • Higher particle volume concentration leads to a broadening of the simulated energy barrier distribution. • At low particle concentration there is only anisotropy energy. • In case of agglomeration the energy barrier distribution broadens.

  17. The Concept of `Normalized' Distribution to Describe Raindrop Spectra: A Tool for Cloud Physics and Cloud Remote Sensing.

    Science.gov (United States)

    Testud, Jacques; Oury, Stéphane; Black, Robert A.; Amayenc, Paul; Dou, Xiankang

    2001-06-01

    The shape of the drop size distribution (DSD) reflects the physics of rain. The DSD is the result of the microphysical processes that transform the condensed water into rain. The question of the DSD is also central in radar meteorology, because it rules the relationships between the radar reflectivity and the rainfall rate R. Normalizing raindrop spectra is the only way to identify the shape of the distribution. The concept of normalization of the DSD developed in this paper is founded upon two reference variables, the liquid water content LWC and the mean volume diameter Dm. It is shown mathematically that it is appropriate to normalize by N0* ∝ LWC/Dm^4 with respect to particle concentration and by Dm with respect to drop diameter. Also, N0* may be defined as the intercept parameter that an exponential DSD with the same LWC and Dm as the real one would have. The major point of the authors' approach is that it is totally free of any assumption about the shape of the DSD. This new normalization has been applied to the airborne microphysical data of the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE) collected by the National Center for Atmospheric Research Electra aircraft. The classification of the TOGA COARE raindrop spectra into four categories [one stratiform, and three convective (0-10, 10-30, and 30-100 mm h^-1)] allowed the following features to be identified. 1) There is a distinct behavior of N0* between stratiform and convective rains; typical values are 2.2 × 10^6 m^-4 for stratiform and 2 × 10^7 m^-4 for convective. 2) In convective rain, there is a clear trend for Dm to increase with R, but there is no correlation between N0* and R. 3) The 'average' normalized shape of the DSD is remarkably stable among the four rain categories. This normalized shape departs from the exponential, but also from all the analytical shapes considered up to now (e.g., gamma, lognormal, modified gamma). The stability of the normalized DSD shape and
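
    The normalization can be sketched numerically: from a binned spectrum N(D) one computes LWC and Dm, and then N0* = (4^4/(π·ρw))·LWC/Dm^4, which for an exactly exponential spectrum recovers its intercept parameter (toy spectrum below, following the Testud et al. convention).

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

D = np.linspace(0.05, 8.0, 400)    # drop diameter, mm
N = 8000.0 * np.exp(-2.0 * D)      # toy exponential spectrum, m^-3 mm^-1
RHO_W = 1e-3                       # density of water, g mm^-3

lwc = (np.pi * RHO_W / 6.0) * trapz(N * D**3, D)        # g m^-3
Dm = trapz(N * D**4, D) / trapz(N * D**3, D)            # mm
N0_star = (4.0**4 / (np.pi * RHO_W)) * lwc / Dm**4      # m^-3 mm^-1

# For an exact exponential N(D) = N0 exp(-lambda D), N0* returns N0.
print(f"LWC={lwc:.3f} g/m^3, Dm={Dm:.2f} mm, N0*={N0_star:.0f} m^-3 mm^-1")
```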

  18. Wealth distribution models: analysis and applications

    Directory of Open Access Journals (Sweden)

    Camilo Dagum

    2008-03-01

    Full Text Available After Pareto developed his Type I model in 1895, a large number of income distribution models were specified. However, the important issue of wealth distribution attracted the attention of researchers only more than sixty years later, starting with the contributions of Wold and Whittle, and of Sargan, both published in 1957. The former authors proposed the Pareto Type I model and the latter the lognormal distribution, but they did not validate them empirically. Afterward, other models were proposed: in 1969 the Pareto Types I and II by Stiglitz; in 1975, the loglogistic by Atkinson and the Pearson Type V by Vaughan. In 1990 and 1994, Dagum developed a general model and his Type II as models of wealth distribution. They were validated with real life data from the U.S.A., Canada, Italy and the U.K. In 1999, Dagum further developed his general model of net wealth distribution with support (−∞, ∞), which contains, as particular cases, his Types I and II models of income and wealth distributions. This study presents and analyzes the proposed models of wealth distribution and their properties. The only model with the flexibility, power, and economic and stochastic foundations to accurately fit net and total wealth distributions is the Dagum general model and its particular cases, as validated with the case studies of Ireland, the U.K., Italy and the U.S.A.

  19. Parameter estimation for the two-parameter bathtub-shaped lifetime distribution

    Institute of Scientific and Technical Information of China (English)

    王炳兴

    2008-01-01

    We discuss parameter estimation, based on type-II censored samples, for a two-parameter lifetime distribution with bathtub-shaped failure rate proposed by Chen (2000). Inverse moment estimators of the parameters and interval estimates are derived, and the accuracy of the point estimates is studied by simulation. It is also pointed out that the optimality criterion for choosing interval estimates proposed by Wu et al. (2005) is incorrect. Finally, a real example illustrates the proposed methods.

  20. Inference on the Parameter Ratio in the Two-Parameter Exponential Distribution

    Institute of Scientific and Technical Information of China (English)

    李建波; 张日权

    2010-01-01

    In the setting of type-II censored life tests without replacement, where product lifetimes follow a two-parameter exponential distribution, two estimators of the ratio of the mean lifetimes of two independent products are proposed, and the asymptotic normality and confidence intervals of these ratio estimators are studied. Simulations further verify the effectiveness of the proposed ratio estimators.

  1. Income Distribution Dependence of Poverty Measure: A Theoretical Analysis

    CERN Document Server

    Chattopadhyay, A K; Chattopadhyay, Amit K; Mallick, Sushanta K

    2005-01-01

    With a new deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the `global' mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Following these results, we make quantitative predictions to correlate a developing with a developed economy.

  2. Load research and load estimation in electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland). Energy Systems

    1996-12-31

    The topics introduced in this thesis are: the Finnish load research project, a simple form customer class load model, analysis of the origins of customers' load distribution, a method for the estimation of the confidence interval of customer loads, and Distribution Load Estimation (DLE), which utilises both the load models and measurements from distribution networks. The Finnish load research project started in 1983. The project was initially coordinated by the Association of Finnish Electric Utilities and 40 utilities joined the project. Now there are over 1000 customer hourly load recordings in a database. A simple form customer class load model is introduced. The model is designed to be practical for most utility applications and has been used by the Finnish utilities for several years. The only variable of the model is the customer's annual energy consumption. The model gives the customer's average hourly load and standard deviation for a selected month, day and hour. The statistical distribution of customer loads is studied and a model for customer electric load variation is developed. The model results in a lognormal distribution as an extreme case. Using the `simple form load model`, a method for estimating confidence intervals (confidence limits) of customer hourly load is developed. The two methods selected for final analysis are based on the normal and lognormal distributions estimated in a simplified manner. The estimation of several cumulated customer class loads is also analysed. Customer class load estimation which combines the information from load models and distribution network load measurements is developed. This method, called Distribution Load Estimation (DLE), utilises information already available in the utilities' databases and is thus easy to apply.
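
    For the confidence-interval step, a lognormal matched to the model's mean and standard deviation gives closed-form limits; the numbers below are invented, not from the Finnish data.

```python
import numpy as np
from scipy import stats

mean, sd = 4.2, 1.6  # model output for one class/month/day/hour, kW

# Lognormal with the same mean and variance (moment matching).
sigma2 = np.log1p((sd / mean) ** 2)
mu = np.log(mean) - 0.5 * sigma2

lo, hi = stats.lognorm.ppf([0.025, 0.975], np.sqrt(sigma2),
                           scale=np.exp(mu))
print(f"95% load interval: {lo:.2f} .. {hi:.2f} kW")
```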

  3. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is dissuaded. However, in cases with low link-costs (e.g. in dense urban areas, ramps ... and modelling of intersections and inter-changes), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution. The Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most ... calculation effective. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 at full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must ...

  4. Crystallite size distribution of clay minerals from selected Serbian clay deposits

    Directory of Open Access Journals (Sweden)

    Simić Vladimir

    2006-01-01

    The BWA (Bertaut-Warren-Averbach) technique for the measurement of the mean crystallite thickness and thickness distributions of phyllosilicates was applied to a set of kaolin and bentonite minerals. Six samples of kaolinitic clays, one sample of halloysite, and five bentonite samples from selected Serbian deposits were analyzed. These clays are of sedimentary, volcano-sedimentary (diagenetic), and hydrothermal origin. Two different shapes of thickness distribution were found: lognormal, typical for bentonite and halloysite, and polymodal, typical for kaolinite. The mean crystallite thickness (T_BWA) seems to be influenced by the genetic type of the clay sample.

  5. Fitting statistical distributions to sea duck count data: implications for survey design and abundance estimation

    Science.gov (United States)

    Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.

    2014-01-01

    Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are

  6. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles. In this article, we studied the speed distribution for mixed bicycle traffic, which has been neglected in the past. Field speed data were collected in Hangzhou, China, at different survey sites, under different traffic conditions and percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used for testing the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data very well. Then, the relationships between mean speed and electric bicycle proportions were established using linear regression models, from which the mean speed for purely electric or purely regular bicycle traffic can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
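A goodness-of-fit step of this kind can be sketched with scipy; the sample below is synthetic stand-in data rather than the Hangzhou measurements.

```python
# A sketch of the model comparison: fit normal, lognormal, gamma, and Weibull
# models to a speed sample by maximum likelihood and compare Kolmogorov-Smirnov
# statistics. The data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
speeds = rng.lognormal(mean=np.log(17.0), sigma=0.25, size=500)  # synthetic km/h sample

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(speeds)                       # maximum likelihood fit
    ks = stats.kstest(speeds, dist.cdf, args=params)
    print(f"{name:10s} KS = {ks.statistic:.4f}  p = {ks.pvalue:.3f}")
```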

  7. Distribution of Earthquake Interevent Times in Northeast India and Adjoining Regions

    Science.gov (United States)

    Pasari, Sumanta; Dikshit, Onkar

    2015-10-01

    This study analyzes earthquake interoccurrence times of northeast India and its vicinity using eleven probability distributions, namely the exponential, Frechet, gamma, generalized exponential, inverse Gaussian, Levy, lognormal, Maxwell, Pareto, Rayleigh, and Weibull distributions. Parameters of these distributions are estimated by the method of maximum likelihood, and their respective asymptotic variances as well as confidence bounds are calculated using Fisher information matrices. Three model selection criteria, namely the Chi-square criterion, the maximum likelihood criterion, and the Kolmogorov-Smirnov minimum distance criterion, are used to compare model suitability for the present earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010). It is observed that the gamma, generalized exponential, and Weibull distributions provide the best fit, while the exponential, Frechet, inverse Gaussian, and lognormal distributions provide intermediate fits, and the rest, namely the Levy, Maxwell, Pareto, and Rayleigh distributions, fit poorly to the present data. The conditional probabilities for a future earthquake and related conditional probability curves are presented towards the end of this article.

  8. Effects of the Energy Error Distribution of Fluorescence Telescopes on the UHECR energy spectrum

    CERN Document Server

    Carvalho, Washington; de Souza, Vitor; 10.1016/j.astropartphys.2007.04.010

    2008-01-01

    The measurement of the ultra high energy cosmic ray (UHECR) spectrum is strongly affected by uncertainties on the reconstructed energy. The determination of the presence or absence of the GZK cutoff and its position in the energy spectrum depends not only on high statistics but also on the shape of the energy error distribution. Here we determine the energy error distribution for fluorescence telescopes, based on a Monte Carlo simulation. The HiRes and Auger fluorescence telescopes are simulated in detail. We analyze the UHECR spectrum convolved with this energy error distribution. We compare this spectrum with one convolved with a lognormal error distribution as well as with a Gaussian error distribution. We show that the energy error distribution for fluorescence detectors cannot be represented by these known distributions. We conclude that the convolved energy spectrum will be smeared, but not enough to affect the GZK cutoff detection. This conclusion stands for both HiRes and Auger fluorescence telescopes...

  9. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely the log-normal, exponential, Gamma and Weibull, in search of the best-fit distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict among criterion results in selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.

  10. On generalized trigonometric functions with two parameters

    OpenAIRE

    Bhayo, Barkat Ali; Vuorinen, Matti

    2011-01-01

    The generalized $p$-trigonometric and ($p,q$)-trigonometric functions were introduced by P. Lindqvist and S. Takeuchi, respectively. We prove some inequalities and present a few conjectures for the ($p,q$)-functions.

  11. Mirror symmetry for two parameter models, 2

    CERN Document Server

    Candelas, Philip; Katz, S; Morrison, Douglas Robert Ogston; Philip Candelas; Anamaria Font; Sheldon Katz; David R Morrison

    1994-01-01

    We describe in detail the space of the two K\\"ahler parameters of the Calabi--Yau manifold \\P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi--Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,\\Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,\\Z) symmetry that acts on a boundary of the moduli space.

  12. Power laws in citation distributions: evidence from Scopus.

    Science.gov (United States)

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account for only a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
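A simplified version of such a comparison is sketched below: a continuous power law is fitted to the tail by maximum likelihood and compared with a lognormal via a Vuong-type normalized log-likelihood ratio. This skips the discrete-data and truncation refinements of the full Clauset-style framework, and the data are synthetic.

```python
# A simplified sketch of power law vs lognormal model comparison on tail data.
# Full treatments truncate both models at x_min; that subtlety is skipped here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = stats.lognorm.rvs(s=1.2, scale=50.0, size=2000, random_state=rng)
xmin = 10.0
tail = x[x >= xmin]

alpha = 1 + tail.size / np.sum(np.log(tail / xmin))     # continuous power-law MLE
ll_pl = np.log((alpha - 1) / xmin) - alpha * np.log(tail / xmin)

s, loc, scale = stats.lognorm.fit(tail, floc=0)         # lognormal MLE
ll_ln = stats.lognorm.logpdf(tail, s, loc, scale)

d = ll_ln - ll_pl
vuong_z = d.sum() / (np.sqrt(d.size) * d.std(ddof=1))   # > 0 favors the lognormal
print(f"alpha = {alpha:.2f}, Vuong z = {vuong_z:.2f}")
```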

  13. Aerosol formation from high-velocity uranium drops: Comparison of number and mass distributions. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rader, D.J.; Benson, D.A.

    1995-05-01

    This report presents the results of an experimental study of the aerosol produced by the combustion of high-velocity molten-uranium droplets produced by the simultaneous heating and electromagnetic launch of uranium wires. These tests are intended to simulate the reduction of high-velocity fragments into aerosol in high-explosive detonations or reactor accidents involving nuclear materials. As reported earlier, the resulting aerosol consists mainly of web-like chain agglomerates. A condensation nucleus counter was used to investigate the decay of the total particle concentration due to coagulation and losses. Number size distributions based on mobility equivalent diameter obtained soon after launch with a Differential Mobility Particle Sizer showed lognormal distributions with an initial count median diameter (CMD) of 0.3 μm and a geometric standard deviation, σ_g, of about 2; the CMD was found to increase and σ_g to decrease with time due to coagulation. Mass size distributions based on aerodynamic diameter were obtained for the first time with a Microorifice Uniform Deposit Impactor, which showed lognormal distributions with mass median aerodynamic diameters of about 0.5 μm and an aerodynamic geometric standard deviation of about 2. Approximate methods for converting between number and mass distributions and between mobility and aerodynamic equivalent diameters are presented.
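For a lognormal aerosol, the number-to-mass conversion mentioned here is commonly done with the Hatch-Choate relation, MMD = CMD · exp(3 (ln σ_g)²). Applying it to the reported mobility-based values gives a mass median of roughly 1.3 μm; this need not match the measured aerodynamic MMAD (~0.5 μm), since converting mobility to aerodynamic diameter additionally involves the effective density and shape of the agglomerates.

```python
# Hatch-Choate conversion from count median to mass median diameter for a
# lognormal size distribution, using the CMD and GSD reported above.
import math

cmd, gsd = 0.3, 2.0                           # count median diameter (um) and GSD
mmd = cmd * math.exp(3 * math.log(gsd) ** 2)  # mass median of the same lognormal
print(f"MMD = {mmd:.2f} um (mobility-equivalent basis)")
```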

  14. A Comparison of Four Precipitation Distribution Models Used in Daily Stochastic Models

    Institute of Scientific and Technical Information of China (English)

    LIU Yonghe; ZHANG Wanchang; SHAO Yuehong; ZHANG Kexin

    2011-01-01

    Stochastic weather generators are statistical models that produce random numbers resembling the observed weather data to which they have been fitted; they are widely used in meteorological and hydrological simulations. For modeling daily precipitation in weather generators, first-order Markov chain-dependent exponential, gamma, mixed-exponential, and lognormal distributions can be used. To examine the performance of these four distributions for precipitation simulation, they were fitted to observed data collected at 10 stations in the watershed of the Yishu River. The parameters of these models were estimated using a maximum-likelihood technique implemented with genetic algorithms. Parameters for each calendar month and the Fourier series describing the parameters for the whole year were estimated separately. The Bayesian information criterion (BIC), simulated monthly mean, maximum daily value, and variance were tested and compared to evaluate the fitness and performance of these models. The results indicate that the lognormal and mixed-exponential distributions give smaller BICs, but their stochastic simulations show overestimation and underestimation respectively, while the gamma and exponential distributions give larger BICs but reproduce the monthly mean precipitation very well. When these distributions were fitted using Fourier series, they all underestimated the above statistics for the months of June, July and August.

  15. A classification of the natural and social distributions Part 2: the explanations

    CERN Document Server

    Benguigui, L

    2016-01-01

    In this second part of our survey on the social and natural distributions, we investigate models that intend to explain the statistical regularity of the natural and social distributions. There is a large variety of models, and in the majority they look for a power law, at least in the tail, although several real distributions are not described by a power law. Among the power-law models, we discuss a) the two basic models and their variants: the random multiplicative model and the preferential attachment model; b) models based on Bose-Einstein statistics; c) geographical, economical, and criticality models. We also present some models which do not intend to explain a power law, among them lognormal-like distributions, exponential and stretched exponential distributions. The interesting findings of this survey are that there are few models giving a power law for the complete distribution and that, among them, the Zipf exponent 1 is rare.

  16. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with the estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
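Two of the listed estimators are easy to compare in a Monte Carlo sketch; the sample size, true parameters, and replication count below are illustrative choices, not the paper's settings.

```python
# A minimal Monte Carlo comparison of ME and MLE for the two-parameter
# exponential distribution by mean square error.
import numpy as np

rng = np.random.default_rng(3)
loc, scale, n, reps = 2.0, 1.5, 30, 5000
sq_err = {"MLE": [], "ME": []}
for _ in range(reps):
    x = loc + rng.exponential(scale, n)
    # MLE: location = sample minimum, scale = mean excess over the minimum
    mle = (x.min(), x.mean() - x.min())
    # Moment estimators: scale = sample std (Var = scale^2), location = mean - scale
    s = x.std(ddof=1)
    me = (x.mean() - s, s)
    sq_err["MLE"].append((mle[0] - loc) ** 2 + (mle[1] - scale) ** 2)
    sq_err["ME"].append((me[0] - loc) ** 2 + (me[1] - scale) ** 2)

for name, errs in sq_err.items():
    print(name, "total MSE =", np.mean(errs))
```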

  18. S-N lognormal distribution of ultra-high strength steel welding spot

    Institute of Scientific and Technical Information of China (English)

    王晓光; 宇慧平; 李晓阳; 陈树君; 刘跃华

    2016-01-01

    In order to research the fatigue properties of ultra-high strength steel welding spots, the 22MnB5 spot-welded structure was taken as the research object. An intermediate-frequency servo spot-welding device was used to weld specimens with a thickness of 2 mm. Using a high-frequency fatigue testing machine, a laser repair-welding device and an optical microscope, the S-N curves of specimens under different welding process parameters were studied. The results show that the welding time and current parameters have an obvious influence on the fatigue life of welding spots in the high-stress region, while their influence on the fatigue life of welding spots in the low-stress region is small. In addition, reasonable process parameters can effectively improve the fatigue life of the welding spot. After laser repair welding of the small circumferential weak zone of the welding spot, the structural strength of the welding spot can be effectively enhanced.
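The lognormal life model behind an S-N (or P-S-N) curve can be sketched as follows; the stress levels and cycle counts are synthetic placeholders, not the 22MnB5 measurements.

```python
# At each stress amplitude, cycles to failure are modeled as lognormal, giving
# median and percentile life estimates. Data below are synthetic placeholders.
import numpy as np
from scipy import stats

lives_by_stress = {                 # stress amplitude (MPa) -> cycles to failure
    300: [1.2e5, 2.1e5, 1.6e5, 2.8e5, 1.9e5],
    400: [3.0e4, 4.5e4, 2.6e4, 5.1e4, 3.8e4],
}
for stress, lives in lives_by_stress.items():
    log_lives = np.log(lives)
    mu, sigma = log_lives.mean(), log_lives.std(ddof=1)
    p10 = np.exp(mu + sigma * stats.norm.ppf(0.10))   # life at 90% survival probability
    print(f"{stress} MPa: median {np.exp(mu):.3g} cycles, 10th percentile {p10:.3g} cycles")
```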

  19. Study on Fitting Heat Release Rate of HCCI Combustion with Partition Lognormal Distribution Function

    Institute of Scientific and Technical Information of China (English)

    张宗法; 熊锐; 罗伟欢; 周伟文

    2008-01-01

    Building on an analysis of the use of the lognormal function to describe the heat release rate of HCCI combustion and of the problems with that approach, a piecewise (partition) lognormal function is proposed for the first time and used to fit measured heat release rate curves of HCCI combustion. The results show that the piecewise lognormal function fully captures the staged character of HCCI combustion and gives a better fit.
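The paper's exact functional form is not reproduced in this record, so the sketch below assumes one plausible reading of a piecewise lognormal rate curve: a lognormal-shaped heat release rate in crank angle whose shape parameter differs before and after the median, fitted by least squares to synthetic data.

```python
# A hypothetical piecewise lognormal heat release rate curve (assumed form, not
# the paper's), fitted with least squares to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def piecewise_lognormal(theta, amp, theta0, mu, s1, s2):
    t = np.clip(theta - theta0, 1e-9, None)     # angle elapsed since start of combustion
    split = theta0 + np.exp(mu)                 # switch widths at the median exp(mu)
    sigma = np.where(theta <= split, s1, s2)    # different shape on each side
    return amp / t * np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2))

theta = np.linspace(350.0, 390.0, 200)          # crank angle in degrees (synthetic)
rng = np.random.default_rng(4)
data = piecewise_lognormal(theta, 40.0, 352.0, 2.2, 0.35, 0.6)
data = data + rng.normal(0.0, 0.2, theta.size)  # add measurement noise

popt, _ = curve_fit(piecewise_lognormal, theta, data, p0=[30.0, 351.0, 2.0, 0.3, 0.5])
print("fitted (amp, theta0, mu, s1, s2):", np.round(popt, 3))
```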

  20. Generalised extreme value distributions provide a natural hypothesis for the shape of seed mass distributions.

    Directory of Open Access Journals (Sweden)

    Will Edwards

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and are generally positively skewed. Such data are usually log-transformed "for normality", but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies - events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well as or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise the disturbance events experienced in a location to which individual species' life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm.
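The model comparison can be sketched with scipy's GEV implementation (genextreme); the sample below is synthetic, not one of the 34 datasets.

```python
# Fit a GEV and a normal to log10 seed masses and compare by AIC. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
log_mass = stats.genextreme.rvs(c=-0.2, loc=0.5, scale=0.8, size=300, random_state=rng)

def aic(dist, data):
    params = dist.fit(data)                       # maximum likelihood fit
    return 2 * len(params) - 2 * dist.logpdf(data, *params).sum()

print("GEV AIC   :", round(aic(stats.genextreme, log_mass), 1))
print("normal AIC:", round(aic(stats.norm, log_mass), 1))
```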

  1. Clean up or mess up: the effect of sampling biases on measurements of degree distributions in mobile phone datasets

    CERN Document Server

    Decuyper, Adeline; Traag, Vincent; Blondel, Vincent D; Delvenne, Jean-Charles

    2016-01-01

    Mobile phone data have been extensively used in recent years to study social behavior. However, most of these studies are based on only partial data whose coverage is limited both in space and time. In this paper, we point out that the bias due to the limited coverage in time may have an important influence on the results of the analyses performed. In particular, we observe significant differences, both qualitative and quantitative, in the degree distribution of the network, depending on the way the dataset is pre-processed, and we present a possible explanation for the emergence of Double Pareto LogNormal (DPLN) degree distributions in temporal data.

  2. Data assimilation in a coupled physical-biogeochemical model of the California current system using an incremental lognormal 4-dimensional variational approach: Part 3-Assimilation in a realistic context using satellite and in situ observations

    Science.gov (United States)

    Song, Hajoon; Edwards, Christopher A.; Moore, Andrew M.; Fiechter, Jerome

    2016-10-01

    A fully coupled physical and biogeochemical ocean data assimilation system is tested in a realistic configuration of the California Current System using the Regional Ocean Modeling System. In situ measurements for sea surface temperature and salinity as well as satellite observations for temperature, sea level and chlorophyll are used for the year 2000. Initial conditions of the combined physical and biogeochemical state are adjusted at the start of each 3-day assimilation cycle. Data assimilation results in substantial reduction of root-mean-square error (RMSE) over unconstrained model output. RMSE for physical variables is slightly lower when assimilating only physical variables than when assimilating both physical variables and surface chlorophyll. Surface chlorophyll RMSE is lowest when assimilating both physical variables and surface chlorophyll. Estimates of subsurface, nitrate and chlorophyll show modest improvements over the unconstrained model run relative to independent, unassimilated in situ data. Assimilation adjustments to the biogeochemical initial conditions are investigated within different regions of the California Current System. The incremental, lognormal 4-dimensional data assimilation method tested here represents a viable approach to coupled physical biogeochemical state estimation at practical computational cost.

  3. A Fractal Approach to Dynamic Inference and Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marieke M.J.W. van Rooij

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution’s shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods.

  4. Casein micelles: size distribution in milks from individual cows.

    Science.gov (United States)

    de Kruif, C G Kees; Huppertz, Thom

    2012-05-09

    The size distribution and protein composition of casein micelles in the milk of Holstein-Friesian cows was determined as a function of stage and number of lactations. Protein composition did not vary significantly between the milks of different cows or as a function of lactation stage. Differences in the size and polydispersity of the casein micelles were observed between the milks of different cows, but not as a function of stage of milking or stage of lactation, and not even over successive lactation periods. Modal radii varied from 55 to 70 nm, whereas hydrodynamic radii at a scattering angle of 73° (Q² = 350 μm⁻²) varied from 77 to 115 nm and polydispersity varied from 0.27 to 0.41, in a log-normal distribution. Casein micelle size in the milks of individual cows was not correlated with age, milk production, or lactation stage of the cows, or with the fat or protein content of the milk.

  5. The Halo Occupation Distribution of Active Galactic Nuclei

    CERN Document Server

    Chatterjee, Suchetana; Richardson, Jonathan; Zheng, Zheng; Nagai, Daisuke; Di Matteo, Tiziana

    2011-01-01

    Using a fully cosmological hydrodynamic simulation that self-consistently incorporates the growth and feedback of supermassive black holes and the physics of galaxy formation, we examine the effects of environmental factors (e.g., local gas density, black hole feedback) on the halo occupation distribution of low luminosity active galactic nuclei (AGN). We decompose the mean occupation function into central and satellite contributions and compute the conditional luminosity functions (CLF). The CLF of the central AGN follows a log-normal distribution, with the mean increasing and the scatter decreasing with increasing redshift. We analyze the light curves of individual AGN and show that the peak luminosity of the AGN has a tighter correlation with halo mass than the instantaneous luminosity. We also compute the CLF of satellite AGN at a given central AGN luminosity. We do not see any significant correlation between the number of satellites and the luminosity of the central AGN at a fixed halo mass. We also show ...

  6. Seasonal Distribution of Bioaerosols in the Coastal Region of Qingdao

    Institute of Scientific and Technical Information of China (English)

    QI Jianhua; SHAO Qian; XU Wenbing; GAO Dongmei; JIN Chuan

    2014-01-01

    Bioaerosols were collected using a six-stage bioaerosol sampler from September 2007 to August 2008 in the coastal region of Qingdao, China. The terrestrial and marine microbes (including bacteria and fungi) were analyzed in order to understand the distribution features of bioaerosols. The results show that the average monthly concentrations of terrestrial bacteria, marine bacteria, terrestrial fungi and marine fungi are in the ranges of 80-615 CFU m-3, 91-468 CFU m-3, 76-647 CFU m-3 and 231-1959 CFU m-3, respectively. The concentrations of terrestrial bacteria, marine bacteria, terrestrial fungi, marine fungi and total microbes are highest in fall, high in spring, and lowest in summer and winter. The bacterial particles are coarse in spring, autumn and winter. The sizes of fungal particles follow a log-normal distribution in all seasons.

  7. Estimation of Log-Linear-Binomial Distribution with Applications

    Directory of Open Access Journals (Sweden)

    Elsayed Ali Habib

    2010-01-01

    The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.

  8. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study.

    Science.gov (United States)

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2013-11-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a

  9. Experimental study on bubble size distributions in a direct-contact evaporator

    Directory of Open Access Journals (Sweden)

    Ribeiro Jr. C. P.

    2004-01-01

    Experimental bubble size distributions and bubble mean diameters were obtained by means of a photographic technique for a direct-contact evaporator operating in the quasi-steady-state regime. Four gas superficial velocities and three different spargers were analysed for the air-water system. In order to assure the statistical significance of the determined size distributions, a minimum number of 450 bubbles was analysed for each experimental condition. Some runs were also conducted with an aqueous solution of sucrose to study the solute effect on bubble size distribution. For the lowest gas superficial velocity considered, at which the homogeneous bubbling regime is observed, the size distribution was log-normal and depended on the orifice diameter in the sparger. As the gas superficial velocity was increased, the size distribution progressively acquired a bimodal shape, regardless of the sparger employed. The presence of sucrose in the continuous phase led to coalescence hindrance.

  10. Ventilation-perfusion distribution in normal subjects.

    Science.gov (United States)

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
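The bivariate lognormal description implies a closed form for the dispersion of the ratio: if log ventilation and log perfusion are jointly normal, then LogSD(Va/Q) = sqrt(σV² + σq² − 2ρσVσq). The sketch below checks this against direct sampling, using the exercise-level estimates quoted above.

```python
# Analytic vs simulated LogSD of Va/Q under a bivariate (log-)normal model,
# using the exercise-level parameter estimates from the record above.
import numpy as np

s_v, s_q, rho = 0.43, 0.85, 0.87
logsd_vaq = np.sqrt(s_v**2 + s_q**2 - 2 * rho * s_v * s_q)
print(f"analytic LogSD Va/Q  = {logsd_vaq:.3f}")

rng = np.random.default_rng(6)
cov = [[s_v**2, rho * s_v * s_q], [rho * s_v * s_q, s_q**2]]
log_v, log_q = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
print(f"simulated LogSD Va/Q = {np.std(log_v - log_q):.3f}")
```

With ρ close to 1, this expression approaches |σq − σV|, which is why the text attributes LogSD Va/Q during exercise primarily to the difference between the two dispersions.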

  11. A Fossilized Energy Distribution of Lightning

    Science.gov (United States)

    Pasek, Matthew A.; Hurst, Marc

    2016-07-01

    When lightning strikes soil, it may generate a cylindrical tube of glass known as a fulgurite. The morphology of a fulgurite is ultimately a consequence of the energy of the lightning strike that formed it, and hence fulgurites may be useful in elucidating the energy distribution frequency of cloud-to-ground lightning. Fulgurites from sand mines in Polk County, Florida, USA were collected and analyzed to determine morphologic properties. Here we show that the energy per unit length of lightning strikes within quartz sand has a geometric mean of ~1.0 MJ/m, and that the distribution is lognormal with respect to energy per length and frequency. Energy per length is determined from fulgurites as a function of diameter, and frequency is determined both by cumulative number and by cumulative length. This distribution parallels those determined for a number of lightning parameters measured in actual atmospheric discharge events, such as charge transferred, voltage, and action integral. This methodology suggests a potential useful pathway for elucidating lightning energy and damage potential of strikes.

  12. The probabilistic distribution of metal whisker lengths

    Energy Technology Data Exchange (ETDEWEB)

    Niraula, D., E-mail: Dipesh.Niraula@rockets.utoledo.edu; Karpov, V. G., E-mail: victor.karpov@utoledo.edu [Department of Physics and Astronomy, University of Toledo, Toledo, Ohio 43606 (United States)

    2015-11-28

    Significant reliability concerns in multiple industries are related to metal whiskers, which are random high aspect ratio filaments growing on metal surfaces and causing shorts in electronic packages. We derive a closed form expression for the probabilistic distribution of metal whisker lengths. Our consideration is based on the electrostatic theory of metal whiskers, according to which whisker growth is interrupted when its tip enters a random local “dead region” of a weak electric field. Here, we use the approximation neglecting the possibility of thermally activated escapes from the “dead regions,” which is later justified. We predict a one-parameter distribution with a peak at a length that depends on the metal surface charge density and surface tension. In the intermediate range, it fits well the log-normal distribution used in the experimental studies, although it decays more rapidly in the range of very long whiskers. In addition, our theory quantitatively explains how the typical whisker concentration is much lower than that of surface grains. Finally, it predicts the stop-and-go phenomenon for some of the whiskers growth.

  13. A comprehensive study of distribution laws for the fragments of Ko\\v{s}ice meteorite

    CERN Document Server

    Gritsevich, Maria; Kohout, Tomáš; Tóth, Juraj; Peltoniemi, Jouni; Turchak, Leonid; Virtanen, Jenni

    2014-01-01

    In this study, we conduct a detailed analysis of the Ko\\v{s}ice meteorite fall (February 28, 2010), in order to derive a reliable law describing the mass distribution among the recovered fragments. In total, 218 fragments of the Ko\\v{s}ice meteorite, with a total mass of 11.285 kg, were analyzed. Bimodal Weibull, bimodal Grady and bimodal lognormal distributions are found to be the most appropriate for describing the Ko\\v{s}ice fragmentation process. Based on the assumption of bimodal lognormal, bimodal Grady, bimodal sequential and bimodal Weibull fragmentation distributions, we suggest that, prior to further extensive fragmentation in the lower atmosphere, the Ko\\v{s}ice meteoroid was initially represented by two independent pieces with cumulative residual masses of approximately 2 kg and 9 kg respectively. The smaller piece produced about 2 kg of multiple lightweight meteorite fragments with the mean around 12 g. The larger one resulted in 9 kg of meteorite fragments, recovered on the ground, including the...

  14. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, U.S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size and with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit well the area distribution of landslide sources along the Atlantic continental margin, if we assume that the slope has been subjected to earthquakes of magnitude ??? 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area

  15. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    Energy Technology Data Exchange (ETDEWEB)

    Parravano, Antonio [Centro De Fisica Fundamental, Universidad de Los Andes, Merida (Venezuela, Bolivarian Republic of); Sanchez, Nestor [S. D. Astronomia y Geodesia, Fac. CC. Matematicas, Universidad Complutense de Madrid (Spain); Alfaro, Emilio J. [Instituto de Astrofisica de Andalucia (CSIC), Granada (Spain)

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  16. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.

    Directory of Open Access Journals (Sweden)

    Xiliang Zheng

    2015-04-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding, and its optimization becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently a specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.

  17. The Universal Statistical Distributions of the Affinity, Equilibrium Constants, Kinetics and Specificity in Biomolecular Recognition

    Science.gov (United States)

    Zheng, Xiliang; Wang, Jin

    2015-01-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453

  18. Product of Ginibre matrices: Fuss-Catalan and Raney distributions

    CERN Document Server

    Penson, Karol A

    2011-01-01

    Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by a probability distribution P_s(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P_s(x) in terms of a combination of s hypergeometric functions of the type sF_{s-1}. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G-function, we find expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
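For reference, the moments in question are the Fuss-Catalan numbers FC_s(n) = C(sn+n, n)/(sn+1); for s = 1 they reduce to the Catalan numbers, the moments of the Marchenko-Pastur law. A short check:

```python
# Fuss-Catalan numbers of order s; integer-valued for integer s and n.
from math import comb

def fuss_catalan(s, n):
    # FC_s(n) = C(s*n + n, n) / (s*n + 1)
    return comb(s * n + n, n) // (s * n + 1)

print([fuss_catalan(1, n) for n in range(6)])  # 1, 1, 2, 5, 14, 42 -> Catalan numbers
print([fuss_catalan(2, n) for n in range(6)])  # 1, 1, 3, 12, 55, 273
```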

  19. An agent-based interaction model for Chinese personal income distribution

    Science.gov (United States)

    Zou, Yijiang; Deng, Weibing; Li, Wei; Cai, Xu

    2015-10-01

    The personal income distribution in China was studied by employing the data from China Household Income Projects (CHIP) between 1990 and 2002. It was observed that the low and middle income regions could be described by the log-normal law, while the large income region could be well fitted by the power law. To characterize these empirical findings, a stochastic interactive model with mean-field approach was discussed, and the analytic result shows that the wealth distribution is of the Pareto type. Then we explored the agent-based model on networks, in which the exchange of wealth among agents depends on their connectivity. Numerical results suggest that the wealth of agents would largely rely on their connectivity, and the Pareto index of the simulated wealth distributions is comparable to those of the empirical data. The Pareto behavior of the tails of the empirical wealth distributions is consistent with that of the 'mean-field' model, as well as numerical simulations.

  20. Beyond Zipf's Law: The Lavalette Rank Function and its Properties

    CERN Document Server

    Fontanelli, Oscar; Yang, Yaning; Cocho, Germinal; Li, Wentian

    2016-01-01

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value (the Lavalette rank function), the probability density function can be derived analytically. We also show, both computationally and analytically, that the Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of the Lavalette rank function in several datasets. We also address three analysis issues: the statistical testing of the Lavalette fitting function, the comparison between Zipf's law and the lognormal distribution through the Lavalette function, and the comparison between the lognormal distribution and the Lavalette distribution.
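One common statement of these rank functions (treated here as an assumption, since the paper's notation is not reproduced in this record) is the Beta rank function f(r) = A (N+1−r)^b / r^a, with the Lavalette case a = b:

```python
# Lavalette rank function in one common parameterization (assumed form):
# the Beta rank function A * (N + 1 - r)^b / r^a with a = b.
import numpy as np

def lavalette(r, N, A, b):
    return A * ((N + 1.0 - r) / r) ** b

N = 1000
r = np.arange(1, N + 1)
f = lavalette(r, N, A=1.0, b=0.7)
# Mid ranks follow an approximate power law; both ends bend away from it.
print(f[:3], f[N // 2 - 1], f[-3:])
```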

  1. Understanding star formation in molecular clouds. III. Probability distribution functions of molecular lines in Cygnus X

    Science.gov (United States)

    Schneider, N.; Bontemps, S.; Motte, F.; Ossenkopf, V.; Klessen, R. S.; Simon, R.; Fechtenbaum, S.; Herpin, F.; Tremblin, P.; Csengeri, T.; Myers, P. C.; Hill, T.; Cunningham, M.; Federrath, C.

    2016-03-01

    The probability distribution function of column density (N-PDF) serves as a powerful tool to characterise the various physical processes that influence the structure of molecular clouds. Studies that use extinction maps or H2 column-density maps (N) that are derived from dust show that star-forming clouds can best be characterised by lognormal PDFs for the lower N range and a power-law tail for higher N, which is commonly attributed to turbulence and self-gravity and/or pressure, respectively. While PDFs from dust cover a large dynamic range (typically N ~ 1020-24 cm-2 or Av~ 0.1-1000), PDFs obtained from molecular lines - converted into H2 column density - potentially trace more selectively different regimes of (column) densities and temperatures. They also enable us to distinguish different clouds along the line of sight through using the velocity information. We report here on PDFs that were obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region, and make a comparison to a PDF that was derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av ~ 1-30, but is cut for higher Av because of optical depth effects. The PDFs of C18O and 13CO are mostly lognormal up to Av ~ 1-15, followed by excess up to Av ~ 40. Above that value, all CO PDFs drop, which is most likely due to depletion. The high density tracers CS and N2H+ exhibit only a power law distribution between Av ~ 15 and 400, respectively. The PDF from dust is lognormal for Av ~ 3-15 and has a power-law tail up to Av ~ 500. Absolute values for the molecular line column densities are, however, rather uncertain because of abundance and excitation temperature variations. If we take the dust PDF at face value, we "calibrate" the molecular line PDF of CS to that of the dust and determine an abundance [CS]/[H2] of 10-9. The slopes of the power-law tails of the CS, N2H+, and dust PDFs are -1.6, -1.4, and -2.3, respectively, and are thus consistent

  2. Temperature dependent fission fragment distribution in the Langevin equation

    Institute of Scientific and Technical Information of China (English)

    WANG Kun; MA Yu-Gang; ZHENG Qing-Shan; CAI Xiang-Zhou; FANG De-Qing; FU Yao; LU Guang-Cheng; TIAN Wen-Dong; WANG Hong-Wei

    2009-01-01

    The temperature-dependent width of the fission fragment distributions was simulated with the Langevin equation by taking a two-parameter exponential form for the fission fragment mass variance at the scission point for each fission event. The result reproduces the experimental data well and permits reliable estimates of unmeasured product yields near symmetric fission.

  3. Global patterns of city size distributions and their fundamental drivers.

    Directory of Open Access Journals (Sweden)

    Ethan H Decker

    Urban areas and their voracious appetites are increasingly dominating the flows of energy and materials around the globe. Understanding the size distribution and dynamics of urban areas is vital if we are to manage their growth and mitigate their negative impacts on global ecosystems. For over 50 years, city size distributions have been assumed to universally follow a power function, and many theories have been put forth to explain what has become known as Zipf's law (the instance where the exponent of the power function equals unity). Most previous studies, however, only include the largest cities that comprise the tail of the distribution. Here we show that national, regional and continental city size distributions, whether based on census data or inferred from cluster areas of remotely-sensed nighttime lights, are in fact lognormally distributed through the majority of cities and only approach power functions for the largest cities in the distribution tails. To explore generating processes, we use a simple model incorporating only two basic human dynamics, migration and reproduction, that nonetheless generates distributions very similar to those found empirically. Our results suggest that macroscopic patterns of human settlements may be far more constrained by fundamental ecological principles than by more fine-scale socioeconomic factors.
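A toy model in the spirit described (not the authors' exact model) is sketched below: multiplicative Gibrat-like growth stands in for reproduction, and a small migrant fraction is redistributed uniformly. The resulting sizes are close to lognormal through the body of the distribution.

```python
# A toy migration-plus-reproduction city growth simulation; all rates are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_cities, steps = 2000, 200
size = np.full(n_cities, 1000.0)

for _ in range(steps):
    size *= np.exp(rng.normal(0.0, 0.05, n_cities))  # reproduction: multiplicative growth
    movers = 0.01 * size                             # migration: 1% leave each city...
    size += movers.sum() / n_cities - movers         # ...and redistribute uniformly

logs = np.log(size)
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
print(f"skewness of log sizes: {skew:.3f}  (near 0 suggests a lognormal body)")
```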

  4. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
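
    A minimal sketch of such a composite model follows: a weighted sum of a Weibull and a log-normal PDF over flux. All parameter values (the weight w, shapes, and scales) are illustrative placeholders, not the fitted values from this study.

    ```python
    import numpy as np
    from scipy import stats

    def composite_pdf(phi, w=0.5, wb_shape=0.5, wb_scale=1e21,
                      ln_sigma=1.0, ln_median=1e22):
        """w * Weibull + (1 - w) * lognormal over magnetic flux phi (Mx)."""
        weibull = stats.weibull_min.pdf(phi, c=wb_shape, scale=wb_scale)
        lognorm = stats.lognorm.pdf(phi, s=ln_sigma, scale=ln_median)
        return w * weibull + (1.0 - w) * lognorm

    phi = np.logspace(19, 24, 200)   # flux grid from 1e19 to 1e24 Mx
    pdf = composite_pdf(phi)         # Weibull dominates the low-flux end,
                                     # the lognormal the high-flux end
    ```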

  5. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    Science.gov (United States)

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  6. Influence of particle size distribution on nanopowder cold compaction processes

    Science.gov (United States)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. In addition to the well-known contact laws, the particle interactions include dispersive attraction forces and the possibility of interparticle solid-bridge formation, both of which are of particular importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to the log-normal size distribution law with different widths. A non-monotonic dependence of compact density on powder content is revealed in the bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.

  7. Application-dependent Probability Distributions for Offshore Wind Speeds

    Science.gov (United States)

    Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.

    2010-12-01

    The higher wind speeds of the offshore environment make it an attractive setting for future wind farms. With sparser field measurements, the theoretical probability distribution of short-term wind speeds becomes more important in estimating values such as average power output and fatigue load. While previous studies typically compare the accuracy of probability distributions using R², we show that validation based on this metric is not consistent with validation based on engineering parameters of interest, namely turbine power output and extreme wind speed. Thus, in order to make the most accurate estimates possible, the probability distribution that an engineer picks to characterize wind speeds should depend on the design parameter of interest. We introduce the Kappa and Wakeby probability distribution functions to wind speed modeling, and show that these two distributions, along with the Biweibull distribution, fit wind speed samples better than the more widely accepted Weibull and Rayleigh distributions based on R². Additionally, out of the 14 probability distributions we examine, the Kappa and Wakeby give the most accurate and least biased estimates of turbine power output. The fact that the 2-parameter Lognormal distribution estimates extreme wind speeds (i.e. fits the upper tail of wind speed distributions) with least error indicates that no single distribution performs satisfactorily for all applications. Our use of a large dataset composed of 178 buoys (totaling ~72 million 10-minute wind speed observations) makes these findings highly significant, both in terms of large sample size and broad geographical distribution across various wind regimes. (Figure: boxplots of R² from the fit of each of the 14 distributions to the 178 buoy wind speed samples; distributions are ranked from left to right by ascending median R², with the Biweibull having the median closest to 1.)
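
    The sketch below illustrates the central point with synthetic stand-in data: two distributions can fit a wind-speed sample comparably overall yet give very different extreme quantiles, so the "best" choice depends on the application. Parameters and sample sizes are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    v = rng.weibull(2.0, size=100_000) * 9.0   # stand-in 10-minute wind speeds (m/s)

    c, _, wb_scale = stats.weibull_min.fit(v, floc=0)
    s, _, ln_scale = stats.lognorm.fit(v, floc=0)

    q = 1.0 - 1e-5                              # far upper tail
    print("empirical :", np.quantile(v, q))
    print("Weibull   :", stats.weibull_min.ppf(q, c, scale=wb_scale))
    print("lognormal :", stats.lognorm.ppf(q, s, scale=ln_scale))
    ```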

  8. Territorial Distribution of the Population in the Russian Federation

    Directory of Open Access Journals (Sweden)

    Vsevolod Vladimirovich Andreev

    2017-09-01

    Full Text Available In 1931, Robert Gibrat found that the number of employees of a firm and urban populations follow the lognormal distribution. Numerous study results show that Gibrat's law provides a basis for analyzing the dynamics of the number of employees of mature and large firms that have already carved out a niche. Furthermore, Gibrat's law allows analyzing the dynamics and laws of the spatial distribution of the population of different countries when their socioeconomic development is sustainable and in equilibrium. The purpose of the study is to test Gibrat's law for Russian cities and towns of different sizes. If Gibrat's law is valid, we can conclude that the spatial distribution of the population in the country is in equilibrium and the labour distribution is close to optimal. The opposite result demonstrates an imbalance between the allocation of manufacturing and the labour force. The author took the 2010 national census results as source data and tested the hypothesis of a lognormal law of population distribution in Russia over different cities and towns using the Pearson fitting criterion at a significance level α = 0.05. The results of the study show that the distribution of the population over Russia does not follow Gibrat's law. As a result, the distribution of the population is uneven, which translates into significant labour migration from settlements with small populations to large cities. Knowledge of the laws of the territorial distribution of the population and the driving factors of population mobility is important for the development and implementation of effective socio-economic policy in the country. The identification of population distribution imbalances over various population centers and the development of recommendations for the creation and optimal location of new production in the country may be a promising area for future research. The study of spatial clustering of...
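
    A minimal sketch of the Pearson chi-square goodness-of-fit test used above: test H0 "log(population) is normal" at significance level α = 0.05. The data and bin count here are hypothetical stand-ins for the census populations.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    pop = rng.lognormal(mean=9.0, sigma=1.2, size=1_000)   # stand-in populations

    x = np.log(pop)
    mu, sigma = x.mean(), x.std(ddof=1)                    # fitted under H0

    k = 12                                                 # number of bins
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    edges[0], edges[-1] = -np.inf, np.inf                  # catch the open tails
    observed, _ = np.histogram(x, bins=edges)
    expected = len(x) * np.diff(stats.norm.cdf(edges, mu, sigma))

    chi2 = np.sum((observed - expected) ** 2 / expected)
    dof = k - 1 - 2                                        # two parameters estimated
    p = stats.chi2.sf(chi2, dof)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3f}  (reject H0 if p < 0.05)")
    ```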

  9. Hg SOIL GAS AND Hg SOIL DISTRIBUTION AROUND FORMER „ZRINSKI“ MINE ON MT. MEDVEDNICA, CROATIA

    Directory of Open Access Journals (Sweden)

    Nataša Jug

    2008-12-01

    Full Text Available The purpose of this study is to present the field and laboratory research, statistical analyses and graphical displays of the results of Hg soil gas and Hg soil distribution in the area around the former mining site „Zrinski“ on Mt. Medvednica. The values of overall Hg concentrations in the soil gas show a lognormal distribution, and their spatial distribution outlines the connection with the present Pb-Ag-Zn mineralization and confirms the anthropogenic origin of the uneven landscape relief (waste-rock clusters). Regression analysis of the dependence between Hg content in the soil gas and the distance from the mine entrance (correlation coefficient r) also points to considerable spatial dependence. Hg soil contents likewise show a distribution similar to lognormal, and there is a slight correlation with Hg soil gas content. Concentrations are mostly within background values, except in the immediate vicinity of the mine entrance, where the values are significantly higher due to the mineralization influence concentrated in the waste-rock clusters. Soil pollution caused by mercury is of local character, with a tendency to spread to the south-west because of the dominant relief influence. Mercury found in the soil of the research location derives from the present mineral body and former mining activities, while possible anthropogenic atmospheric inputs from remote sources cannot be proven on the basis of the conducted research (the paper is published in Croatian).

  10. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth-degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that can be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced application of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
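
    The core idea admits a compact sketch: choose coefficients c_k of p(x) = Σ c_k x^k on a support [a, b] so that the first N+1 moments of p match the sample moments, which reduces to a linear system. This follows the general method-of-moments idea described above; the support choice and lack of regularization are simplifying assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def polynomial_pdf_coeffs(sample, degree, a, b):
        """Coefficients c_k of p(x) = sum_k c_k x^k on [a, b] matching the
        first degree+1 sample moments (m = 0 enforces normalization)."""
        moments = [np.mean(sample ** m) for m in range(degree + 1)]
        # A[m, k] = integral_a^b x^(m+k) dx
        A = np.array([[(b ** (m + k + 1) - a ** (m + k + 1)) / (m + k + 1)
                       for k in range(degree + 1)]
                      for m in range(degree + 1)])
        return np.linalg.solve(A, moments)

    rng = np.random.default_rng(3)
    sample = rng.normal(0.5, 0.15, size=10_000)        # toy data supported in [0, 1]
    c = polynomial_pdf_coeffs(sample, degree=4, a=0.0, b=1.0)
    p = np.polyval(c[::-1], np.linspace(0.0, 1.0, 5))  # evaluate sum_k c_k x^k
    ```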

  11. The size distribution of 'gold standard' nanoparticles.

    Science.gov (United States)

    Bienert, Ralf; Emmerling, Franziska; Thünemann, Andreas F

    2009-11-01

    The spherical gold nanoparticle reference materials RM 8011, RM 8012, and RM 8013, with nominal radii of 5, 15, and 30 nm, respectively, have been available since 2008 from NIST. These materials are recommended as standards for nanoparticle size measurements and for the study of the biological effects of nanoparticles, e.g., in pre-clinical biomedical research. We report on the determination of the size distributions of these gold nanoparticles using different small-angle X-ray scattering (SAXS) instruments. Measurements with a classical Kratky-type SAXS instrument are compared with a synchrotron SAXS technique. Samples were investigated in situ, positioned in capillaries and in levitated droplets. The number-weighted size distributions were determined by applying model scattering functions based on (a) Gaussian, (b) log-normal, and (c) Schulz distributions. The mean radii are 4.36 +/- 0.04 nm (RM 8011), 12.20 +/- 0.03 nm (RM 8012), and 25.74 +/- 0.27 nm (RM 8013). Low polydispersities, defined as the relative width of the distributions, were detected, with values of 0.067 +/- 0.006 (RM 8011), 0.103 +/- 0.003 (RM 8012), and 0.10 +/- 0.01 (RM 8013). The results are in agreement with integral values determined from classical evaluation procedures, such as the radius of gyration (Guinier) and particle volume (Kratky). No indications of particle aggregation or of repulsive or attractive particle interactions were found. We recommend SAXS as a standard method for the fast and precise determination of size distributions of nanoparticles.

  12. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m² to a normal distribution with a mean of 3.23 kg edible/m² and a standard deviation of 0.442 kg edible/m². The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq crop/kg)/(Bq soil/kg) and a standard deviation value of 3.33 (Bq crop/kg)/(Bq soil/kg).
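
    A minimal sketch of sampling the lognormal crop uptake factor described above, interpreting the reported "standard deviation value of 3.33" as a dimensionless geometric standard deviation; that interpretation is an assumption, since the report's wording is ambiguous.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    gm, gsd = 3.38e-4, 3.33   # geometric mean and (assumed) geometric SD
    uptake = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=10_000)
    print(np.exp(np.log(uptake).mean()))   # recovers ~ the geometric mean
    ```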

  13. Do wealth distributions follow power laws? Evidence from ‘rich lists’

    Science.gov (United States)

    Brzezinski, Michal

    2014-07-01

    We use data on the wealth of the richest persons taken from the 'rich lists' provided by business magazines like Forbes to verify whether the upper tails of wealth distributions follow, as often claimed, a power-law behaviour. The data sets used cover the world's richest persons over 1996-2012, the richest Americans over 1988-2012, the richest Chinese over 2006-2012, and the richest Russians over 2004-2011. Using a recently introduced comprehensive empirical methodology for detecting power laws, which allows for testing the goodness of fit as well as for comparing the power-law model with rival distributions, we find that a power-law model is consistent with the data in only 35% of the analysed data sets. Moreover, even if wealth data are consistent with the power-law model, they are usually also consistent with some rivals like the log-normal or stretched exponential distributions.

  14. Statistical Models for Solar Flare Interval Distribution in Individual Active Regions

    CERN Document Server

    Kubo, Yuki

    2008-01-01

    This article discusses statistical models for solar flare interval distribution in individual active regions. We analyzed solar flare data in 55 active regions that are listed in the GOES soft X-ray flare catalog. We discuss some problems with a conventional procedure to derive probability density functions from any data set and propose a new procedure, which uses the maximum likelihood method and Akaike Information Criterion (AIC) to objectively compare some competing probability density functions. We found that lognormal and inverse Gaussian models are more likely models than the exponential model for solar flare interval distribution in individual active regions. The results suggest that solar flares do not occur randomly in time; rather, solar flare intervals appear to be regulated by solar flare mechanisms. We briefly mention a probabilistic solar flare forecasting method as an application of a solar flare interval distribution analysis.
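
    The model-comparison step described above can be sketched as follows: fit several candidate distributions to the waiting times by maximum likelihood and rank them with AIC = 2k - 2 ln L. Synthetic intervals stand in for the GOES flare catalog; fixing the location parameter is a simplifying assumption for comparability.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    intervals = rng.lognormal(2.0, 0.8, size=500)   # stand-in waiting times (hours)

    candidates = {
        "exponential":      stats.expon,
        "lognormal":        stats.lognorm,
        "inverse Gaussian": stats.invgauss,
    }
    for name, dist in candidates.items():
        params = dist.fit(intervals, floc=0)        # fix loc at zero
        loglik = np.sum(dist.logpdf(intervals, *params))
        k = len(params) - 1                         # loc was fixed, not estimated
        aic = 2 * k - 2 * loglik
        print(f"{name:17s} AIC = {aic:.1f}")        # lower AIC = preferred model
    ```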

  15. Local Field Distribution Function and High Order Field Moments for metal-dielectric composites.

    Science.gov (United States)

    Genov, Dentcho A.; Sarychev, Andrey K.; Shalaev, Vladimir M.

    2001-11-01

    In the span of two decades, the physics of nonlinear optics has seen vast improvement in our understanding of the optical properties of various inhomogeneous media. One such medium is the metal-dielectric composite, where the metal inclusions have a surface coverage fraction p, while the rest (1-p) is assumed to represent the dielectric host. Computations carried out using different theoretical models, together with experimental data, show the existence of giant local electric and magnetic field fluctuations. In this presentation we will introduce a newly developed 2D model that determines exactly the Local Field Distribution Function (LFDF) and all other relevant parameters of the film. The LFDF for small filling factors will be shown to transform from a lognormal distribution into a single-dipole distribution function. We will also confirm the predictions of the scaling theory for the high field moments, which have a power-law dependence on the loss factor.

  16. Performances of Shannon’s Entropy Statistic in Assessment of Distribution of Data

    Directory of Open Access Journals (Sweden)

    Jäntschi Lorentz

    2017-08-01

    Full Text Available Statistical analysis starts with the assessment of the distribution of experimental data. Different statistics are used to test the null hypothesis (H0), stated as 'Data follow a certain/specified distribution'. In this paper, a new test based on Shannon's entropy (called the Shannon's entropy statistic, H1) is introduced as a goodness-of-fit test. The performance of the Shannon's entropy statistic was tested on simulated and/or experimental data with a uniform distribution and with four continuous distributions (error function, generalized extreme value, lognormal, and normal). The experimental data used in the assessment were properties or activities of active chemical compounds. Five known goodness-of-fit tests, namely Anderson-Darling, Kolmogorov-Smirnov, Cramér-von Mises, Kuiper V, and Watson U2, were used to accompany and assess the performance of H1.

  17. Analysis of Two-Parameter (ΔK and Kmax) Fatigue Crack Propagation Models

    Institute of Scientific and Technical Information of China (English)

    Qian Yi; Cui Weicheng

    2011-01-01

    Apart from material characteristics and environmental effects, fatigue crack growth is governed by the stress field near the crack tip. This stress field is the superposition of the externally applied stress and the residual stress, and it is jointly influenced by the stress intensity factor range ΔK, which produces the cyclic plastic zone, and the maximum stress intensity factor Kmax, which produces the monotonic plastic zone. The external driving force for crack growth should therefore comprise the two parameters ΔK and Kmax. By comparing the features of three typical two-parameter fatigue crack growth rate models, proposed by Vasudevan and Sadananda, by Kujawski, and by Zhang, a new model is proposed that accounts for both internal and external stresses and is suitable for fatigue crack growth under variable-amplitude loading.

  18. Distribution for the number of coauthors.

    Science.gov (United States)

    Hsu, Jiann-Wien; Huang, Ding-Wei

    2009-11-01

    We study the coauthorship distribution by analyzing the number of coauthors on each paper published in Physical Review Letters and Physical Review for the last decade. We propose that the structure of the distribution can be understood as the result of a two-parameter Poisson process. We develop a dynamic model of dual mechanisms to simulate the personal and group collaborations. In this model, the single-author papers are portrayed as a leftover from the collaboration process. We also comment on the huge collaborations involving hundreds of coauthors.

  19. An assessment of calcite crystal growth mechanisms based on crystal size distributions

    Science.gov (United States)

    Kile, D.E.; Eberl, D.D.; Hoch, A.R.; Reddy, M.M.

    2000-01-01

    Calcite crystal growth experiments were undertaken to test a recently proposed model that relates crystal growth mechanisms to the shapes of crystal size distributions (CSDs). According to this approach, CSDs for minerals have three basic shapes: (1) asymptotic, which is related to a crystal growth mechanism having constant-rate nucleation accompanied by surface-controlled growth; (2) lognormal, which results from decaying-rate nucleation accompanied by surface-controlled growth; and (3) a theoretical, universal, steady-state curve attributed to Ostwald ripening. In addition, there is a fourth crystal growth mechanism that does not have a specific CSD shape, but which preserves the relative shapes of previously formed CSDs. This mechanism is attributed to supply-controlled growth. All three shapes were produced experimentally in the calcite growth experiments by modifying nucleation conditions and solution concentrations. The asymptotic CSD formed when additional reactants were added stepwise to the surface of solutions that were supersaturated with respect to calcite (initial Ω = 20, where Ω = 1 represents saturation), thereby leading to the continuous nucleation and growth of calcite crystals. Lognormal CSDs resulted when reactants were added continuously below the solution surface, via a submerged tube, to similarly supersaturated solutions (initial Ω = 22 to 41), thereby leading to a single nucleation event followed by surface-controlled growth. The Ostwald CSD resulted when concentrated reactants were rapidly mixed, leading initially to high levels of supersaturation (Ω > 100), and to the formation and subsequent dissolution of very small nuclei, thereby yielding CSDs having small crystal size variances. The three CSD shapes likely were produced early in the crystallization process, in the nanometer crystal size range, and preserved during subsequent growth. Preservation of the relative shapes of the CSDs indicates that a supply-controlled growth mechanism...

  20. Experimental investigation of particle velocity distributions in windblown sand movement

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With PDPA (Phase Doppler Particle Analyzer) measurement technology, the probability distributions of particle impact and lift-off velocities on the bed surface and the particle velocity distributions at different heights were measured in a wind tunnel. The results show that the probability distribution of the impact and lift-off velocities of sand grains can be expressed by a log-normal function, and that of the impact and lift-off angles complies with an exponential function. The mean impact angle is between 28° and 39°, and the mean lift-off angle ranges from 30° to 44°. The mean lift-off velocity is 0.81-0.9 times the mean impact velocity. The proportion of backward-impacting particles is 0.05-0.11, and that of backward-entrained particles ranges from 0.04 to 0.13. The probability distribution of the particle horizontal velocity at 4 mm height is positively skewed, the horizontal velocity of particles at 20 mm height varies widely, and the variation of the particle horizontal velocity at 80 mm height is less than that at 20 mm height. The probability distribution of the particle vertical velocity at different heights can be described by a normal function.

  1. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    Science.gov (United States)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    Understanding creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSDs) that result from dynamic recrystallization processes during deformation. The methods for estimating the 3D grain size distribution directly (serial sectioning, synchrotron- or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallized mylonites, and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique on a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results prove that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method.
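
    The second, lognormal-fitting step might look like the sketch below. The midpoint and frequency arrays are placeholders for the Saltykov output, and the bounded least-squares call (which makes scipy use its trust-region 'trf' algorithm) stands in for the trust-region fit mentioned above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lognorm_pdf(x, sigma, median):
        """Lognormal PDF parameterized by log-space SD and median."""
        return (1.0 / (x * sigma * np.sqrt(2.0 * np.pi))
                * np.exp(-np.log(x / median) ** 2 / (2.0 * sigma ** 2)))

    # Placeholder unfolded classes: midpoints (microns) and frequencies from
    # the Saltykov step, normalized by the 10-micron class width to a density.
    midpoints = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0, 65.0, 75.0])
    freqs = np.array([0.02, 0.18, 0.27, 0.22, 0.15, 0.09, 0.05, 0.02]) / 10.0

    (sigma_hat, median_hat), _ = curve_fit(lognorm_pdf, midpoints, freqs,
                                           p0=(0.5, 30.0), bounds=(0.0, np.inf))
    print(f"sigma = {sigma_hat:.2f}, median = {median_hat:.1f} um")
    ```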

  2. Crystallite size distributions of marine gas hydrates

    Energy Technology Data Exchange (ETDEWEB)

    Klapp, S.A.; Bohrmann, G.; Abegg, F. [Bremen Univ., Bremen (Germany). Research Center of Ocean Margins; Hemes, S.; Klein, H.; Kuhs, W.F. [Gottingen Univ., Gottingen (Germany). Dept. of Crystallography

    2008-07-01

    Experimental studies were conducted to determine the crystallite size distributions of natural gas hydrate samples retrieved from the Gulf of Mexico, the Black Sea, and Hydrate Ridge, offshore Oregon. Synchrotron radiation technology was used to provide the high photon fluxes and high penetration depths needed to accurately analyze the bulk sediment samples. A new beam collimation diffraction technique was used to measure gas hydrate crystallite sizes. The analyses showed that gas hydrate crystals were globular in shape. Mean crystallite sizes ranged from 200 to 400 μm for hydrate samples taken from the sea floor. Larger grain sizes in the Hydrate Ridge samples suggested differences in hydrate formation ages or processes. A comparison with laboratory-produced methane hydrate samples showed half a lognormal curve with a mean value of 40 μm. The results of the study showed that a cautious approach must be adopted when transposing crystallite-size-sensitive physical data from laboratory-made gas hydrates to natural settings. It was concluded that crystallite size information may also be used to resolve the formation ages of gas hydrates when formation processes and conditions are constrained. 48 refs., 1 tab., 9 figs.

  3. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    Science.gov (United States)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution, where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution, covering all irradiance values smaller than the 90th percentile, can be described with reasonable accuracy (i.e., within 20%) by a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean, like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  4. Schemes of Repeater Optimizing Distribution based on the MLC Application and CBLRD Simulation

    Directory of Open Access Journals (Sweden)

    Qian Qiuye

    2013-07-01

    Full Text Available The widespread use of repeaters raises public concern about their coordination. Since repeaters may interfere with one another and have limited bearing capacity, designing a reasonable repeater-coordination method is of great significance. This study addresses the problem of repeater coordination in a circular flat area, seeking the minimal number of repeaters by means of seamless-coverage theory and system simulation. For 1,000 users, we model the coverage and obtain the minimal number of repeaters for different coverage radii based on the widely used regular-hexagon coverage theory; a numerical example is given for this case. When the number of users increases to 10,000, we simulate the signal density across the area for different distributions of users, divided into uniform, linear, normal and lognormal distributions. Multi-Layer Coverage (MLC) and Coverage by Link Rate Density (CBLRD) are then created as distribution schemes for areas where repeater service demand is large. Moreover, for the distribution of repeaters with barriers, schemes are given that consider the transmission of VHF spectrums and the distribution of users around the barrier. Additionally, Spring Comfortable Degree (SCD) is used to evaluate the results, and development trends are given to improve the model. Under reasonable assumptions, the proposed repeater-distribution solutions are of pivotal reference value.

  5. The Distribution of Pressures in a Supernova-Driven Interstellar Medium

    CERN Document Server

    Mac Low, Mordecai-Mark; Balsara, Dinshaw; Avillez, Miguel A.; Kim, Jongsoo

    2001-01-01

    Observations have suggested substantial departures from pressure equilibrium in the interstellar medium (ISM) in the plane of the Galaxy, even on scales under 50 pc. Nevertheless, multi-phase models of the ISM assume at least locally isobaric gas. The pressure then determines the density reached by gas cooling to stable thermal equilibrium. We use two different sets of numerical models of the ISM to examine the consequences of supernova driving for interstellar pressures. The first set of models is hydrodynamical, and uses adaptive mesh refinement to allow computation of a 1 x 1 x 20 kpc section of a stratified galactic disk. The second set of models is magnetohydrodynamical, using an independent code framework, and examines a 200 pc cubed periodic domain threaded by magnetic fields. Both of these models show broad pressure distributions with roughly log-normal functional forms produced by both shocks and rarefaction waves, rather than the power-law distributions predicted by previous work, with rather sharp ...

  6. Power laws in citation distributions: Evidence from Scopus

    CERN Document Server

    Brzezinski, Michal

    2014-01-01

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative mo...

  7. Distribution Structures

    NARCIS (Netherlands)

    Friedrich, H.; Tavasszy, L.A.; Davydenko, I.

    2013-01-01

    Distribution structures are important elements of the freight transportation system. Goods are routed via warehouses on their way from production to consumption. This chapter discusses drivers behind these structures, logistics decisions connected to distribution structures on the micro level, and...

  8. Distributed Knight

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Damm, Christian Heide

    2005-01-01

    An extension of Knight (2005) that supports distributed synchronous collaboration, implemented using type-based publish/subscribe.

  9. Statistical analyses support power law distributions found in neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Andreas Klaus

    Full Text Available The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.

  10. Statistical analyses support power law distributions found in neuronal avalanches.

    Science.gov (United States)

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
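
    A minimal sketch of the tail-model comparison described above: maximum likelihood fits of a continuous power law and a lognormal above the same x_min, compared by a log-likelihood ratio. Synthetic sizes stand in for the recorded cluster sizes, and the lognormal truncation correction used in the full methodology is omitted for brevity.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    x = rng.pareto(0.5, size=5_000) + 1.0   # stand-in sizes, density exponent ~1.5
    x_min = 1.0
    tail = x[x >= x_min]
    n = len(tail)

    # Continuous power law: MLE exponent and log-likelihood.
    alpha = 1.0 + n / np.sum(np.log(tail / x_min))
    ll_pl = np.sum(np.log((alpha - 1.0) / x_min) - alpha * np.log(tail / x_min))

    # Lognormal on the same data (truncation correction omitted).
    s, loc, scale = stats.lognorm.fit(tail, floc=0)
    ll_ln = np.sum(stats.lognorm.logpdf(tail, s, loc, scale))

    R = ll_pl - ll_ln   # R > 0 favors the power law over the lognormal
    print(f"alpha = {alpha:.2f}, log-likelihood ratio R = {R:.1f}")
    ```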

  11. Raindrop size distribution: Fitting performance of common theoretical models

    Science.gov (United States)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling the raindrop size distribution (DSD) is a fundamental issue in connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask indeed are how often and how well such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D Video Disdrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. By the maximum likelihood method, we fit models based on lognormal, gamma and Weibull distributions to more than 42,000 one-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov (KS) test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking, for each minute, the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs can be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
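
    The per-minute fitting-and-testing step might look like the following sketch: a maximum likelihood gamma fit to one minute of drop diameters, checked with a KS test. The diameters are synthetic stand-ins for 2DVD records.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    diameters = rng.gamma(shape=3.0, scale=0.4, size=800)   # stand-in sizes (mm)

    a, loc, scale = stats.gamma.fit(diameters, floc=0)      # maximum likelihood fit
    ks_stat, p_value = stats.kstest(diameters, "gamma", args=(a, loc, scale))
    # Note: the KS p-value is only approximate here, because the parameters
    # were estimated from the same sample being tested.
    print(f"gamma fit: shape = {a:.2f}, scale = {scale:.2f}, KS p = {p_value:.3f}")
    ```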

  12. Distributions of personal VOC exposures: a population-based analysis.

    Science.gov (United States)

    Jia, Chunrong; D'Souza, Jennifer; Batterman, Stuart

    2008-10-01

    Information regarding the distribution of volatile organic compound (VOC) concentrations and exposures is scarce, and there have been few, if any, studies using population-based samples from which representative estimates can be derived. This study characterizes distributions of personal exposures to ten different VOCs in the U.S. measured in the 1999-2000 National Health and Nutrition Examination Survey (NHANES). Personal VOC exposures were collected for 669 individuals over 2-3 days, and measurements were weighted to derive national-level statistics. Four common exposure sources were identified using factor analyses: gasoline vapor and vehicle exhaust, methyl tert-butyl ether (MTBE) as a gasoline additive, tap water disinfection products, and household cleaning products. Benzene, toluene, ethyl benzene, xylenes, chloroform, and tetrachloroethene were fit to log-normal distributions with reasonably good agreement to observations. 1,4-Dichlorobenzene and trichloroethene were fit to Pareto distributions, and MTBE to a Weibull distribution, but agreement was poor. However, distributions that attempt to match all of the VOC exposure data can lead to incorrect conclusions regarding the level and frequency of the higher exposures. Maximum Gumbel distributions gave generally good fits to extrema; however, they could not fully represent the highest exposures of the NHANES measurements. The analysis suggests that complete models of the distribution of VOC exposures require an approach that combines standard and extreme value distributions, and that carefully identifies outliers. This is the first study to provide national-level and representative statistics regarding VOC exposures, and its results have important implications for risk assessment and probabilistic analyses.

  13. On the capacity of FSO links under lognormal and Rician-lognormal turbulences

    KAUST Repository

    Ansari, Imran Shafique

    2014-09-01

    A unified capacity analysis under weak and composite turbulences of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, a unified exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, unified approximate and simple closed-form results are offered for the ergodic capacity at the high SNR regime as well as at the low SNR regime. All the presented results are verified via computer-based Monte-Carlo simulations.

  14. [Concentration distribution of bioaerosol in summer and autumn in the Qingdao coastal region].

    Science.gov (United States)

    Xu, Wen-Bing; Qi, Jian-Hua; Jin, Chuan; Gao, Dong-Mei; Li, Meng-Fei; Li, Lin; Huang, Shuai; Zhang, Hai-Dong

    2011-01-01

    Bioaerosol samples were collected using an Andersen sampler from Jul. 2009 to Nov. 2009 in the Qingdao coastal region. Total microbes (including culturable and non-culturable microbes) and the terrigenous and marine microbes were analyzed by fluorescence microscope counting and by Petri dishes containing agar media. The results showed that the proportion of non-culturable microbes in the total was as high as 99.58% on average, while the average proportion of culturable microbes was 0.42%. The average proportions of marine bacteria/fungi in the culturable microbes (18.99% and 45.47%, respectively) were higher than those of terrigenous bacteria/fungi (16.91% and 18.63%, respectively); marine bacteria/fungi therefore contributed more to the culturable microbes than terrigenous bacteria/fungi. It can be seen that the composition and concentration distribution are greatly affected by the ocean in the Qingdao coastal region. The average concentration of total microbes was higher in autumn (181,682.5 CFU/m³) than in summer (159,704.2 CFU/m³), and the concentrations of terrigenous bacteria and marine bacteria/fungi were also higher in autumn than in summer. The particle sizes of total microbes presented a log-normal distribution in summer and autumn, and the total microbes mainly existed in coarse particles larger than 2.1 μm. The highest proportion of total microbes was in 3.3-4.7 μm particles, and the lowest in 0.65-1.1 μm particles. The terrigenous and marine bacterial particle sizes showed a skewed distribution, with higher values in large particles (>7 μm) and lower ones in fine particles (0.65-1.1 μm). The terrigenous and marine fungal particle sizes showed a log-normal distribution in summer and autumn, with the highest concentration proportion in particles with diameters of 2.1-3.3 μm.

  15. PARAMETER ESTIMATION OF EXPONENTIAL DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    XU Haiyan; FEI Heliang

    2005-01-01

    Because of the importance of grouped data, many scholars have devoted themselves to the study of this kind of data, but few papers have been concerned with the threshold parameter. In this paper, we assume that the threshold parameter is smaller than the first observing point. Then, on the basis of the two-parameter exponential distribution, the maximum likelihood estimates of both parameters are given, the sufficient and necessary conditions for their existence and uniqueness are argued, and the asymptotic properties of the estimates are also presented, according to which approximate confidence intervals of the parameters are derived. At the same time, the estimation of the parameters is generalized, and some methods are introduced to obtain explicit expressions of these generalized estimates. Also, a special case where the first failure time of the units is observed is considered.
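
    For orientation, the classical maximum likelihood estimates for the two-parameter exponential distribution with complete, ungrouped data are simple closed forms (the paper itself treats grouped data, which requires a different likelihood). For the pdf f(x) = (1/θ) exp(-(x - μ)/θ) with x ≥ μ, the MLE of μ is the sample minimum and the MLE of θ is the mean excess over it:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    x = 2.0 + rng.exponential(scale=1.5, size=1_000)   # true mu = 2.0, theta = 1.5

    mu_hat = x.min()                # MLE of the threshold: the sample minimum
    theta_hat = x.mean() - mu_hat   # MLE of the scale, given mu_hat
    print(f"mu_hat = {mu_hat:.3f}, theta_hat = {theta_hat:.3f}")
    ```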

  16. Distributed computing

    CERN Document Server

    Van Renesse, R

    1991-01-01

    This series will start with an introduction to distributed computing systems. Distributed computing paradigms will be presented followed by a discussion on how several important contemporary distributed operating systems use these paradigms. Topics will include processing paradigms, storage paradigms, scalability and robustness. Throughout the course everything will be illustrated by modern distributed systems notably the Amoeba distributed operating system of the Free University in Amsterdam and the Plan 9 operating system of AT&T Bell Laboratories. Plan 9 is partly designed and implemented by Ken Thompson, the main person behind the successful UNIX operating system.

  17. Thermal and chaotic distributions of plasma in laser driven Coulomb explosions of deuterium clusters

    CERN Document Server

    Barbarino, M; Bonasera, A; Lattuada, D; Bang, W; Quevedo, H J; Consoli, F; De Angelis, R; Andreoli, P; Kimura, S; Dyer, G; Bernstein, A C; Hagel, K; Barbui, M; Schmidt, K; Gaul, E; Donovan, M E; Natowitz, J B; Ditmire, T

    2015-01-01

    In this work we explore the possibility that the motion of the deuterium ions emitted from Coulomb cluster explosions is chaotic enough to resemble thermalization. We analyze the process of nuclear fusion reactions driven by laser-cluster interactions in experiments conducted at the Texas Petawatt laser facility using a mixture of D2+3He and CD4+3He cluster targets. When clusters explode by Coulomb repulsion, the emission of the energetic ions is nearly isotropic. In the framework of cluster Coulomb explosions, we analyze the energy distributions of the ions using a Maxwell-Boltzmann (MB) distribution, a shifted MB distribution (sMB) and the energy distribution derived from a log-normal (LN) size distribution of clusters. We show that the first two distributions reproduce well the experimentally measured ion energy distributions and the number of fusions from d-d and d-3He reactions. The LN distribution is a good representation of the ion kinetic energy distribution well up to high momenta where the noise be...

  18. Environmental Transmission Electron Microscopy Study of the Origins of Anomalous Particle Size Distributions in Supported Metal Catalysts

    DEFF Research Database (Denmark)

    Benavidez, Angelica D.; Kovarik, Libor; Genc, Arda

    2012-01-01

    Individual NPs of Pd were tracked at temperatures up to 600 °C to determine the operative sintering mechanisms. We found that anomalous growth of NPs occurred during the early stages of catalyst sintering, wherein some particles started to grow significantly larger than the mean, resulting in a broadening of the particle size distribution (PSD). The abundance of the larger particles did not fit the log-normal distribution. We can rule out sample nonuniformity as a cause for the growth of these large particles, since images were recorded prior to heat treatments. It has been suggested previously that particle migration and coalescence could be the likely cause for such broad size distributions, but we did not detect any such migration. The anomalous growth of these particles may help explain PSDs in heterogeneous catalysts, which often show particles that are significantly larger than the mean, resulting in a long tail to the right.

  19. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Full Text Available Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested, and the Weibull distribution is found to be the best-fit distribution. To verify this result, the behaviors of numerical slider-block and sand-pile models are investigated and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.

  20. Cell-size distribution in epithelial tissue formation and homeostasis.

    Science.gov (United States)

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size.

  1. Probability Distribution Functions OF 12CO(J = 1-0) Brightness and Integrated Intensity in M51: The PAWS View

    CERN Document Server

    Hughes, Annie; Schinnerer, Eva; Colombo, Dario; Pety, Jerome; Leroy, Adam K; Dobbs, Clare L; Garcia-Burillo, Santiago; Thompson, Todd A; Dumas, Gaelle; Schuster, Karl F; Kramer, Carsten

    2013-01-01

    We analyse the distribution of CO brightness temperature and integrated intensity in M51 at ~40 pc resolution using new CO data from the Plateau de Bure Arcsecond Whirlpool Survey (PAWS). We present probability distribution functions (PDFs) of the CO emission within the PAWS field, which covers the inner 11 x 7 kpc of M51. We find variations in the shape of CO PDFs within different M51 environments, and between M51 and M33 and the Large Magellanic Cloud (LMC). Globally, the PDFs for the inner disk of M51 can be represented by narrow lognormal functions that cover 1 to 2 orders of magnitude in CO brightness and integrated intensity. The PDFs for M33 and the LMC are narrower and peak at lower CO intensities. However, the CO PDFs for different dynamical environments within the PAWS field depart from the shape of the global distribution. The PDFs for the interarm region are approximately lognormal, but in the spiral arms and central region of M51, they exhibit diverse shapes with a significant excess of bright CO...

  2. Uncertainty in volcanic ash particle size distribution and implications for infrared remote sensing and airspace management

    Science.gov (United States)

    Western, L.; Watson, M.; Francis, P. N.

    2014-12-01

    Volcanic ash particle size distributions are critical in determining the fate of airborne ash in drifting clouds. A significant amount of global airspace is managed using dispersion models that rely on a single ash particle size distribution, derived from a single source (Hobbs et al., 1991). This is clearly wholly inadequate given the range of magmatic compositions and eruptive styles that volcanoes present. Available measurements of airborne ash lognormal particle size distributions show geometric standard deviation values that range from 1.0 to 2.5, with others showing mainly polymodal distributions. This paucity of data on airborne sampling of volcanic ash results in large uncertainties both when using an assumed distribution to retrieve mass loadings from satellite observations and when prescribing particle size distributions of ash in dispersion models. Uncertainty in the particle size distribution can yield order-of-magnitude differences in mass loading retrievals of an ash cloud from satellite observations, a result that can easily reclassify zones of airspace closure. The uncertainty arises from the assumptions made when defining both the geometric particle size and the particle single-scattering properties in terms of an effective radius. This has significant implications for airspace management and emphasises the need for improved quantification of airborne volcanic ash particle size distributions.
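
    To see how strongly the assumed width matters: for a lognormal number distribution with median radius r_g and geometric standard deviation s_g, the moment identity E[r^k] = r_g^k exp(k² ln²(s_g)/2) gives an effective radius r_eff = E[r³]/E[r²] = r_g exp(2.5 ln² s_g). The sketch below (with an illustrative r_g) shows that moving s_g across the reported 1.0-2.5 range shifts r_eff by nearly an order of magnitude, consistent with the retrieval sensitivity described above.

    ```python
    import numpy as np

    r_g = 2.0                          # median radius in microns (illustrative)
    for s_g in (1.0, 1.5, 2.0, 2.5):   # geometric SDs spanning the reported range
        r_eff = r_g * np.exp(2.5 * np.log(s_g) ** 2)
        print(f"s_g = {s_g:.1f} -> r_eff = {r_eff:.2f} um")
    ```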

  3. The Intrinsic Eddington Ratio Distribution of Active Galactic Nuclei in Young Galaxies from SDSS

    Science.gov (United States)

    Jones, Mackenzie L.; Hickox, Ryan C.; Black, Christine; Hainline, Kevin Nicholas; DiPompeo, Michael A.

    2016-04-01

    An important question in extragalactic astronomy concerns the distribution of black hole accretion rates, i.e. the Eddington ratio distribution, of active galactic nuclei (AGN). Specifically, it is a matter of debate whether AGN follow a broad distribution in accretion rates, or whether the distribution is more strongly peaked at characteristic Eddington ratios. Using a sample of galaxies from SDSS DR7, we test whether an intrinsic Eddington ratio distribution that takes the form of a broad Schechter function is in fact consistent with previous work that suggests instead that young galaxies in optical surveys have a more strongly peaked lognormal Eddington ratio distribution. Furthermore, we present an improved method for extracting the AGN distribution using BPT diagnostics that allows us to probe over one order of magnitude lower in Eddington ratio, counteracting the effects of dilution by star formation. We conclude that the intrinsic Eddington ratio distribution of optically selected AGN is consistent with a power law with an exponential cutoff, as is observed in the X-rays. This work was supported in part by a NASA Jenkins Fellowship.

  4. Statistical distributions

    CERN Document Server

    Forbes, Catherine; Hastings, Nicholas; Peacock, Brian J.

    2010-01-01

    A new edition of the trusted guide on commonly used statistical distributions Fully updated to reflect the latest developments on the topic, Statistical Distributions, Fourth Edition continues to serve as an authoritative guide on the application of statistical methods to research across various disciplines. The book provides a concise presentation of popular statistical distributions along with the necessary knowledge for their successful use in data modeling and analysis. Following a basic introduction, forty popular distributions are outlined in individual chapters that are complete with re

  5. Airborne methane remote measurements reveal heavy-tail flux distribution in Four Corners region

    Science.gov (United States)

    Thorpe, Andrew K.; Thompson, David R.; Hulley, Glynn; Kort, Eric Adam; Vance, Nick; Borchardt, Jakob; Krings, Thomas; Gerilowski, Konstantin; Sweeney, Colm; Conley, Stephen; Bue, Brian D.; Aubrey, Andrew D.; Hook, Simon; Green, Robert O.

    2016-01-01

    Methane (CH4) impacts climate as the second strongest anthropogenic greenhouse gas and air quality by influencing tropospheric ozone levels. Space-based observations have identified the Four Corners region in the Southwest United States as an area of large CH4 enhancements. We conducted an airborne campaign in Four Corners during April 2015 with the next-generation Airborne Visible/Infrared Imaging Spectrometer (near-infrared) and Hyperspectral Thermal Emission Spectrometer (thermal infrared) imaging spectrometers to better understand the source of methane by measuring methane plumes at 1- to 3-m spatial resolution. Our analysis detected more than 250 individual methane plumes from fossil fuel harvesting, processing, and distribution infrastructure, spanning an emission range from the detection limit (∼2-5 kg/h) up to ∼5,000 kg/h. Observed sources include gas processing facilities, storage tanks, pipeline leaks, and well pads, as well as a coal mine venting shaft. Overall, plume enhancements and inferred fluxes follow a lognormal distribution, with the top 10% emitters contributing 49 to 66% to the inferred total point source flux of 0.23 Tg/y to 0.39 Tg/y. With the observed confirmation of a lognormal emission distribution, this airborne observing strategy and its ability to locate previously unknown point sources in real time provides an efficient and effective method to identify and mitigate major emissions contributors over a wide geographic area. With improved instrumentation, this capability scales to spaceborne applications [Thompson DR, et al. (2016) Geophys Res Lett 43(12):6571–6578]. Further illustration of this potential is demonstrated with two detected, confirmed, and repaired pipeline leaks during the campaign. PMID:27528660
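
    For a lognormal flux distribution, the share of the total emitted by the top decile depends only on the shape parameter σ, via the standard partial-expectation identity for the lognormal. A small sketch (σ values illustrative) showing that the reported 49-66% range corresponds to σ roughly between 1.25 and 1.7:

    ```python
    from scipy.stats import norm

    def top_decile_share(sigma, p=0.10):
        """Fraction of total flux from the top 100p% of lognormal emitters.
        Uses E[X; X > q] = exp(mu + sigma^2/2) * Phi(sigma - z); the scale
        parameter mu cancels, so the share depends on sigma alone."""
        z = norm.ppf(1.0 - p)          # standard-normal quantile of the cut
        return norm.cdf(sigma - z)

    for sigma in (1.0, 1.25, 1.7):
        print(f"sigma = {sigma}: top 10% emit {100 * top_decile_share(sigma):.0f}% of total")
    ```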

  6. Geospatial Mapping of Soil Nitrate-Nitrogen Distribution Under a Mixed-Land Use System

    Institute of Scientific and Technical Information of China (English)

    S.LAMSAL; C.M.BLISS; D.A.GRAETZ

    2009-01-01

    Mapping the spatial distribution of soil nitrate-nitrogen (NO3-N) is important to guide nitrogen application as well as to assess the environmental risk of NO3-N leaching into the groundwater. We employed univariate and hybrid geostatistical methods to map the spatial distribution of soil NO3-N across a landscape in northeast Florida. Soil samples were collected from four depth increments (0-30, 30-60, 60-120 and 120-180 cm) at 147 sampling locations identified using a stratified random and nested sampling design based on soil, land use and elevation strata. Soil NO3-N distributions in the top two layers were spatially autocorrelated and mapped using lognormal kriging. Environmental correlation models for NO3-N prediction were derived using linear and non-linear regression methods, and employed to develop NO3-N trend maps. Land use and its related variables derived from satellite imagery were identified as important variables to predict NO3-N using environmental correlation models. While lognormal kriging produced smoothly varying maps, trend maps derived from environmental correlation models generated spatially heterogeneous maps. Trend maps were combined with ordinary kriging predictions of trend model residuals to develop regression kriging prediction maps, which gave the best NO3-N predictions. As land use and remotely sensed data are readily available and have much finer spatial resolution compared to field-sampled soils, our findings suggest the efficacy of environmental correlation models based on land use and remotely sensed data for landscape-scale mapping of soil NO3-N. The methodologies implemented are transferable for mapping of soil NO3-N in other landscapes.

  7. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K{sub d}) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K{sub d} values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K{sub d} ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K{sub d} best estimates were defined as the calculated GM values, assuming that K{sub d} values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K{sub d} parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best explain the distribution among the K{sub d} values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not been proposed for radionuclide K{sub d} in soils yet. Therefore, the aim of this work is to create CDFs for
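
    Under the lognormal assumption, the CDF for a K{sub d} group is fully determined by the geometric mean (GM) and geometric standard deviation (GSD), so best estimates and confidence ranges follow directly. A minimal sketch with invented numbers (not values from the TRS revision):

    ```python
    import numpy as np
    from scipy.stats import lognorm

    GM, GSD = 1200.0, 3.5                  # hypothetical Cs Kd group, L/kg
    kd = lognorm(s=np.log(GSD), scale=GM)  # lognormal parameterized by GM and GSD

    print("best estimate (median = GM):", kd.median())
    print("95% confidence range:", kd.ppf([0.025, 0.975]))
    print("P(Kd < 100 L/kg):", kd.cdf(100.0))  # any exceedance probability from the CDF
    ```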

  8. Distribution center

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Distribution center is a logistics link that fulfills physical distribution as its main function. Generally speaking, it's a large and highly automated center destined to receive goods from various plants and suppliers, take orders, fill them efficiently, and deliver goods to customers as quickly as possible.

  9. 长白山不同演替阶段针阔混交林群落物种多度分布格局%Species-abundance distribution patterns at different successional stages of conifer and broad-leaved mixed forest communities in Changbai Mountains, China

    Institute of Scientific and Technical Information of China (English)

    闫琰; 张春雨; 赵秀海

    2012-01-01

    Aims Our objective was to explain processes that dominate species-abundance distribution patterns and the mechanism of community assembly in temperate forests. Methods We used three 5.2-hm2 permanent plots established in secondary Populus davidiana-Betula platyphylla forest, secondary conifer and broad-leaved mixed forest and Tilia amurensis-Pinus koraiensis mixed forest in Changbai Mountains. Within each plot, we randomly selected 500 subplots within 260 m × 200 m at the scales of 10 m × 10 m, 30 m × 30 m, 60 m × 60 m and 90 m × 90 m. We calculated the mean value of species-abundance distributions taken from the 500 subplots as the observed species-abundance distribution. We estimated the fitted species-abundance distributions by neutral, log-normal, Zipf, broken stick and niche preemption models at different scales. Goodness of fit was tested by the Chi-square test, the Kolmogorov-Smirnov (K-S) test and the Akaike Information Criterion (AIC). For the neutral model, we first estimated the two parameters θ and m and then simulated 600 species-abundance distributions. The average of these 600 species-abundance distributions was the best-fit result of the neutral model. We employed the 95% confidence envelopes, approximated by the 2.5 and 97.5 percentiles of the abundances of species of rank i = 1 to S over the 600 simulations, to test goodness of fit for the neutral model. All of the computations were conducted in R 2.14.1 with the untb and vegan packages. Important findings The neutral model fit species-abundance distributions at different successional stages of conifer and broad-leaved mixed temperate forest communities. All five models fit the observed values at the 10 m × 10 m sampling scale, and the goodness of fit of the log-normal, Zipf, broken stick and niche preemption models was better than that of the neutral model. This means that at small sampling scales the species-abundance distribution is dominated by both neutral and niche processes; however, the niche process is

  10. Modelling the Pan-Spectral Energy Distributions of Starburst & Active Galaxies

    CERN Document Server

    Dopita, M A

    2004-01-01

    We present results of a self-consistent model of the spectral energy distribution (SED) of starburst galaxies. Two parameters control the IR SED, the mean pressure in the ISM and the destruction timescale of molecular clouds. Adding a simplified AGN spectrum provides mixing lines on IRAS color-color diagrams. This reproduces the observed colors of both AGNs and starbursts.

  11. Best fitting distributions for the standard duration annual maximum precipitations in the Aegean Region

    Directory of Open Access Journals (Sweden)

    Halil Karahan

    2013-03-01

    Knowing the properties, such as amount, duration, intensity, and spatial and temporal variation, of precipitation, which is the primary input to water resources, is required for planning, design, construction and operation studies in various sectors such as water resources, agriculture, urbanization, drainage, flood control and transportation. For executing the mentioned practices, reliable and realistic estimations based on existing observations should be made. The first step of making a reliable estimation is to test the reliability of the existing observations. In this study, Kolmogorov-Smirnov, Anderson-Darling and Chi-square goodness-of-fit tests were applied to determine which distribution the measured standard duration maximum precipitation values (for the years 1929-2005) fit at the meteorological stations operated by the Turkish State Meteorological Service (DMİ) located in the city and town centers of the Aegean Region. While all the observations fit the GEV distribution according to the Anderson-Darling test, short, mid-term and long duration precipitation observations generally fit the GEV, Gamma and Log-normal distributions according to the Kolmogorov-Smirnov and Chi-square tests. To determine the parameters of the chosen probability distributions, maximum likelihood (LN2, LN3, EXP2, Gamma3), probability-weighted moments (LP3, Gamma2), L-moments (GEV) and least squares (Weibull2) methods were used for the different distributions.
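
    The screening step can be reproduced with standard tools: fit each candidate distribution by maximum likelihood and compare goodness of fit. A sketch with synthetic annual maxima (not the DMİ records); note that applying the Kolmogorov-Smirnov test with parameters estimated from the same sample makes the p-values optimistic:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic 77-year series of annual maximum precipitation, mm
    annual_max = stats.genextreme(c=-0.1, loc=40, scale=12).rvs(77, random_state=rng)

    for dist in (stats.genextreme, stats.gamma, stats.lognorm):
        params = dist.fit(annual_max)                        # maximum likelihood
        D, p = stats.kstest(annual_max, dist.name, args=params)
        print(f"{dist.name:12s}  KS D = {D:.3f}  p = {p:.2f}")
    ```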

  12. A theoretical explanation of grain size distributions in explosive rock fragmentation

    Science.gov (United States)

    Fowler, A. C.; Scheu, Bettina

    2016-06-01

    We have measured grain size distributions of the results of laboratory decompression explosions of volcanic rock. The resulting distributions can be approximately represented by gamma distributions of weight per cent as a function of ϕ = -log₂ d, where d is the grain size in millimetres measured by sieving, with a superimposed long tail associated with the production of fines. We provide a description of the observations based on sequential fragmentation theory, which we develop for the particular case of `self-similar' fragmentation kernels, and we show that the corresponding evolution equation for the distribution can be explicitly solved, yielding the long-time lognormal distribution associated with Kolmogorov's fragmentation theory. Particular features of the experimental data, notably time evolution, advection, truncation and fines production, are described and predicted within the constraints of a generalized, `reductive' fragmentation model, and it is shown that the gamma distribution of coarse particles is a natural consequence of an assumed uniform fragmentation kernel. We further show that an explicit model for fines production during fracturing can lead to a second gamma distribution, and that the sum of the two provides a good fit to the observed data.
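
    A quick illustration of the ϕ transform and a moment-matched gamma fit to binned sieve data (weights invented; the shift is an ad hoc assumption, needed because ϕ can be negative while the gamma has positive support):

    ```python
    import numpy as np

    d = np.array([8, 4, 2, 1, 0.5, 0.25, 0.125])     # sieve sizes, mm (hypothetical)
    wt = np.array([2, 8, 18, 27, 24, 14, 7], float)  # weight per cent retained

    phi = -np.log2(d)                                # phi = -log2(d)
    w = wt / wt.sum()
    mean = np.sum(w * phi)
    var = np.sum(w * (phi - mean) ** 2)

    loc = phi.min() - 0.5                            # assumed shift for positive support
    shape = (mean - loc) ** 2 / var                  # gamma parameters by moments
    scale = var / (mean - loc)
    print(f"gamma fit: shape = {shape:.2f}, scale = {scale:.2f}, shift = {loc:.2f}")
    ```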

  13. Particle Size Distributions Measured in the Stratospheric Plumes of Three Rockets During the ACCENT Missions

    Science.gov (United States)

    Wiedinmyer, C.; Brock, C. A.; Reeves, J. M.; Ross, M. N.; Schmid, O.; Toohey, D.; Wilson, J. C.

    2001-12-01

    The global impact of particles emitted by rocket engines on stratospheric ozone is not well understood, mainly due to the lack of comprehensive in situ measurements of the size distributions of these emitted particles. During the Atmospheric Chemistry of Combustion Emissions Near the Tropopause (ACCENT) missions in 1999, the NASA WB-57F aircraft carried the University of Denver N-MASS and FCAS instruments into the stratospheric plumes from three rockets. Size distributions of particles with diameters from 4 to approximately 2000 nm were calculated from the instrument measurements using numerical inversion techniques. The data have been averaged over 30-second intervals. The particle size distributions observed in all of the rocket plumes included a dominant mode near 60 nm diameter, probably composed of alumina particles. A smaller mode at approximately 25 nm, possibly composed of soot particles, was seen in only the plumes of rockets that used liquid oxygen and kerosene as a propellant. Aircraft exhaust emitted by the WB-57F was also sampled; the size distributions within these plumes are consistent with prior measurements in aircraft plumes. The size distributions for all rocket intercepts have been fitted to bimodal, lognormal distributions to provide input for global models of the stratosphere. Our data suggest that previous estimates of the solid rocket motor alumina size distributions may underestimate the alumina surface area emission index, and so underestimate the particle surface area available for heterogeneous chlorine activation reactions in the global stratosphere.
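
    Fitting a bimodal lognormal to a measured size spectrum is a routine least-squares problem; a sketch on synthetic data with modes placed near the 25 nm and 60 nm diameters mentioned above (all amplitudes and widths invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bimodal_lognormal(d, n1, m1, s1, n2, m2, s2):
        """Sum of two lognormal modes in dN/dlnD; d in nm, m_i = ln(median diameter)."""
        x = np.log(d)
        g = lambda n, m, s: n * np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
        return g(n1, m1, s1) + g(n2, m2, s2)

    d = np.logspace(np.log10(4), np.log10(2000), 60)       # 4 nm to 2000 nm
    true = (300, np.log(25), 0.30, 800, np.log(60), 0.45)  # hypothetical plume spectrum
    y = bimodal_lognormal(d, *true) * np.random.default_rng(1).normal(1, 0.05, d.size)

    p0 = (200, np.log(20), 0.4, 500, np.log(50), 0.5)      # rough initial guess
    popt, _ = curve_fit(bimodal_lognormal, d, y, p0=p0)
    print("recovered mode diameters:", np.exp(popt[1]), np.exp(popt[4]), "nm")
    ```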

  14. Statistical analysis of multilook polarimetric SAR data and terrain classification with adaptive distribution

    Science.gov (United States)

    Liu, Guoqing; Huang, ShunJi; Torre, Andrea; Rubertone, Franco S.

    1995-11-01

    This paper deals with analysis of statistical properties of multi-look processed polarimetric SAR data. Based on an assumption that the multi-look polarimetric measurement is a product between a Gamma-distributed texture variable and a Wishart-distributed polarimetric speckle variable, it is shown that the multi-look polarimetric measurement from a nonhomogeneous region obeys a generalized K-distribution. In order to validate this statistical model, two of its varied versions, multi-look intensity and amplitude K-distributions are particularly compared with histograms of the observed multi-look SAR data of three terrain types, ocean, forest-like and city regions, and with four empirical distribution models, Gaussian, log-normal, gamma and Weibull models. A qualitative relation between the degree of nonhomogeneity of a textured scene and the well-fitting statistical model is then empirically established. Finally, a classifier with adaptive distributions guided by the order parameter of the texture distribution estimated with local statistics is introduced to perform terrain classification, experimental results with both multi-look fully polarimetric data and multi-look single-channel intensity/amplitude data indicate its effectiveness.

  15. What is Fair Pay for Executives? An Information Theoretic Analysis of Wage Distributions

    Directory of Open Access Journals (Sweden)

    Venkat Venkatasubramanian

    2009-11-01

    The high pay packages of U.S. CEOs have raised serious concerns about what would constitute fair pay. Since present economic models do not adequately address this fundamental question, we propose a new theory based on statistical mechanics and information theory. We use the principle of maximum entropy to show that the maximally fair pay distribution is lognormal under ideal conditions. This prediction is in agreement with observed data for the bottom 90%–95% of the working population. The theory estimates that the top 35 U.S. CEOs were overpaid by about 129 times their ideal salaries in 2008. We also provide an interpretation of entropy as a measure of fairness in an economic system, which is maximized at equilibrium.
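
    The maximum-entropy route to a lognormal can be sketched in a few lines. One standard way to encode the ideal conditions is to constrain the first two moments of ln x (the paper's exact constraint set may differ):

    ```latex
    \max_{p}\; -\int_{0}^{\infty} p(x)\,\ln p(x)\,dx
    \quad \text{s.t.} \quad
    \int_{0}^{\infty} p = 1,\qquad
    \mathbb{E}[\ln x] = m,\qquad
    \mathbb{E}[(\ln x)^{2}] = m^{2}+s^{2}.
    % Stationarity of the Lagrangian forces
    p(x) = \exp\!\left(-1-\lambda_{0}-\lambda_{1}\ln x-\lambda_{2}(\ln x)^{2}\right),
    % and matching the multipliers to the constraints gives the lognormal
    p(x) = \frac{1}{x\,s\sqrt{2\pi}}\,
           \exp\!\left(-\frac{(\ln x-m)^{2}}{2 s^{2}}\right).
    ```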

  16. MAIL DISTRIBUTION

    CERN Multimedia

    Mail Office

    2001-01-01

    PLEASE NOTE: changed schedule for mail distribution on 21 December 2001: afternoon delivery will be one hour earlier than usual; delivery to LHC sites will take place late morning. Thank you for your understanding.

  17. Stable distributions

    CERN Document Server

    Janson, Svante

    2011-01-01

    We give some explicit calculations for stable distributions and convergence to them, mainly based on less explicit results in Feller (1971). The main purpose is to provide ourselves with easy reference to explicit formulas. (There are no new results.)

  18. Spatial distribution

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger

    2008-01-01

    Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, they are strongly dependent on the biology and ecology of the species, and vary over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly...

  19. Distributed optimality

    OpenAIRE

    Trommer, Jochen

    2005-01-01

    In this dissertation I propose a synthesis (Distributed Optimality, DO) of Optimality Theory and a derivational, morphological approach, Distributed Morphology (DM; Halle & Marantz, 1993). Integrating OT into DM makes it possible to reduce phenomena that DM captures through language-specific rules or features of lexical entries to the interaction of violable, universal constraints. Conversely, DM also makes two substantial...

  20. Distributed Subtyping

    OpenAIRE

    Baehni, Sébastien; Barreto, Joao; Guerraoui, Rachid

    2006-01-01

    One of the most frequent operations in object-oriented programs is the "instanceof" test, also called the "subtyping" test or the "type inclusion" test. This test determines if a given object is an instance of some type. Surprisingly, despite a lot of research on distributed object-oriented languages and systems, almost no work has been devoted to the implementation of this test in a distributed environment. This paper presents the first algorithm to implement the "subtyping" test on an obje...

  1. A bivariate limiting distribution of tumor latency time.

    Science.gov (United States)

    Rachev, S T; Wu, C; Yakovlev, A Yu

    1995-06-01

    The model of radiation carcinogenesis, proposed earlier by Klebanov, Rachev, and Yakovlev [8] substantiates the employment of limiting forms of the latent time distribution at high dose values. Such distributions arise within the random minima framework, the two-parameter Weibull distribution being a special case. This model, in its present form, does not allow for carcinogenesis at multiple sites. As shown in the present paper, a natural two-dimensional generalization of the model appears in the form of a Weibull-Marshall-Olkin distribution. Similarly, the study of a randomized version of the model based on the negative binomial minima scheme results in a bivariate Pareto-Marshall-Olkin distribution. In the latter case, an estimate for the rate of convergence to the limiting distribution is given.

  2. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    Science.gov (United States)

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average mean 2D/3D ratio is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
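
    The π/4 ratio is easy to verify by simulation for a single sphere: planes that intersect a sphere of radius R have offsets uniform on [0, R] by symmetry, so the mean section radius is (1/R)∫√(R²-h²)dh = πR/4. A toy check (monodisperse case only; it ignores the size-biased sampling of polydisperse populations treated in the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    R = 1.0
    h = rng.uniform(0.0, R, 1_000_000)   # offsets of random planes hitting the sphere
    r2d = np.sqrt(R**2 - h**2)           # radii of the circular cross sections

    print("mean 2D / 3D radius:", r2d.mean() / R)  # ~0.785 = pi/4
    print("conversion factor:  ", R / r2d.mean())  # ~1.273
    ```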

  3. A global survey on the seasonal variation of the marginal distribution of daily precipitation

    Science.gov (United States)

    Papalexiou, Simon Michael; Koutsoyiannis, Demetris

    2016-08-01

    To characterize the seasonal variation of the marginal distribution of daily precipitation, it is important to find which statistical characteristics of daily precipitation actually vary the most from month to month and which could be regarded as invariant. Relevant to the latter issue is the question whether there is a single model capable of describing effectively the nonzero daily precipitation for every month worldwide. To study these questions we introduce and apply a novel test for seasonal variation (SV-Test) and explore the performance of two flexible distributions in a massive analysis of approximately 170,000 monthly daily precipitation records at more than 14,000 stations from all over the globe. The analysis indicates that: (a) the shape characteristics of the marginal distribution of daily precipitation, generally, vary over the months, (b) commonly used distributions such as the Exponential, Gamma, Weibull, Lognormal, and the Pareto are incapable of describing "universally" the daily precipitation, (c) exponential-tail distributions like the Exponential, mixed Exponentials or the Gamma can severely underestimate the magnitude of extreme events and thus may be a wrong choice, and (d) the Burr type XII and the Generalized Gamma distributions are two good models, with the latter performing exceptionally well.

  4. Photoelectron track length distributions measured in a negative ion time projection chamber

    CERN Document Server

    Prieskorn, Z R; Kaaret, P E; Black, J K

    2014-01-01

    We report photoelectron track length distributions between 3 and 8 keV in gas mixtures of Ne+CO2+CH3NO2 (260:80:10 Torr) and CO2+CH3NO2 (197.5:15 Torr). The measurements were made using a negative ion time projection chamber (NITPC) at the National Synchrotron Light Source (NSLS) at the Brookhaven National Laboratory (BNL). We report the first quantitative analysis of photoelectron track length distributions in a gas. The distribution of track lengths at a given energy is best fit by a lognormal distribution. A power law of the form f(E) = a(E/E₀)ⁿ is found to fit the relationship between mean track length and energy. We find n = 1.29 +/- 0.07 for Ne+CO2+CH3NO2 and n = 1.20 +/- 0.09 for CO2+CH3NO2. Understanding the distribution of photoelectron track lengths in proportional counter gases is important for optimizing the pixel size and the dimensions of the active region in electron-drift time projection chambers (TPCs) and NITPC X-ray polarimeters.
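
    The exponent n is obtained from a straight-line fit in log-log space; a sketch with invented mean track lengths (the units and the normalization E₀ = 3 keV are assumptions):

    ```python
    import numpy as np

    E = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])              # photon energy, keV
    L_mean = np.array([0.90, 1.31, 1.74, 2.21, 2.70, 3.21])   # hypothetical mean lengths, mm

    # f(E) = a * (E/E0)^n  =>  ln f = ln a + n * ln(E/E0)
    n, ln_a = np.polyfit(np.log(E / 3.0), np.log(L_mean), 1)
    print(f"n = {n:.2f}, a = {np.exp(ln_a):.2f} mm at E0 = 3 keV")
    ```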

  5. Associative memory model with long-tail-distributed Hebbian synaptic connections

    Directory of Open Access Journals (Sweden)

    Naoki eHiratani

    2013-02-01

    The postsynaptic potentials of pyramidal neurons have a non-Gaussian amplitude distribution with a heavy tail in both hippocampus and neocortex. Such distributions of synaptic weights were recently shown to generate spontaneous internal noise optimal for spike propagation in recurrent cortical circuits. However, whether this internal noise generation by heavy-tailed weight distributions is possible for and beneficial to other computational functions remains unknown. To clarify this point, we construct an associative memory network model of spiking neurons that stores multiple memory patterns in a connection matrix with a lognormal weight distribution. In associative memory networks, non-retrieved memory patterns generate a cross-talk noise that severely disturbs memory recall. We demonstrate that neurons encoding a retrieved memory pattern and those encoding non-retrieved memory patterns have different subthreshold membrane-potential distributions in our model. Consequently, the probability of responding to inputs at strong synapses increases for the encoding neurons, whereas it decreases for the non-encoding neurons. Our results imply that heavy-tailed distributions of connection weights can generate noise useful for associative memory recall.

  6. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    Science.gov (United States)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
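
    A stretched exponential is specified by its survival function P(X > x) = exp[-(x/x₀)^c], so sampling and parameter recovery are both short exercises; a self-contained sketch (x₀ and c invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x0, c = 2.0, 0.7                                  # scale and stretching exponent
    # Inverse-transform sampling from P(X > x) = exp(-(x/x0)**c):
    x = x0 * (-np.log(rng.uniform(size=100_000))) ** (1.0 / c)

    # ln(-ln S(x)) = c*ln x - c*ln x0, so a rank-ordering plot is a straight line:
    xs = np.sort(x)
    surv = 1.0 - np.arange(1, xs.size + 1) / (xs.size + 1.0)
    c_hat, b = np.polyfit(np.log(xs), np.log(-np.log(surv)), 1)
    print(f"recovered c = {c_hat:.2f}, x0 = {np.exp(-b / c_hat):.2f}")
    ```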

  7. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t₁,0), t₁ = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t₁,0), t₁ = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four trimmed smallest values from the conceptual sample (TL-moments [4,0]) of the LN3 distribution was the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.

  8. Fuel distribution

    Energy Technology Data Exchange (ETDEWEB)

    Tison, R.R.; Baker, N.R.; Blazek, C.F.

    1979-07-01

    Distribution of fuel is considered from a supply point to the secondary conversion sites and ultimate end users. All distribution is intracity with the maximum distance between the supply point and end-use site generally considered to be 15 mi. The fuels discussed are: coal or coal-like solids, methanol, No. 2 fuel oil, No. 6 fuel oil, high-Btu gas, medium-Btu gas, and low-Btu gas. Although the fuel state, i.e., gas, liquid, etc., can have a major impact on the distribution system, the source of these fuels (e.g., naturally-occurring or coal-derived) does not. Single-source, single-termination point and single-source, multi-termination point systems for liquid, gaseous, and solid fuel distribution are considered. Transport modes and the fuels associated with each mode are: by truck - coal, methanol, No. 2 fuel oil, and No. 6 fuel oil; and by pipeline - coal, methane, No. 2 fuel oil, No. 6 oil, high-Btu gas, medium-Btu gas, and low-Btu gas. Data provided for each distribution system include component makeup and initial costs.

  9. Damage Distributions

    DEFF Research Database (Denmark)

    Lützen, Marie

    2001-01-01

    The damage statistics collected in work package 1 have been analysed for relations between the damage location, the damage sizes and the main particulars of the struck vessel. From the numerical simulation and the analysis of the damage statistics it is found that the current formulation from the IMO SLF 43/3/2 can be used as basis for determination of the p-, r-, and v-factors. Expressions and methods of calculation have been discussed. The damage distributions for the different vessels have been compared and analyses regarding relations between damage parameters and main particulars have been performed. Expressions for the distribution of the non-dimensional damage location, the non-dimensional damage length and the non-dimensional penetrations have been derived. These distributions have been used as basis for a proposal for the p- and r-factors. Two proposals for the v-factor have been performed using the damage statistics.

  10. Distributed creativity

    DEFF Research Database (Denmark)

    Glaveanu, Vlad Petre

    This book challenges the standard view that creativity comes only from within an individual by arguing that creativity also exists 'outside' of the mind or, more precisely, that the human mind extends through the means of action into the world. The notion of 'distributed creativity' is not commonly used within the literature and yet it has the potential to revolutionise the way we think about creativity, from how we define and measure it to what we can practically do to foster and develop creativity. Drawing on cultural psychology, ecological psychology and advances in cognitive science, this book offers a basic framework for the study of distributed creativity that considers three main dimensions of creative work: sociality, materiality and temporality. Starting from the premise that creativity is distributed between people, between people and objects and across time, the book reviews...

  11. Aerosol Size Distribution in the marine regions

    Science.gov (United States)

    Markuszewski, Piotr; Petelski, Tomasz; Zielinski, Tymon; Pakszys, Paulina; Strzalkowska, Agata; Makuch, Przemyslaw; Kowalczyk, Jakub

    2014-05-01

    We present data obtained during the regular research cruises of the S/Y Oceania over the period 2009-2012. The Baltic Sea is a very interesting testing ground for aerosol measurements, though also a difficult one, since mostly mixtures of continental and marine aerosols are observed. It is possible to measure clear marine aerosol, but also advections of dust from southern Europe or even Africa. This variability of the data allows comparison of different conditions. The data are also compared with our measurements from the Arctic Seas, which were made during the ARctic EXperiment (AREX). The Arctic Seas are very suitable for marine aerosol investigations since continental advections of aerosols are far less frequent than in other European sea regions. The aerosol size distribution was measured using the TSI Laser Aerosol Spectrometer model 3340 (99 channels, measurement range 0.09 μm to 7 μm), a condensation particle counter (range 0.01 μm to 3 μm) and a laser particle counter PMS CSASP-100-HV-SP (range 0.5 μm to 47 μm in 45 channels). Studies of marine aerosol production and transport are important for many Earth sciences such as cloud physics, atmospheric optics, environmental pollution studies and the interaction between ocean and atmosphere. All equipment was placed on one of the masts of the S/Y Oceania. Measurements using the laser aerosol spectrometer and condensation particle counter were made at one level (8 meters above sea level). Measurements with the laser particle counter were performed at five different levels above the sea level (8, 11, 14, 17 and 20 m). Based on the measured aerosol size distributions, parameterizations with log-normal and power-law distributions were made. The aerosol source functions characteristic for the region were also determined. Additionally, the poor precision of sea spray emission determination when using only aerosol concentration data was confirmed. The emission of sea spray depends

  12. Distributed Logics

    Science.gov (United States)

    2014-10-03

    that must be woven into proofs of security statements. ... can be removed without damaging the logic. For all propositional letters p: E1. p ⊃ [r]p. From now on, a distributed logic contains at least the... a ∈ x iff 〈h〉a ∈ x. These same definitions work for the canonical relation R for r : h → k, where now a ∈ MA(k), [r]a, 〈r〉a ∈ MA(h), x ∈ CF(h), and

  13. On the Weibull distribution for wind energy assessment

    DEFF Research Database (Denmark)

    Batchvarova, Ekaterina; Gryning, Sven-Erik

    2014-01-01

    The two-parameter Weibull distribution is traditionally used to describe the long-term fluctuations in the wind speed as part of the theoretical framework for wind energy assessment of wind farms. The Weibull distribution is described by a shape and a scale parameter. Here, based on recent long-term measurements performed by a wind lidar, the vertical profile of the shape parameter is discussed for a sub-urban site, a coastal site and a marine site. The profile of the shape parameter was found to be substantially different over land and sea. A parameterization of the vertical behavior of the shape...
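
    A minimal sketch of the standard assessment step: fit the two-parameter Weibull to a speed record (synthetic here, not the lidar data) and derive the mean speed and mean cubed speed, which drives the energy estimate:

    ```python
    import numpy as np
    from math import gamma
    from scipy import stats

    rng = np.random.default_rng(7)
    wind = stats.weibull_min(c=2.1, scale=8.0).rvs(5000, random_state=rng)  # m/s, synthetic

    k, _, A = stats.weibull_min.fit(wind, floc=0)  # shape k and scale A, location fixed at 0
    print(f"k = {k:.2f}, A = {A:.2f} m/s")
    print("mean speed:", A * gamma(1 + 1 / k))       # E[U]   = A * Gamma(1 + 1/k)
    print("mean cube: ", A**3 * gamma(1 + 3 / k))    # E[U^3] = A^3 * Gamma(1 + 3/k)
    ```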

  14. Efficient Estimation of Autoregression Parameters and Innovation Distributions for Semiparametric Integer-Valued AR(p) Models (Subsequently replaced by DP 2008-53)

    NARCIS (Netherlands)

    Drost, F.C.; van den Akker, R.; Werker, B.J.M.

    2007-01-01

    Integer-valued autoregressive (INAR) processes have been introduced to model nonnegative integer-valued phenomena that evolve over time. The distribution of an INAR(p) process is essentially described by two parameters: a vector of autoregression coefficients and a probability distribution on the nonnegative integers.

  15. Efficient Estimation of Autoregression Parameters and Innovation Distributions forSemiparametric Integer-Valued AR(p) Models (Revision of DP 2007-23)

    NARCIS (Netherlands)

    Drost, F.C.; van den Akker, R.; Werker, B.J.M.

    2008-01-01

    Integer-valued autoregressive (INAR) processes have been introduced to model nonnegative integer-valued phenomena that evolve over time. The distribution of an INAR(p) process is essentially described by two parameters: a vector of autoregression coefficients and a probability distribution on the nonnegative integers.

  16. Statistical distribution of nonlinear random wave height in shallow water

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Here we present a statistical model of random waves, using the Stokes wave theory of water wave dynamics, as well as a new nonlinear probability distribution function of wave height in shallow water. It is more physically sound to use the wave steepness of shallow water and the shallow-water factor as the parameters in the wave height distribution. The results indicate that the two parameters not only serve as parameters of the wave height distribution function but also reflect the degree to which the wave height distribution deviates from the Rayleigh distribution. The new wave height distribution overcomes the problem of the Rayleigh distribution, which overestimates large waves and underestimates ordinary ones. The small-probability wave heights predicted by the new distribution are also smaller than those of the Rayleigh distribution. The effect of wave steepness in shallow water is similar to that in deep water, but the shallow-water factor lowers the wave height distribution of ordinary waves as the wave steepness factor is reduced. It also makes the wave height distribution in shallow water more concentrated. The results indicate that the new distribution fits in situ measurements much better than other distributions.

  17. Distributed Computing.

    Science.gov (United States)

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  18. Comparing nadir and limb observations of polar mesospheric clouds: The effect of the assumed particle size distribution

    Science.gov (United States)

    Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.

    2015-05-01

    Nadir viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE), also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor, with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh scattered sunlight below the clouds. This change has an effect on the CV PMC, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column integrated ice distribution, Log-normal and Exponential distributions better represent the range

  19. Effective Suppression of Pathological Synchronization in Cortical Networks by Highly Heterogeneous Distribution of Inhibitory Connections

    Science.gov (United States)

    Kada, Hisashi; Teramae, Jun-Nosuke; Tokuda, Isao T.

    2016-01-01

    Even without external random input, cortical networks in vivo sustain asynchronous irregular firing with low firing rate. In addition to detailed balance between excitatory and inhibitory activities, recent theoretical studies have revealed that another feature commonly observed in cortical networks, i.e., long-tailed distribution of excitatory synapses implying coexistence of many weak and a few extremely strong excitatory synapses, plays an essential role in realizing the self-sustained activity in recurrent networks of biologically plausible spiking neurons. The previous studies, however, have not considered highly non-random features of the synaptic connectivity, namely, bidirectional connections between cortical neurons are more common than expected by chance and strengths of synapses are positively correlated between pre- and postsynaptic neurons. The positive correlation of synaptic connections may destabilize asynchronous activity of networks with the long-tailed synaptic distribution and induce pathological synchronized firing among neurons. It remains unclear how the cortical network avoids such pathological synchronization. Here, we demonstrate that introduction of the correlated connections indeed gives rise to synchronized firings in a cortical network model with the long-tailed distribution. By using a simplified feed-forward network model of spiking neurons, we clarify the underlying mechanism of the synchronization. We then show that the synchronization can be efficiently suppressed by highly heterogeneous distribution, typically a lognormal distribution, of inhibitory-to-excitatory connection strengths in a recurrent network model of cortical neurons. PMID:27803659

  20. Head/tail Breaks: A New Classification Scheme for Data with a Heavy-tailed Distribution

    CERN Document Server

    Jiang, Bin

    2012-01-01

    This paper introduces a new classification scheme - head/tail breaks - in order to find groupings or hierarchy for data with a heavy-tailed distribution. The heavy-tailed distributions are heavily right skewed, with a minority of large values in the head and a majority of small values in the tail, commonly characterized by a power law, a lognormal or an exponential function. For example, a country's population is often distributed in such a heavy-tailed manner, with a minority of people (e.g., 20 percent) in the countryside and the vast majority (e.g., 80 percent) in urban areas. This heavy-tailed distribution is also called scaling, hierarchy or scaling hierarchy. This new classification scheme partitions all of the data values around the mean into two parts and continues the process iteratively for the values (above the mean) in the head until the head part values are no longer heavy-tailed distributed. Thus, the number of classes and the class intervals are both naturally determined. We therefore claim tha...
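
    The scheme is a few lines of code: split around the arithmetic mean, keep the head, and repeat while the head remains a small minority. A sketch following the description above (the ~40% head-share stopping threshold is one common choice, an assumption here):

    ```python
    def head_tail_breaks(values, max_head_share=0.40):
        """Return class break points for heavy-tailed data."""
        breaks, head = [], list(values)
        while len(head) > 1:
            mean = sum(head) / len(head)
            breaks.append(mean)
            new_head = [v for v in head if v > mean]        # the minority above the mean
            if not new_head or len(new_head) / len(head) >= max_head_share:
                break                                       # head no longer heavy-tailed
            head = new_head
        return breaks

    # e.g. a scaling hierarchy: many small values, a few large ones
    print(head_tail_breaks([1] * 80 + [10] * 15 + [100] * 5))  # -> [7.3, 32.5, 100.0]
    ```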

  1. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.

  2. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    Science.gov (United States)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
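
    The computational convenience claimed for the generalized (exponentiated) exponential is real: with F(t) = (1 - e^{-λ(t-γ)})^α, the CDF, hazard, and the conditional probability used in recurrence studies are all closed-form. A sketch (parameter values invented, not the fitted Himalayan ones):

    ```python
    import numpy as np

    def ge_cdf(t, alpha, lam, gamma_loc):
        """F(t) = (1 - exp(-lam*(t - gamma_loc)))**alpha for t > gamma_loc."""
        z = np.clip(1.0 - np.exp(-lam * (t - gamma_loc)), 0.0, 1.0)
        return z ** alpha

    def ge_hazard(t, alpha, lam, gamma_loc):
        """Hazard f/(1-F); no integer restriction on the shape alpha."""
        z = np.clip(1.0 - np.exp(-lam * (t - gamma_loc)), 1e-12, 1.0 - 1e-12)
        pdf = alpha * lam * np.exp(-lam * (t - gamma_loc)) * z ** (alpha - 1.0)
        return pdf / (1.0 - z ** alpha)

    def conditional_prob(dt, elapsed, *params):
        """P(event within dt years | elapsed years since the last one)."""
        F = ge_cdf
        return (F(elapsed + dt, *params) - F(elapsed, *params)) / (1.0 - F(elapsed, *params))

    # hypothetical alpha, lambda, location; elapsed time 17 years, horizon 10 years
    print(conditional_prob(10.0, 17.0, 1.8, 0.12, 0.0))
    ```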

  3. MAIL DISTRIBUTION

    CERN Multimedia

    J. Ferguson

    2002-01-01

    Following discussions with the mail contractor and Mail Service personnel, an agreement has been reached which permits deliveries to each distribution point to be maintained, while still achieving a large proportion of the planned budget reduction in 2002. As a result, the service will revert to its previous level throughout the Laboratory as rapidly as possible. Outgoing mail will be collected from a single collection point at the end of each corridor. Further discussions are currently in progress between ST, SPL and AS divisions on the possibility of an integrated distribution service for internal mail, stores items and small parcels, which could lead to additional savings from 2003 onwards, without affecting service levels. J. Ferguson AS Division

  4. Quasihomogeneous distributions

    CERN Document Server

    von Grudzinski, O

    1991-01-01

    This is a systematic exposition of the basics of the theory of quasihomogeneous (in particular, homogeneous) functions and distributions (generalized functions). A major theme is the method of taking quasihomogeneous averages. It serves as the central tool for the study of the solvability of quasihomogeneous multiplication equations and of quasihomogeneous partial differential equations with constant coefficients. Necessary and sufficient conditions for solvability are given. Several examples are treated in detail, among them the heat and the Schrödinger equation. The final chapter is devoted to quasihomogeneous wave front sets and their application to the description of singularities of quasihomogeneous distributions, in particular to quasihomogeneous fundamental solutions of the heat and of the Schrödinger equation.

  5. Distributed Games

    OpenAIRE

    Dov Monderer; Moshe Tennenholtz

    1997-01-01

    The Internet exhibits forms of interactions which are not captured by existing models in economics, artificial intelligence and game theory. New models are needed to deal with these multi-agent interactions. In this paper we present a new model--distributed games. In such a model each players controls a number of agents which participate in asynchronous parallel multi-agent interactions (games). The agents jointly and strategically control the level of information monitoring by broadcasting m...

  6. Distributed scheduling

    OpenAIRE

    Toptal, Ayşegül

    1999-01-01

    Ankara : Department of Industrial Engineering and the Institute of Engineering and Science of Bilkent Univ., 1999. Thesis (Master's) -- Bilkent University, 1999. Includes bibliographical references. Distributed Scheduling (DS) is a new paradigm that enables the local decisionmakers make their own schedules by considering local objectives and constraints within the boundaries and the overall objective of the whole system. Local schedules from different parts of the system are...

  7. Two-parameter Rankine Heat Pumps’ COP Equations

    Directory of Open Access Journals (Sweden)

    Samuel Sunday Adefila

    2012-05-01

    Equations for the ideal vapour-compression heat pump coefficient of performance (COPR) which contain two fit parameters are reported in this work. These equations contain either a temperature term alone or temperature and pressure terms as the only thermodynamic variable(s). The best equation gave errors ≤ 5% over a wide range of temperature lift and for different working fluid types, including fluorocarbons, hydrocarbons and inorganic fluids. In these respects the equation performs better than the one-parameter models reported earlier.

  8. Parallel axes gear set optimization in two-parameter space

    Science.gov (United States)

    Theberge, Y.; Cardou, A.; Cloutier, L.

    1991-05-01

    This paper presents a method for optimal spur and helical gear transmission design that may be used in a computer aided design (CAD) approach. The design objective is generally taken as obtaining the most compact set for a given power input and gear ratio. A mixed design procedure is employed which relies both on heuristic considerations and computer capabilities. Strength and kinematic constraints are considered in order to define the domain of feasible designs. Constraints allowed include: pinion tooth bending strength, gear tooth bending strength, surface stress (resistance to pitting), scoring resistance, pinion involute interference, gear involute interference, minimum pinion tooth thickness, minimum gear tooth thickness, and profile or transverse contact ratio. A computer program was developed which allows the user to input the problem parameters, to select the calculation procedure, to see constraint curves in graphic display, to have an objective function level curve drawn through the design space, to point at a feasible design point and to have constraint values calculated at that point. The user can also modify some of the parameters during the design process.

  9. Flux Vacua Statistics for Two-Parameter Calabi-Yau's

    CERN Document Server

    Misra, A

    2004-01-01

    We study the number of flux vacua for type IIB string theory on an orientifold of the Calabi-Yau expressed as a hypersurface in WCP^4[1,1,2,2,6] by evaluating a suitable integral over the complex-structure moduli space as per the conjecture of Douglas and Ashok. We show that away from the singular conifold locus, one gets the expected power law, and that the (neighborhood) of the conifold locus indeed acts as an attractor in the (complex structure) moduli space. We also study (non)supersymmetric solutions near the conifold locus.

  10. Deducing growth mechanisms for minerals from the shapes of crystal size distributions

    Science.gov (United States)

    Eberl, D.D.; Drits, V.A.; Srodon, J.

    1998-01-01

    Crystal size distributions (CSDs) of natural and synthetic samples are observed to have several distinct and different shapes. We have simulated these CSDs using three simple equations: the Law of Proportionate Effect (LPE), a mass balance equation, and equations for Ostwald ripening. The following crystal growth mechanisms are simulated using these equations and their modifications: (1) continuous nucleation and growth in an open system, during which crystals nucleate at either a constant, decaying, or accelerating nucleation rate, and then grow according to the LPE; (2) surface-controlled growth in an open system, during which crystals grow with an essentially unlimited supply of nutrients according to the LPE; (3) supply-controlled growth in an open system, during which crystals grow with a specified, limited supply of nutrients according to the LPE; (4) supply- or surface-controlled Ostwald ripening in a closed system, during which the relative rate of crystal dissolution and growth is controlled by differences in specific surface area and by diffusion rate; and (5) supply-controlled random ripening in a closed system, during which the rate of crystal dissolution and growth is random with respect to specific surface area. Each of these mechanisms affects the shapes of CSDs. For example, mechanism (1) above with a constant nucleation rate yields asymptotically-shaped CSDs for which the variance of the natural logarithms of the crystal sizes (β²) increases exponentially with the mean of the natural logarithms of the sizes (α). Mechanism (2) yields lognormally-shaped CSDs, for which β² increases linearly with α, whereas mechanisms (3) and (5) do not change the shapes of CSDs, with β² remaining constant with increasing α. During supply-controlled Ostwald ripening (4), initial lognormally-shaped CSDs become more symmetric, with β² decreasing with increasing α. Thus, crystal growth mechanisms often can be deduced by noting trends in α versus β² of CSDs for
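
    Mechanism (2) is straightforward to simulate: under the Law of Proportionate Effect each crystal grows by a random increment proportional to its current size, so ln(size) performs a random walk, the CSD tends to lognormal, and β² rises linearly with α. A sketch under assumed growth-rate bounds (not the paper's calibrated parameters):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sizes = np.full(20_000, 1.0)         # initial nuclei, arbitrary units

    alpha, beta2 = [], []
    for _ in range(60):                  # growth cycles
        sizes *= 1.0 + rng.uniform(0.0, 0.2, sizes.size)   # proportionate effect
        ln_s = np.log(sizes)
        alpha.append(ln_s.mean())        # mean of ln(size)
        beta2.append(ln_s.var())         # variance of ln(size)

    slope = np.polyfit(alpha, beta2, 1)[0]
    print(f"beta^2 vs alpha slope = {slope:.4f} (linear, as for surface-controlled growth)")
    ```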

  11. Poisson-weighted Lindley distribution and its application on insurance claim data

    Science.gov (United States)

    Manesh, Somayeh Nik; Hamzah, Nor Aishah; Zamani, Hossein

    2014-07-01

    This paper introduces a new two-parameter mixed Poisson distribution, namely the Poisson-weighted Lindley (P-WL), which is obtained by mixing the Poisson with a new class of weighted Lindley distributions. The closed form, the moment generating function and the probability generating function are derived. Parameter estimation by the method of moments and the maximum likelihood procedure is provided. Some simulation studies are conducted to investigate the performance of the P-WL distribution. In addition, the compound P-WL distribution is derived, and some applications to the insurance area, based on observations of the number of claims and of the total amount of claims incurred, are illustrated.
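    The mixed-Poisson construction can be illustrated with a generic two-stage sampler. The sketch below uses the ordinary Lindley distribution as the mixing density (the paper's weighted Lindley generalizes it), exploiting the standard representation of Lindley(θ) as an exponential/gamma mixture; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_lindley(theta, size):
    """Lindley(theta) as a two-component mixture: Exp(theta) with weight
    theta/(1+theta), Gamma(shape 2, scale 1/theta) otherwise."""
    pick_exp = rng.random(size) < theta / (1.0 + theta)
    exp_draws = rng.exponential(1.0 / theta, size)
    gamma_draws = rng.gamma(2.0, 1.0 / theta, size)
    return np.where(pick_exp, exp_draws, gamma_draws)

def sample_mixed_poisson(theta, size):
    """Mixed Poisson claim counts: each rate is drawn from the mixing density."""
    return rng.poisson(sample_lindley(theta, size))

claims = sample_mixed_poisson(theta=0.8, size=100_000)
# Mixing inflates the variance above the mean (overdispersion), which is the
# usual motivation for mixed Poisson models of claim counts.
print("mean:", claims.mean(), "variance:", claims.var())
```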

  12. Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.

    Science.gov (United States)

    Joo, Harry; Aguinis, Herman; Bradley, Kyle J

    2017-07-01

    We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers.
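    A crude stand-in for distribution pitting is to fit one representative family per taxonomy category by maximum likelihood and compare information criteria; the paper's actual method relies on falsification-based decision rules rather than plain AIC. A sketch on synthetic output data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
output = rng.lognormal(mean=2.0, sigma=0.7, size=500)  # stand-in "output" data

# One representative scipy family per category of the four-part taxonomy
# (a simplification: each category actually contains several distributions).
candidates = {
    "pure power law":   stats.pareto,
    "lognormal":        stats.lognorm,
    "exponential tail": stats.expon,
    "symmetric":        stats.norm,
}

for name, dist in candidates.items():
    params = dist.fit(output)
    loglik = dist.logpdf(output, *params).sum()
    aic = 2 * len(params) - 2 * loglik   # lower AIC = better trade-off
    print(f"{name:18s} AIC = {aic:9.1f}")
```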

  13. Transition in the waiting-time distribution of price-change events in a global socioeconomic system

    Science.gov (United States)

    Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.

    2013-12-01

    The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes, in particular the waiting times in some collective dynamical system, is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair's distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders' decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.
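    The long-waiting-time claim can be checked with a standard tail-exponent estimator. The sketch below applies a Hill estimator to synthetic waiting times whose density tail decays as τ^(-3.5), mimicking the reported exponent; real input would be the inter-event times of currency price changes:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic waiting times whose survival function falls off as tau^-2.5,
# i.e. a density tail ~ tau^-3.5, matching the exponent quoted for long waits.
waits = rng.pareto(2.5, 200_000) + 1.0

def density_tail_exponent(x, k):
    """Hill estimate of the survival-function exponent from the k largest
    observations, converted to the density exponent by adding 1."""
    tail = np.sort(x)[-(k + 1):]          # tail[0] is the threshold order statistic
    alpha = k / np.sum(np.log(tail[1:] / tail[0]))
    return alpha + 1.0

print(f"estimated density tail exponent: {density_tail_exponent(waits, 2000):.2f}")
```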

  14. Distribution switchgear

    CERN Document Server

    Stewart, Stan

    2004-01-01

    Switchgear plays a fundamental role within the power supply industry. It is required to isolate faulty equipment, divide large networks into sections for repair purposes, reconfigure networks in order to restore power supplies and control other equipment. This book begins with the general principles of the switchgear function and leads on to discuss topics such as interruption techniques, fault level calculations, switching transients and electrical insulation, making this an invaluable reference source. Solutions to practical problems associated with distribution switchgear are also included.

  15. Mail distribution

    CERN Multimedia

    2007-01-01

    Please note that starting from 1 March 2007, the mail distribution and collection times will be modified for the following buildings: 6, 8, 9, 10, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 29, 69, 40, 70, 101, 102, 109, 118, 152, 153, 154, 155, 166, 167, 169, 171, 174, 261, 354, 358, 576, 579 and 580. Complementary information on the new times will be posted on the entry doors and left in the mailboxes of each building. TS/FM Group

  16. Specimen type and size effects on lithium hydride tensile strength distributions

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, Jr, R E

    1991-12-01

    Weibull's two-parameter statistical-distribution function is used to account for the effects of specimen size and loading differences on strength distributions of lithium hydride. Three distinctly differing uniaxial specimen types (i.e., an elliptical-transition pure tensile specimen, an internally pressurized ring-tensile specimen, and two sizes of four-point-flexure specimens) are shown to provide different strength distributions as expected, because of their differing sizes and modes of loading. After separation of strengths into volumetric- and surface-initiated failure distributions, the Weibull characteristic strength parameters for the higher-strength tests associated with internal fracture initiations are shown to vary as predicted by the effective-specimen-volume Weibull relationship. Lower-strength results correlate with the effective area to a much lesser degree, probably because of the limited number of surface-related failures and the different machining methods used to prepare the specimens. The strength distribution from the fourth specimen type, the predominantly equibiaxially stressed disk-flexure specimen, is well below that predicted by the two-parameter Weibull-derived effective volume or surface area relations. The two-parameter Weibull model cannot account for the increased failure probability associated with multiaxial stress fields. Derivations of effective volume and area relationships for those specimens for which none were found in the literature (the elliptical-transition tensile, the ring tensile, and the disk flexure, including the outer region) are also included.
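    The effective-volume argument follows from weakest-link statistics: for a two-parameter Weibull material with modulus m, the characteristic strength scales between effective volumes as σ₀(V₂) = σ₀(V₁)·(V₁/V₂)^(1/m). A sketch with synthetic strength data (all numbers illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic strengths (MPa) for a specimen type with effective volume V1.
m_true, s0_true = 8.0, 120.0                     # Weibull modulus and scale
strengths = s0_true * rng.weibull(m_true, 60)

# Two-parameter Weibull fit (location pinned at zero).
m_hat, _, s0_hat = stats.weibull_min.fit(strengths, floc=0.0)

# Weakest-link prediction: a larger effective volume V2 lowers the
# characteristic strength as sigma0(V2) = sigma0(V1) * (V1/V2)**(1/m).
V1, V2 = 1.0, 4.0                                # relative effective volumes
print(f"m = {m_hat:.1f}, sigma0(V1) = {s0_hat:.0f} MPa, "
      f"predicted sigma0(V2) = {s0_hat * (V1 / V2) ** (1 / m_hat):.0f} MPa")
```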

  17. Fluvial Transport Model from Spatial Distribution Analysis of Libyan Desert Glass Mass on the Great Sand Sea (Southwest Egypt): Clues to Primary Glass Distribution

    Directory of Open Access Journals (Sweden)

    Nancy Jimenez-Martinez

    2015-04-01

    Full Text Available Libyan Desert Glass (LDG) is a natural silica-rich melted rock found as pieces scattered over the sand and bedrock of the Western Desert of Egypt, northeast of the Gilf Kebir. In this work, a population mixture analysis serves to relate the present spatial distribution of LDG mass density with the Late Oligocene–Early Miocene fluvial dynamics in the Western Desert of Egypt. This was verified from a spatial distribution model that was predicted from the log-normal kriging method using the LDG-mass-dependent transformed variable, Y(x). Both low- and high-density normal populations (-9.2 < Y(x) < -3.5 and -3.8 < Y(x) < 2.1, respectively) were identified. The low-density population was the result of an ordinary fluvial LDG transport/deposition sequence that was active from the time of the melting process, and which lasted until the end of activity of the Gilf River. The surface distribution of the high-density population allowed us to restrict the source area of the melting process. We demonstrate the importance of this geostatistical study in unveiling the probable location of the point where the melting of surficial material occurred and the role of the Gilf River in the configuration of the observed strewn field.

  18. DISTRIBUTION AND RANGE OF RADIONUCLIDE SORPTION COEFFICIENTS IN A SAVANNAH RIVER SITE SUBSURFACE: STOCHASTIC MODELING CONSIDERATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, D.; et al.

    2010-01-11

    The uncertainty associated with the sorption coefficient, or K{sub d} value, is one of the key uncertainties in estimating risk associated with burying low-level nuclear waste in the subsurface. The objective of this study was to measure >648 K{sub d} values and provide a measure of the range and distribution (normal or log-normal) of radionuclide K{sub d} values appropriate for the E-Area disposal site, within the Savannah River Site, near Aiken, South Carolina. The 95% confidence interval for the mean K{sub d} was twice the mean in the Aquifer Zone (18-30.5 m depth), equal to the mean for the Upper Vadose Zone (3.3-10 m depth), and half the mean for the Lower Vadose Zone (10-18 m depth). The distribution of K{sub d} values was log-normal in the Upper Vadose Zone and Aquifer Zone, and normal in the Lower Vadose Zone. To our knowledge, this is the first report of natural radionuclide K{sub d} variability in the literature. Using ranges and distribution coefficients that are specific to the hydrostratigraphic unit improved model accuracy and reduced model uncertainty. Unfortunately, extension of these conclusions to other sites is likely not appropriate given that each site has its own sources of hydrogeological variability. However, this study provides one of the first examples of the development of stochastic ranges and distributions of K{sub d} values for a hydrological unit for stochastic modeling.
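    One simple way to decide between the normal and log-normal descriptions for a given zone is to test Gaussianity on both the raw and the log-transformed K{sub d} values; a sketch on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
kd = rng.lognormal(mean=1.5, sigma=0.6, size=80)   # synthetic Kd values (mL/g)

# Shapiro-Wilk on raw vs. log-transformed values: whichever scale looks more
# Gaussian suggests a normal vs. log-normal description for that zone.
p_raw = stats.shapiro(kd).pvalue
p_log = stats.shapiro(np.log(kd)).pvalue
print(f"normal fit p = {p_raw:.3f}, log-normal fit p = {p_log:.3f}")
print("log-normal preferred" if p_log > p_raw else "normal preferred")
```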

  19. Empirical analysis on the connection between power-law distributions and allometries for urban indicators

    Science.gov (United States)

    Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.

    2014-09-01

    We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlation. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between the urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.

  20. The X-ray Flux Distribution of Sagittarius A* as Seen by Chandra

    CERN Document Server

    Neilsen, J; Nowak, M A; Dexter, J; Witzel, G; Barrière, N; Li, Y; Baganoff, F K; Degenaar, N; Fragile, P C; Gammie, C; Goldwurm, A; Grosso, N; Haggard, D

    2014-01-01

    We present a statistical analysis of the X-ray flux distribution of Sgr A* from the Chandra X-ray Observatory's 3 Ms Sgr A* X-ray Visionary Project (XVP) in 2012. Our analysis indicates that the observed X-ray flux distribution can be decomposed into a steady quiescent component, represented by a Poisson process with rate $Q=(5.24\\pm0.08)\\times10^{-3}$ cts s$^{-1},$ and a variable component, represented by a power law process ($dN/dF\\propto F^{-\\xi},$ $\\xi=1.92_{-0.02}^{+0.03}$). This slope matches our recently-reported distribution of flare luminosities. The variability may also be described by a log-normal process with a median unabsorbed 2-8 keV flux of $1.8^{+0.9}_{-0.6}\\times10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and a shape parameter $\\sigma=2.4\\pm0.2,$ but the power law provides a superior description of the data. In this decomposition of the flux distribution, all of the intrinsic X-ray variability of Sgr A* (spanning at least three orders of magnitude in flux) can be attributed to flaring activity, likely ...

  1. Molecular theory of size exclusion chromatography for wide pore size distributions.

    Science.gov (United States)

    Sepsey, Annamária; Bacskay, Ivett; Felinger, Attila

    2014-02-28

    Chromatographic processes can conveniently be modeled at a microscopic level using the molecular theory of chromatography. This molecular or microscopic theory is completely general; therefore it can be used for any chromatographic process such as adsorption, partition, ion-exchange or size exclusion chromatography. The molecular theory of chromatography allows taking into account the kinetics of the pore ingress and egress processes, the heterogeneity of the pore sizes and polymer polydispersity. In this work, we assume that the pore size in the stationary phase of chromatographic columns is governed by a wide lognormal distribution. This property is integrated into the molecular model of size exclusion chromatography and the moments of the elution profiles were calculated for several kinds of pore structure. Our results demonstrate that wide pore size distributions have a strong influence on the retention properties (retention time, peak width, and peak shape) of macromolecules. The novel model allows us to estimate the real pore size distribution of commonly used HPLC stationary phases, and the effect of this distribution on the size exclusion process.

  2. Wetlands Spatial-Temporal Distribution Multi-Scale Simulation Using Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Huan Yu

    2012-08-01

    Full Text Available The simulation of wetland landscape spatial-temporal distribution not only can reveal the mechanisms and laws of landscape evolution, but also achieve sustainable land use and provide support for wetland conservation and management. In this report, the inland freshwater wetlands in the Sanjiang Plain of China were selected for simulation studies of the wetland landscape changing process. Results showed that the visual effects of both simulation and prediction were good and the total point-to-point accuracy coefficient was also significantly high (above 82%), which demonstrated the feasibility and effectiveness of wetland landscape spatial-temporal distribution simulation using a Multi-Agent System (MAS). Scale exerted influence on visual effects, simulation accuracies and landscape index statistics; scale effects were obvious during the simulation process using MAS. It was demonstrated that 60 m was the best scale for simulation. Contagion index curves followed an exponential distribution while accuracy curves followed a lognormal distribution as the scale increased, which provides a reference for scale effect assessment and simulation scale selection.

  3. Millimeter-wave Line Ratios and Sub-beam Volume Density Distributions

    Science.gov (United States)

    Leroy, Adam K.; Usero, Antonio; Schruba, Andreas; Bigiel, Frank; Kruijssen, J. M. Diederik; Kepley, Amanda; Blanc, Guillermo A.; Bolatto, Alberto D.; Cormier, Diane; Gallagher, Molly; Hughes, Annie; Jiménez-Donaire, Maria J.; Rosolowsky, Erik; Schinnerer, Eva

    2017-02-01

    We explore the use of mm-wave emission line ratios to trace molecular gas density when observations integrate over a wide range of volume densities within a single telescope beam. For observations targeting external galaxies, this case is unavoidable. Using a framework similar to that of Krumholz & Thompson, we model emission for a set of common extragalactic lines from lognormal and power law density distributions. We consider the median density of gas that produces emission and the ability to predict density variations from observed line ratios. We emphasize line ratio variations because these do not require us to know the absolute abundance of our tracers. Patterns of line ratio variations have the potential to illuminate the high-end shape of the density distribution, and to capture changes in the dense gas fraction and median volume density. Our results with and without a high-density power law tail differ appreciably; we highlight better knowledge of the probability density function (PDF) shape as an important area. We also show the implications of sub-beam density distributions for isotopologue studies targeting dense gas tracers. Differential excitation often implies a significant correction to the naive case. We provide tabulated versions of many of our results, which can be used to interpret changes in mm-wave line ratios in terms of adjustments to the underlying density distributions.
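    For the lognormal case, sub-beam quantities such as the dense gas fraction follow directly from the distribution's survival function. A sketch assuming an illustrative median density and logarithmic width (and glossing over the mass- vs. volume-weighting distinction):

```python
import numpy as np
from scipy import stats

# Lognormal sub-beam density PDF: median density n0 (cm^-3) and logarithmic
# width sigma are assumptions for this sketch, not values from the paper.
n0, sigma = 100.0, 1.5
pdf = stats.lognorm(s=sigma, scale=n0)

n_thresh = 1.0e4                      # "dense gas" threshold, cm^-3
f_dense = pdf.sf(n_thresh)            # fraction of the distribution above it
print(f"dense-gas fraction = {f_dense:.3f}, "
      f"median density = {pdf.median():.0f} cm^-3")
```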

  4. MODELLING OF SHORT DURATION RAINFALL (SDR INTENSITY EQUATIONS FOR ERZURUM

    Directory of Open Access Journals (Sweden)

    Serkan ŞENOCAK

    2007-01-01

    Full Text Available The scope of this study is to develop a rainfall intensity-duration-frequency (IDF) equation for some return periods at the Erzurum rainfall station. The maximum annual rainfall values for 5, 10, 15, 30 and 60 minutes are statistically analyzed for the period 1956-2004 by using some statistical distributions such as the Generalized Extreme Value (GEV), Gumbel, Normal, Two-parameter Lognormal, Three-parameter Lognormal, Gamma, Pearson type III and Log-Pearson type III distributions. The χ² goodness-of-fit test was used to choose the best statistical distribution among all distributions. IDF equation constants and coefficients of correlation (R) for each empirical function are calculated using a nonlinear estimation method for each return period (T = 2, 5, 10, 25, 50, 75 and 100 years). The most suitable IDF equation is observed to be i_max(t) = A/(t + C)^B, except for T = 100 years, because of its highest coefficients of correlation.
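    The fitting chain (annual maxima → distribution quantiles → IDF curve) can be sketched as below; the synthetic intensities, the choice of the Gumbel model, and the starting values are all illustrative:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
durations = np.array([5.0, 10.0, 15.0, 30.0, 60.0])        # minutes

# Stand-in annual-maximum intensities (mm/h) per duration; real input would
# be the 1956-2004 Erzurum series.
annual_max = {t: 90.0 / (t + 8.0) ** 0.8 * (1 + 0.3 * rng.gumbel(size=49))
              for t in durations}

T = 25.0                                                    # return period, years
quantiles = []
for t in durations:
    loc, scale = stats.gumbel_r.fit(annual_max[t])
    quantiles.append(stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale))

# Fit the IDF form i_max(t) = A / (t + C)**B to the T-year quantiles.
f = lambda t, A, B, C: A / (t + C) ** B
(A, B, C), _ = optimize.curve_fit(f, durations, quantiles, p0=(100.0, 0.8, 8.0))
print(f"i_max(t) = {A:.1f} / (t + {C:.1f})^{B:.2f}  (T = {T:.0f} yr)")
```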

  5. Magnetic relaxation and correlating effective magnetic moment with particle size distribution in maghemite nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Pisane, K.L. [Department of Physics & Astronomy, West Virginia University, Morgantown, WV 26506 (United States); Despeaux, E.C. [Department of Pharmaceutical Sciences, West Virginia University, Morgantown, WV 26506 (United States); Seehra, M.S., E-mail: mseehra@wvu.edu [Department of Physics & Astronomy, West Virginia University, Morgantown, WV 26506 (United States)

    2015-06-15

    The role of particle size distribution inherently present in magnetic nanoparticles (NPs) is examined in considerable detail in relation to the measured magnetic properties of oleic acid-coated maghemite (γ-Fe{sub 2}O{sub 3}) NPs. Transmission electron microscopy (TEM) of the sol–gel synthesized γ-Fe{sub 2}O{sub 3} NPs showed a log-normal distribution of sizes with average diameter 〈D〉=7.04 nm and standard deviation σ=0.78 nm. Magnetization, M, vs. temperature (2–350 K) of the NPs was measured in an applied magnetic field H up to 90 kOe along with the temperature dependence of the ac susceptibilities, χ′ and χ″, at various frequencies, f{sub m}, from 10 Hz to 10 kHz. From the shift of the blocking temperature from T{sub B}=35 K at 10 Hz to T{sub B}=48 K at 10 kHz, the absence of any significant interparticle interaction is inferred and the relaxation frequency f{sub o}=2.6×10{sup 10} Hz and anisotropy constant K{sub a}=5.48×10{sup 5} erg/cm{sup 3} are determined. For T > T{sub B}, the data of M vs. H up to 90 kOe at several temperatures are analyzed in two different ways: (i) in terms of the modified Langevin function yielding an average magnetic moment per particle μ{sub p}=7300(500) μ{sub B}; and (ii) in terms of log-normal distribution of moments yielding 〈μ〉=6670 µ{sub B} at 150 K decreasing to 〈μ〉=6100 µ{sub B} at 300 K with standard deviations σ≃〈μ〉/2. It is argued that the above two approaches yield consistent and physically meaningful results as long as the width parameter, s, of the log-normal distribution is less than 0.83. - Highlights: • Magnetic properties of γ-Fe{sub 2}O{sub 3} nanoparticles, size=7.04(0.78)nm, are reported. • Attempt frequency f{sub o}=2.6×10{sup 10} Hz and no interparticle interactions are inferred. • M vs. H above T{sub B} analyzed using modified Langevin yields µ{sub p}≈7300 µ{sub B} per particle. • M vs. H above T ...
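    The f₀ and K_a values follow from the Néel-Arrhenius relation f_m = f₀ exp(-K_a V / k_B T_B), which is linear in 1/T_B. A sketch using the record's endpoint (f_m, T_B) pairs with invented intermediate points, so the fitted numbers will not reproduce the paper's values exactly:

```python
import numpy as np

# Blocking temperatures (K) at measurement frequencies (Hz); the endpoints
# come from the abstract, the two intermediate points are made up.
f_m = np.array([1e1, 1e2, 1e3, 1e4])
T_B = np.array([35.0, 39.0, 43.5, 48.0])

# Neel-Arrhenius: ln f_m = ln f_0 - (K_a V / k_B) * (1 / T_B)
slope, intercept = np.polyfit(1.0 / T_B, np.log(f_m), 1)
f0 = np.exp(intercept)

k_B = 1.380649e-16                     # erg/K
d = 7.04e-7                            # mean particle diameter, cm
V = np.pi * d**3 / 6.0                 # particle volume, cm^3
K_a = -slope * k_B / V                 # anisotropy constant, erg/cm^3
print(f"f0 = {f0:.2e} Hz, Ka = {K_a:.2e} erg/cm^3")
```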

  6. Universal functional form of 1-minute raindrop size distribution?

    Science.gov (United States)

    Cugerone, Katia; De Michele, Carlo

    2015-04-01

    Rainfall remains one of the poorly quantified phenomena of the hydrological cycle, despite its fundamental role. No universal laws describing rainfall behavior are available in the literature. This is probably due to the continuous description of rainfall, which is a discrete phenomenon, made of drops. From the statistical point of view, rainfall variability at the particle size scale is described by the drop size distribution (DSD). This term denotes either the concentration of raindrops per unit volume and diameter or the probability density function of drop diameter at the ground, according to the specific problem of interest. Raindrops represent the water exchange, in liquid form, between the atmosphere and the earth's surface, and the number of drops and their size have impacts on a wide range of hydrologic, meteorologic, and ecologic phenomena. The DSD is used, for example, to measure multiwavelength rain attenuation for terrestrial and satellite systems; it is an important input for the evaluation of the below-cloud scavenging coefficient of aerosol by precipitation; and it is of primary importance for estimating rainfall rate by radar. In the literature, many distributions have been used to this aim (Gamma and Lognormal above all), often without statistical support and in site-specific studies. Here, we present an extensive investigation of the raindrop size distribution based on 18 datasets consisting of 1-minute disdrometer data, sampled using Joss-Waldvogel or Thies instruments at different locations on the Earth's surface. The aim is to understand whether a universal functional form of 1-minute drop diameter variability exists. The study consists of three main steps: analysis of the high-order moments, selection of the model through the AIC index, and testing of the model with goodness-of-fit tests.
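    The model-selection step (AIC plus a goodness-of-fit check) can be sketched as follows; the drop diameters are synthetic, and the KS p-values are only approximate because the parameters are fitted from the same data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
diameters = rng.gamma(shape=3.0, scale=0.5, size=1_000)  # stand-in drop sizes, mm

for name, dist in [("gamma", stats.gamma), ("lognormal", stats.lognorm)]:
    params = dist.fit(diameters, floc=0.0)               # pin location at zero
    loglik = dist.logpdf(diameters, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * loglik             # floc is not estimated
    ks = stats.kstest(diameters, dist.cdf, args=params)  # approximate check
    print(f"{name:9s} AIC = {aic:8.1f}  KS p = {ks.pvalue:.3f}")
```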

  7. [Distribution characteristics of soil pH, CEC and organic matter in a small watershed of the Loess Plateau].

    Science.gov (United States)

    Wei, Xiao-Rong; Shao, Ming-An

    2009-11-01

    Soil chemical properties play important roles in soil ecological functioning. In this study, 207 surface soil (0-20 cm) samples were collected from different representative landscape units in a gully watershed of the Loess Plateau to examine the distribution characteristics of soil pH, cation exchange capacity (CEC) and organic matter, and their relations to land use type, landform, and soil type. The soil pH, CEC and organic matter content ranged from 7.7 to 8.6, 11.9 to 28.7 cmol x kg(-1), and 3.0 to 27.9 g x kg(-1), and followed a normal distribution, a log-normal distribution, and a negative binomial distribution, respectively. These three properties were significantly affected by land use type, landform, and soil type. Soil CEC and organic matter content were higher in forestland, grassland and farmland than in orchard land, and soil pH was lower in forestland than in the other three land use types. Soil pH, CEC and organic matter content were higher in plateau land and sloping land than in gully bottom and terrace land. Soil CEC and organic matter content were higher in dark loessial soil and rubified soil, while soil pH was higher in yellow loessial soil. Across all three landscape factors, soil CEC and organic matter content showed a similar distribution pattern, but an opposite pattern was observed for soil pH.

  8. Magnitude-frequency distribution of submarine landslides in the Gioia Basin (southern Tyrrhenian Sea)

    Science.gov (United States)

    Casas, D.; Chiocci, F.; Casalbore, D.; Ercilla, G.; de Urbina, J. Ortiz

    2016-07-01

    Regional inventories and magnitude-frequency relationships provide critical information about landslides and represent a first step in landslide hazard assessment. Despite this, the availability of accurate inventories in the marine environment remains poor because of the commonly low accessibility of high-resolution data at regional scales. Evaluating high-resolution bathymetric data spanning the time interval 2007-2011 for the Gioia Basin of the southern Tyrrhenian Sea yielded a landslide inventory of 428 events affecting an area of >85 km2 and mobilizing approximately 1.4 km3 of sediment. This is the first time that this area has been studied in such detail, justifying comparison with other areas both on land and offshore. Statistical analyses revealed that the cumulative distribution of the dataset is characterized by two right-skewed probability distributions with a heavy tail. Moreover, evidence of a rollover for smaller landslide volumes is consistent with similar trends reported in other settings worldwide. This may reflect an observational limitation and the site-specific geologic factors that influence landslide occurrence. The robust validation of both power-law and log-normal probability distributions enables the quantification of a range of probabilities for new extreme events far from the background landslide sizes defined in the area. This is a useful tool at regional scales, especially in geologically active areas where submarine landslides can occur frequently, such as the Gioia Basin.

  9. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-01-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
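    An empirical FDC and a fitted-distribution FDC can be compared directly through exceedance probabilities. A sketch using a lognormal candidate (the kappa distribution preferred in the study is available in scipy as scipy.stats.kappa4); the daily flows are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
flows = rng.lognormal(mean=3.0, sigma=1.2, size=3_650)   # synthetic daily flows

# Empirical flow duration curve: exceedance probability of each sorted flow.
q = np.sort(flows)[::-1]
p_exceed = (np.arange(1, q.size + 1) - 0.5) / q.size     # Hazen plotting position

# Compare with quantiles of a fitted lognormal, one of the candidate models.
s, loc, scale = stats.lognorm.fit(flows, floc=0.0)
q_model = stats.lognorm.ppf(1.0 - p_exceed, s, loc, scale)
print("max relative error along the FDC:",
      np.max(np.abs(q_model - q) / q))
```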

  10. Detection of two power-law tails in the probability distribution functions of massive GMCs

    CERN Document Server

    Schneider, N; Girichidis, P; Rayner, T; Motte, F; Andre, P; Russeil, D; Abergel, A; Anderson, L; Arzoumanian, D; Benedettini, M; Csengeri, T; Didelon, P; Francesco, J D; Griffin, M; Hill, T; Klessen, R S; Ossenkopf, V; Pezzuto, S; Rivera-Ingraham, A; Spinoglio, L; Tremblin, P; Zavagno, A

    2015-01-01

    We report the novel detection of complex high-column-density tails in the probability distribution functions (PDFs) for three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fit with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha=1.3-2 for the rho~r^-alpha profile for an equivalent density distribution (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av~40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha>2. It correlates with the central regions of the cloud (ridges/hubs) of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible causes are: 1. rotation, which introduces an angular momentum barrier, 2. increasing optical depth and weaker...

  11. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the noise robustness of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
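    The inversion step reduces to an ill-conditioned linear least-squares problem that LSQR solves iteratively. A sketch with a hypothetical smooth kernel standing in for the ADA kernel matrix:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(9)

# Discretized forward model: signal = K @ asd, where K holds a hypothetical
# smooth kernel evaluated at n_wl wavelengths and n_d particle diameters.
n_wl, n_d = 12, 40
wl = np.linspace(0.5, 4.0, n_wl)
d = np.linspace(0.1, 6.0, n_d)
K = np.exp(-np.subtract.outer(wl, d) ** 2)

asd_true = np.exp(-0.5 * ((d - 2.0) / 0.8) ** 2)           # "true" size spectrum
signal = K @ asd_true + 0.01 * rng.standard_normal(n_wl)   # noisy measurements

# LSQR solves the least-squares problem iteratively; `damp` adds
# Tikhonov-style regularization against noise amplification.
asd_hat = lsqr(K, signal, damp=0.05)[0]
print("relative reconstruction error:",
      np.linalg.norm(asd_hat - asd_true) / np.linalg.norm(asd_true))
```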

  12. Optimal design of unit hydrographs using probability distribution and genetic algorithms

    Indian Academy of Sciences (India)

    Rajib Kumar Bhattacharjya

    2004-10-01

    A nonlinear optimization model is developed to transmute a unit hydrograph into a probability distribution function (PDF). The objective function is to minimize the sum of the squares of the deviation between the predicted and actual direct runoff hydrographs of a watershed. The predicted runoff hydrograph is estimated by using a PDF. In a unit hydrograph, the depth of rainfall excess must be unity and the ordinates must be positive. Incorporation of a PDF ensures that the depth of rainfall excess for the unit hydrograph is unity and that the ordinates are positive. Unit hydrograph ordinates are in terms of intensity of rainfall excess on a discharge per unit catchment area basis, the unit area thus representing the unit rainfall excess. The proposed method does not have any constraints. The nonlinear optimization formulation is solved using binary-coded genetic algorithms. The number of variables to be estimated by optimization is the same as the number of probability distribution parameters; gamma and log-normal probability distributions are used. The existing nonlinear programming model for obtaining an optimal unit hydrograph has also been solved using genetic algorithms, where the constrained nonlinear optimization problem is converted to an unconstrained problem using a penalty parameter approach. The results obtained are compared with those obtained by the earlier LP model and are fairly similar.
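    The key idea, representing the unit hydrograph by a PDF so positivity and unit rainfall-excess depth are automatic, can be sketched with a gamma PDF and an evolutionary optimizer (scipy's differential evolution standing in for the paper's binary-coded GA); the observed ordinates are synthetic:

```python
import numpy as np
from scipy import stats, optimize

t = np.arange(1.0, 25.0)                     # time steps (h)
uh_obs = stats.gamma.pdf(t, a=3.5, scale=2.0) + \
         0.002 * np.random.default_rng(10).standard_normal(t.size)

# A gamma-PDF unit hydrograph: ordinates are automatically positive and the
# PDF integrates to one (unit rainfall excess), so no constraints are needed.
def sse(params):
    a, scale = params
    return np.sum((stats.gamma.pdf(t, a=a, scale=scale) - uh_obs) ** 2)

# Evolutionary search over the two distribution parameters.
res = optimize.differential_evolution(sse, bounds=[(1.0, 10.0), (0.5, 5.0)])
print("fitted shape and scale:", res.x)
```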

  13. Characterization of tropical precipitation using drop size distribution and rain rate-radar reflectivity relation

    Science.gov (United States)

    Das, Saurabh; Maitra, Animesh

    2017-03-01

    Characterization of precipitation is important for proper interpretation of rain information from remotely sensed data. Rain attenuation and radar reflectivity (Z) depend directly on the drop size distribution (DSD). The relation between radar reflectivity/rain attenuation and rain rate (R) varies widely depending upon the origin, topography, and drop evolution mechanism, and needs further understanding of the precipitation characteristics. The present work utilizes 2 years of concurrent DSD measurements from a ground-based disdrometer at five diverse climatic locations in the Indian subcontinent and explores the possibility of rain classification based on microphysical characteristics of precipitation. It is observed that the gamma and lognormal distributions perform almost equally well for the Indian region, with a marginally better performance by one model or the other depending upon the location. It has also been found that the shape-slope relationship of the gamma distribution can be a good indicator of rain type. The Z-R relation, Z = A R^b, is found to vary widely for different precipitation systems, with convective rain having higher values of A than stratiform rain at two locations, whereas the reverse is observed at the remaining three. Further, the results indicate that the majority of rainfall (>50%) in the Indian region is due to convective rain, although the occurrence time of convective rain is low (<10%).
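    Since Z = A R^b is linear in log-log space, A and b can be recovered by simple regression. A sketch on synthetic data generated around a commonly quoted Z-R pair (the study's fitted values vary by location and rain type):

```python
import numpy as np

rng = np.random.default_rng(11)
R = 10 ** rng.uniform(-0.5, 2.0, 300)            # rain rates, mm/h

# Synthetic reflectivities scattered around Z = 300 * R**1.4, a commonly
# quoted Z-R pair; the multiplicative noise mimics measurement scatter.
Z = 300.0 * R ** 1.4 * 10 ** (0.05 * rng.standard_normal(R.size))

# Z = A R^b is linear in log space: log10 Z = log10 A + b * log10 R.
b, logA = np.polyfit(np.log10(R), np.log10(Z), 1)
print(f"A = {10 ** logA:.0f}, b = {b:.2f}")
```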

  14. Latitudinal aerosol size distribution variation in the Eastern Atlantic Ocean measured aboard the FS-Polarstern

    Directory of Open Access Journals (Sweden)

    M. W. Gallagher

    2007-05-01

    Full Text Available Aerosol size distribution measurements from 0.03 µm to 25 µm diameter were taken at ambient humidity aboard the German research vessel, FS-Polarstern, during a transect from Bremerhaven in northern Germany, to Cape Town in South Africa across latitudes 53°32' N to 33°55' S, denoted cruise number ANT XXI/1. The data were segregated according to air mass history, wind speed and latitude. Under clean marine conditions, the averaged size distributions were generally in good agreement with those reported previously for diameters less than 0.5 µm and can be approximated by two log-normal modes, with significant variation in the mean modal diameters. Two short periods of tri-modal behaviour were observed. Above 0.5 µm, there is indication of a limit to the mechanical generation of marine aerosol over the range of wind speeds observed (~1.7–14.7 m s−1. A new technique to determine the errors associated with aerosol size distribution measurements using Poisson statistics has been applied to the dataset, providing a tool to determine the necessary sample or averaging times for correct interpretation of such data. Finally, the data were also used to investigate the loss rate of condensing gases with potentially important consequences for heterogeneous marine photochemical cycles.

  15. Latitudinal aerosol size distribution variation in the Eastern Atlantic Ocean measured aboard the FS-Polarstern

    Directory of Open Access Journals (Sweden)

    P. I. Williams

    2006-12-01

    Full Text Available Aerosol size distribution measurements from 0.03 μm to 25 μm diameter were taken at ambient humidity aboard the German research vessel, FS-Polarstern, during a transect from Bremerhaven in northern Germany, to Cape Town in South Africa across latitudes 53°32' N to 33°55' S, denoted cruise number ANT XXI/1. The data were segregated according to air mass history, wind speed and latitude. Under clean marine conditions, the averaged size distributions were generally in good agreement with those reported previously for diameters less than 0.5 μm and can be approximated by two log-normal modes, with significant variation in the mean modal diameters. Two short periods of tri-modal behaviour were observed. Above 0.5 μm, there is indication of a limit to the mechanical generation of marine aerosol over the range of wind speeds observed. A new technique to determine the errors associated with aerosol size distribution measurements using Poisson statistics has been applied to the dataset, providing a tool to determine the necessary sample or averaging times for correct interpretation of such data. Finally, the data were also used to investigate the loss rate of condensing gases with potentially important consequences for heterogeneous marine photochemical cycles.

  16. Probabilistic Assessment of Earthquake Hazards: a Comparison among Gamma, Weibull, Generalized Exponential and Lognormal Distributions

    Science.gov (United States)

    Pasari, S.

    2013-05-01

    Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The Weibull, gamma, generalized exponential and lognormal distributions are well-established probability models for recurrence interval estimation, and they share many important characteristics. In this paper, we aim to compare the effectiveness of these models in recurrence interval estimation and eventually in hazard analysis. To assess the appropriateness of these models, we use a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20°-32° N and 87°-100° E). The model parameters are estimated using the modified maximum likelihood estimator (MMLE). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
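    With a fitted recurrence model, the hazard statement is the conditional probability of an event in the next Δt years given the elapsed time: P(T ≤ t+Δt | T > t). A sketch with a Weibull fit to hypothetical inter-event times (the actual catalogue and the MMLE fitting are not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical inter-event times (years); 20 events give 19 intervals.
gaps = np.array([4, 6, 7, 9, 3, 5, 8, 12, 6, 10, 7, 5, 9, 11, 4, 8, 6, 7, 5],
                dtype=float)

c, loc, scale = stats.weibull_min.fit(gaps, floc=0.0)

# Conditional probability of an event within the next dt years, given that
# t years have already elapsed since the last one.
t_elapsed, dt = 17.0, 5.0
sf = stats.weibull_min.sf
p_cond = 1.0 - sf(t_elapsed + dt, c, loc, scale) / sf(t_elapsed, c, loc, scale)
print(f"P(event within {dt:.0f} yr | {t_elapsed:.0f} yr elapsed) = {p_cond:.2f}")
```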

  17. Calibrating and Controlling the Quantum Efficiency Distribution of Inhomogeneously Broadened Quantum Rods Using a Mirror Ball

    CERN Document Server

    Lunnemann, Per; van Dijk-Moes, Relinde J A; Pietra, Francesca; Vanmaekelbergh, Daniël; Koenderink, A Femius

    2013-01-01

    We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage's method, which uses nanometer-controlled optical mode density variations near a mirror, not only allows extraction of calibrated ensemble-averaged rates but, for the first time, also quantification of the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultra-stable CdSe/CdS dot-in-rod emitters, which are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87±0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates as often assumed in the literature, and we show that the rate distribution width, which amounts to about 30% of the mean decay rate, is strongly dependent on the l...

  18. The ATLASGAL survey: distribution of cold dust in the Galactic plane. Combination with Planck data

    CERN Document Server

    Csengeri, T; Wyrowski, F; Menten, K M; Urquhart, J S; Leurini, S; Schuller, F; Beuther, H; Bontemps, S; Bronfman, L; Henning, Th; Schneider, N

    2015-01-01

    Sensitive ground-based submillimeter surveys, such as ATLASGAL, provide a global view of the distribution of cold dense gas in the Galactic plane. Here we use the 353 GHz maps from the Planck/HFI instrument to complement the ground-based APEX/LABOCA observations with information on larger angular scales. The resulting maps reveal the distribution of cold dust in the inner Galaxy with a larger spatial dynamic range. We find examples of elongated structures extending over angular scales of 0.5 degree. Corresponding to >30 pc structures in projection at a distance of 3 kpc, these dust lanes are very extended and show large aspect ratios. Furthermore, we assess the fraction of dense gas ($f_{\rm DG}$) and estimate it at 2-5% (above A$_{\rm{v}}>$7 mag) on average in the Galactic plane. PDFs of the column density reveal the typically observed log-normal distribution for low column densities and exhibit an excess at high column densities. As a reference for extragalactic studies, we show the line-of-sight integrated N-PDF of the inner G...

  19. Magnitude-frequency distribution of submarine landslides in the Gioia Basin (southern Tyrrhenian Sea)

    Science.gov (United States)

    Casas, D.; Chiocci, F.; Casalbore, D.; Ercilla, G.; de Urbina, J. Ortiz

    2016-12-01

    Regional inventories and magnitude-frequency relationships provide critical information about landslides and represent a first step in landslide hazard assessment. Despite this, the availability of accurate inventories in the marine environment remains poor because of the commonly low accessibility of high-resolution data at regional scales. Evaluating high-resolution bathymetric data spanning the time interval 2007-2011 for the Gioia Basin of the southern Tyrrhenian Sea yielded a landslide inventory of 428 events affecting an area of >85 km2 and mobilizing approximately 1.4 km3 of sediment. This is the first time that this area has been studied in such detail, justifying comparison with other areas both on land and offshore. Statistical analyses revealed that the cumulative distribution of the dataset is characterized by two right-skewed probability distributions with a heavy tail. Moreover, evidence of a rollover for smaller landslide volumes is consistent with similar trends reported in other settings worldwide. This may reflect an observational limitation and the site-specific geologic factors that influence landslide occurrence. The robust validation of both power-law and log-normal probability distributions enables the quantification of a range of probabilities for new extreme events far from the background landslide sizes defined in the area. This is a useful tool at regional scales, especially in geologically active areas where submarine landslides can occur frequently, such as the Gioia Basin.

  20. STOCHASTIC ANALYSIS OF UNSATURATED FLOW WITH THE NORMAL DISTRIBUTION OF SOIL HYDRAULIC CONDUCTIVITY

    Institute of Scientific and Technical Information of China (English)

    Huang Guan-hua; Zhang Ren-duo

    2003-01-01

    Stochastic approaches are useful to quantitatively describe transport behavior over large temporal and spatial scales while accounting for the influence of small-scale variabilities. Numerous solutions have been developed for unsaturated soil water flow based on the lognormal distribution of soil hydraulic conductivity. To our knowledge, no stochastic solutions for unsaturated flow have been derived on the basis of the normal distribution of hydraulic conductivity. In this paper, stochastic solutions were developed for unsaturated flow by assuming a normal distribution of the saturated hydraulic conductivity (Ks). Under the assumption that soil hydraulic properties are second-order stationary, analytical expressions for the capillary tension head variance (σ_h²) and the effective hydraulic conductivity (K*_ii) in stratified soils were derived using the perturbation method. The dependence of σ_h² and K*_ii on soil variability, on mean flow variables (the mean capillary tension head and its temporal and spatial gradients) and on mean flow conditions (wetting and drying) was systematically analyzed. The variance of capillary tension head calculated with the analytical solution derived in this paper was compared with field experimental data. The good agreement indicates that the analytical solution is applicable for evaluating the variance of capillary tension head of field soils with moderate variability.