WorldWideScience

Sample records for normal distribution model

  1. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is simple to implement in software. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution on 16 sets of failure time data collected from real software projects.
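
    The abstract does not spell out the model's functional form, so the sketch below assumes a common NHPP-style SRGM whose mean value function is omega * Phi(t; mu, sigma), and fits it by direct maximum likelihood rather than the paper's EM algorithm; the failure times are hypothetical.

```python
# A minimal sketch, assuming an NHPP whose mean value function is
# omega * [Phi(t; mu, sigma) - Phi(0; mu, sigma)], i.e. normal failure
# times; fitted by direct MLE instead of the paper's EM algorithm.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nhpp_negloglik(params, times, T):
    """Negative log-likelihood of the NHPP observed on [0, T]."""
    omega, mu, sigma = params
    if omega <= 0 or sigma <= 0:
        return np.inf
    log_intensity = np.log(omega) + norm.logpdf(times, mu, sigma)
    expected = omega * (norm.cdf(T, mu, sigma) - norm.cdf(0, mu, sigma))
    return -(log_intensity.sum() - expected)

# hypothetical failure times (days) observed over a 100-day test phase
times = np.array([5., 9., 14., 22., 30., 37., 45., 52., 60., 71., 85.])
res = minimize(nhpp_negloglik, x0=[20., 50., 30.], args=(times, 100.0),
               method="Nelder-Mead")
omega_hat, mu_hat, sigma_hat = res.x
print(f"expected total faults: {omega_hat:.1f}, peak time: {mu_hat:.1f}")
```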

  2. Modified Normal Demand Distributions in (R,S)-Inventory Models

    NARCIS (Netherlands)

    Strijbosch, L.W.G.; Moors, J.J.A.

    2003-01-01

    To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,S)

  3. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture model. First, we fit the normal mixture model to the empirical return data. Second, we apply the fitted model in risk analysis, evaluating VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture model fits the data well and performs better in estimating VaR and CVaR, since it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
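
    The paper's fitted FBMKLCI parameters are not quoted in the abstract, so the sketch below simulates returns, fits a two-component normal mixture with a small EM loop, and computes VaR and CVaR at the 5% level using the closed-form tail expectation of each normal component.

```python
# Sketch: two-component normal mixture for returns with VaR/CVaR at
# level alpha. The data and starting values are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
# hypothetical monthly returns: calm regime + turbulent regime
r = np.concatenate([rng.normal(0.01, 0.03, 180), rng.normal(-0.02, 0.09, 60)])

w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, -0.01]), np.array([0.02, 0.08])
for _ in range(200):                              # EM iterations
    resp = w * norm.pdf(r[:, None], mu, sd)       # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    n_k = resp.sum(axis=0)                        # M-step: update parameters
    w, mu = n_k / len(r), (resp * r[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((resp * (r[:, None] - mu) ** 2).sum(axis=0) / n_k)

alpha = 0.05
cdf = lambda x: np.dot(w, norm.cdf((x - mu) / sd))
q = brentq(lambda x: cdf(x) - alpha, -1.0, 1.0)   # alpha-quantile of returns
var = -q                                          # VaR as a positive loss
z = (q - mu) / sd                                 # per-component tail expectation
cvar = -np.dot(w, mu * norm.cdf(z) - sd * norm.pdf(z)) / alpha
print(f"VaR {var:.3f}, CVaR {cvar:.3f}")
```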

  4. A New Distribution: Random Limit Normal Distribution

    OpenAIRE

    Gong, Xiaolin; Yang, Shuzhen

    2013-01-01

    This paper introduces a new distribution to improve tail risk modeling. Based on the classical normal distribution, we define a new distribution by a series of heat equations. Then, we use market data to verify our model.

  5. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
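
    A minimal sketch of the marginalization the abstract describes: a Poisson TCP for a single α is averaged over a log-normal interpatient α distribution. The clonogen number and distribution parameters are illustrative, not the paper's fitted values.

```python
# Sketch: population TCP from a Poisson model with log-normal
# interpatient radiosensitivity. All parameters are illustrative.
import numpy as np
from scipy.stats import lognorm

def tcp_individual(alpha, dose, n0):
    """Poisson TCP for n0 clonogens and linear cell kill exp(-alpha*D)."""
    return np.exp(-n0 * np.exp(-alpha * dose))

def tcp_population(dose, n0, alpha_median, sigma_log, n_mc=100000, seed=1):
    """Average the individual TCP over a log-normal alpha distribution."""
    rng = np.random.default_rng(seed)
    alphas = lognorm.rvs(s=sigma_log, scale=alpha_median,
                         size=n_mc, random_state=rng)
    return tcp_individual(alphas, dose, n0).mean()

for d in (50.0, 60.0, 70.0):   # dose in Gy
    print(d, tcp_population(d, n0=1e7, alpha_median=0.3, sigma_log=0.4))
```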

  6. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for addressing the non-point source pollution caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding deviations in the solutions caused by unrealistic distributional assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
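
    A minimal sketch of the key mechanism: a chance constraint whose random right-hand side is log-normal reduces to a deterministic constraint through the log-normal quantile function. All numbers are illustrative, not the Erhai Lake data.

```python
# A minimal sketch, assuming activity level x produces pollutant load
# a*x that must stay within a log-normal capacity B with probability
# at least p. All numbers are illustrative.
from math import exp
from scipy.stats import lognorm

a = 2.5        # pollutant load per unit of activity x (illustrative)
p = 0.90       # required constraint-satisfaction level
# capacity B ~ log-normal with log-mean 3.0 and log-sd 0.5
b_quantile = lognorm.ppf(1 - p, s=0.5, scale=exp(3.0))
# P(a*x <= B) >= p  <=>  a*x <= F_B^{-1}(1 - p)
x_max = b_quantile / a
print(f"feasible activity level: x <= {x_max:.2f}")
```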

  7. About normal distribution on SO(3) group in texture analysis

    Science.gov (United States)

    Savyolova, T. I.; Filatov, S. V.

    2017-12-01

    This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. Those NDs are: the Fisher normal distribution (FND), the Bunge normal distribution (BND), the central normal distribution (CND) and the wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on the SO(3) group. The CND is a subcase of the normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). The WND is motivated by the CLT in R^3 and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both the CND and the WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
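
    As a sketch of the Monte Carlo modeling mentioned above, one common construction of approximately normally distributed rotations pushes a zero-mean normal vector in R^3 through the exponential map onto SO(3); the spread σ is illustrative.

```python
# Sketch: Monte Carlo rotations via the exponential map (Rodrigues
# formula) applied to a normal vector in R^3 — one common WND-style
# construction, stated here as an assumption rather than the paper's
# exact algorithm.
import numpy as np

def hat(w):
    """Skew-symmetric matrix of w, so that hat(w) @ v = w x v."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues formula: exponential map from R^3 to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

rng = np.random.default_rng(0)
sigma = 0.2                      # angular spread in radians
samples = [exp_so3(rng.normal(0.0, sigma, 3)) for _ in range(1000)]
print(samples[0] @ samples[0].T)   # ~ identity: each sample is a rotation
```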

  8. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a substantial loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
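
    A minimal sketch of background correction under a gamma-signal-plus-normal-noise model: the corrected value is the posterior mean E[S | X = x], computed here by numerical integration. Parameters are illustrative; the R package NormalGamma is the reference implementation.

```python
# Sketch: observed intensity X = S + N with S ~ Gamma(k, theta) and
# N ~ Normal(m, s). The background-corrected intensity is E[S | X=x].
import numpy as np
from scipy.stats import gamma, norm
from scipy.integrate import quad

k, theta = 2.0, 50.0       # gamma signal shape/scale (illustrative)
m, s = 100.0, 10.0         # normal background mean/sd (illustrative)

def corrected_intensity(x):
    """Posterior mean of the signal S given observed intensity x."""
    num = quad(lambda t: t * gamma.pdf(t, k, scale=theta)
               * norm.pdf(x - t, m, s), 0, np.inf)[0]
    den = quad(lambda t: gamma.pdf(t, k, scale=theta)
               * norm.pdf(x - t, m, s), 0, np.inf)[0]
    return num / den

for x in (120.0, 200.0, 400.0):
    print(x, corrected_intensity(x))
```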

  9. The Normal Distribution

    Indian Academy of Sciences (India)

    An optimal way of choosing sample size in an opinion poll is indicated using the normal distribution. Introduction. In this article, the ubiquitous normal distribution is introduced as a convenient approximation for computing binomial probabilities for large values of n. Stirling's formula and the DeMoivre-Laplace theorem ...

  10. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

    The paper discusses the nature of concentration distributions of trace elements in biomedical samples, which were measured using X-ray fluorescence techniques (XRF, TXRF). Our earlier observation that the log-normal distribution describes the measured concentration distributions well is explained here on more general grounds. In particular, the role of the random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the log-normal distribution, which appears when the multiplicative process is driven by a normal distribution, can be generalized to the so-called log-stable distribution. Such a distribution describes a random multiplicative process that is driven not by a normal distribution but by the more general stable distribution, known as Lévy flights. The presented ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, obtained using the conventional (XRF) and total-reflection (TXRF) X-ray fluorescence methods. In particular, the first observation of a log-stable concentration distribution of trace elements is reported and discussed here in detail.
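
    A minimal sketch of the driving mechanism: a multiplicative process whose log-increments are normal yields a log-normal limit, while heavy-tailed stable increments (Lévy flights in the log domain) yield a log-stable limit. Parameters are illustrative.

```python
# Sketch: random multiplicative process, normal vs alpha-stable
# log-increments. The stable case gives a log-stable concentration
# distribution with a much heavier upper tail.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n_steps, n_samples = 100, 2000

# log-normal case: log-concentration is a sum of normal increments
log_c_normal = rng.normal(0.0, 0.05, (n_samples, n_steps)).sum(axis=1)

# log-stable case: increments from a heavy-tailed stable law (alpha < 2)
stable_inc = levy_stable.rvs(alpha=1.7, beta=0.0, scale=0.05,
                             size=(n_samples, n_steps), random_state=rng)
log_c_stable = stable_inc.sum(axis=1)

print(np.percentile(np.exp(log_c_normal), [50, 95, 99.9]))
print(np.percentile(np.exp(log_c_stable), [50, 95, 99.9]))
```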

  11. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index...

  12. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

    The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density...... entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models......

  13. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for the multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
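
    The report's exact algorithm is not given in the abstract; the sketch below shows the standard construction consistent with it: Cholesky-based correlated normal sampling, plus the moment mapping that turns a target log-normal mean vector m and covariance S into parameters of the underlying normal.

```python
# Sketch: correlated sampling for multivariate normal and log-normal
# targets. For the log-normal case, exp() of a correlated normal is
# used after converting the target moments. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_mvn(mean, cov, n):
    """Correlated normal samples via Cholesky factorization."""
    L = np.linalg.cholesky(cov)
    return mean + rng.standard_normal((n, len(mean))) @ L.T

def sample_mvlognormal(m, S, n):
    """Log-normal samples with target mean m and covariance S."""
    m, S = np.asarray(m), np.asarray(S)
    sigma = np.log(1.0 + S / np.outer(m, m))       # underlying normal cov
    mu = np.log(m) - 0.5 * np.diag(sigma)          # underlying normal mean
    return np.exp(sample_mvn(mu, sigma, n))

m = np.array([1.0, 2.0])
S = np.array([[0.04, 0.02], [0.02, 0.09]])
y = sample_mvlognormal(m, S, 200000)
print(y.mean(axis=0), np.cov(y.T))   # should approach m and S
```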

  14. Thermal modelling of normal distributed nanoparticles through thickness in an inorganic material matrix

    Science.gov (United States)

    Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.

    2017-10-01

    Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce or even eliminate thermal bridges. This makes it possible to reach an industrially relevant process that balances the high material cost against the superior thermal insulation properties. Therefore, a methodology is required to measure and describe the nanoscale particle distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is significant and modifies the thermal conductivity of the building material. Hence, this strategy yields a thermal model that predicts the thermal behavior of the nanoscale particles and their distributions. The thermal model is validated by the hot-wire technique. So far, good agreement is found between the numerical results and experimental data for nanoparticles randomly distributed in all directions.

  15. Understanding a Normal Distribution of Data.

    Science.gov (United States)

    Maltenfort, Mitchell G

    2015-12-01

    Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?

  16. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2018-02-26

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down the theoretical foundations for subsequent inference with this model. In particular, we study linear transformations, marginal distributions, selection representations, stochastic representations and hierarchical representations. We also describe an EM-type algorithm for maximum likelihood estimation of the parameters of the model and demonstrate its implementation on a wind dataset. Our family of multivariate distributions unifies and extends many existing models of the literature that can be seen as submodels of our proposal.

  17. New Riemannian Priors on the Univariate Normal Model

    Directory of Open Access Journals (Sweden)

    Salem Said

    2014-07-01

    The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
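
    A minimal sketch of the prior density above, using the standard closed form of Rao's distance between univariate normal laws (a hyperbolic-geometry expression); this closed form is a well-known result, not quoted from the paper.

```python
# Sketch: squared Rao distance on the univariate normal model and the
# unnormalized Riemannian prior density exp(-d^2 / 2*gamma^2).
import numpy as np

def rao_distance(mu1, s1, mu2, s2):
    """Fisher-Rao distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    arg = 1.0 + ((mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2) / (2.0 * s1 * s2)
    return np.sqrt(2.0) * np.arccosh(arg)

def riemannian_prior(mu, s, mu_bar, s_bar, gamma):
    """Density of G(theta_bar, gamma) w.r.t. Riemannian volume, up to
    the normalizing constant."""
    d = rao_distance(mu, s, mu_bar, s_bar)
    return np.exp(-d ** 2 / (2.0 * gamma ** 2))

print(rao_distance(0.0, 1.0, 1.0, 2.0))
print(riemannian_prior(0.0, 1.0, 0.5, 1.5, gamma=1.0))
```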

  18. The exp-normal distribution is infinitely divisible

    OpenAIRE

    Pinelis, Iosif

    2018-01-01

    Let $Z$ be a standard normal random variable (r.v.). It is shown that the distribution of the r.v. $\ln|Z|$ is infinitely divisible; equivalently, the standard normal distribution considered as the distribution on the multiplicative group over $\mathbb{R}\setminus\{0\}$ is infinitely divisible.

  19. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  20. A novel generalized normal distribution for human longevity and other negatively skewed data.

    Science.gov (United States)

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.

  1. A Platoon Dispersion Model Based on a Truncated Normal Distribution of Speed

    Directory of Open Access Journals (Sweden)

    Ming Wei

    2012-01-01

    Understanding platoon dispersion is critical for the coordination of traffic signal control in an urban traffic network. Assuming that platoon speed follows a truncated normal distribution, ranging from a minimum to a maximum speed, this paper develops a piecewise density function that describes platoon dispersion characteristics as the platoon moves from an upstream to a downstream intersection. Based on this density function, the expected number of cars in the platoon that pass the downstream intersection, and the expected number that do not, are calculated. To facilitate coordination in a traffic signal control system, dispersion models for the front and the rear of the platoon are also derived. Finally, a numerical computation for the coordination of successive signals is presented to illustrate the validity of the proposed model.
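
    A minimal sketch of the truncated-normal speed assumption: a car that left the upstream stop line passes a point L metres downstream by time t iff its speed exceeds L/t, so the expected number of cars past the point follows from the truncated normal survival function. All parameters are illustrative.

```python
# Sketch: expected platoon arrivals downstream under a truncated
# normal speed distribution. Speed limits, link length and platoon
# size are illustrative, not the paper's calibration.
from scipy.stats import truncnorm

v_min, v_max = 5.0, 20.0      # m/s, truncation bounds
mu, sigma = 12.0, 3.0         # mean and sd of the parent normal
a, b = (v_min - mu) / sigma, (v_max - mu) / sigma
speed = truncnorm(a, b, loc=mu, scale=sigma)

L, n_cars = 600.0, 25         # link length (m) and platoon size
for t in (30.0, 50.0, 80.0):  # seconds after leaving the upstream stop line
    p_pass = speed.sf(L / t)  # a car passes by time t iff its speed > L/t
    print(f"t={t:.0f}s: expected cars past the point = {n_cars * p_pass:.1f}")
```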

  2. The retest distribution of the visual field summary index mean deviation is close to normal.

    Science.gov (United States)

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test was used to detect any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed, or nearly so, in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
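
    A sketch of the abstract's analysis pipeline on simulated stand-in data: a Shapiro-Wilk test plus a bootstrapped 95% confidence interval for the kurtosis of one observer's repeated MD values.

```python
# Sketch: normality check of repeated mean-deviation (MD) values.
# The MD retests here are simulated stand-ins, not the study data.
import numpy as np
from scipy.stats import shapiro, kurtosis

rng = np.random.default_rng(0)
md = rng.normal(-0.5, 0.8, 40)         # hypothetical MD retests (dB)

w_stat, p_value = shapiro(md)
print(f"Shapiro-Wilk p = {p_value:.3f}")  # > 0.05: no evidence against normality

boot = [kurtosis(rng.choice(md, size=md.size, replace=True))
        for _ in range(10000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
# scipy's kurtosis is excess kurtosis, so a normal distribution gives 0
print(f"95% bootstrap CI for excess kurtosis: [{lo:.2f}, {hi:.2f}]")
```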

  3. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity-modulated radiotherapy with photons and protons rely on numerical dose optimisation. Evaluating normal tissue dose distributions that deviate significantly from common clinical routine, and expressing the desirable properties of a dose distribution mathematically, are both difficult. In essence, a dose evaluation model for normal tissues has to express the tissue-specific volume effect. A formalism of local dose-effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)
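
    As one concrete local dose-effect measure of the kind discussed above, the generalized equivalent uniform dose (gEUD) collapses a dose-volume histogram into a single value whose exponent a encodes the volume effect; the dose-volume data below are illustrative.

```python
# Sketch: generalized equivalent uniform dose over a normalized DVH.
# Large a mimics serial tissues (max-dose driven); a -> 1 mimics
# parallel tissues (mean-dose driven).
import numpy as np

def geud(doses, volumes, a):
    """gEUD = (sum_i v_i * d_i^a)^(1/a) over fractional volumes v_i."""
    v = np.asarray(volumes) / np.sum(volumes)
    return (v @ np.asarray(doses) ** a) ** (1.0 / a)

doses = np.array([10.0, 30.0, 50.0, 65.0])   # Gy, dose bins
volumes = np.array([0.4, 0.3, 0.2, 0.1])     # fractional volumes

print(geud(doses, volumes, a=1.0))    # parallel-like: the mean dose
print(geud(doses, volumes, a=12.0))   # serial-like: close to max dose
```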

  4. Advection-diffusion model for normal grain growth and the stagnation of normal grain growth in thin films

    International Nuclear Information System (INIS)

    Lou, C.

    2002-01-01

    An advection-diffusion model has been set up to describe normal grain growth. In this model grains are divided into different groups according to their topological classes (number of sides of a grain). Topological transformations are modelled by advective and diffusive flows governed by advective and diffusive coefficients respectively, which are assumed to be proportional to topological classes. The ordinary differential equations governing self-similar time-independent grain size distribution can be derived analytically from continuity equations. It is proved that the time-independent distributions obtained by solving the ordinary differential equations have the same form as the time-dependent distributions obtained by solving the continuity equations. The advection-diffusion model is extended to describe the stagnation of normal grain growth in thin films. Grain boundary grooving prevents grain boundaries from moving, and the correlation between neighbouring grains accelerates the stagnation of normal grain growth. After introducing grain boundary grooving and the correlation between neighbouring grains into the model, the grain size distribution is close to a lognormal distribution, which is usually found in experiments. A vertex computer simulation of normal grain growth has also been carried out to make a cross comparison with the advection-diffusion model. The result from the simulation did not verify the assumption that the advective and diffusive coefficients are proportional to topological classes. Instead, we have observed that topological transformations usually occur on certain topological classes. This suggests that the advection-diffusion model can be improved by making a more realistic assumption on topological transformations. (author)

  5. Quantiles for Finite Mixtures of Normal Distributions

    Science.gov (United States)

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)

  6. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon

    2011-08-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew-normal distributions. In particular, we describe the characteristic function of skew-normal, skew-t, and other related distributions. © 2011 Elsevier Inc.

  7. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  8. A general approach to double-moment normalization of drop size distributions

    Science.gov (United States)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
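
    A minimal sketch of the two-moment scaling normalization: with reference moments M_i and M_j, the spectrum collapses onto a dimensionless function h(x); choosing (i, j) = (3, 4) recovers the Testud et al. (2001) normalization with Dm = M4/M3. The exponents follow from dimensional consistency of the scaling form; the measured spectrum is illustrative.

```python
# Sketch: double-moment normalization of a drop size distribution,
# h(x) = N(D) * Mi^(-(j+1)/(j-i)) * Mj^((i+1)/(j-i)),
# x    = D * (Mi/Mj)^(1/(j-i)).
import numpy as np

def moments(D, N, orders):
    dD = D[1] - D[0]                  # uniform grid spacing
    return [float(np.sum(D ** k * N) * dD) for k in orders]

def normalize_dsd(D, N, i=3, j=4):
    """Return x and the scaled, dimensionless spectrum h(x)."""
    Mi, Mj = moments(D, N, (i, j))
    x = D * (Mi / Mj) ** (1.0 / (j - i))
    h = N * Mi ** (-(j + 1.0) / (j - i)) * Mj ** ((i + 1.0) / (j - i))
    return x, h

D = np.linspace(0.1, 6.0, 200)          # drop diameter (mm)
N = 8000.0 * np.exp(-2.2 * D)           # exponential DSD (m^-3 mm^-1)
x, h = normalize_dsd(D, N)
print(x[:3], h[:3])
```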

  9. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

    The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and the facility's general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these

  10. An approach to normal forms of Kuramoto model with distributed delays and the effect of minimal delay

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Ben, E-mail: niubenhit@163.com [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Guo, Yuxiao [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Jiang, Weihua [Department of Mathematics, Harbin Institute of Technology, Harbin 150001 (China)

    2015-09-25

    Heterogeneous delays with a positive lower bound (gap) are taken into consideration in the Kuramoto model. On the Ott–Antonsen manifold, the dynamical transition from incoherence to coherence is mediated by a Hopf bifurcation. We establish a perturbation technique on the complex domain, by which universal normal forms and the stability and criticality of the Hopf bifurcation are obtained. Theoretically, a hysteresis loop is found near the subcritically bifurcated coherent state. With respect to a Gamma-distributed delay with fixed mean and variance, we find that a large gap decreases the Hopf bifurcation value, induces supercritical bifurcations, avoids the hysteresis loop and significantly increases the number of coexisting coherent states. The effect of the gap is finally interpreted from the viewpoint of the excess kurtosis of the Gamma distribution. - Highlights: • Heterogeneously delay-coupled Kuramoto model with minimal delay is considered. • Perturbation technique on complex domain is established for bifurcation analysis. • Hysteresis phenomenon is investigated in a theoretical way. • The effect of excess kurtosis of distributed delays is discussed.

  11. Notes on power of normality tests of error terms in regression models

    International Nuclear Information System (INIS)

    Střelec, Luboš

    2015-01-01

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may invalidate the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. In consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  12. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may invalidate the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. In consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
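
    The paper's RT class of robust tests is not reproduced in the abstract; as a sketch of the kind of power study described, the code below estimates the empirical power of two classical tests (Shapiro-Wilk and D'Agostino K²) on OLS residuals when the true errors are heavy-tailed.

```python
# Sketch: power of classical normality tests on regression residuals
# under a Student-t error alternative. Sample size, degrees of freedom
# and the regression itself are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 50, 2000, 0.05
rejections = {"shapiro": 0, "dagostino": 0}

for _ in range(reps):
    x = rng.uniform(0, 10, n)
    eps = stats.t.rvs(df=3, size=n, random_state=rng)   # non-normal errors
    y = 1.0 + 2.0 * x + eps
    slope, intercept = np.polyfit(x, y, 1)              # OLS fit
    resid = y - (intercept + slope * x)
    rejections["shapiro"] += stats.shapiro(resid).pvalue < alpha
    rejections["dagostino"] += stats.normaltest(resid).pvalue < alpha

for name, count in rejections.items():
    print(f"{name}: empirical power = {count / reps:.2f}")
```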

  13. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    Science.gov (United States)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors, and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when
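
    A minimal sketch of the optimization described in the Methods: scan the Box-Cox parameter λ over a grid and keep the value that maximizes the Shapiro-Wilk P-value of the transformed SUVs; the SUV sample is simulated, not the study data.

```python
# Sketch: pick the Box-Cox lambda maximizing the Shapiro-Wilk p-value.
import numpy as np
from scipy.stats import shapiro, boxcox, lognorm

rng = np.random.default_rng(0)
suv = lognorm.rvs(s=0.6, scale=4.0, size=57, random_state=rng)  # skewed SUVmax

lambdas = np.linspace(-2.0, 2.0, 401)
pvals = [shapiro(boxcox(suv, lmbda=lam)).pvalue for lam in lambdas]
best = lambdas[int(np.argmax(pvals))]
print(f"optimal Box-Cox lambda = {best:.2f}, "
      f"Shapiro-Wilk p = {max(pvals):.3f}")
# lambda = 0 corresponds to the log transform; the optimum need not be 0
```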

  14. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a sampling method usually based on an acceptance-rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms.
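
    The report's subregion scheme is not detailed in the abstract; the sketch below shows the simplest acceptance-rejection normal sampler in the same spirit, using a single Exp(1) envelope for the half-normal density rather than the report's subregions.

```python
# Sketch: standard normal via rejection from an Exp(1) envelope.
# The envelope constant sqrt(2e/pi) gives acceptance probability
# exp(-(x-1)^2 / 2) for a candidate x from the exponential law.
import math
import random

def normal_ar(rng=random):
    """Standard normal by acceptance-rejection plus a random sign."""
    while True:
        x = rng.expovariate(1.0)            # candidate half-normal value
        if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x if rng.random() < 0.5 else -x

samples = [normal_ar() for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)   # should approach 0 and 1
```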

  15. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    Science.gov (United States)

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…

  16. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGVs) are used to control methane emissions in longwall mines by capturing the gas within the overlying fractured strata before it enters the work environment. In order for GGVs to capture more methane and less mine air, the length of the slotted sections and their proximity to the top of the coal bed should be designed based on the potential gas sources and their locations, as well as on the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from the coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
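
    A minimal sketch of the bivariate-normal machinery involved: the conditional distribution of one variable given the other is again normal, so conditional tail probabilities follow directly. Parameter values are illustrative, not the borehole data.

```python
# Sketch: conditional tail probability from a bivariate normal,
# e.g. P(log-displacement > threshold | height above coal bed = x).
import numpy as np
from scipy.stats import norm

mu_x, mu_y = 60.0, 1.5              # depth X (m) and log-displacement Y
sd_x, sd_y, rho = 25.0, 0.6, -0.7   # displacement shrinks with height

def conditional_tail(y_crit, x):
    """P(Y > y_crit | X = x) from the bivariate normal conditionals."""
    cond_mean = mu_y + rho * sd_y / sd_x * (x - mu_x)
    cond_sd = sd_y * np.sqrt(1.0 - rho ** 2)
    return norm.sf(y_crit, cond_mean, cond_sd)

for depth in (20.0, 60.0, 100.0):
    print(depth, conditional_tail(y_crit=1.0, x=depth))
```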

  17. Ventilation-perfusion distribution in normal subjects.

    Science.gov (United States)

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.

  18. Evaluating Transfer Entropy for Normal and y-Order Normal Distributions

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková-Schindler, Kateřina; Toulias, T. L.; Kitsos, C. P.

    2016-01-01

    Roč. 17, č. 5 (2016), s. 1-20 ISSN 2231-0851 Institutional support: RVO:67985556 Keywords : Transfer entropy * time series * Kullback-Leibler divergence * causality * generalized normal distribution Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2016/AS/hlavackova-schindler-0461261.pdf

  19. The PDF of fluid particle acceleration in turbulent flow with underlying normal distribution of velocity fluctuations

    International Nuclear Information System (INIS)

    Aringazin, A.K.; Mazhitov, M.I.

    2003-01-01

    We describe a formal procedure to obtain and specify the general form of a marginal distribution for the Lagrangian acceleration of a fluid particle in developed turbulent flow, using a Langevin-type equation and the assumption that the velocity fluctuation u follows a normal distribution with zero mean, in accord with the Heisenberg-Yaglom picture. For a particular representation, β = exp[u], of the fluctuating parameter β, we reproduce the underlying log-normal distribution and the associated marginal distribution, which was found to be in very good agreement with the new experimental data by Crawford, Mordant, and Bodenschatz on the acceleration statistics. We discuss possibilities for refining the log-normal model.

  20. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon; Genton, Marc G.

    2011-01-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew

  1. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fit based on simulation, and a scaling function is added to adjust for the conditions of the experimental system. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for low- and high-contrast dielectric distributions. (paper)

  2. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming; Jin, Ick-Hoon

    2013-01-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.

  3. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming

    2013-08-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.
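
    A toy sketch of the MCMH idea on a continuous model f(x; θ) = exp(−θx⁴), whose normalizing constant is treated as intractable: auxiliary samples drawn at the current θ give an importance-sampling estimate of Z(θ')/Z(θ), which replaces the unknown ratio in the Metropolis-Hastings acceptance probability. The letter's applications are autologistic and random graph models; everything below is a hypothetical stand-in.

```python
# Toy MCMH sketch: posterior over theta for data from f(x; theta)/Z(theta),
# with Z(theta')/Z(theta) estimated as the mean of f(y; theta')/f(y; theta)
# over auxiliary draws y ~ f(.; theta).
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, th: np.exp(-th * x ** 4)        # unnormalized model density

def sample_model(theta, m, step=0.5, burn=500):
    """Inner random-walk Metropolis draws from f(.; theta)."""
    x, out = 0.0, []
    for i in range(burn + m):
        prop = x + step * rng.standard_normal()
        if rng.random() < f(prop, theta) / f(x, theta):
            x = prop
        if i >= burn:
            out.append(x)
    return np.array(out)

x_obs = np.array([0.3, -0.5, 0.8, 0.1, -0.2])    # hypothetical data
log_lik = lambda th: np.log(f(x_obs, th)).sum()  # unnormalized part

theta, chain = 1.0, []
for _ in range(2000):                 # outer MCMH chain over theta
    prop = abs(theta + 0.3 * rng.standard_normal())   # reflected proposal
    aux = sample_model(theta, m=200)
    z_ratio = np.mean(f(aux, prop) / f(aux, theta))   # est. Z(prop)/Z(theta)
    log_acc = (log_lik(prop) - log_lik(theta)
               - x_obs.size * np.log(z_ratio))        # flat prior, theta > 0
    if np.log(rng.random()) < log_acc:
        theta = prop
    chain.append(theta)
print("posterior mean of theta:", np.mean(chain[500:]))
```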

  4. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  5. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  7. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in a reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is the normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior for (μ, h). The desired estimates are obtained either from the prior or from the posterior formed by combining the prior with sampling data. Computing methods are described and examples are presented to give demonstrations.
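
    A minimal sketch of the conjugate machinery: the standard normal-gamma posterior update for (μ, h), followed by a reliability estimate from the Student-t posterior predictive. The prior numbers are illustrative and not derived from maximum entropy as in the paper.

```python
# Sketch: normal-gamma prior NG(mu0, k0, a0, b0) on (mu, h), h the
# precision, updated with a small sample; reliability = P(X > threshold)
# under the Student-t posterior predictive.
import numpy as np
from scipy import stats

def ng_posterior(mu0, k0, a0, b0, data):
    n, xbar = len(data), np.mean(data)
    ss = np.sum((data - xbar) ** 2)
    k_n = k0 + n
    mu_n = (k0 * mu0 + n * xbar) / k_n
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * ss + k0 * n * (xbar - mu0) ** 2 / (2.0 * k_n)
    return mu_n, k_n, a_n, b_n

data = np.array([10.2, 9.8, 10.5, 10.1, 9.9])          # small sample
mu_n, k_n, a_n, b_n = ng_posterior(10.0, 1.0, 2.0, 1.0, data)

# posterior predictive: Student t with 2*a_n degrees of freedom
scale = np.sqrt(b_n * (k_n + 1.0) / (a_n * k_n))
reliability = stats.t.sf(9.0, df=2 * a_n, loc=mu_n, scale=scale)
print(f"P(X > 9.0) = {reliability:.4f}")
```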

  8. Normal and Student's t distributions and their applications

    CERN Document Server

    Ahsanullah, Mohammad; Shakil, Mohammad

    2014-01-01

    The most important properties of the normal and Student t-distributions are presented. A number of applications of these properties are demonstrated. New related results dealing with the distributions of the sum, product and ratio of independent normal and Student t random variables are presented. The material will be useful to advanced undergraduate and graduate students and practitioners in the various fields of science and engineering.

  9. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.; Ferreira, Clé cio S.; Genton, Marc G.

    2018-01-01

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down

  10. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets is of critical importance. If the percentage of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
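
    A minimal sketch of fitting the three-parameter log-normal (shape, shift, scale) to void radii; scipy's loc parameter plays the role of the shift. The radii are synthetic stand-ins for a void catalog.

```python
# Sketch: three-parameter log-normal ML fit to synthetic void radii.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
true_shape, true_shift, true_scale = 0.5, 2.0, 10.0
radii = lognorm.rvs(true_shape, loc=true_shift, scale=true_scale,
                    size=2000, random_state=rng)   # synthetic radii (Mpc/h)

shape, shift, scale = lognorm.fit(radii)   # 3-parameter ML fit
print(f"shape={shape:.2f}, shift={shift:.2f}, scale={scale:.2f}")

# skewness of the fitted law, one of the shape parameters the abstract
# links to environment (maximum tree depth)
print("fitted skewness:", lognorm.stats(shape, loc=shift,
                                        scale=scale, moments="s"))
```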

  11. Evaluation of the Weibull and log-normal distribution functions as survival models of Escherichia coli under isothermal and non-isothermal conditions.

    Science.gov (United States)

    Aragao, Glaucia M F; Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2007-11-01

    Published survival curves of Escherichia coli in two growth media, with and without the presence of salt, at various temperatures and in a Greek eggplant salad having various levels of essential oil, all had a characteristic downward concavity when plotted on semi-logarithmic coordinates. Some also exhibited what appeared as a 'shoulder' of considerable length. Regardless of whether a shoulder was noticed, the survival pattern could be considered as a manifestation of an underlying unimodal distribution of the cells' death times. Mathematically, the data could be described equally well by the Weibull and log-normal distribution functions, which had similar modes, means, standard deviations and coefficients of skewness. When plotted in their probability density function (PDF) form, the curves also appeared very similar visually. This enabled us to quantify and compare the effect of temperature or essential oil concentration on the organism's survival in terms of these temporal distributions' characteristics. Increased lethality was generally expressed in a shorter mean and mode, a smaller standard deviation and increased overall symmetry as judged by the distributions' degree of skewness. The 'shoulder', as expected, simply indicated that the distribution's standard deviation was much smaller than its mode. Rate models based on the two distribution functions could be used to predict non-isothermal survival patterns. They were derived on the assumption that the momentary inactivation rate is the isothermal rate at the momentary temperature at a time that corresponds to the momentary survival ratio. In this application, however, the Weibullian model with a fixed power was not only simpler and more convenient mathematically than the one based on the log-normal distribution, but it also provided more accurate estimates of the dynamic inactivation patterns.
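
    The reported equivalence of the two models is easy to probe numerically: fit both distributions to the same death times and compare the implied survival curves. A sketch on synthetic data (the parameter values are arbitrary, not the paper's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
death_times = stats.weibull_min.rvs(c=2.5, scale=10.0, size=500, random_state=rng)

# Fit both candidate models with the location fixed at zero
c, _, wscale = stats.weibull_min.fit(death_times, floc=0)
s, _, lscale = stats.lognorm.fit(death_times, floc=0)

t = np.linspace(0.1, death_times.max(), 200)
surv_w = stats.weibull_min.sf(t, c, scale=wscale)   # survival S(t) = 1 - CDF
surv_l = stats.lognorm.sf(t, s, scale=lscale)
print(np.max(np.abs(surv_w - surv_l)))              # the two curves stay close
```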

  12. Computer modeling the boron compound factor in normal brain tissue

    International Nuclear Information System (INIS)

    Gavin, P.R.; Huiskamp, R.; Wheeler, F.J.; Griebenow, M.L.

    1993-01-01

    The macroscopic distribution of borocaptate sodium (Na 2 B 12 H 11 SH or BSH) in normal tissues has been determined and can be accurately predicted from the blood concentration. The compound para-borono-phenylalanine (p-BPA) has also been studied in dogs and normal tissue distribution has been determined. The total physical dose required to reach a biological isoeffect appears to increase directly as the proportion of boron capture dose increases. This effect, together with knowledge of the macrodistribution, led to estimates of the influence of the microdistribution of the BSH compound. This paper reports a computer model that was used to predict the compound factor for BSH and p-BPA and, hence, the equivalent radiation in normal tissues. The compound factor would need to be calculated for other compounds with different distributions. This information is needed to design appropriate normal tissue tolerance studies for different organ systems and/or different boron compounds

  13. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterium (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
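
    The Box-Cox family contains the log-transform as the special case lambda = 0, which is what makes it a natural broadening of the lognormal response time model. A minimal sketch on synthetic skewed response times (parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt = stats.lognorm.rvs(s=0.4, scale=3.0, size=1000, random_state=rng)  # skewed RTs

log_rt = np.log(rt)               # the conventional log-transform (lambda = 0)
bc_rt, lam = stats.boxcox(rt)     # Box-Cox chooses lambda by maximum likelihood

# Compare how close each transform gets to normality
print(lam, stats.shapiro(log_rt).pvalue, stats.shapiro(bc_rt).pvalue)
```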

  14. Normal distribution of standing balance for healthy Danish children

    DEFF Research Database (Denmark)

    Pedersen, Line Kjeldgaard; Ghasemi, Habib; Rahbek, Ole

    2013-01-01

    Title: Normal distribution of standing balance for healthy Danish children – Reproducibility of parameters of balance. Authors: Line Kjeldgaard Pedersen, Habib Ghasemi, Ole Rahbek, Bjarne Møller-Madsen. Background: Pedobarographic measurements are increasingly used in children...

  15. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained via the backward transformation of the standard normal ...
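
    As an illustration of the Johnson idea, the sketch below fits the S_U family by maximum likelihood (the article instead estimates the transformation from percentiles) and applies the forward and backward transformations; the data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.5, size=1000)   # skewed empirical data

a, b, loc, scale = stats.johnsonsu.fit(x)        # Johnson S_U parameters
z = a + b * np.arcsinh((x - loc) / scale)        # forward transform, z ~ N(0, 1)
x_back = loc + scale * np.sinh((z - a) / b)      # backward transform recovers x

print(stats.shapiro(z).pvalue, np.allclose(x, x_back))
```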

  16. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk, Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods have been reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
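
    A sketch of the Box-Cox route to capability indices mentioned above: transform the data and the specification limits with the fitted lambda, then compute Cp and Cpk on the transformed scale. The data and limits are hypothetical, and the closed-form limit transform assumes lambda != 0:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
resistivity = rng.lognormal(mean=1.0, sigma=0.3, size=500)   # skewed measurements
lsl, usl = 1.0, 6.0                                          # hypothetical spec limits

y, lam = stats.boxcox(resistivity)        # transform the data...
lsl_t = (lsl**lam - 1) / lam              # ...and the spec limits (lambda != 0)
usl_t = (usl**lam - 1) / lam

mu, sigma = y.mean(), y.std(ddof=1)
cp = (usl_t - lsl_t) / (6 * sigma)
cpk = min(usl_t - mu, mu - lsl_t) / (3 * sigma)
print(lam, cp, cpk)
```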

  17. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    Science.gov (United States)

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the
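
    The three tests named above are all available in SciPy (kurtosistest implements the Anscombe-Glynn statistic). A minimal sketch on synthetic MUAC-like data, with rounding standing in for digit preference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
muac = rng.normal(145.0, 12.0, size=900).round()   # mm; rounding mimics digit preference

print(stats.shapiro(muac).pvalue)        # overall departure from normality (Shapiro-Wilk)
print(stats.skewtest(muac).pvalue)       # skewness (D'Agostino)
print(stats.kurtosistest(muac).pvalue)   # kurtosis (Anscombe-Glynn)
```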

  18. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    Science.gov (United States)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08 respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecast with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by an NIG distribution.
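
    Although the study used R, the same fit-and-test workflow can be sketched in Python with SciPy's norminvgauss; the parameters and data below are synthetic, not the quoted exchange rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
prices = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=100.0, scale=5.0,
                                size=1000, random_state=rng)

params = stats.norminvgauss.fit(prices)        # (a, b, loc, scale) by max. likelihood
ks = stats.kstest(prices, 'norminvgauss', args=params)
print(params, ks.pvalue)                       # large p-value -> adequate fit
```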

  19. Mast cell distribution in normal adult skin

    NARCIS (Netherlands)

    A.S. Janssens (Artiena Soe); R. Heide (Rogier); J.C. den Hollander (Jan); P.G.M. Mulder (P. G M); B. Tank (Bhupendra); A.P. Oranje (Arnold)

    2005-01-01

    AIMS: To investigate mast cell distribution in normal adult skin to provide a reference range for comparison with mastocytosis. METHODS: Mast cells (MCs) were counted in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders in adults.

  20. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.

  1. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time varying higher order moments of the risk neutral distribution. Parameter inference using Gibbs sampling is explained and we detail how to compute risk neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared...

  2. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  3. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    Directory of Open Access Journals (Sweden)

    Eckhard Limpert

    Full Text Available BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-normal) approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ (times-divide), and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation, s*, in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
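
    The mean* and s* characterization is straightforward to compute: take the mean and standard deviation on the log scale and exponentiate. A minimal sketch on synthetic log-normal data:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.lognormal(mean=2.0, sigma=0.5, size=1000)

gm = np.exp(np.mean(np.log(x)))            # multiplicative (geometric) mean, mean*
gsd = np.exp(np.std(np.log(x), ddof=1))    # multiplicative standard deviation, s*

# For log-normal data, roughly 68% of values lie in [mean*/s*, mean* x s*]
lo, hi = gm / gsd, gm * gsd
print(gm, gsd, np.mean((x >= lo) & (x <= hi)))
```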

  4. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  5. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
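
    The effect is easy to observe numerically: sum a modest number of highly skewed positive summands, then compare a normal fit against a log-normal fit of the sums. A sketch assuming log-normal summands, one convenient choice from the class of positive variables the paper considers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Sums of 10 i.i.d. positive, highly skewed summands (an additive process)
sums = rng.lognormal(mean=0.0, sigma=1.5, size=(20000, 10)).sum(axis=1)

p_norm = stats.kstest((sums - sums.mean()) / sums.std(), 'norm').pvalue
shape, loc, scale = stats.lognorm.fit(sums)
p_lnorm = stats.kstest(sums, 'lognorm', args=(shape, loc, scale)).pvalue
print(p_norm, p_lnorm)   # the log-normal typically describes the sums far better
```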

  6. Dynamic models for distributed generation resources

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S. [BPR Energie, Sherbrooke, PQ (Canada)

    2010-07-01

    Distributed resources can impact the performance of host power systems during both normal and abnormal system conditions. This PowerPoint presentation discussed the use of dynamic models for identifying potential interaction problems between interconnected systems. The models were designed to simulate steady state behaviour as well as transient responses to system disturbances. The distributed generators included directly coupled and electronically coupled generators. The directly coupled generator was driven by wind turbines. Simplified models of grid-side inverters, electronically coupled wind generators and doubly-fed induction generators (DFIGs) were presented. The responses of DFIGs to wind variations were evaluated. Synchronous machine and electronically coupled generator responses were compared. The system model components included load models, generators, protection systems, and system equivalents. Frequency responses to islanding events were reviewed. The study demonstrated that accurate simulations are needed to predict the impact of distributed generation resources on the performance of host systems. Advances in distributed generation technology have outpaced the development of models needed for integration studies. tabs., figs.

  7. A general approach to double-moment normalization of drop size distributions

    NARCIS (Netherlands)

    Lee, G.W.; Zawadzki, I.; Szyrmer, W.; Sempere Torres, D.; Uijlenhoet, R.

    2004-01-01

    Normalization of drop size distributions (DSDs) is reexamined here. First, an extension of the scaling normalization that uses one moment of the DSD as a scaling parameter to a more general scaling normalization that uses two moments as scaling parameters of the normalization is presented. In

  8. The Weight of Euro Coins: Its Distribution Might Not Be as Normal as You Would Expect

    Science.gov (United States)

    Shkedy, Ziv; Aerts, Marc; Callaert, Herman

    2006-01-01

    Classical regression models, ANOVA models and linear mixed models are just three examples (out of many) in which the normal distribution of the response is an essential assumption of the model. In this paper we use a dataset of 2000 euro coins containing information (up to the milligram) about the weight of each coin, to illustrate that the…

  9. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
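
    The point is easily demonstrated: simulate an ANOVA-type model with a strong group effect and perfectly normal errors; the raw response fails a normality test while the residuals pass. A minimal sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
group = np.repeat([0.0, 5.0, 10.0], 200)            # strong group effect (ANOVA-like)
y = group + rng.normal(0.0, 1.0, size=group.size)   # but perfectly normal errors

# The raw response is trimodal, so it fails a normality test...
print(stats.shapiro(y).pvalue)

# ...yet the model residuals, which the assumption is actually about, pass
fitted = np.array([y[group == g].mean() for g in group])
print(stats.shapiro(y - fitted).pvalue)
```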

  10. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  11. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Science.gov (United States)

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2013-01-01

    Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We firstly plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found the linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found the persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by the Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit demonstrated the good performance in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components in the early (probability over 5 generations, whereas abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.

  12. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, the test statistics of Cook are oversized. Researchers have found that using conventional tests is dangerous, though the best performer among these is an HCCME (heteroskedasticity-consistent covariance matrix estimator). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroskedasticity-consistent covariance matrices has been derived, and the results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained by fitting nonlinear least squares with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.

  13. Model for the angular distribution of sky radiance

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, F C; Brunger, A P

    1979-08-01

    A flexible mathematical model is introduced which describes the radiance of the dome of the sky under various conditions. This three-component continuous distribution (TCCD) model is composed of the superposition of three separate terms (the isotropic, circumsolar and horizon-brightening terms), each representing the contribution of a particular sky characteristic. In use, a particular sky condition is characterized by the values of the coefficients of each of these three terms, defining the distribution of the total diffuse component. The TCCD model has been demonstrated to fit both the normalized clear-sky data and the normalized overcast-sky data with an RMS error of about ten percent of the mean overall sky radiance. By extension the model could describe variable or partly clouded sky conditions. The model can aid in improving the prediction of solar collector performance.

  14. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of an optimal portfolio is a method that can help investors to minimize risks and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for the asset-weighting model. The BL model has two fundamental problems: the assumption of normality, and the estimation of the parameters of the Bayesian market prior when the data do not come from a normal distribution. This study provides an alternative solution in which the stock returns and the investor views of the BL model are modelled with non-normal distributions.

  15. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs (nuclear power plant probabilistic safety assessments). This paper proposes an approach to generating log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can help to improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
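
    The construction can be sketched with NumPy, whose default Generator documents a ziggurat method for its standard normal draws; exponentiating those draws gives log-normal variates (the parameters below are arbitrary):

```python
import numpy as np

# NumPy's Generator.standard_normal uses a ziggurat algorithm internally
rng = np.random.default_rng(11)

def lognormal_via_ziggurat(mu, sigma, size):
    return np.exp(mu + sigma * rng.standard_normal(size))

x = lognormal_via_ziggurat(0.0, 0.5, 100_000)
print(x.mean(), np.exp(0.0 + 0.5**2 / 2))   # sample mean vs theoretical mean
```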

  16. Austenite Grain Size Estimation from Chord Lengths of Logarithmic-Normal Distribution

    Directory of Open Access Journals (Sweden)

    Adrian H.

    2017-12-01

    Full Text Available A linear section of grains in a polyhedral material microstructure is a system of chords. The mean length of the chords is the linear grain size of the microstructure. For the prior austenite grains of low-alloy structural steels, the chord length is a random variable of gamma or logarithmic-normal distribution. Statistical grain size estimation belongs to the quantitative metallography problems. The so-called point estimation is a well-known procedure. The interval estimation (grain size confidence interval) for the gamma distribution was given elsewhere; for the logarithmic-normal distribution it is the subject of the present contribution. The statistical analysis is analogous to the one for the gamma distribution.

  17. Kullback–Leibler Divergence of the γ–ordered Normal over t–distribution

    OpenAIRE

    Toulias, T-L.; Kitsos, C-P.

    2012-01-01

    The aim of this paper is to evaluate and study the Kullback–Leibler divergence of the γ–ordered Normal distribution, a generalization of Normal distribution emerged from the generalized Fisher’s information measure, over the scaled t–distribution. We investigate this evaluation through a series of bounds and approximations while the asymptotic behavior of the divergence is also studied. Moreover, we obtain a generalization of the known Kullback–Leibler information measure betwe...

  18. Statistical properties of the normalized ice particle size distribution

    Science.gov (United States)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

    Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation implies that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to reduce again the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean maximum dimension diameter -T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000

  19. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  20. Linking the Value Assessment of Oil and Gas Firms to Ambidexterity Theory Using a Mixture of Normal Distributions

    Directory of Open Access Journals (Sweden)

    Casault Sébastien

    2016-05-01

    Full Text Available Oil and gas exploration and production firms have return profiles that are not easily explained by current financial theory: the variation in their market returns is non-Gaussian. In this paper, the nature and underlying reason for these significant deviations from expected behavior are considered. Understanding these differences in financial market behavior is important for a wide range of reasons, including: assessing investments, investor relations, decisions to raise capital, and assessment of firm and management performance. We show that using a "thicker tailed" mixture of two normal distributions offers a significantly more accurate model than the traditional Gaussian approach in describing the behavior of the value of oil and gas firms. This mixture of normal distributions is also more effective in bridging the gap between management theory and practice without the need to introduce complex time-sensitive GARCH and/or jump diffusion dynamics. The mixture distribution is consistent with ambidexterity theory, which suggests firms operate in two distinct states driven by the primary focus of the firm: an exploration state with high uncertainty and an exploitation (or production) state with lower uncertainty. The findings have direct implications for improving the accuracy of real option pricing techniques and future analyses of risk management. Traditional option pricing models assume that commercial returns from these assets are described by a normal random walk. However, a normal random walk model discounts the possibility of large changes to the marketplace from events such as the discovery of important reserves or the introduction of new technology. The mixture distribution proves to be well suited to inherently describe the unusually large risks and opportunities associated with oil and gas production and exploration. A significance testing study of 554 oil and gas exploration and production firms empirically supports using a mixture
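
    A two-component normal mixture of this kind can be fitted with an EM-based routine such as scikit-learn's GaussianMixture. A sketch on synthetic two-regime returns, with regime parameters invented purely for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)
# Hypothetical returns from a calm "exploitation" regime and a volatile
# "exploration" regime
returns = np.concatenate([rng.normal(0.002, 0.02, 800),
                          rng.normal(0.000, 0.08, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(returns)
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel())
```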

  1. Exact, time-independent estimation of clone size distributions in normal and mutated cells.

    Science.gov (United States)

    Roshan, A; Jones, P H; Greenman, C D

    2014-10-06

    Biological tools such as genetic lineage tracing, three-dimensional confocal microscopy and next-generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Understanding population-wide clone size distributions in vivo is complicated by multiple cell types within observed tissues, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parametrize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors, this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, alternative, exact, formulations of classic Luria-Delbrück-type problems emerge. This approach can be extended beyond neutral models of mutant clonal evolution. Applications of these approaches are twofold. First, we resolve the probability of progenitor cells generating proliferating or differentiating progeny in clonal lineage tracing experiments in vivo or cell culture assays where clone age is not known. Second, we model mutation frequency distributions that deep sequencing of subclonal samples produce.

  2. Improved Root Normal Size Distributions for Liquid Atomization

    Science.gov (United States)

    2015-11-01


  3. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

    The statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimation. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
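
    SciPy's truncnorm expresses this construction directly: truncating the normal at zero keeps failure times positive while retaining its age-dependent shape. A sketch with hypothetical parameters:

```python
import numpy as np
from scipy import stats

mu, sigma = 1000.0, 200.0           # hypothetical time-to-failure mean and SD (hours)
a, b = (0.0 - mu) / sigma, np.inf   # truncate at zero so failure times are positive

ttf = stats.truncnorm(a, b, loc=mu, scale=sigma)
print(ttf.sf(800.0))                # reliability R(t): probability of surviving past t
```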

  4. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The pricing construction is based on a sequential comparison of supply prices. We prove that, under standard assumptions imposed on the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the log-normal law of distribution. We verify the accordance of empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint-Petersburg (Russia). For establishing this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Basing on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode, its most probable value, it follows that prices valued by the mathematical expectation are systematically overstated.

  5. Modeling a Distribution of Mortgage Credit Losses

    Czech Academy of Sciences Publication Activity Database

    Gapko, Petr; Šmíd, Martin

    2010-01-01

    Roč. 23, č. 23 (2010), s. 1-23 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045 Grant - others:Univerzita Karlova - GAUK(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Keywords : Credit Risk * Mortgage * Delinquency Rate * Generalized Hyperbolic Distribution * Normal Distribution Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/gapko-modeling a distribution of mortgage credit losses-ies wp.pdf

  6. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  7. Mast cell distribution in normal adult skin.

    Science.gov (United States)

    Janssens, A S; Heide, R; den Hollander, J C; Mulder, P G M; Tank, B; Oranje, A P

    2005-03-01

    To investigate mast cell distribution in normal adult skin to provide a reference range for comparison with mastocytosis. Mast cells (MCs) were counted in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders in adults. There was an uneven distribution of MCs in different body sites using the anti-tryptase monoclonal antibody technique. Numbers of MCs on the trunk, upper arm, and upper leg were similar, but were significantly different from those found on the lower leg and forearm. Two distinct groups were formed--proximal and distal. There were 77.0 MCs/mm2 at proximal body sites and 108.2 MCs/mm2 at distal sites. Adjusted for the adjacent diagnosis and age, this difference was consistent. The numbers of MCs in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders were not different from those in the control group. Differences in the numbers of MCs between the distal and the proximal body sites must be considered when MCs are counted for a reliable diagnosis of mastocytosis. A pilot study in patients with mastocytosis underlined the variation in the numbers of MCs in mastocytosis and normal skin, but showed a considerable overlap. The observed numbers of MCs in adults cannot be extrapolated to children. MC numbers varied significantly between proximal and distal body sites and these differences must be considered when MCs are counted for a reliable diagnosis of mastocytosis. There was a considerable overlap between the numbers of MCs in mastocytosis and normal skin.

  8. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

    A simplified biokinetic model for 137Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with quite a lot of bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)
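
    Of the sampling schemes compared above, Latin Hypercube sampling is readily sketched with SciPy's qmc module (SciPy >= 1.7). The log-normal marginals below are hypothetical and, unlike the paper's empirical posterior, mutually independent:

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=6, seed=13)   # six biokinetic parameters
u = sampler.random(n=256)                    # stratified uniforms in [0, 1]^6

# Map each column to a hypothetical log-normal marginal via the inverse CDF
mu, sigma = np.zeros(6), 0.5 * np.ones(6)
samples = stats.lognorm.ppf(u, s=sigma, scale=np.exp(mu))
print(samples.shape, samples.mean(axis=0))
```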

  9. Comparison of CSF Distribution between Idiopathic Normal Pressure Hydrocephalus and Alzheimer Disease.

    Science.gov (United States)

    Yamada, S; Ishikawa, M; Yamamoto, K

    2016-07-01

    CSF volumes in the basal cistern and Sylvian fissure are increased in both idiopathic normal pressure hydrocephalus and Alzheimer disease, though the differences in these volumes in idiopathic normal pressure hydrocephalus and Alzheimer disease have not been well-described. Using CSF segmentation and volume quantification, we compared the distribution of CSF in idiopathic normal pressure hydrocephalus and Alzheimer disease. CSF volumes were extracted from T2-weighted 3D spin-echo sequences on 3T MR imaging and quantified semi-automatically. We compared the volumes and ratios of the ventricles and subarachnoid spaces after classification in 30 patients diagnosed with idiopathic normal pressure hydrocephalus, 10 with concurrent idiopathic normal pressure hydrocephalus and Alzheimer disease, 18 with Alzheimer disease, and 26 control subjects 60 years of age or older. Brain to ventricle ratios at the anterior and posterior commissure levels and 3D volumetric convexity cistern to ventricle ratios were useful indices for the differential diagnosis of idiopathic normal pressure hydrocephalus or idiopathic normal pressure hydrocephalus with Alzheimer disease from Alzheimer disease, similar to the z-Evans index and callosal angle. The most distinctive characteristics of the CSF distribution in idiopathic normal pressure hydrocephalus were small convexity subarachnoid spaces and the large volume of the basal cistern and Sylvian fissure. The distribution of the subarachnoid spaces in the idiopathic normal pressure hydrocephalus with Alzheimer disease group was the most deformed among these 3 groups, though the mean ventricular volume of the idiopathic normal pressure hydrocephalus with Alzheimer disease group was intermediate between that of the idiopathic normal pressure hydrocephalus and Alzheimer disease groups. The z-axial expansion of the lateral ventricle and compression of the brain just above the ventricle were the common findings in the parameters for differentiating

  10. Multivariate stochastic simulation with subjective multivariate normal distributions

    Science.gov (United States)

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
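
    When assessed correlations must be honored, the standard device is to draw independent standard normals and color them with a Cholesky factor of the covariance matrix. A minimal sketch with invented inputs:

```python
import numpy as np

rng = np.random.default_rng(14)
mean = np.array([100.0, 50.0])            # hypothetical input means
cov = np.array([[25.0, 12.0],             # subjectively assessed covariance
                [12.0, 16.0]])

L = np.linalg.cholesky(cov)               # cov = L @ L.T
z = rng.standard_normal((10_000, 2))
draws = mean + z @ L.T                    # correlated normal variates

print(np.corrcoef(draws, rowvar=False))   # recovers corr = 12/sqrt(25*16) = 0.6
```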

  11. Validation of MCDS by comparison of predicted with experimental velocity distribution functions in rarefied normal shocks

    Science.gov (United States)

    Pham-Van-diep, Gerald C.; Erwin, Daniel A.

    1989-01-01

    Velocity distribution functions in normal shock waves in argon and helium are calculated using Monte Carlo direct simulation. These are compared with experimental results for argon at M = 7.18 and for helium at M = 1.59 and 20. For both argon and helium, the variable-hard-sphere (VHS) model is used for the elastic scattering cross section, with the velocity dependence derived from a viscosity-temperature power-law relationship in the way normally used by Bird (1976).

  12. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge as the dependence structure implied by the model needs to be captured at different scales, while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions, and simultaneously achieve feasible inference for increasingly large data sets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.

  13. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, rather than only the centers, vertices, etc. used by the prevailing representative-type approach in the literature. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  14. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    CERN Document Server

    Smolyar, V A; Eremin, V V

    2002-01-01

    In terms of a kinetic-equation diffusion model for a beam of electrons incident on a target along the normal, analytical formulae are derived for the distributions of separated (deposited) energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for a plane directed electron source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the Spencer data derived from the accurate solution of the Bethe equation, which is also the source equation of the diffusion model.

  15. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    International Nuclear Information System (INIS)

    Smolyar, V.A.; Eremin, A.V.; Eremin, V.V.

    2002-01-01

    In terms of a kinetic-equation diffusion model for a beam of electrons incident on a target along the normal, analytical formulae are derived for the distributions of separated (deposited) energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for a plane directed electron source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the Spencer data derived from the accurate solution of the Bethe equation, which is also the source equation of the diffusion model [ru]

  16. Distributive justice and cognitive enhancement in lower, normal intelligence.

    Science.gov (United States)

    Dunlop, Mikael; Savulescu, Julian

    2014-01-01

    There exists a significant disparity within society between individuals in terms of intelligence. While intelligence varies naturally throughout society, the extent to which this impacts on the life opportunities it affords to each individual is greatly undervalued. Intelligence appears to have a prominent effect over a broad range of social and economic life outcomes. Many key determinants of well-being correlate highly with the results of IQ tests, and other measures of intelligence, and an IQ of 75 is generally accepted as the most important threshold in modern life. The ability to enhance our cognitive capacities offers an exciting opportunity to correct disabling natural variation and inequality in intelligence. Pharmaceutical cognitive enhancers, such as modafinil and methylphenidate, have been shown to have the capacity to enhance cognition in normal, healthy individuals. Perhaps of most relevance is the presence of an 'inverted U effect' for most pharmaceutical cognitive enhancers, whereby the degree of enhancement increases as intelligence levels deviate further below the mean. Although enhancement, including cognitive enhancement, has been much debated recently, we argue that there are egalitarian reasons to enhance individuals with low but normal intelligence. Under egalitarianism, cognitive enhancement has the potential to reduce opportunity inequality and contribute to relative income and welfare equality in the lower, normal intelligence subgroup. Cognitive enhancement use is justifiable under prioritarianism through various means of distribution; selective access to the lower, normal intelligence subgroup, universal access, or paradoxically through access primarily to the average and above average intelligence subgroups. Similarly, an aggregate increase in social well-being is achieved through similar means of distribution under utilitarianism. In addition, the use of cognitive enhancement within the lower, normal intelligence subgroup negates, or at

  17. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
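
    The asymmetric distribution-of-the-product interval is easy to approximate by Monte Carlo: simulate the two path coefficients from their normal sampling distributions and take percentiles of the product. A sketch with hypothetical coefficients and standard errors:

```python
import numpy as np

rng = np.random.default_rng(15)
a, se_a = 0.30, 0.10   # hypothetical path a and its standard error
b, se_b = 0.40, 0.12   # hypothetical path b and its standard error

# Distribution of the product a*b, approximated by simulation
prod = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
lo, hi = np.percentile(prod, [2.5, 97.5])
print(lo, hi)   # asymmetric, unlike the normal-theory interval ab +/- 1.96*SE
```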

  18. Non-linear learning in online tutorial to enhance students’ knowledge on normal distribution application topic

    Science.gov (United States)

    Kartono; Suryadi, D.; Herman, T.

    2018-01-01

    This study aimed to analyze the enhancement from non-linear learning (NLL) in online tutorial (OT) content on students' knowledge of normal distribution application (KONDA). KONDA is a competence expected to be achieved after students have studied the topic of normal distribution application in the course Education Statistics. The analysis was performed with a quasi-experimental study design. The subjects were divided into an experimental class, which was given OT content in the NLL model, and a control class, which was given OT content in a conventional learning (CL) model. Data used in this study were the results of online objective tests measuring students' statistical prior knowledge (SPK) and students' pre- and post-test KONDA. Statistical tests of KONDA gain scores showed that, for students with low and moderate SPK scores, those who learned the OT content with the NLL model outperformed those who learned it with the CL model. For students with high SPK scores, the gain scores under the two models were similar. Based on those findings it can be concluded that the NLL model applied to OT content can enhance the KONDA of students at low and moderate SPK levels. An extra, more challenging didactical situation is needed for students at the high SPK level to achieve a significant gain score.

  19. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    Full Text Available This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on an EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation as well as the procedures for confidence estimation and hypothesis tests for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and the confidence interval and hypothesis test are then developed. Furthermore, a real example and simulation are conducted to demonstrate the usefulness and the superiority of our approach.

  20. Comparing of Normal Stress Distribution in Static and Dynamic Soil-Structure Interaction Analyses

    International Nuclear Information System (INIS)

    Kholdebarin, Alireza; Massumi, Ali; Davoodi, Mohammad; Tabatabaiefar, Hamid Reza

    2008-01-01

    It is important to consider the vertical component of earthquake loading and inertia force in soil-structure interaction analyses. In most circumstances, design engineers are primarily concerned with the analysis of the behavior of foundations subjected to earthquake-induced forces transmitted from the bedrock. In this research, a single rigid foundation with designated geometrical parameters located on sandy-clay soil has been modeled in FLAC software with the Finite Difference Method and subjected to three different vertical components of earthquake records. In these cases, it is important to evaluate the effect of the footing on the underlying soil and to consider the normal stress in the soil with and without the footing. The distribution of normal stress under the footing in static and dynamic states has been studied and compared. This comparison indicated that the increase in normal stress under the footing caused by the vertical component of ground excitation decreases the dynamic vertical settlement in comparison with the static state

  1. Modeling a Distribution of Mortgage Credit Losses

    Czech Academy of Sciences Publication Activity Database

    Gapko, Petr; Šmíd, Martin

    2012-01-01

    Roč. 60, č. 10 (2012), s. 1005-1023 ISSN 0013-3035 R&D Projects: GA ČR GD402/09/H045; GA ČR(CZ) GBP402/12/G097 Grant - others:Univerzita Karlova(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : credit risk * mortgage * delinquency rate * generalized hyperbolic distribution * normal distribution Subject RIV: AH - Economics Impact factor: 0.194, year: 2012 http://library.utia.cas.cz/separaty/2013/E/smid-modeling a distribution of mortgage credit losses.pdf

  2. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. In the time-varying environment, the value of technical efficiency was high for the investment group and low for the bank group compared with the other groups in the DSE market for both distributions, whereas in the time-invariant situation it was high for the investment group but low for the ceramic group.
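
    As a sketch of the machinery involved (not the authors' code), the following fits a Cobb-Douglas stochastic frontier with half-normal inefficiency to simulated data by maximum likelihood, using the standard normal-half-normal log-likelihood; all data and starting values are illustrative.

```python
# Cobb-Douglas stochastic frontier, ln y = b0 + b1*ln x + v - u, with
# v ~ N(0, sv^2) noise and u ~ |N(0, su^2)| half-normal inefficiency.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 500
lnx = rng.uniform(0, 2, n)
v = rng.normal(0, 0.15, n)            # symmetric noise
u = np.abs(rng.normal(0, 0.25, n))    # half-normal inefficiency
lny = 1.0 + 0.6 * lnx + v - u

def negloglik(theta):
    b0, b1, lnsv, lnsu = theta
    sv, su = np.exp(lnsv), np.exp(lnsu)
    sigma = np.hypot(sv, su)          # sigma^2 = sv^2 + su^2
    lam = su / sv
    eps = lny - b0 - b1 * lnx
    # Standard normal-half-normal frontier log-density per observation.
    ll = (np.log(2.0 / sigma)
          + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = optimize.minimize(negloglik, x0=[0.5, 0.5, -1.0, -1.0],
                        method="Nelder-Mead")
b0, b1, lnsv, lnsu = res.x
print("frontier:", b0, b1, "sigma_v:", np.exp(lnsv), "sigma_u:", np.exp(lnsu))
```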

  3. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. In the time-varying environment, the value of technical efficiency was high for the investment group and low for the bank group compared with the other groups in the DSE market for both distributions, whereas in the time-invariant situation it was high for the investment group but low for the ceramic group.

  4. Comparative pharmacokinetic and tissue distribution profiles of four major bioactive components in normal and hepatic fibrosis rats after oral administration of Fuzheng Huayu recipe.

    Science.gov (United States)

    Yang, Tao; Liu, Shan; Wang, Chang-Hong; Tao, Yan-Yan; Zhou, Hua; Liu, Cheng-Hai

    2015-10-10

    Fuzheng Huayu recipe (FZHY) is a herbal product for the treatment of liver fibrosis approved by the Chinese State Food and Drug Administration (SFDA), but its pharmacokinetics and tissue distribution had not been investigated. In this study, the liver fibrotic model was induced with an intraperitoneal injection of dimethylnitrosamine (DMN), and FZHY was given orally to the model and normal rats. The plasma pharmacokinetics and tissue distribution profiles of four major bioactive components from FZHY were analyzed in the normal and fibrotic rat groups using an ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method. Results revealed that the bioavailabilities of danshensu (DSS), salvianolic acid B (SAB) and rosmarinic acid (ROS) in liver fibrotic rats increased 1.49-, 3.31- and 2.37-fold, respectively, compared to normal rats. There was no obvious difference in the pharmacokinetics of amygdalin (AMY) between the normal and fibrotic rats. DSS, SAB, and AMY tended to distribute mostly to the kidney and lung. The distribution of DSS, SAB, and AMY in the liver tissue of the model rats was significantly decreased compared to the normal rats. Significant differences in the pharmacokinetics and tissue distribution profiles of DSS, ROS, SAB and AMY were observed in rats with hepatic fibrosis after oral administration of FZHY. These results provide a meaningful basis for developing a clinical dosage regimen in the treatment of hepatic fibrosis by FZHY. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  6. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of

  7. Radiation distribution sensing with normal optical fiber

    International Nuclear Information System (INIS)

    Kawarabayashi, Jun; Mizuno, Ryoji; Naka, Ryotaro; Uritani, Akira; Watanabe, Ken-ichi; Iguchi, Tetsuo; Tsujimura, Norio

    2002-01-01

    The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles (90Sr-90Y), gamma rays (137Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and D-T neutrons were 0.11%, 1.6x10^-5% and 5.4x10^-4%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depends on the position of the irradiated point, so radiation distributions can be calculated from the spectrum by a mathematical deconvolution technique. (author)

  8. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
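
    The construction the report describes can be sketched as follows (Python rather than the original FORTRAN): Box-Muller turns two uniforms into two independent standard normals, and a correlation step produces the desired pair.

```python
# Sketch: correlated bivariate normal pairs from uniform deviates.
import math
import random

def bivariate_normal_pair(mu1, mu2, sd1, sd2, rho, rand=random.random):
    u1, u2 = 1.0 - rand(), rand()      # 1 - u avoids log(0)
    # Box-Muller: two independent standard normals from two uniforms.
    z1 = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    z2 = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    # Induce correlation rho, then scale and shift.
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

pairs = [bivariate_normal_pair(0.0, 5.0, 1.0, 2.0, 0.8) for _ in range(100_000)]

# Check the induced correlation against the requested rho = 0.8.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / n)
sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / n)
print("sample correlation:", cov / (sx * sy))
```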

  9. Computer program determines exact two-sided tolerance limits for normal distributions

    Science.gov (United States)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines by numerical integration the exact statistical two-sided tolerance limits, such that the proportion of the population between the limits is at least a specified value. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
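
    The program computes exact limits by numerical integration; as a point of comparison, Howe's widely used closed-form approximation of the two-sided tolerance factor can be sketched as follows (sample values are our own, illustrative ones).

```python
# Howe's approximation: the interval xbar +/- k*s is intended to contain at
# least a proportion P of the population with confidence gamma.
import numpy as np
from scipy import stats

def howe_k2(n, P=0.90, gamma=0.95):
    nu = n - 1
    z = stats.norm.ppf((1.0 + P) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - gamma, nu)   # lower-tail chi-square quantile
    return np.sqrt(nu * (1.0 + 1.0 / n) * z**2 / chi2)

x = np.random.default_rng(2).normal(10.0, 2.0, 30)
k = howe_k2(len(x))
lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
print("approximate two-sided tolerance limits:", lo, hi)
```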

  10. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < …). Distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < …) … distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  11. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R^2) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R^2 for multiple logistic regression, (ii) applicability to interim or group-sequential designs, and (iii) a much smaller required sample size.

  12. Distribution of normal superficial ocular vessels in digital images.

    Science.gov (United States)

    Banaee, Touka; Ehsaei, Asieh; Pourreza, Hamidreza; Khajedaluee, Mohammad; Abrishami, Mojtaba; Basiri, Mohsen; Daneshvar Kakhki, Ramin; Pourreza, Reza

    2014-02-01

    To investigate the distribution of different-sized vessels in the digital images of the ocular surface, an endeavor which may provide useful information for future studies. This study included 295 healthy individuals. From each participant, four digital photographs of the superior and inferior conjunctivae of both eyes, with a fixed succession of photography (right upper, right lower, left upper, left lower), were taken with a slit lamp mounted camera. Photographs were then analyzed by a previously described algorithm for vessel detection in the digital images. The area (of the image) occupied by vessels (AOV) of different sizes was measured. Height, weight, fasting blood sugar (FBS) and hemoglobin levels were also measured and the relationship between these parameters and the AOV was investigated. These findings indicated a statistically significant difference in the distribution of the AOV among the four conjunctival areas. No significant correlations were noted between the AOV of each conjunctival area and the different demographic and biometric factors. Medium-sized vessels were the most abundant vessels in the photographs of the four investigated conjunctival areas. The AOV of the different sizes of vessels follows a normal distribution curve in the four areas of the conjunctiva. The distribution of the vessels in successive photographs changes in a specific manner, with the mean AOV becoming larger as the photos were taken from the right upper to the left lower area. The AOV of vessel sizes has a normal distribution curve and medium-sized vessels occupy the largest area of the photograph. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  13. Ultrasound-mediated delivery and distribution of polymeric nanoparticles in the normal brain parenchyma of a metastatic brain tumour model.

    Directory of Open Access Journals (Sweden)

    Habib Baghirov

    Full Text Available The treatment of brain diseases is hindered by the blood-brain barrier (BBB) preventing most drugs from entering the brain. Focused ultrasound (FUS) with microbubbles can open the BBB safely and reversibly. Systemic drug injection might induce toxicity, but encapsulation into nanoparticles reduces accumulation in normal tissue. Here we used a novel platform based on poly(2-ethyl-butyl cyanoacrylate) nanoparticle-stabilized microbubbles to permeabilize the BBB in a melanoma brain metastasis model. With a dual-frequency ultrasound transducer generating FUS at 1.1 MHz and 7.8 MHz, we opened the BBB using nanoparticle-microbubbles and low-frequency FUS, and applied high-frequency FUS to generate acoustic radiation force and push nanoparticles through the extracellular matrix. Using confocal microscopy and image analysis, we quantified nanoparticle extravasation and distribution in the brain parenchyma. We also evaluated haemorrhage, as well as the expression of P-glycoprotein, a key BBB component. FUS and microbubbles distributed nanoparticles in the brain parenchyma, and the distribution depended on the extent of BBB opening. The results from acoustic radiation force were not conclusive, but in a few animals some effect could be detected. P-glycoprotein was not significantly altered immediately after sonication. In summary, FUS with our nanoparticle-stabilized microbubbles can achieve accumulation and displacement of nanoparticles in the brain parenchyma.

  14. Ultrasound-mediated delivery and distribution of polymeric nanoparticles in the normal brain parenchyma of a metastatic brain tumour model

    Science.gov (United States)

    Baghirov, Habib; Snipstad, Sofie; Sulheim, Einar; Berg, Sigrid; Hansen, Rune; Thorsen, Frits; Mørch, Yrr; Åslund, Andreas K. O.

    2018-01-01

    The treatment of brain diseases is hindered by the blood-brain barrier (BBB) preventing most drugs from entering the brain. Focused ultrasound (FUS) with microbubbles can open the BBB safely and reversibly. Systemic drug injection might induce toxicity, but encapsulation into nanoparticles reduces accumulation in normal tissue. Here we used a novel platform based on poly(2-ethyl-butyl cyanoacrylate) nanoparticle-stabilized microbubbles to permeabilize the BBB in a melanoma brain metastasis model. With a dual-frequency ultrasound transducer generating FUS at 1.1 MHz and 7.8 MHz, we opened the BBB using nanoparticle-microbubbles and low-frequency FUS, and applied high-frequency FUS to generate acoustic radiation force and push nanoparticles through the extracellular matrix. Using confocal microscopy and image analysis, we quantified nanoparticle extravasation and distribution in the brain parenchyma. We also evaluated haemorrhage, as well as the expression of P-glycoprotein, a key BBB component. FUS and microbubbles distributed nanoparticles in the brain parenchyma, and the distribution depended on the extent of BBB opening. The results from acoustic radiation force were not conclusive, but in a few animals some effect could be detected. P-glycoprotein was not significantly altered immediately after sonication. In summary, FUS with our nanoparticle-stabilized microbubbles can achieve accumulation and displacement of nanoparticles in the brain parenchyma. PMID:29338016

  15. In-medium pion valence distributions in a light-front model

    Energy Technology Data Exchange (ETDEWEB)

    Melo, J.P.B.C. de, E-mail: joao.mello@cruzeirodosul.edu.br [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); Tsushima, K. [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); Ahmed, I. [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); National Center for Physics, Quaidi-i-Azam University Campus, Islamabad 45320 (Pakistan)

    2017-03-10

    Pion valence distributions in nuclear medium and vacuum are studied in a light-front constituent quark model. The in-medium input for studying the pion properties is calculated by the quark-meson coupling model. We find that the in-medium pion valence distribution, as well as the in-medium pion valence wave function, are substantially modified at normal nuclear matter density, due to the reduction in the pion decay constant.

  16. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. Many results exist for the speed characteristics of motorized vehicles. In this article, we study the speed distribution of mixed bicycle traffic, which has received little attention in the past. Field speed data were collected in Hangzhou, China, at different survey sites, under different traffic conditions, and with different percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were tested on the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data well. Relationships between mean speed and electric bicycle proportion were then proposed using linear regression models, from which the mean speed for purely electric or purely regular bicycle traffic can be obtained. The findings of this article provide useful support for the safety and traffic management of mixed bicycle traffic.
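
    The distribution-screening step can be sketched as follows (simulated speeds, not the Hangzhou data; note that KS p-values are optimistic when parameters are estimated from the same sample):

```python
# Fit four candidate distributions to a speed sample and screen them with
# a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
speeds = rng.lognormal(mean=np.log(17.0), sigma=0.25, size=1000)  # km/h

candidates = {
    "normal": stats.norm,
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(speeds)
    ks_stat, p_value = stats.kstest(speeds, dist.name, args=params)
    print(f"{name:>10}: KS={ks_stat:.3f}, p={p_value:.3f}")
```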

  17. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    … a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value …-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data, and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst both on normally …

  18. Radiation distribution sensing with normal optical fiber

    Energy Technology Data Exchange (ETDEWEB)

    Kawarabayashi, Jun; Mizuno, Ryoji; Naka, Ryotaro; Uritani, Akira; Watanabe, Ken-ichi; Iguchi, Tetsuo [Nagoya Univ., Dept. of Nuclear Engineering, Nagoya, Aichi (Japan); Tsujimura, Norio [Japan Nuclear Cycle Development Inst., Tokai Works, Tokai, Ibaraki (Japan)

    2002-12-01

    The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles ({sup 90}Sr-{sup 90}Y), gamma rays ({sup 137}Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and D-T neutrons were 0.11%, 1.6x10{sup -5}% and 5.4x10{sup -4}%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depends on the position of the irradiated point, so radiation distributions can be calculated from the spectrum by a mathematical deconvolution technique. (author)

  19. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    Science.gov (United States)

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher

  20. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    Science.gov (United States)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; and the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been
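
    The first property is easy to make concrete (notation ours, not necessarily the paper's): for a log-normal DSD with total number concentration N_T and log-scale parameters mu and sigma, every moment is an exponential of a quantity linear in the parameters.

```latex
\[
  N(D) = \frac{N_T}{\sqrt{2\pi}\,\sigma D}\,
         \exp\!\left[-\frac{(\ln D-\mu)^2}{2\sigma^2}\right],
  \qquad
  M_n = \int_0^\infty D^n\,N(D)\,dD
      = N_T \exp\!\left(n\mu + \tfrac12 n^2\sigma^2\right),
\]
\[
  \ln M_n = \ln N_T + n\,\mu + \tfrac12\,n^2\,\sigma^2 .
\]
```

    Hence any quantity proportional to a DSD moment (reflectivity, rain rate, water content) has a logarithm linear in (ln N_T, mu, sigma^2), which is what makes the covariance-matrix description above possible.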

  1. Very short-term probabilistic forecasting of wind power with generalized logit-Normal distributions

    DEFF Research Database (Denmark)

    Pinson, Pierre

    2012-01-01

    Very-short-term probabilistic forecasts, which are essential for an optimal management of wind generation, ought to account for the non-linear and double-bounded nature of that stochastic process. They take here the form of discrete–continuous mixtures of generalized logit–normal distributions and probability masses at the bounds. Both auto-regressive and conditional parametric auto-regressive models are considered for the dynamics of their location and scale parameters. Estimation is performed in a recursive least squares framework with exponential forgetting. The superiority of this proposal over …
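
    A sketch of the transform at the heart of this record (one common form of the generalized logit; the shape parameter and the location/scale values below are placeholders, not the paper's estimates): normalized wind power in (0,1) is mapped to the real line, modeled as normal there, and predictive quantiles are mapped back, yielding an asymmetric, double-bounded predictive distribution.

```python
import numpy as np

def glogit(x, nu=1.0, eps=1e-6):
    """Generalized logit: (0,1) -> real line, with shape parameter nu."""
    x = np.clip(x, eps, 1.0 - eps)          # guard the bounds
    return np.log(x**nu / (1.0 - x**nu))

def glogit_inv(y, nu=1.0):
    """Inverse transform: real line -> (0,1)."""
    return (1.0 / (1.0 + np.exp(-y)))**(1.0 / nu)

# Back-transforming normal predictive quantiles gives an asymmetric
# predictive distribution for wind power.
mu_y, sd_y = glogit(0.3), 0.4               # hypothetical location/scale
q = mu_y + sd_y * np.array([-1.96, 0.0, 1.96])
print("2.5%/50%/97.5% power quantiles:", glogit_inv(q))
```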

  2. Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution

    NARCIS (Netherlands)

    Belitser, E.; Ghosal, S.

    2003-01-01

    We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a "smoothness condition," we first derive the convergence rate of the posterior distribution for a prior that

  3. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    Science.gov (United States)

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  4. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts

    International Nuclear Information System (INIS)

    Sakashita, Tetsuya; Kobayashi, Yasuhiko; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Saito, Kimiaki

    2014-01-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/μm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log–log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to the determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE. (author)
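
    A minimal Monte Carlo sketch of the branching-process picture (assumptions ours: a fixed per-generation RCD probability and a size threshold for "abortive" colonies; neither value is the paper's estimate for carbon ions):

```python
# Each cell per generation either undergoes reproductive cell death (RCD)
# with probability p_rcd or divides into two; colonies that stall below a
# size threshold are counted as abortive.
import random
from collections import Counter

def grow_colony(p_rcd=0.15, generations=10, rng=random.Random(4)):
    cells = 1
    for _ in range(generations):
        cells = sum(0 if rng.random() < p_rcd else 2 for _ in range(cells))
        if cells == 0:
            break
    return cells

sizes = [grow_colony() for _ in range(20_000)]
abortive = Counter(s for s in sizes if 0 < s < 50)  # stalled colonies
surviving_fraction = sum(s >= 50 for s in sizes) / len(sizes)
print("surviving fraction:", surviving_fraction)
print("smallest abortive size classes:", sorted(abortive.items())[:5])
```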

  5. Modeling the brain morphology distribution in the general aging population

    Science.gov (United States)

    Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.

    2016-03-01

    Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.

  6. Modeling pore corrosion in normally open gold- plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H{sub 2}S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  7. Optimization of b-value distribution for biexponential diffusion-weighted MR imaging of normal prostate.

    Science.gov (United States)

    Jambor, Ivan; Merisaari, Harri; Aronen, Hannu J; Järvinen, Jukka; Saunavaara, Jani; Kauko, Tommi; Borra, Ronald; Pesola, Marko

    2014-05-01

    To determine the optimal b-value distribution for biexponential diffusion-weighted imaging (DWI) of normal prostate using both a computer modeling approach and in vivo measurements. Optimal b-value distributions for the fit of three parameters (fast diffusion Df, slow diffusion Ds, and fraction of fast diffusion f) were determined using Monte-Carlo simulations. The optimal b-value distribution was calculated using four individual optimization methods. Eight healthy volunteers underwent four repeated 3 Tesla prostate DWI scans using both 16 equally distributed b-values and an optimized b-value distribution obtained from the simulations. The b-value distributions were compared in terms of measurement reliability and repeatability using Shrout-Fleiss analysis. Using low noise levels, the optimal b-value distribution formed three separate clusters at low (0-400 s/mm2), mid-range (650-1200 s/mm2), and high b-values (1700-2000 s/mm2). Higher noise levels resulted in less pronounced clustering of b-values. The clustered optimized b-value distribution demonstrated better measurement reliability and repeatability in Shrout-Fleiss analysis compared with 16 equally distributed b-values. The optimal b-value distribution was found to be a clustered distribution with b-values concentrated in the low, mid, and high ranges and was shown to improve the estimation quality of biexponential DWI parameters of in vivo experiments. Copyright © 2013 Wiley Periodicals, Inc.
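
    For concreteness, a sketch (not the study's code) of the biexponential signal model and a fit on a clustered design of the kind the simulations favor; tissue parameters and the noise level are illustrative:

```python
# Biexponential DWI model: S(b) = S0*(f*exp(-b*Df) + (1-f)*exp(-b*Ds)).
import numpy as np
from scipy.optimize import curve_fit

def biexp(b, S0, f, Df, Ds):
    return S0 * (f * np.exp(-b * Df) + (1.0 - f) * np.exp(-b * Ds))

# Clustered design: low, mid-range and high b-values (s/mm^2).
b = np.array([0, 100, 200, 400, 650, 900, 1200, 1700, 1850, 2000], float)

true = dict(S0=1.0, f=0.25, Df=2.2e-3, Ds=0.6e-3)  # hypothetical values
rng = np.random.default_rng(5)
signal = biexp(b, **true) + rng.normal(0, 0.01, b.size)

popt, _ = curve_fit(biexp, b, signal,
                    p0=[1.0, 0.3, 2e-3, 0.5e-3],
                    bounds=([0, 0, 1e-4, 1e-5], [2, 1, 1e-2, 2e-3]))
print(dict(zip(["S0", "f", "Df", "Ds"], popt)))
```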

  8. Skewed Normal Distribution Of Return Assets In Call European Option Pricing

    Directory of Open Access Journals (Sweden)

    Evy Sulistianingsih

    2011-12-01

    Full Text Available An option is one type of derivative security. In financial markets, an option is a contract that gives its owner the right (not the obligation) to buy or sell a particular asset for a certain price at a certain time. Options can provide a guarantee against risks faced in a market. This paper studies the use of the skewed normal (SN) distribution in European call option pricing. The SN provides a flexible framework that captures the skewness of log returns. We obtain a closed-form solution for European call option pricing when log returns follow the SN distribution. We then compare the option prices obtained from the SN and Black-Scholes models with market option prices. Keywords: skewed normal distribution, log return, options.
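
    The paper derives a closed form; as a hedged illustration of the modeling idea only, the following Monte Carlo sketch compares call prices under standardized skew-normal versus normal log returns (all market parameters hypothetical; the simple drift below ignores the exact martingale correction a rigorous SN pricer would use):

```python
import numpy as np
from scipy import stats

S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
alpha = -4.0                        # skew-normal shape (left-skewed returns)

rng = np.random.default_rng(6)
n = 500_000

# Standardize the skew-normal to zero mean, unit variance so both return
# models share the first two moments.
delta = alpha / np.sqrt(1 + alpha**2)
mu_sn = delta * np.sqrt(2 / np.pi)
sd_sn = np.sqrt(1 - mu_sn**2)
z_sn = (stats.skewnorm.rvs(alpha, size=n, random_state=rng) - mu_sn) / sd_sn
z_n = rng.standard_normal(n)

def mc_call(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print("normal (BS-like) MC price:", mc_call(z_n))
print("skew-normal MC price:     ", mc_call(z_sn))
```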

  9. Statistical mechanics of normal grain growth in one dimension: A partial integro-differential equation model

    International Nuclear Information System (INIS)

    Ng, Felix S.L.

    2016-01-01

    We develop a statistical-mechanical model of one-dimensional normal grain growth that does not require any drift-velocity parameterization for grain size, such as used in the continuity equation of traditional mean-field theories. The model tracks the population by considering grain sizes in neighbour pairs; the probability of a pair having neighbours of certain sizes is determined by the size-frequency distribution of all pairs. Accordingly, the evolution obeys a partial integro-differential equation (PIDE) over ‘grain size versus neighbour grain size’ space, so that the grain-size distribution is a projection of the PIDE's solution. This model, which is applicable before as well as after statistically self-similar grain growth has been reached, shows that the traditional continuity equation is invalid outside this state. During statistically self-similar growth, the PIDE correctly predicts the coarsening rate, invariant grain-size distribution and spatial grain size correlations observed in direct simulations. The PIDE is then reducible to the standard continuity equation, and we derive an explicit expression for the drift velocity. It should be possible to formulate similar parameterization-free models of normal grain growth in two and three dimensions.

  10. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    … genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution.

  11. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    Science.gov (United States)

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distributions are usually severely skewed by the presence of hot spots at contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on normally transformed benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The areas with fewer sampling points and those with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
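
    A sketch of the normalization step itself (Box-Cox here; the normal-score and Johnson transforms play the same role), applied to simulated, hot-spot-like concentrations rather than the site data:

```python
# Transform skewed concentrations toward normality before variogram fitting
# and kriging, then back-transform predictions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
conc = rng.lognormal(mean=0.0, sigma=1.2, size=400)  # skewed, hot-spot-like

bc, lam = stats.boxcox(conc)                          # fitted Box-Cox transform
print("skewness before:", stats.skew(conc))
print("skewness after: ", stats.skew(bc), "lambda:", lam)

# A kriged value on the transformed scale must be back-transformed:
z = 1.0                                               # example kriged value
back = (z * lam + 1.0) ** (1.0 / lam) if lam != 0 else np.exp(z)
print("back-transformed prediction:", back)
```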

  12. Size distribution of interstellar particles. III. Peculiar extinctions and normal infrared extinction

    International Nuclear Information System (INIS)

    Mathis, J.S.; Wallenhorst, S.G.

    1981-01-01

    The effect of changing the upper and lower size limits of a distribution of bare graphite and silicate particles with n(a) ∝ a^(-q) is investigated. Mathis, Rumpl, and Nordsieck showed that the normal extinction is matched very well by having the small-size cutoff, a_-, ≈ 0.005 or 0.01 μm, the large size, a_+, about 0.25 μm, and q = 3.5 for both substances. We consider the progressively peculiar extinctions exhibited by the well-observed stars sigma Sco, rho Oph, and theta 1 Ori C, with values of R_V [≡ A_V/E(B-V)] of 3.4, 4.4, and 5.5 compared to the normal 3.1. Two (sigma Sco, rho Oph) are in a neutral dense cloud; theta 1 Ori C is in the Orion Nebula. We find that sigma Sco has a normal graphite distribution but has had its small silicate particles removed, so that a_-(sil) ≈ 0.04 μm if q = 3.5, or q(sil) = 2.6 if the size limits are fixed. However, the upper size limit on silicates remains normal. In rho Oph, the graphite is still normal, but both a_-(sil) and a_+(sil) are increased, to about 0.04 μm and 0.4 or 0.5 μm, respectively, if q = 3.5, or q(sil) ≈ 1.3 if the size limits are fixed. In theta 1 Ori, the small limit on graphite has increased to about 0.04 μm, or q(gra) ≈ 3, while the silicates are about like those in rho Oph. The calculated λ2175 bump is broader than the observed, but normal foreground extinction probably contributes appreciably to the observed bump. The absolute amount of extinction per H atom for rho Oph is not explained. The column density of H is so large that systematic effects might be present. Very large graphite particles (a > 3 μm) are required to ''hide'' the graphite without overly affecting the visual extinction, but a normal (small) graphite size distribution is required by the λ2175 bump. We feel that it is unlikely that such a bimodal distribution exists

  13. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps, and the modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution; the Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most computationally efficient. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 in full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must
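
    The mechanic being compared can be sketched in a few lines (our toy setup, not the paper's network): perceived route costs are deterministic costs plus a stochastic component, and choice shares depend on that component's distribution even when the variances are matched.

```python
import numpy as np

rng = np.random.default_rng(8)
n, sd = 200_000, 1.0
costs = np.array([10.0, 10.5])        # route 1 slightly cheaper

scale_g = sd * np.sqrt(6) / np.pi     # Gumbel scale for variance sd^2
half_w = sd * np.sqrt(3)              # half-width of variance-matched uniform
noise = {
    "normal": rng.normal(0.0, sd, (n, 2)),
    "gumbel": rng.gumbel(-0.5772 * scale_g, scale_g, (n, 2)),  # ~zero mean
    "rectangular": rng.uniform(-half_w, half_w, (n, 2)),
}
for name, eps in noise.items():
    perceived = costs + eps
    share = (perceived[:, 0] < perceived[:, 1]).mean()
    print(f"{name:>12}: cheaper-route share = {share:.3f}")
```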

  14. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.

  15. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers on the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values <1000 mL/g. (2) Set the range to an order of magnitude for radionuclides with Ksp values <10^-6 M and to a factor of two for Ksp values >10^-6 M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from on-going site-specific field/laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007
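
    Read as a sampler, the guidance might look like the following sketch; note that mapping "an order of magnitude" and "a factor of two" onto sigmas is our assumption (the stated range is treated as an approximate 95% interval), not part of the report:

```python
import numpy as np

rng = np.random.default_rng(9)

def sample_kd(best, n=10_000):
    """Sample Kd (mL/g) around a best estimate per the early guidance."""
    if best > 1000.0:
        # Log-normal; one decade taken as ~95% range, i.e. 4 sigma in ln-space.
        sigma_log = np.log(10.0) / 4.0
        return rng.lognormal(np.log(best), sigma_log, n)
    # Normal; +/- 50% of the best estimate taken as ~95% range (4 sigma).
    return rng.normal(best, best / 4.0, n)

high, low = sample_kd(5000.0), sample_kd(50.0)
print("high-Kd 2.5%/97.5%:", np.percentile(high, [2.5, 97.5]))
print("low-Kd  2.5%/97.5%:", np.percentile(low, [2.5, 97.5]))
```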

  16. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  17. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  18. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities by the up-and-down method is examined. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates

  19. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function; for a certain set of parameters, the density of the derived recurrence renders a good approximation of the bell curve

  20. Real-time modeling and simulation of distribution feeder and distributed resources

    Science.gov (United States)

    Singh, Pawan

    The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to support the improvement of the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller for energy management and protection is modeled and simulated on the real-time platform.

  1. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system. In the current study, we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including a resistor, a capacitor, and an inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. First, the normal operation of the system is shown; then stenosis of the cerebral arteries is induced in the circuit and the effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of its pathologies. PMID:25505747

  2. Dobinski-type relations and the log-normal distribution

    International Nuclear Information System (INIS)

    Blasiak, P; Penson, K A; Solomon, A I

    2003-01-01

    We consider sequences of generalized Bell numbers B(n), n = 1, 2, ..., which can be represented by Dobinski-type summation formulae, i.e. B(n) = (1/C) Σ_{k=0}^{∞} [P(k)]^n / D(k), with P(k) a polynomial, D(k) a function of k and C = const. They include the standard Bell numbers (P(k) = k, D(k) = k!, C = e), their generalizations B_{r,r}(n), r = 2, 3, ..., appearing in the normal ordering of powers of boson monomials (P(k) = (k+r)!/k!, D(k) = k!, C = e), variants of 'ordered' Bell numbers B_o^(p)(n) (P(k) = k, D(k) = ((p+1)/p)^k, C = 1 + p, p = 1, 2, ...), etc. We demonstrate that for α, β, γ, t positive integers (α, t ≠ 0), [B(αn² + βn + γ)]^t is the nth moment of a positive function on (0, ∞) which is a weighted infinite sum of log-normal distributions. (letter to the editor)
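
    In the standard case (P(k) = k, D(k) = k!, C = e) the Dobinski-type sum reduces to the classical Dobinski formula for the Bell numbers, which is easy to check numerically (truncating the rapidly converging series at 60 terms is an adequate assumption for small n):

        import math

        def bell_dobinski(n, terms=60):
            """Approximate B(n) via Dobinski's formula:
            B(n) = (1/e) * sum_{k>=0} k**n / k!."""
            return sum(k**n / math.factorial(k) for k in range(terms)) / math.e

        # First Bell numbers: 1, 1, 2, 5, 15, 52, 203
        print([round(bell_dobinski(n)) for n in range(7)])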

  3. The law of distribution of light beam direction fluctuations in telescopes. [normal density functions

    Science.gov (United States)

    Divinskiy, M. L.; Kolchinskiy, I. G.

    1974-01-01

    The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.

  4. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  5. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.

  6. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
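
    The core idea, recovering the underlying mean from a sample truncated by a publication cutoff, can be sketched with a maximum likelihood fit on synthetic data (assuming, for simplicity, a known variance and a known truncation point, which the paper itself estimates):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        true_mu, sigma, cutoff = 0.2, 0.4, 0.1
        effects = rng.normal(true_mu, sigma, 5000)
        published = effects[effects > cutoff]   # only large effects "published"

        def neg_loglik(mu):
            # Log-density of a normal truncated below at `cutoff`
            z = (published - mu) / sigma
            log_f = (norm.logpdf(z) - np.log(sigma)
                     - norm.logsf((cutoff - mu) / sigma))
            return -log_f.sum()

        fit = minimize(lambda m: neg_loglik(m[0]), x0=[published.mean()])
        print("naive mean:", published.mean(), "corrected MLE:", fit.x[0])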

  7. MR imaging of the bone marrow using short TI IR, 1. Normal and pathological intensity distribution of the bone marrow

    Energy Technology Data Exchange (ETDEWEB)

    Ishizaka, Hiroshi; Kurihara, Mikiko; Tomioka, Kuniaki; Kobayashi, Kanako; Sato, Noriko; Nagai, Teruo; Heshiki, Atsuko; Amanuma, Makoto; Mizuno, Hitomi.

    1989-02-01

    Normal vertebral bone marrow intensity distribution and its alteration in various anemias were evaluated on short TI IR sequences. The material consisted of 73 individuals: 48 normal subjects and 25 anemic patients, excluding neoplastic conditions. All normal and reactive hypercellular bone marrow revealed a characteristic intensity distribution: marginal high intensity and central low intensity, corresponding well to the normal distribution of red and yellow marrow and the physiological or reactive conversion between them. Aplastic anemia did not reveal the normal intensity distribution, presumably due to its autonomous condition.

  8. Annual rainfall statistics for stations in the Top End of Australia: normal and log-normal distribution analysis

    International Nuclear Information System (INIS)

    Vardavas, I.M.

    1992-01-01

    A simple procedure is presented for the statistical analysis of measurement data where the primary concern is the determination of the value corresponding to a specified average exceedance probability. The analysis employs the normal and log-normal frequency distributions together with a χ²-test and an error analysis. The error analysis introduces the concept of a counting error criterion, or ζ-test, to test whether the data are sufficient to make the χ²-test reliable. The procedure is applied to the analysis of annual rainfall data recorded at stations in the tropical Top End of Australia where the Ranger uranium deposit is situated. 9 refs., 12 tabs., 9 figs
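
    The essence of the procedure, fitting both candidate distributions and reading off the value at a specified average exceedance probability, can be sketched as follows (synthetic data; the χ²-test and ζ-test steps are omitted):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rainfall = rng.lognormal(mean=7.2, sigma=0.3, size=80)  # synthetic, mm/yr

        mu_n, sd_n = rainfall.mean(), rainfall.std(ddof=1)      # normal fit
        logs = np.log(rainfall)
        mu_l, sd_l = logs.mean(), logs.std(ddof=1)              # log-normal fit

        p_exceed = 0.01   # value exceeded on average once in 100 years
        print("normal:    ", stats.norm.ppf(1 - p_exceed, mu_n, sd_n))
        print("log-normal:", np.exp(stats.norm.ppf(1 - p_exceed, mu_l, sd_l)))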

  9. Quantum arrival times and operator normalization

    International Nuclear Information System (INIS)

    Hegerfeldt, Gerhard C.; Seidel, Dirk; Gonzalo Muga, J.

    2003-01-01

    A recent approach to arrival times used the fluorescence of an atom entering a laser illuminated region, and the resulting arrival-time distribution was close to the axiomatic distribution of Kijowski, but not exactly equal, neither in limiting cases nor after compensation of reflection losses by normalization on the level of expectation values. In this paper we employ a normalization on the level of operators, recently proposed in a slightly different context. We show that in this case the axiomatic arrival-time distribution of Kijowski is recovered as a limiting case. In addition, it is shown that Allcock's complex potential model is also a limit of the physically motivated fluorescence approach and connected to Kijowski's distribution through operator normalization

  10. Radiation distribution sensing with normal optical fiber

    CERN Document Server

    Kawarabayashi, J; Naka, R; Uritani, A; Watanabe, K I; Iguchi, T; Tsujimura, N

    2002-01-01

    The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of the position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles (⁹⁰Sr-⁹⁰Y), gamma rays (¹³⁷Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and the D-T neutrons were 0.11%, 1.6×10⁻⁵% and 5.4×10⁻⁴%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depended on the position of the irradiated point. This fact shows that t...

  11. Mixed normal inference on multicointegration

    NARCIS (Netherlands)

    Boswijk, H.P.

    2009-01-01

    Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ² null distribution. The asymptotic distribution of the

  12. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by the SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  13. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
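
    For contrast, the crude Monte Carlo baseline that the hazard-rate-twisting importance sampler is designed to outperform takes only a few lines (this sketch does not implement the twisting itself):

        import numpy as np

        rng = np.random.default_rng(7)

        def ccdf_lognormal_sum(mus, sigmas, gamma, n=1_000_000):
            """Crude MC estimate of P(X_1 + ... + X_d > gamma) for
            independent, not identically distributed log-normal X_i."""
            x = rng.lognormal(mean=mus, sigma=sigmas, size=(n, len(mus)))
            return (x.sum(axis=1) > gamma).mean()

        print(ccdf_lognormal_sum(np.array([0.0, 0.5]),
                                 np.array([1.0, 0.8]), gamma=20.0))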

  15. Spatial arrangement and size distribution of normal faults, Buckskin detachment upper plate, Western Arizona

    Science.gov (United States)

    Laubach, S. E.; Hundley, T. H.; Hooker, J. N.; Marrett, R. A.

    2018-03-01

    Fault arrays typically include a wide range of fault sizes, and those faults may be randomly located, clustered together, or regularly or periodically located in a rock volume. Here, we investigate the size distribution and spatial arrangement of normal faults using rigorous size-scaling methods and normalized correlation count (NCC). Outcrop data from Miocene sedimentary rocks in the immediate upper plate of the regional Buckskin detachment, a low-angle normal fault, have differing patterns of spatial arrangement as a function of displacement (offset). Using lower size-thresholds of 1, 0.1, 0.01, and 0.001 m, displacements range over 5 orders of magnitude and have power-law frequency distributions spanning ∼ four orders of magnitude from less than 0.001 m to more than 100 m, with exponents of -0.6 and -0.9. The largest faults, with >1 m displacement, have a shallower size-distribution slope and regular spacing of about 20 m. In contrast, smaller faults have steep size-distribution slopes and irregular spacing, with NCC plateau patterns indicating imposed clustering. Cluster widths are 15 m for the 0.1-m threshold, 14 m for the 0.01-m threshold, and 1 m for the 0.001-m displacement threshold faults. The results demonstrate that normalized correlation count effectively characterizes the spatial arrangement patterns of these faults. Our example from a high-strain fault pattern above a detachment is compatible with size and spatial organization influenced primarily by boundary conditions such as fault shape, mechanical unit thickness and internal stratigraphy on a range of scales, rather than purely by interaction among faults during their propagation.
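
    A quick, commonly used estimate of such power-law exponents regresses log cumulative frequency against log displacement; a minimal sketch on synthetic heavy-tailed data (a simple regression stand-in, not the authors' rigorous size-scaling method):

        import numpy as np

        def powerlaw_exponent(displacements):
            """Slope of log10(cumulative frequency) vs log10(displacement)."""
            d = np.sort(np.asarray(displacements))[::-1]   # descending
            cum_freq = np.arange(1, len(d) + 1)            # count of faults >= d
            slope, _ = np.polyfit(np.log10(d), np.log10(cum_freq), 1)
            return slope

        rng = np.random.default_rng(3)
        sizes = (rng.pareto(0.8, 500) + 1) * 0.001   # Pareto tail, >= 1 mm
        print(powerlaw_exponent(sizes))              # close to -0.8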

  16. Even-odd charged multiplicity distributions and energy dependence of normalized multiplicity moments in different rapidity windows

    International Nuclear Information System (INIS)

    Wu Yuanfang; Liu Lianshou

    1990-01-01

    The even and odd multiplicity distributions for hadron-hadron collisions in different rapidity windows are calculated, starting from a simple picture of charge correlation with non-zero correlation length. The coincidence and separation of these distributions are explained. The calculated window- and energy-dependence of the normalized moments recovers the behaviour found in experiments. A new definition of normalized moments is proposed, especially suitable for narrow rapidity windows

  17. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant boundaries, with the drift varying across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation, as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.

  18. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering Based on the Newly Developed Self-consistent RC/EMIC Waves Model by Khazanov et al. [2006

    Science.gov (United States)

    Khazanov, G. V.; Gallagher, D. L.; Gamayunov, K.

    2007-01-01

    It is well known that the effects of EMIC waves on RC ion and RB electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wave-normal angle, wave energy, and the form of the wave spectral energy density. Therefore, realistic characteristics of EMIC waves should be properly determined by modeling the RC-EMIC wave evolution self-consistently. Such a self-consistent model has been progressively developed by Khazanov et al. [2002-2006]. It solves a system of two coupled kinetic equations: one equation describes the RC ion dynamics and the other describes the energy density evolution of EMIC waves. Using this model, we present the effectiveness of relativistic electron scattering and compare our results with previous work in this area of research.

  19. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  20. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process...

  1. Generating a normalized geometric liver model with warping

    International Nuclear Information System (INIS)

    Boes, J.L.; Weymouth, T.E.; Meyer, C.R.; Quint, L.E.; Bland, P.H.; Bookstein, F.L.

    1990-01-01

    This paper reports on the automated determination of the liver surface in abdominal CT scans for radiation treatment, surgery planning, and anatomic visualization. The normalized geometric model of the liver is generated by averaging registered outlines from a set of 15 studies of normal liver. The outlines have been registered with the use of thin-plate spline warping based on a set of five homologous landmarks. Thus, the model consists of an average of the surface and a set of five anatomic landmarks. The accuracy of the model is measured against both the set of studies used in model generation and an alternate set of 15 normal studies with use of, as an error measure, the ratio of nonoverlapping model and study volume to total model volume

  2. Options and pitfalls of normal tissues complication probability models

    International Nuclear Information System (INIS)

    Dorr, Wolfgang

    2011-01-01

    Full text: Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improvement of the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have significantly decreased, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid/end 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on (some) assumptions that are not met in routine clinical treatment planning. First, and most important, dose-volume histogram (DVH) parameters are usually derived from a single, 'snap-shot' CT scan, without considering physiological (urinary bladder, intestine) or radiation-induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies for delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution into a 2-dimensional DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be

  3. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...

  4. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.

    2006-01-01

    We calculate the pitch-angle diffusion coefficients using the typical wave normal distributions obtained from our self-consistent ring current-EMIC wave model, and try to quantify the effect of EMIC wave normal angle characteristics on relativistic electron scattering.

  5. Normal cranial bone marrow MR imaging pattern with age-related ADC value distribution

    International Nuclear Information System (INIS)

    Li Qi; Pan Shinong; Yin Yuming; Li Wei; Chen Zhian; Liu Yunhui; Wu Zhenhua; Guo Qiyong

    2011-01-01

    Objective: To determine the MRI appearance of normal age-related cranial bone marrow and the relationship between MRI patterns and apparent diffusion coefficient (ADC) values. Methods: Five hundred subjects were divided into seven groups based on age. Cranial bone marrow MRI patterns were defined based on the thickness of the diploe and the signal intensity distribution characteristics. ADC values of the frontal, parietal, occipital and temporal bones on DWI were measured and calculated. Correlations between age and ADC values, between patterns and ADC values, as well as the distribution of ADC values were analyzed. Results: Normal cranial bone marrow was divided into four types and six subtypes, Type I, II, III and IV, which had a positive correlation with increasing age (χ² = 266.36, P < 0.05). In addition, there was a significant negative correlation between the ADC values and MRI patterns in the normal parietal and occipital bones (r = -0.691 and -0.750, P < 0.01). Conclusion: The combination of MRI features and ADC value changes in different cranial bones showed significant correlation with increasing age. Familiarity with the MRI appearance of the normal bone marrow conversion pattern in each age group, and with the corresponding ADC values, will aid the diagnosis and differential diagnosis of cranial bone pathology.

  6. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were independently developed to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  7. Volatility modeling for IDR exchange rate through APARCH model with student-t distribution

    Science.gov (United States)

    Nugroho, Didit Budi; Susanto, Bambang

    2017-08-01

    The aim of this study is to empirically investigate the performance of the APARCH(1,1) volatility model with the Student-t error distribution on five foreign currency selling rates to the Indonesian rupiah (IDR): the Swiss franc (CHF), the Euro (EUR), the British pound (GBP), the Japanese yen (JPY), and the US dollar (USD). Six years of daily closing rates over the period January 2010 to December 2016, a total of 1722 observations, were analysed. Bayesian inference using the efficient independence chain Metropolis-Hastings and adaptive random walk Metropolis methods in a Markov chain Monte Carlo (MCMC) scheme was applied to estimate the parameters of the model. According to the DIC criterion, this study found that the APARCH(1,1) model under the Student-t distribution fits better than the model under the normal distribution for every observed rate return series. The 95% highest posterior density intervals favoured the APARCH models for the IDR/JPY and IDR/USD volatilities. In particular, the IDR/JPY and IDR/USD data have significant negative and positive leverage effects, respectively, in the rate returns. Meanwhile, the optimal power coefficient of volatility was found to be statistically different from 2 for all rate return series except the IDR/EUR series.
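
    The APARCH(1,1) recursion itself is compact; a minimal volatility-filtering sketch with invented parameter values (the Bayesian MCMC estimation used in the study is not reproduced here):

        import numpy as np

        def aparch_volatility(returns, omega, alpha, gamma, beta, delta):
            # sigma_t^delta = omega + alpha*(|r_{t-1}| - gamma*r_{t-1})**delta
            #                 + beta*sigma_{t-1}^delta
            r = np.asarray(returns, dtype=float)
            sig_d = np.empty_like(r)
            sig_d[0] = np.std(r) ** delta        # crude initialisation (assumption)
            for t in range(1, len(r)):
                sig_d[t] = (omega
                            + alpha * (abs(r[t - 1]) - gamma * r[t - 1]) ** delta
                            + beta * sig_d[t - 1])
            return sig_d ** (1.0 / delta)

        rng = np.random.default_rng(5)
        demo_returns = 0.01 * rng.standard_t(df=6, size=500)  # heavy-tailed demo
        print(aparch_volatility(demo_returns, 1e-6, 0.08, 0.3, 0.9, 1.5)[-5:])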

  8. Breast cancer subtype distribution is different in normal weight, overweight, and obese women.

    Science.gov (United States)

    Gershuni, Victoria; Li, Yun R; Williams, Austin D; So, Alycia; Steel, Laura; Carrigan, Elena; Tchou, Julia

    2017-06-01

    Obesity is associated with tumor promoting pathways related to insulin resistance and chronic low-grade inflammation which have been linked to various disease states, including cancer. Many studies have focused on the relationship between obesity and increased estrogen production, which contributes to the pathogenesis of estrogen receptor-positive breast cancers. The link between obesity and other breast cancer subtypes, such as triple-negative breast cancer (TNBC) and Her2/neu+ (Her2+) breast cancer, is less clear. We hypothesize that obesity may be associated with the pathogenesis of specific breast cancer subtypes resulting in a different subtype distribution than normal weight women. A single-institution, retrospective analysis of tumor characteristics of 848 patients diagnosed with primary operable breast cancer between 2000 and 2013 was performed to evaluate the association between BMI and clinical outcome. Patients were grouped based on their BMI at time of diagnosis stratified into three subgroups: normal weight (BMI = 18-24.9), overweight (BMI = 25-29.9), and obese (BMI > 30). The distribution of breast cancer subtypes across the three BMI subgroups was compared. Obese and overweight women were more likely to present with TNBC and normal weight women with Her2+ breast cancer (p = 0.008). We demonstrated, for the first time, that breast cancer subtype distribution varied significantly according to BMI status. Our results suggested that obesity might activate molecular pathways other than the well-known obesity/estrogen circuit in the pathogenesis of breast cancer. Future studies are needed to understand the molecular mechanisms that drive the variation in subtype distribution across BMI subgroups.

  9. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  10. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for them. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  11. STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
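
    The closing claim, that geometric Brownian motion in the spin naturally produces a log-normal distribution, is straightforward to verify numerically (parameter values below are purely illustrative):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        n_halos, n_steps, dt = 20000, 200, 0.05
        mu, sigma = 0.0, 0.25
        lam = np.full(n_halos, 0.04)        # initial spin parameter (arbitrary)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_halos)
            # Exact GBM update: log-increments are normally distributed
            lam *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)

        # If lam is log-normal, log(lam) should pass a normality test
        print(stats.normaltest(np.log(lam)))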

  12. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  13. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
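
    A minimal sketch of the recommended practice, pooling C-statistics on the logit scale with a DerSimonian-Laird random-effects model and back-transforming (the study values and standard errors below are invented):

        import numpy as np

        def dersimonian_laird(y, v):
            """Random-effects pooling of effects y with within-study variances v."""
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)
            tau2 = max(0.0, (q - (len(y) - 1))
                       / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_star = 1.0 / (v + tau2)
            return np.sum(w_star * y) / np.sum(w_star), tau2

        c = np.array([0.71, 0.68, 0.75, 0.80, 0.66])       # per-study C-statistics
        se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.04])   # their standard errors
        logit = np.log(c / (1 - c))
        se_logit = se_c / (c * (1 - c))                    # delta method
        pooled, tau2 = dersimonian_laird(logit, se_logit**2)
        print("pooled C:", 1 / (1 + np.exp(-pooled)), "tau^2:", tau2)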

  14. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  15. A model for fission product distribution in CANDU fuel

    International Nuclear Information System (INIS)

    Muzumdar, A.P.

    1983-01-01

    This paper describes a model to estimate the distribution of active fission products among the UO₂ grains, grain boundaries, and free void spaces in CANDU fuel elements during normal operation. This distribution is required for the calculation of the potential release of activity from failed fuel sheaths during a loss-of-coolant accident. The activity residing in the free spaces ('free' inventory) is available for release upon sheath rupture, whereas relatively high fuel temperatures and/or thermal shock are required to release the activity in the grain boundaries or grains. A preliminary comparison of the model with the data from in-reactor sweep-gas experiments performed in Canada yields generally good agreement, with overprediction rather than underprediction of radiologically important isotopes, such as ¹³¹I. The model also appears to generally agree with the 'free' inventory release calculated using ANS-5.4. (author)

  16. A distributed dynamic model of a monolith hydrogen membrane reactor

    International Nuclear Information System (INIS)

    Michelsen, Finn Are; Wilhelmsen, Øivind; Zhao, Lei; Aasen, Knut Ingvar

    2013-01-01

    Highlights: ► We develop a rigorous distributed dynamic model of an HMR unit. ► The model includes enough complexity for steady-state and dynamic analysis. ► Simulations show that the model is non-linear within the normal operating range. ► The model is useful for studying and handling disturbances such as inlet changes and membrane leakage. - Abstract: This paper describes a distributed mechanistic dynamic model of a hydrogen membrane reformer (HMR) unit used for methane steam reforming. The model is based on a square-channel monolith structure concept, where air flows adjacent to a mix of natural gas and water distributed in a chess pattern of channels. Combustion of hydrogen supplies energy to the endothermic steam reforming reactions. The model is used for both steady-state and dynamic analyses. It therefore needs to be computationally attractive, yet still include enough complexity to capture the important steady-state and dynamic features of the process. Steady-state analysis of the model gives optima for the steam-to-carbon and steam-to-oxygen ratios; at the nominal optimum, the conversion of methane is 92% and 28% of the hydrogen is used as energy for the endothermic reactions. The dynamic analysis shows that non-linear control schemes may be necessary for satisfactory control performance

  17. Carbon K-shell photoionization of CO: Molecular frame angular distributions of normal and conjugate shakeup satellites

    International Nuclear Information System (INIS)

    Jahnke, T.; Titze, J.; Foucar, L.; Wallauer, R.; Osipov, T.; Benis, E.P.; Jagutzki, O.; Arnold, W.; Czasch, A.; Staudte, A.; Schoeffler, M.; Alnaser, A.; Weber, T.; Prior, M.H.; Schmidt-Boecking, H.; Doerner, R.

    2011-01-01

    We have measured the molecular frame angular distributions of photoelectrons emitted from the Carbon K-shell of fixed-in-space CO molecules for the case of simultaneous excitation of the remaining molecular ion. Normal and conjugate shakeup states are observed. Photoelectrons belonging to normal Σ-satellite lines show an angular distribution resembling that observed for the main photoline at the same electron energy. Surprisingly a similar shape is found for conjugate shakeup states with Π-symmetry. In our data we identify shake rather than electron scattering (PEVE) as the mechanism producing the conjugate lines. The angular distributions clearly show the presence of a Σ shape resonance for all of the satellite lines.

  18. Distribution of the anticancer drugs doxorubicin, mitoxantrone and topotecan in tumors and normal tissues.

    Science.gov (United States)

    Patel, Krupa J; Trédan, Olivier; Tannock, Ian F

    2013-07-01

    Pharmacokinetic analyses estimate the mean concentration of drug within a given tissue as a function of time, but do not give information about the spatial distribution of drugs within that tissue. Here, we compare the time-dependent spatial distribution of three anticancer drugs within tumors, heart, kidney, liver and brain. Mice bearing various xenografts were treated with doxorubicin, mitoxantrone or topotecan. At various times after injection, tumors and samples of heart, kidney, liver and brain were excised. Within solid tumors, the distribution of doxorubicin, mitoxantrone and topotecan was limited to perivascular regions at 10 min after administration and the distance from blood vessels at which drug intensity fell to half was ~25-75 μm. Although drug distribution improved after 3 and 24 h, there remained a significant decrease in drug fluorescence with increasing distance from tumor blood vessels. Drug distribution was relatively uniform in the heart, kidney and liver with substantially greater perivascular drug uptake than in tumors. There was significantly higher total drug fluorescence in the liver than in tumors after 10 min, 3 and 24 h. Little to no drug fluorescence was observed in the brain. There are marked differences in the spatial distributions of three anticancer drugs within tumor tissue and normal tissues over time, with greater exposure to most normal tissues and limited drug distribution to many cells in tumors. Studies of the spatial distribution of drugs are required to complement pharmacokinetic data in order to better understand and predict drug effects and toxicities.

  19. Local stem cell depletion model for normal tissue damage

    International Nuclear Information System (INIS)

    Yaes, R.J.; Keland, A.

    1987-01-01

    The hypothesis that radiation causes normal tissue damage by completely depleting local regions of tissue of viable stem cells leads to a simple mathematical model for such damage. In organs like skin and spinal cord, where destruction of a small volume of tissue leads to a clinically apparent complication, the complication probability is expressed as a function of dose, volume and stem cell number by a simple triple negative exponential function analogous to the double exponential function of Munro and Gilbert for tumor control. The steep dose-response curves for radiation myelitis that are obtained with our model are compared with the experimental data for radiation myelitis in laboratory rats. The model can be generalized to include other types of organs, high-LET radiation, fractionated courses of radiation, and cases where an organ with a heterogeneous stem cell population receives an inhomogeneous dose of radiation. In principle, it would thus be possible to determine the probability of tumor control and of damage to any organ within the radiation field if the dose distribution in three-dimensional space within a patient is known

  20. Dynamic modeling method of the bolted joint with uneven distribution of joint surface pressure

    Science.gov (United States)

    Li, Shichao; Gao, Hongli; Liu, Qi; Liu, Bokai

    2018-03-01

    The dynamic characteristics of bolted joints have a significant influence on the dynamic characteristics of a machine tool. Establishing a reasonable bolted-joint dynamics model therefore helps improve the accuracy of the machine tool dynamics model. Because the pressure distribution on the joint surface is uneven under the concentrated force of the bolts, a dynamic modeling method based on the uneven pressure distribution of the joint surface is presented in this paper to improve the dynamic modeling accuracy of the machine tool. Analytic formulas relating the normal and tangential stiffness per unit area to the surface pressure on the joint surface can be deduced from Hertz contact theory, and the pressure distribution on the joint surface can be obtained with finite element software. Furthermore, the normal and tangential stiffness distributions on the joint surface can be obtained from the analytic formulas and the pressure distribution, and assigned to the finite element model of the joint. The theoretical mode shapes were qualitatively compared with the experimental mode shapes, and the theoretical modal frequencies were quantitatively compared with the experimental modal frequencies. The comparison shows that the relative error between the first four theoretical modal frequencies and the first four experimental modal frequencies is 0.2% to 4.2%. Besides, the first four theoretical mode shapes and the first four experimental mode shapes are similar and in one-to-one correspondence. Therefore, the validity of the theoretical model is verified. The dynamic modeling method proposed in this paper can provide a theoretical basis for accurate dynamic modeling of bolted joints in machine tools.

  1. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
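
    When only summarized responses (mean ± SD) are available, the log-scale parameters of an assumed log-normal follow from the standard moment relations; a minimal sketch with invented numbers:

        import numpy as np

        def lognormal_params_from_summary(mean, sd):
            """Log-scale (mu, sigma) of a log-normal matching a reported
            arithmetic mean and SD."""
            cv2 = (sd / mean) ** 2
            sigma2 = np.log(1.0 + cv2)
            return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)

        # e.g. a dose group reporting body weight 312 +/- 48 g
        mu, sigma = lognormal_params_from_summary(312.0, 48.0)
        print(mu, sigma, np.exp(mu + sigma**2 / 2))  # last value recovers the mean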

  3. Neutron importance and the generalized Green function for the conventionally critical reactor with normalized neutron distribution

    International Nuclear Information System (INIS)

    Khromov, V.V.

    1978-01-01

    The notion of neutron importance as applied to nuclear reactor statics problems, described by time-independent homogeneous equations of neutron transport with provision for normalization of the neutron distribution, is considered. An equation is obtained for the neutron importance function in a conditionally critical reactor with respect to an arbitrary nonlinear functional determined for the normalized neutron distribution. The relation between this function and the generalized Green function of the self-conjugate operator of the reactor equation is determined, and the small-perturbation formula for the functionals of a conditionally critical reactor is deduced

  4. Elastin distribution in the normal uterus, uterine leiomyomas, adenomyosis and adenomyomas: a comparison.

    Science.gov (United States)

    Zheng, Wei-Qiang; Ma, Rong; Zheng, Jian-Ming; Gong, Zhi-Jing

    2006-04-01

    To describe the histologic distribution of elastin in the nonpregnant human uterus, uterine leiomyomas, adenomyosis and adenomyomas. Uteri were obtained from women undergoing hysterectomy for benign conditions, including 26 cases of uterine leiomyomas, 24 cases of adenomyosis, 18 adenomyomas and 6 cases of autopsy specimens. Specific histochemical staining techniques were employed in order to demonstrate the distribution of elastin. The distribution of elastin components in the uterus was markedly uneven and showed a decreasing gradient from outer to inner myometrium. No elastin was present within leiomyomas, adenomyomas or adenomyosis. The distribution of elastin may help explain the normal function of the myometrium in labor. It implies that the uneven distribution of elastin components and absence of elastin within leiomyomas, adenomyomas and adenomyosis could be of some clinical significance. The altered elastin distribution in disease states may help explain such symptoms as dysmenorrhea in uterine endometriosis.

  5. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. The image reconstruction for ECT is an inverse problem to find the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model is commonly used during the normalization process, which assumes the distribution of materials in parallel. Thus, the normalized capacitance is a linear function of measured capacitance. A recently used model is a series normalization model which results in the normalized capacitance as a nonlinear function of measured capacitance. The newest presented model is based on electrical field centre lines (EFCL), and is a mixture of two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model
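
    The parallel and series models contrasted here can be written directly; the EFCL-based model mixes the two (the calibration capacitances below are arbitrary):

        def normalize_parallel(c, c_low, c_high):
            """Parallel model: normalized capacitance linear in measured c."""
            return (c - c_low) / (c_high - c_low)

        def normalize_series(c, c_low, c_high):
            """Series model: nonlinear, via reciprocals of the capacitances."""
            return (1.0/c - 1.0/c_low) / (1.0/c_high - 1.0/c_low)

        c_low, c_high = 2.0, 5.0   # empty- and full-pipe calibration values (pF)
        print(normalize_parallel(3.0, c_low, c_high),
              normalize_series(3.0, c_low, c_high))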

  6. The normal distribution of thoracoabdominal aorta small branch artery ostia

    International Nuclear Information System (INIS)

    Cronin, Paul; Williams, David M.; Vellody, Ranjith; Kelly, Aine Marie; Kazerooni, Ella A.; Carlos, Ruth C.

    2011-01-01

    The purpose of this study was to determine the normal distribution of aortic branch artery ostia. CT scans of 100 subjects were retrospectively reviewed. The angular distributions of the aorta with respect to the center of the T3 to L4 vertebral bodies, and of branch artery origins with respect to the center of the aorta, were measured. At each vertebral body level the distributions of intercostal/lumbar arteries and other branch arteries were calculated. The proximal descending aorta is posteriorly placed, becomes a midline structure at the thoracolumbar junction, and remains anterior to the vertebral bodies within the abdomen. The intercostal and lumbar artery ostia have a distinct distribution. At each vertebral level from T3 caudally, one intercostal artery originates from the posterior wall of the aorta throughout the thoracic aorta, while the other originates from the medial wall of the descending thoracic aorta high in the chest, posteromedially from the mid-thoracic aorta, and from the posterior wall of the aorta low in the chest. Mediastinal branches of the thoracic aorta originate from the medial and anterior wall; lumbar branches originate only from the posterior wall of the abdominal aorta. Aortic branch artery origins thus arise with a bimodal distribution and have a characteristic location. Knowing the location of aortic branch artery ostia may help distinguish branch artery pseudoaneurysms from penetrating ulcers.

  7. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility of interpolating the concentrations of CO, NO, NO₂, O₃ and SO₂ measured during one year (1995) at the 13 stations of the air quality monitoring network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to choosing the method for determining the log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. To evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, the Kolmogorov-Smirnov index and the Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally an example of application is given: the effect of an emission reduction strategy in the Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.
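    As a rough illustration of three of the four methods (the "natural" method is not reconstructed here), the following sketch fits a log-normal to synthetic concentration data and scores each fit with the Kolmogorov-Smirnov index used in the paper's ranking step; the percentile variant shown (16th/50th/84th) is one common choice, not necessarily the authors' exact formulation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=1.0, sigma=0.5, size=1000)   # synthetic concentrations

    # Method of moments: match the sample mean and variance of x.
    m, v = x.mean(), x.var()
    sigma_mom = np.sqrt(np.log(1.0 + v / m**2))
    mu_mom = np.log(m) - 0.5 * sigma_mom**2

    # Maximum likelihood: mu and sigma are the mean and std of log(x).
    mu_mle, sigma_mle = np.log(x).mean(), np.log(x).std()

    # Percentile method (one common variant): 16th/50th/84th percentiles.
    p16, p50, p84 = np.percentile(x, [16, 50, 84])
    mu_pct, sigma_pct = np.log(p50), 0.5 * np.log(p84 / p16)

    for name, mu, sg in [("moments", mu_mom, sigma_mom),
                         ("max. likelihood", mu_mle, sigma_mle),
                         ("percentiles", mu_pct, sigma_pct)]:
        d, _ = stats.kstest(x, "lognorm", args=(sg, 0.0, np.exp(mu)))
        print(f"{name:16s} mu={mu:.3f} sigma={sg:.3f} KS D={d:.4f}")
    ```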

  8. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors: two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, meaning that separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.

  9. Normal and Special Models of Neutrino Masses and Mixings

    CERN Document Server

    Altarelli, Guido

    2005-01-01

    One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.

  10. Externally studentized normal midrange distribution

    Directory of Open Access Journals (Sweden)

    Ben Dêivide de Oliveira Batista

    Full Text Available ABSTRACT The distribution of the externally studentized midrange was created following the original studentization procedures of Student and was inspired by the distribution of the externally studentized range; the wide use of the latter in multiple comparisons was a further motivation for developing this new distribution. This work aimed to derive analytic equations for the distribution of the externally studentized midrange, obtaining the cumulative distribution, probability density and quantile functions, and to generate random values. To the authors' knowledge, this distribution has not previously been reported in the literature. A second objective was to build an R package that obtains the probability density, cumulative distribution and quantile functions numerically, and to make it available to the scientific community. The algorithms were proposed and implemented using Gauss-Legendre quadrature and the Newton-Raphson method in the R software, resulting in the SMR package, available for download from the CRAN site. The implemented routines showed high accuracy, verified by Monte Carlo simulations and by comparing results with different numbers of quadrature points. Regarding the precision of the quantiles when the degrees of freedom are close to 1 and the percentiles are close to 100%, using more than 64 quadrature points is recommended.
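    The numerical machinery named above, Gauss-Legendre quadrature combined with Newton-Raphson root finding, is generic. Below is a compact sketch of the same idea in Python, using the standard normal as a stand-in density (the midrange density itself is too lengthy to reproduce here); it is an illustration of the method, not the SMR package's actual code.

    ```python
    import numpy as np

    def gl_cdf(pdf, a, x, n=64):
        """Integrate pdf over [a, x] with n-point Gauss-Legendre quadrature."""
        nodes, weights = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (x - a) * nodes + 0.5 * (x + a)    # map [-1, 1] onto [a, x]
        return 0.5 * (x - a) * np.sum(weights * pdf(t))

    def quantile(pdf, p, x0=0.0, a=-12.0, tol=1e-10, max_iter=100):
        """Newton-Raphson for F(x) = p, with the CDF F computed by quadrature."""
        x = x0
        for _ in range(max_iter):
            step = (gl_cdf(pdf, a, x) - p) / pdf(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    std_normal = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
    print(quantile(std_normal, 0.975))    # ~1.95996
    ```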

  11. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running; this paper focuses on walking. The analysis of human gait is of interest to many disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model capable of reproducing the properties of walking, both normal and pathological. The aim is to establish the biomechanical principles that underlie human walking by using the Lagrange method; dissipative forces are included through a Rayleigh dissipation function, which accounts for their effect on the tissues during gait. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated: we first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulations; children's data can also be used, provided they are taken from existing anthropometric tables. Validation of the model includes simulations of passive dynamic walking on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparison of the model predictions with gait measurements shows that the model can reproduce the significant characteristics of normal gait.
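    As a deliberately reduced sketch of the Lagrange-plus-Rayleigh mechanism, consider a single damped pendulum "leg": the Rayleigh dissipation function R = ½cθ̇² contributes the damping term in the Euler-Lagrange equation, and varying c mimics the factor the authors adjust to move between normal and pathological gait. All parameter values below are illustrative assumptions, not the paper's anthropometric data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Euler-Lagrange equation of a single pendulum "leg" with Rayleigh
    # dissipation R = 0.5*c*omega**2:
    #   m*L**2 * thetadd = -m*g*L*sin(theta) - c*omega
    m, L, g = 70.0, 0.9, 9.81        # illustrative mass (kg) and leg length (m)

    def swing(t, y, c):
        theta, omega = y
        return [omega, (-m * g * L * np.sin(theta) - c * omega) / (m * L**2)]

    for c, label in [(2.0, "normal-like"), (25.0, "hemiparetic-like")]:
        sol = solve_ivp(swing, (0.0, 2.0), [0.4, 0.0], args=(c,), max_step=0.01)
        print(label, "final angle:", sol.y[0, -1])
    ```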

  12. Energy dependence of angular distributions of sputtered particles by ion-beam bombardment at normal incidence

    International Nuclear Information System (INIS)

    Matsuda, Yoshinobu; Ueda, Yasutoshi; Uchino, Kiichiro; Muraoka, Katsunori; Maeda, Mitsuo; Akazaki, Masanori; Yamamura, Yasunori.

    1986-01-01

    The angular distributions of sputtered Fe atoms were measured using the laser fluorescence technique during Ar-ion bombardment at normal incidence for energies of 0.6, 1, 2 and 3 keV. The cosine distribution measured at 0.6 keV progressively deviated towards an over-cosine distribution at higher energies; at 3 keV the angular distribution was over-cosine by about 20%. The experimental results agree qualitatively with calculations by a recent computer simulation code, ACAT. The results are explained by the competition between surface scattering and the effects of primary knock-on atoms, which tend to make the angular distributions over-cosine and under-cosine, respectively. (author)
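    The over- or under-cosine character of such measurements is commonly quantified by fitting f(θ) ∝ cosⁿθ to the angular yield, with n > 1 over-cosine and n < 1 under-cosine. A small sketch on synthetic data (the exponent and noise level are made up for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    theta = np.deg2rad(np.arange(5.0, 85.0, 10.0))                    # emission angles
    y = np.cos(theta)**1.2 * (1 + 0.02 * rng.normal(size=theta.size)) # synthetic yield

    # Linear fit on logs: ln y = ln A + n * ln cos(theta)
    n_fit, _ = np.polyfit(np.log(np.cos(theta)), np.log(y), 1)
    print(f"fitted exponent n ≈ {n_fit:.2f}")   # n > 1 means over-cosine
    ```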

  13. Normal distribution of ¹¹¹In chloride on scintigram

    Energy Technology Data Exchange (ETDEWEB)

    Oyama, K; Machida, K; Hayashi, S; Watari, T; Akaike, A

    1977-05-01

    Indium-111 chloride (¹¹¹InCl₃) was used as a bone marrow imaging and tumor-localizing agent in 38 patients (46 scintigrams) who were suspected of, or diagnosed as, having malignant disease, and who were irradiated for malignant disease. Regions of suspected malignant disease, regions of abnormal accumulation on the scintigrams, and the irradiated target were excluded in order to estimate the normal distribution of ¹¹¹InCl₃. Scintigrams were taken 48 hrs after intravenous injection of 1 to 3 mCi of ¹¹¹InCl₃. The percentage and score distributions of ¹¹¹InCl₃ were noted in 23 regions. As the liver showed the highest accumulation of ¹¹¹In on all scintigrams, it was designated 2+. Compared with the radioactivity in the liver, other regions had similar (2+), moderately decreased (+) or severely decreased (-) accumulation on the scintigram; the score given is 1 for 2+, 0.5 for + and 0 for -. The score and percentage distributions were: liver 100 (100%), lumbar vertebra 58.5 (100%), mediastinum 55 (100%), nasopharynx 50 (100%), testis 47.5 (59%), heart 44.5 (89%) and pelvis 43.5 (78%). Compared with a previous study of ¹¹¹In-BLM, the score distributions in the lumbar vertebra, pelvis and skull were similar. ¹¹¹In-BLM is excreted rapidly after injection, but little ¹¹¹InCl₃ is excreted. Accumulation of ¹¹¹In in bone marrow depends upon the amount of ¹¹¹In-transferrin in blood. The high accumulation in the lumbar vertebra and pelvis shows that ¹¹¹InCl₃ would be effective as a bone marrow imaging agent.

  14. Impact of foot progression angle on the distribution of plantar pressure in normal children.

    Science.gov (United States)

    Lai, Yu-Cheng; Lin, Huey-Shyan; Pan, Hui-Fen; Chang, Wei-Ning; Hsu, Chien-Jen; Renn, Jenn-Huei

    2014-02-01

    Plantar pressure distribution during walking is affected by several gait factors, most especially the foot progression angle, which has been studied in children with neuromuscular diseases; this relationship in normal children has been reported only in limited studies. The purpose of this study is to clarify the correlation between foot progression angle and plantar pressure distribution in normal children, as well as the impacts of age and sex on this correlation. This study retrospectively reviewed dynamic pedobarographic data included in the gait laboratory database of our institution. In total, 77 normally developed children aged 5-16 years who were treated between 2004 and 2009 were included. Each child's footprint was divided into 5 segments: lateral forefoot, medial forefoot, lateral midfoot, medial midfoot, and heel. The percentages of impulse exerted at the medial foot, forefoot, midfoot, and heel were calculated. The average foot progression angle was 5.03° toe-out. Most of the total impulse was exerted on the forefoot (52.0%). Toe-out gait was positively correlated with high medial plantar pressure (r = 0.274; P […]); […] plantar pressure as part of the treatment of various foot pathologies.

  15. Drug binding affinities and potencies are best described by a log-normal distribution and use of geometric means

    International Nuclear Information System (INIS)

    Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.

    1986-01-01

    (-)-Norepinephrine (NE) is used as an internal standard in their in vitro adrenergic assays, and the concentration of NE that produces half-maximal inhibition of specific radioligand binding (affinity; K_I) or a half-maximal contractile response (potency; ED₅₀) has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log₁₀-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of ³H-prazosin to rat liver (α₁-), ³H-rauwolscine to rat cortex (α₂-) and ³H-dihydroalprenolol to rat ventricle (β₁-) or rat lung (β₂-receptors) was inhibited by NE; the distributions of NE K_I's at all these sites were skewed to the right, with highly significant (p […]) […] ED₅₀'s of NE in isolated rabbit aorta (α₁), phenoxybenzamine-treated dog saphenous vein (α₂) and guinea pig atrium (β₁). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normality is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses.
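    The practical recommendation is easy to demonstrate: for log-normally distributed affinities, the geometric mean (the back-transformed mean of the logs) estimates the distribution's median, whereas the arithmetic mean is dragged upward by the right tail. A sketch on synthetic K_I values (the location and spread are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    ki = rng.lognormal(mean=-7.0, sigma=1.0, size=50)   # synthetic K_I values, M

    arith = ki.mean()
    geom = np.exp(np.log(ki).mean())                    # geometric mean
    print(f"arithmetic {arith:.3e} M vs geometric {geom:.3e} M")
    # The geometric mean tracks the median of the log-normal; the arithmetic
    # mean overshoots it because of the skewed right tail.
    ```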

  16. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  17. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
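    For orientation, plain quantile normalization, the special case that qsmooth generalizes, fits in a few lines; qsmooth itself additionally computes group-wise reference quantiles and shrinks between them and the global reference, which this sketch does not attempt.

    ```python
    import numpy as np

    def quantile_normalize(X):
        """Force every column (sample) of X to share the same empirical
        distribution: the row-wise mean of the column-sorted data.
        Textbook version; ties are handled only approximately."""
        ranks = np.argsort(np.argsort(X, axis=0), axis=0)
        ref = np.sort(X, axis=0).mean(axis=1)     # shared reference distribution
        return ref[ranks]

    X = np.random.default_rng(8).lognormal(size=(1000, 6))   # features x samples
    Xn = quantile_normalize(X)
    print(np.allclose(np.sort(Xn[:, 0]), np.sort(Xn[:, 1])))  # True: same distribution
    ```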

  18. Vaginal drug distribution modeling.

    Science.gov (United States)

    Katz, David F; Yuan, Andrew; Gao, Yajing

    2015-09-15

    This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it.

  19. Modelling of extreme minimum rainfall using generalised extreme value distribution for Zimbabwe

    Directory of Open Access Journals (Sweden)

    Delson Chikobvu

    2015-09-01

    Full Text Available We modelled the mean annual rainfall for data recorded in Zimbabwe from 1901 to 2009. Extreme value theory was used to estimate the probabilities of meteorological droughts. Droughts can be viewed as extreme events which go beyond and/or below normal rainfall occurrences, such as exceptionally low mean annual rainfall. The duality between the distributions of the minima and maxima was exploited and used to fit the generalised extreme value distribution (GEVD) to the data and hence find probabilities of extremely low levels of mean annual rainfall. The augmented Dickey-Fuller test confirmed that the rainfall data were stationary, while the normal quantile-quantile plot indicated that the rainfall data deviated from the normality assumption at both tails of the distribution. The maximum likelihood estimation method and the Bayesian approach were used to find the parameters of the GEVD. The Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests showed that the Weibull class of distributions was a good fit to the minima of mean annual rainfall using the maximum likelihood estimation method. The mean return period estimate of a meteorological drought, using a threshold of 473 mm of mean annual rainfall, was 8 years; this implies that, once a meteorological drought occurs, another drought of the same or greater intensity is expected after 8 years. It is expected that the use of Bayesian inference may better quantify the level of uncertainty associated with the GEVD parameter estimates than the maximum likelihood estimation method. The Markov chain Monte Carlo algorithm for the GEVD was applied to construct the model parameter estimates using the Bayesian approach. These findings are significant because results based on non-informative priors (Bayesian method) and the maximum likelihood approach are expected to be similar.
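    The duality argument is straightforward to reproduce: since min(X) = −max(−X), a GEV fitted to the negated series models the minima. A sketch with synthetic rainfall standing in for the Zimbabwe series (the block-minima structure of the real analysis is glossed over, and the threshold is taken from the abstract):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=20.0, scale=30.0, size=109)  # synthetic annual rainfall, mm

    # Duality: the minima of X are the negated maxima of -X, so fit a GEV to -X.
    shape, loc, scale = stats.genextreme.fit(-rain)

    # P(annual rainfall below a drought threshold), e.g. 473 mm as in the paper
    threshold = 473.0
    p_drought = stats.genextreme.sf(-threshold, shape, loc=loc, scale=scale)
    print(p_drought, "return period ≈", 1.0 / p_drought, "years")
    ```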

  20. Simulation study of pO2 distribution in induced tumour masses and normal tissues within a microcirculation environment.

    Science.gov (United States)

    Li, Mao; Li, Yan; Wen, Peng Paul

    2014-01-01

    The biological microenvironment is disrupted when tumour masses are introduced because of the strong competition for oxygen. During the avascular growth period of tumours, pre-existing capillaries play a crucial role in supplying oxygen to both tumourous and healthy cells. Because the oxygen supply from capillaries is limited, healthy cells have to compete for oxygen with tumourous cells. In this study, an improved Krogh cylinder model, which is more realistic than the previously reported assumption that oxygen is homogeneously distributed in a microenvironment, is proposed to describe the diffusion of oxygen from a capillary to its surrounding tissue; capillary wall permeability is also taken into account. The simulation results show that when tumour masses are implanted at the upstream part of a capillary, followed by normal tissues, the whole normal tissue suffers from hypoxia. In contrast, when normal tissues lie upstream of the tumour masses, their pO2 is sufficient. In both situations, the pO2 in the whole normal tissue drops significantly due to axial diffusion at the interface between normal tissues and tumourous cells. Since axial oxygen diffusion cannot supply the whole tumour mass, only those tumourous cells near the interface are partially supplied and have a small chance of survival.
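    For reference, the classical Krogh cylinder solution that such models refine has a closed form; the sketch below implements it. The paper's improvements (finite wall permeability, axial diffusion) are not included, and all parameter values are illustrative placeholders, not fitted data.

    ```python
    import numpy as np

    def krogh_po2(r, p_cap=100.0, m=1e-4, k=2.5e-9, rc=3e-4, rt=3e-3):
        """Steady-state Krogh-Erlang pO2 profile (mmHg) at radius r (cm) around
        a capillary of radius rc, tissue cylinder radius rt, zero-order O2
        consumption m, and Krogh diffusivity k (illustrative units/values)."""
        return (p_cap
                + (m / (4.0 * k)) * (r**2 - rc**2)
                - (m * rt**2 / (2.0 * k)) * np.log(r / rc))

    r = np.linspace(3e-4, 3e-3, 50)
    print(krogh_po2(r)[[0, -1]])   # pO2 at the capillary wall and at the rim
    ```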

  1. Broadband model of the distribution network

    DEFF Research Database (Denmark)

    Jensen, Martin Høgdahl

    for circular conductors involving Bessel series. The two methods show equal values of resistance, but there is considerable difference in the values of internal inductance. A method for calculation of the proximity effect is derived for a two-conductor configuration. This method is expanded to the use … of frequencies up to 200 kHz. The square-wave measurements reveal the complete capacitance matrix at a frequency of approximately 12.5 MHz as well as the series inductance between the four conductors. The influence of non-ideal ground could not be measured due to the high impedance of the grounding device … measurement and simulation, once the Phase model is used. No explanation is found for why the new material properties cause error in the Phase model. At the Kyndby 10 kV test site a non-linear load is inserted on the secondary side of a normal distribution transformer and the phase voltage and current …

  2. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-normal distribution is obtained; namely, the distribution acquires an enhanced tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
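    The mechanism is easy to simulate: iterating an ordinary product of random factors gives the Kapteyn process and hence a log-normal limit, while replacing the product with Borges' q-product deforms the limit distribution. A sketch (the choice of uniform factors and q = 0.9 is arbitrary, and the detailed properties of the generalised distribution are not reproduced):

    ```python
    import numpy as np
    from functools import reduce

    def q_product(x, y, q):
        """Borges' q-product; recovers the ordinary product as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return x * y
        base = np.maximum(x**(1.0 - q) + y**(1.0 - q) - 1.0, 0.0)
        return base**(1.0 / (1.0 - q))

    rng = np.random.default_rng(2)
    factors = rng.uniform(0.9, 1.1, size=(50, 100000))

    x_classic = reduce(lambda a, b: a * b, factors)             # Kapteyn: log-normal
    x_q = reduce(lambda a, b: q_product(a, b, q=0.9), factors)  # deformed variant
    print(np.log(x_classic).std(), np.log(x_q).std())
    ```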

  3. A probit/log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by the occurrence of "excess" zeros that more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data; moreover, for repeated-measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components respectively. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was

  4. Elastic microfibril distribution in the cornea: Differences between normal and keratoconic stroma.

    Science.gov (United States)

    White, Tomas L; Lewis, Philip N; Young, Robert D; Kitazawa, Koji; Inatomi, Tsutomu; Kinoshita, Shigeru; Meek, Keith M

    2017-06-01

    The optical and biomechanical properties of the cornea are largely governed by the collagen-rich stroma, a layer that represents approximately 90% of the total thickness. Within the stroma, the specific arrangement of superimposed lamellae provides the tissue with tensile strength, whilst the spatial arrangement of individual collagen fibrils within the lamellae confers transparency. In keratoconus, this precise stromal arrangement is lost, resulting in ectasia and visual impairment. In the normal cornea, we previously characterised the three-dimensional arrangement of an elastic fiber network spanning the posterior stroma from limbus-to-limbus. In the peripheral cornea/limbus there are elastin-containing sheets or broad fibers, most of which become microfibril bundles (MBs) with little or no elastin component when reaching the central cornea. The purpose of the current study was to compare this network with the elastic fiber distribution in post-surgical keratoconic corneal buttons, using serial block face scanning electron microscopy and transmission electron microscopy. We have demonstrated that the MB distribution is very different in keratoconus. MBs are absent from a region of stroma anterior to Descemet's membrane, an area that is densely populated in normal cornea, whilst being concentrated below the epithelium, an area in which they are absent in normal cornea. We contend that these latter microfibrils are produced as a biomechanical response to provide additional strength to the anterior stroma in order to prevent tissue rupture at the apex of the cone. A lack of MBs anterior to Descemet's membrane in keratoconus would alter the biomechanical properties of the tissue, potentially contributing to the pathogenesis of the disease.

  5. Modeling the distribution of Culex tritaeniorhynchus to predict Japanese encephalitis distribution in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Penny Masuoka

    2010-11-01

    Full Text Available Over 35,000 cases of Japanese encephalitis (JE are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine amplifying hosts. As part of a JE risk analysis, the ecological niche modeling programme, Maxent, was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI. The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito with low probabilities predicted for forest covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September, precipitation in July, summer minimum temperature (May-August and maximum temperature for fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model to the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted, but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.

  6. A revisited Johnson-Mehl-Avrami-Kolmogorov model and the evolution of grain-size distributions in steel

    OpenAIRE

    Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studi...

  7. Distribution and elimination of intravenously administered atrial natriuretic hormone (ANH) to normal and nephrectomized rats

    International Nuclear Information System (INIS)

    Devine, E.; Artman, L.; Budzik, G.; Bush, E.; Holleman, W.

    1986-01-01

    The 24 amino acid peptide, ANH(5-28), was N-terminally labeled with I-125 Bolton-Hunter reagent, iodo-N-succinimidyl 3-(4-hydroxyphenyl)propionate. The I-125 peptide plus 1 μg/kg of the I-127 Bolton-Hunter peptide was injected into normal and nephrectomized anesthetized (Nembutal) rats. Blood samples were drawn into a cocktail developed to inhibit plasma-induced degradation. Radiolabeled peptides were analyzed by HPLC. A biphasic curve of I-125 ANH(5-28) elimination was obtained, the first phase (t1/2 = 15 s) representing in vivo distribution and the second phase (t1/2 = 7-10 min) a measurement of elimination. This biphasic elimination curve was similar in normal and nephrectomized rats. The apparent volumes of distribution were 15-20 ml for the first phase and > 300 ml for the second phase. In order to examine the tissue distribution of the peptide, animals were sacrificed at 2 minutes and the I-125 tissue contents were quantitated. The majority of the label was located in the liver (50%), kidneys (21%) and the lung (5%). The degradative peptides appearing in the plasma and urine of normal rats were identical. No intact radiolabeled ANH(5-28) was found in the urine. In conclusion, iodinated Bolton-Hunter labeled ANH(5-28) is rapidly removed from the circulation by the liver and to a lesser extent by the kidney, but the rate of elimination is not decreased by nephrectomy.

  8. Modeling water vapor and heat transfer in the normal and the intubated airways.

    Science.gov (United States)

    Tawhai, Merryn H; Hunter, Peter J

    2004-04-01

    Intubation of the artificially ventilated patient with an endotracheal tube bypasses the usual conditioning regions of the nose and mouth. In this situation any deficit in heat or moisture in the air is compensated for by evaporation and thermal transfer from the pulmonary airway walls. To study the dynamics of heat and water transport in the intubated airway, a coupled system of nonlinear equations is solved in airway models with symmetric geometry and anatomically based geometry. Radial distributions of heat, water vapor, and velocity in the airway are described by power-law equations. Solution of the time-dependent system of equations yields dynamic airstream and mucosal temperatures and air humidity. Comparison of model results with two independent experimental studies in the normal and intubated airway shows a close correlation over a wide range of minute ventilation. Using the anatomically based model a range of spatially distributed temperature paths is demonstrated, which highlights the model's ability to predict thermal behavior in airway regions currently inaccessible to measurement. Accurate representation of conducting airway geometry is shown to be necessary for simulating mouth-breathing at rates between 15 and 100 L·min⁻¹, but symmetric geometry is adequate for the low minute ventilation and warm inspired air conditions that are generally supplied to the intubated patient.

  9. Effects of adipose tissue distribution on maximum lipid oxidation rate during exercise in normal-weight women.

    Science.gov (United States)

    Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N

    2014-06-01

    Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipoxmax) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg·m⁻²) were separated into two groups with either a low or a high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipoxmax and the maximum lipid oxidation rate (MLOR) were determined during a submaximal incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipoxmax and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO₂max vs 54 ± 4% VO₂max, and 4.8 ± 0.6 mg·min⁻¹·kg FFM⁻¹ vs 8.4 ± 0.9 mg·min⁻¹·kg FFM⁻¹, respectively; P […]). In normal-weight women, a predominantly abdominal fat mass distribution, compared with a predominantly peripheral one, is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by the lower Lipoxmax and MLOR.

  10. Extravascular transport in normal and tumor tissues.

    Science.gov (United States)

    Jain, R K; Gerlowski, L E

    1986-01-01

    The transport characteristics of the normal and tumor tissue extravascular space provide the basis for determining the optimal dosage and schedule regimes of various pharmacological agents in the detection and treatment of cancer. In order for a drug to reach the cellular space where most therapeutic action takes place, several transport steps must first occur: (1) tissue perfusion; (2) permeation across the capillary wall; (3) transport through interstitial space; and (4) transport across the cell membrane. Any of these steps, including intracellular events such as metabolism, can be rate-limiting for drug uptake, and these rate-limiting steps may be different in normal and tumor tissues. This review examines these transport limitations, first from an experimental point of view and then from a modeling point of view. Various types of experimental tumor models which have been used in animals to represent human tumors are discussed. Then, mathematical models of extravascular transport are discussed from the perspective of two approaches: compartmental and distributed. Compartmental models lump one or more sections of a tissue or body into a "compartment" to describe the time course of disposition of a substance. These models contain "effective" parameters which represent the entire compartment. Distributed models consider the structural and morphological aspects of the tissue to determine the transport properties of that tissue. These distributed models describe both the temporal and spatial distribution of a substance in tissues. Each of these modeling techniques is described in detail with applications for cancer detection and treatment in mind.
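    A minimal example of the compartmental approach described above is a two-compartment exchange between plasma and tissue interstitium, with lumped "effective" rate constants; the values below are illustrative assumptions, not taken from the review.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-compartment sketch: drug exchanges between plasma (c_p) and tissue
    # interstitium (c_t); k12/k21 lump capillary permeation and interstitial
    # transport, k10 lumps elimination. Rate constants are illustrative.
    k12, k21, k10 = 0.5, 0.3, 0.1    # 1/h

    def rhs(t, y):
        c_p, c_t = y
        return [-(k12 + k10) * c_p + k21 * c_t,
                k12 * c_p - k21 * c_t]

    sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0], max_step=0.1)
    print(sol.y[:, -1])   # plasma and tissue concentrations after 24 h
    ```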

  11. Particle size distribution models of small angle neutron scattering pattern on ferro fluids

    International Nuclear Information System (INIS)

    Sistin Asri Ani; Darminto; Edy Giri Rachman Putra

    2009-01-01

    The Fe₃O₄ ferro fluid samples were synthesized by a co-precipitation method. The investigation of ferro fluid microstructure is known to be one of the most important problems, because the presence of aggregates and their internal structure greatly influence the properties of ferro fluids. The size and the size dispersion of the particles in the ferro fluids were determined assuming a log-normal distribution of particle radius. The scattering patterns measured by small angle neutron scattering were fitted by the theoretical scattering functions of two limiting models: a log-normal sphere distribution and a fractal aggregate. Two types of particle are detected, which are presumably primary particles of 30 Å in radius and secondary fractal aggregates of 200 Å with a polydispersity of 0.47 up to 0.53. (author)
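    The first of the two limiting models, a log-normal distribution of sphere radii, can be sketched directly: the measured intensity is the sphere form factor averaged over the radius distribution. The sketch below uses the abstract's approximate size scale but an arbitrary prefactor, and omits the fractal-aggregate term.

    ```python
    import numpy as np

    def sphere_amplitude(q, r):
        """Normalized form-factor amplitude of a homogeneous sphere."""
        x = q * r
        return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

    def sans_intensity(q, r_med=30.0, sigma=0.5, r_max=300.0, n_r=600):
        """I(q) for polydisperse spheres with a log-normal radius distribution
        (median r_med and width sigma in angstrom); arbitrary prefactor."""
        r = np.linspace(0.5, r_max, n_r)
        dr = r[1] - r[0]
        pdf = (np.exp(-0.5 * (np.log(r / r_med) / sigma)**2)
               / (r * sigma * np.sqrt(2.0 * np.pi)))
        v = (4.0 / 3.0) * np.pi * r**3
        f = sphere_amplitude(q[:, None], r[None, :])
        return ((pdf * (v * f)**2) * dr).sum(axis=1)

    q = np.logspace(-3, 0, 100)     # 1/angstrom
    print(sans_intensity(q)[:3])
    ```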

  12. Using an APOS Framework to Understand Teachers' Responses to Questions on the Normal Distribution

    Science.gov (United States)

    Bansilal, Sarah

    2014-01-01

    This study is an exploration of teachers' engagement with concepts embedded in the normal distribution. The participants were a group of 290 in-service teachers enrolled in a teacher development program. The research instrument was an assessment task that can be described as an "unknown percentage" problem, which required the application…

  13. Partial LVAD restores ventricular outputs and normalizes LV but not RV stress distributions in the acutely failing heart in silico

    OpenAIRE

    Sack, Kevin L.; Baillargeon, Brian; Acevedo-Bolton, Gabriel; Genet, Martin; Rebelo, Nuno; Kuhl, Ellen; Klein, Liviu; Weiselthaler, Georg M.; Burkhoff, Daniel; Franz, Thomas; Guccione, Julius M.

    2016-01-01

    Purpose: Heart failure is a worldwide epidemic that is unlikely to change as the population ages and life expectancy increases. We sought to detail significant recent improvements to the Dassault Systèmes Living Heart Model (LHM) and use the LHM to compute left ventricular (LV) and right ventricular (RV) myofiber stress distributions under the following 4 conditions: (1) normal cardiac function; (2) acute left heart failure (ALHF); (3) ALHF treated using an LV assist device (LVAD) flow rate o...

  14. On the possible ''normalization'' of experimental curves of 230Th vertical distribution in abyssal oceanic sediments

    International Nuclear Information System (INIS)

    Kuznetsov, Yu.V.; Al'terman, Eh.I.; Lisitsyn, A.P. (AN SSSR, Moscow. Inst. Okeanologii)

    1981-01-01

    The possibilities of the method of normalization of experimental ionium curves with reference to the dating of abyssal sediments and the determination of their accumulation rates are studied. The method is based on the correlation between extrema of the ionium curves and variations of the Fe, Mn, organic C and P contents in abyssal oceanic sediments. It has been found that the method can be successfully applied to correct ²³⁰Th vertical distribution data obtained by low-background γ-spectrometry. The method gives the most reliable results when the vertical distribution curves of the elements that concentrate ²³⁰Th vary concordantly with one another. In many cases the normalization of the experimental ionium curves makes it possible to carry out age stratification of the sediment.

  15. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighing approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)
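    The paper's iterative two-component estimation is not reproduced here, but a representative component-based step, the classical fan-sum estimate of crystal efficiencies from a uniform-source scan, can be sketched as follows (geometric factors are dropped and all numbers are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_det = 64
    eps_true = rng.uniform(0.8, 1.2, n_det)      # per-crystal efficiencies

    # Counts from a uniform source: C_ij ∝ eps_i * eps_j (geometry dropped)
    C = rng.poisson(np.outer(eps_true, eps_true) * 1000).astype(float)
    np.fill_diagonal(C, 0.0)

    # Fan-sum: each row sum is proportional to that crystal's efficiency
    fan = C.sum(axis=1)
    eps_est = fan / fan.mean()
    print(np.corrcoef(eps_true / eps_true.mean(), eps_est)[0, 1])  # ≈ 1
    ```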

  16. On the generation of log-Levy distributions and extreme randomness

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2011-01-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Levy distributions. The log-Levy distributions are the Levy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Levy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness. (paper)
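    The dichotomy can be illustrated numerically: summing light-tailed log-increments gives a log-normal by the CLT, while summing heavy-tailed (alpha-stable) log-increments gives a log-Levy by the generalized CLT. This is a generic sketch, not the paper's specific growth model; all parameters are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(3)
    n_steps, n_paths = 100, 500

    # Light-tailed log-increments: the product tends to a log-normal (CLT).
    log_normal_incr = rng.normal(0.0, 0.05, size=(n_steps, n_paths))
    # Heavy-tailed stable log-increments: the product tends to a log-Levy.
    log_levy_incr = levy_stable.rvs(1.5, 0.0, scale=0.05,
                                    size=(n_steps, n_paths), random_state=rng)

    x_lognormal = np.exp(log_normal_incr.sum(axis=0))
    x_loglevy = np.exp(log_levy_incr.sum(axis=0))

    for name, x in [("log-normal", x_lognormal), ("log-Levy", x_loglevy)]:
        lx = np.log(x)
        # Extreme randomness shows up as far larger tail excursions of log x
        print(f"{name}: max |log x| / std(log x) = {np.abs(lx).max() / lx.std():.1f}")
    ```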

  17. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  18. One-dimensional time-dependent conduction states and temperature distribution along a normal zone during a quench

    International Nuclear Information System (INIS)

    Lopez, G.

    1991-01-01

    Quench simulation of a superconducting (s.c.) magnet requires some assumptions about the evolution of the normal zone and its temperature profile. The axial evolution of the normal zone is described by the longitudinal quench velocity, while the transversal quench propagation may be described by the transversal quench velocity or by the turn-to-turn quench propagation time delay. The temperature distribution has been assumed adiabatic-like or cosine-like in two different computer programs. Although the two profiles differ, they produce more or less the same qualitative quench results, differing only by about 8%. Unfortunately, there are no experimental data for the temperature profile along the conductor in a quench event that would allow a realistic comparison. The temperature profile has received little attention, mainly because it is not so critical a parameter in quench analysis. Nonetheless, a confident quench analysis requires that the temperature distribution along the normal zone be taken into account with good approximation. In this paper, an analytical study of the temperature profile is presented.

  19. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CLpop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CLpop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CLpop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CLpop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with median weight, a weight outside the observed range is used, the RSE of the CLpop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CLTV) at a relevant weight to evaluate the precision of CL predictions.
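    The inflation of the RSE when the normalization weight lies far outside the data can be shown with ordinary log-linear least squares standing in for a full population pharmacokinetic fit; the weights, allometric exponent and variability below are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    wt = rng.normal(2.7, 0.5, size=80).clip(1.5)      # neonatal weights, kg
    # log CL with a fixed allometric exponent of 0.75 plus residual noise
    log_cl = np.log(0.9) + 0.75 * np.log(wt / 2.7) + rng.normal(0, 0.2, 80)

    for wt_norm in (np.median(wt), 1.0, 70.0):
        X = np.column_stack([np.ones_like(wt), np.log(wt / wt_norm)])
        beta, *_ = np.linalg.lstsq(X, log_cl, rcond=None)
        resid = log_cl - X @ beta
        sigma2 = resid @ resid / (len(wt) - 2)
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])  # SE of log CLpop
        print(f"norm to {wt_norm:5.1f} kg: CLpop={np.exp(beta[0]):.3f}, "
              f"RSE≈{100 * se:.1f}%")
    ```

    The typical CL predicted at any given weight is the same in all three fits; only the precision of the intercept (CLpop) degrades as the normalization weight moves away from the observed weights, mirroring the paper's 8.1% vs 48.8% contrast.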

  20. Distribution of CD163-positive cell and MHC class II-positive cell in the normal equine uveal tract.

    Science.gov (United States)

    Sano, Yuto; Matsuda, Kazuya; Okamoto, Minoru; Takehana, Kazushige; Hirayama, Kazuko; Taniyama, Hiroyuki

    2016-02-01

    Antigen-presenting cells (APCs) in the uveal tract participate in ocular immunity, including immune homeostasis and the pathogenesis of uveitis. In horses, although uveitis is the most common ocular disorder, little is known about ocular immunity, such as the distribution of APCs. In this study, we investigated the distribution of CD163-positive and MHC II-positive cells in the normal equine uveal tract using an immunofluorescence technique. Eleven eyes from 10 Thoroughbred horses aged 1 to 24 years were used. Indirect immunofluorescence was performed using the primary antibodies CD163, MHC class II (MHC II) and CD20. To determine the site of their greatest distribution, positive cells were manually counted in 3 different parts of the uveal tract (ciliary body, iris and choroid), and their average numbers were assessed by statistical analysis. Pleomorphic CD163- and MHC II-expressing cells were detected throughout the equine uveal tract, but no CD20-expressing cells were detected. The statistical analysis demonstrated that the distribution of CD163- and MHC II-positive cells was concentrated in the ciliary body. These results show that the ciliary body is the largest site of their distribution in the normal equine uveal tract, and the ciliary body is therefore considered to play an important role in uveal and/or ocular immune homeostasis. The data provided in this study will further the understanding of equine ocular immunity in the normal state and might be beneficial for understanding the mechanisms of ocular disorders such as equine uveitis.

  1. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.

  2. Quasi-normal modes from non-commutative matrix dynamics

    Science.gov (United States)

    Aprile, Francesco; Sanfilippo, Francesco

    2017-09-01

    We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real-time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single-trace observables, we isolate the lowest quasi-normal mode, and we determine its frequency as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.

  3. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    Full Text Available This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase the stress levels beyond normal use during the experiment in order to observe failures. The test items are assumed to follow the Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed; these include maximum likelihood estimation and confidence interval estimation based on asymptotic normality, which generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  4. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Full Text Available Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014), this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature is provided to show the applicability of the proposed model.
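    The classical composed-error structure that the paper's dependent-error model nests can be simulated in a few lines (the copula-style dependence between the two error components is not reproduced here, and the parameter values are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000

    # Composed error of a production frontier: eps = v - u, with
    # v ~ Normal(0, sigma_v) symmetric noise and u ~ Exponential(lam)
    # one-sided inefficiency; independence is assumed in this sketch.
    sigma_v, lam = 0.2, 0.3
    v = rng.normal(0.0, sigma_v, n)
    u = rng.exponential(lam, n)
    eps = v - u

    print(eps.mean(), -lam)                  # under independence E[eps] = -lam
    print(((eps - eps.mean())**3).mean())    # negative skew, the SFA signature
    ```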

  5. Basic study on radiation distribution sensing with normal optical fiber

    International Nuclear Information System (INIS)

    Naka, R.; Kawarabayashi, J.; Uritani, A.; Iguchi, T.; Kaneko, J.; Takeuchi, H.; Kakuta, T.

    2000-01-01

    Recently, some methods of radiation distribution sensing with optical fibers have been proposed. These methods employ scintillating fibers or scintillators with wavelength-shifting fibers. The positions of radiation interactions are detected by applying a time-of-flight (TOF) technique to the scintillation photon propagation. In the former method, the attenuation length for the scintillation photons in the scintillating fiber is relatively short, so the operating length of the sensor is limited to several meters. In the latter method, a radiation distribution is obtained only discretely, not continuously. To overcome these shortcomings, a normal optical fiber made of polymethyl methacrylate (PMMA) is used in this study. Although the scintillation efficiency of PMMA is very low, several photons are emitted per interaction with radiation. The fiber is transparent to the emitted photons, so a relatively long operating length can be achieved and a radiation distribution can be obtained continuously. This paper describes the principle of the position-sensing method based on the time-of-flight technique and preliminary results obtained for 90Sr-90Y beta rays, 137Cs gamma rays, and 14 MeV neutrons. The spatial resolutions for these three kinds of radiation are 0.30 m, 0.37 m, and 0.13 m, and the detection efficiencies are 1.1 × 10^-3, 1.6 × 10^-7, and 5.4 × 10^-6, respectively, with a 10 m operating length. The results of a spectroscopic study on the optical properties of the fiber are also described. (author)
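
    A minimal sketch of the time-of-flight position estimate, assuming the fiber is read out at both ends; the constant values and names are illustrative (1.49 is a typical refractive index for a PMMA core):

        C_VACUUM = 3.0e8   # speed of light in vacuum (m/s)
        N_PMMA = 1.49      # typical refractive index of a PMMA fiber core

        def interaction_position(dt, fiber_length):
            """Event position along the fiber, measured from the 'near' end,
            from the arrival-time difference dt = t_far - t_near (TOF principle)."""
            v = C_VACUUM / N_PMMA               # photon speed in the fiber
            return 0.5 * (fiber_length - v * dt)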

  6. The distribution of YKL-40 in osteoarthritic and normal human articular cartilage

    DEFF Research Database (Denmark)

    Volck, B; Ostergaard, K; Johansen, J S

    1999-01-01

    YKL-40, also called human cartilage glycoprotein-39, is a major secretory protein of human chondrocytes in cell culture. YKL-40 mRNA is expressed by cartilage from patients with rheumatoid arthritis, but is not detectable in normal human cartilage. The aim was to investigate the distribution of YKL-40 in osteoarthritic (n=9) and macroscopically normal (n=5) human articular cartilage, collected from 12 pre-selected areas of the femoral head, to discover a potential role for YKL-40 in cartilage remodelling in osteoarthritis. Immunohistochemical analysis showed that YKL-40 staining was found in chondrocytes of osteoarthritic cartilage mainly in the superficial and middle zones of the cartilage rather than the deep zone. There was a tendency towards a high number of YKL-40-positive chondrocytes in areas of the femoral head with a considerable biomechanical load. The number of chondrocytes with a positive...

  7. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian...

  8. Distribution of crushing strength of tablets

    DEFF Research Database (Denmark)

    Sonnergaard, Jørn

    2002-01-01

    The distribution of a given set of data is important since most parametric statistical tests are based on the assumption that the studied data are normally distributed. In the analysis of fracture mechanics the Weibull distribution is widely used, and the derived Weibull modulus is interpreted as a mate... Data from nine model tablet formulations and four commercial tablets are shown to follow the normal distribution. The importance of proper cleaning of the crushing strength apparatus is demonstrated.

  9. THE BANKRUPT RISK IN FEED DISTRIBUTION BRANCH IN DOLJ DISTRICT – FDR MODEL

    Directory of Open Access Journals (Sweden)

    Ovidiu CĂPRARIU

    2010-01-01

    Full Text Available In this article we present a score function for calculating bankruptcy risk in a specific domain: feed distribution. All models for analysing bankruptcy risk are based on a score function from which it is determined, approximately, whether a company will go bankrupt or will achieve good economic results in the period immediately following the analysis. Drawing on an analysis of the feed distribution branch, we elaborated a score function for estimating bankruptcy risk, based on financial and non-financial studies of many companies, and called this model the "Feed Distribution Risk Model" (FDR). The aim was to obtain a high level of precision, so we chose the feed industry, and more specifically the feed distribution branch, and analysed statistics on the evolution of feed distribution companies in Romania and on the normal levels of several financial and non-financial indicators for these companies. We selected five feed distribution companies and computed two international score functions, two Romanian score functions, and the FDR function. We conclude that the three main differences between the classic models and this one are that the FDR model is built for a specific branch (feed distribution), it uses a larger number of indicators, and it uses non-financial indicators, which reflect the shareholders' creditworthiness. As directions for further research, we propose elaborating similar models for other branches and adjusting the financial information with up-to-date data.

  10. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering in Outer RB

    Science.gov (United States)

    Khazanov, G. V.; Gamayunov, K. V.

    2007-01-01

    We present the equatorial and bounce-averaged pitch angle diffusion coefficients for scattering of relativistic electrons by the H+ mode of EMIC waves. Both model (prescribed) and self-consistent distributions over the wave normal angle are considered. The main results of our calculation can be summarized as follows: First, in comparison with field-aligned waves, the intermediate and highly oblique waves reduce the pitch angle range subject to diffusion, and strongly suppress the scattering rate for low-energy electrons (E less than 2 MeV). Second, for electron energies greater than 5 MeV, the |n| = 1 resonances operate only in a narrow region at large pitch angles and, despite providing the greatest contribution in the case of field-aligned waves, cannot cause electron diffusion into the loss cone. For those energies, oblique waves at |n| greater than 1 resonances are more effective, extending the range of pitch angle diffusion down to the loss cone boundary and increasing diffusion at small pitch angles by orders of magnitude.

  11. Differentiation in boron distribution in adult male and female rats' normal brain: A BNCT approach

    International Nuclear Information System (INIS)

    Goodarzi, Samereh; Pazirandeh, Ali; Jameie, Seyed Behnamedin; Baghban Khojasteh, Nasrin

    2012-01-01

    Boron distribution in adult male and female rats' normal brain after boron carrier injection (0.005 g Boric Acid+0.005 g Borax+10 ml distilled water, pH: 7.4) was studied in this research. Coronal sections of control and trial animal tissue samples were irradiated with thermal neutrons. Using alpha autoradiography, significant differences in boron concentration were seen in forebrain, midbrain and hindbrain sections of male and female animal groups with the highest value, four hours after boron compound injection. - Highlights: ► Boron distribution in male and female rats' normal brain was studied in this research. ► Coronal sections of animal tissue samples were irradiated with thermal neutrons. ► Alpha and Lithium tracks were counted using alpha autoradiography. ► Different boron concentration was seen in brain sections of male and female rats. ► The highest boron concentration was seen in 4 h after boron compound injection.

  12. Log-normality of indoor radon data in the Walloon region of Belgium

    International Nuclear Information System (INIS)

    Cinelli, Giorgia; Tondeur, François

    2015-01-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but the low-C trend can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail, which is of no practical interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers and rarely to a "fat tail". With very few exceptions, log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups. - Highlights: • Deviations of the distribution of Belgian indoor Rn data from the log-normal trend. • 3-component structure of indoor Rn: subsoil, outdoor air and building materials. • Simulated data generated to provide a theoretical frame for understanding deviations. • Data organised in homogeneous geological groups; better agreement with the log-normal
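
    A minimal sketch of the proposed prescription, fitting a log-normal using only the high-concentration half of the data via a truncated-normal likelihood on the logarithms; the implementation details are an assumption, not the authors' code:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_lognormal_high_half(concentrations):
            """Fit log-normal parameters (mu, sigma of the log-data) using only
            values above the sample median, treated as a normal sample
            truncated below at the median."""
            x = np.log(np.asarray(concentrations))
            m = np.median(x)
            xu = x[x >= m]

            def nll(p):
                mu, sigma = p
                if sigma <= 0:
                    return np.inf
                # truncated-normal log-likelihood: density renormalized above m
                ll = norm.logpdf(xu, mu, sigma) - norm.logsf(m, mu, sigma)
                return -ll.sum()

            res = minimize(nll, x0=[xu.mean(), xu.std(ddof=1)], method="Nelder-Mead")
            return res.x   # geometric mean = exp(mu), geometric SD = exp(sigma)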

  13. From Logical to Distributional Models

    Directory of Open Access Journals (Sweden)

    Anne Preller

    2014-12-01

    Full Text Available The paper relates two variants of semantic models for natural language, logical functional models and compositional distributional vector space models, by transferring the logic and reasoning from the logical to the distributional models. The geometrical operations of quantum logic are reformulated as algebraic operations on vectors. A map from functional models to vector space models makes it possible to compare the meaning of sentences word by word.

  14. Analysis of a hundred-years series of magnetic activity indices. III. Is the frequency distribution logarithmo-normal

    International Nuclear Information System (INIS)

    Mayaud, P.N.

    1976-01-01

    Because of the various components of positive conservation existing in the series of aa indices, their frequency distribution is necessarily distorted with respect to any random distribution. However, when one takes these various components into account, the observed distribution can be considered to be a logarithmo-normal (log-normal) distribution. This implies that geomagnetic activity satisfies the conditions of the central limit theorem, according to which a phenomenon presenting such a distribution is due to independent causes whose effects are multiplicative. Furthermore, the distortion of the frequency distribution caused by the 11-year and 90-year cycles corresponds to a pure attenuation effect; an interpretation in terms of solar 'coronal holes' is proposed [fr]

  15. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    Science.gov (United States)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
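
    A toy version of the feed-to-filtrate calculation, reduced to pure sieving through a few planes with normally distributed pore sizes; it ignores the Brownian dynamics and kinetics treated in the paper and only illustrates the structure of the model (all names are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def filtrate_sizes(n, feed_mu, feed_sigma, pore_mu, pore_sigma, n_planes=5):
            """Sample feed particle diameters from a normal distribution and keep
            only particles that meet a wider pore at every membrane plane."""
            d = rng.normal(feed_mu, feed_sigma, n)
            passed = d[d > 0]                      # discard non-physical sizes
            for _ in range(n_planes):
                pores = rng.normal(pore_mu, pore_sigma, passed.size)
                passed = passed[passed < pores]    # particle passes a wider pore
            return passed                          # filtrate size distribution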

  16. Distribution pattern of urine albumin creatinine ratio and the prevalence of high-normal levels in untreated asymptomatic non-diabetic hypertensive patients.

    Science.gov (United States)

    Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo

    2011-01-01

    Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. Microalbuminuria and macroalbuminuria were defined as a UACR ≥30 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or higher UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.

  17. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior distribution is a kind of non-informative prior, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior distribution. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
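
    A minimal sketch of such a sampler for Y = XB + E with rows of E ~ N(0, Σ) under the Jeffreys prior p(B, Σ) ∝ |Σ|^-(q+1)/2, assuming n is comfortably larger than the numbers of predictors and responses; the conditionals follow from standard multivariate-regression results, and the code is an illustration rather than the authors' implementation:

        import numpy as np
        from scipy.stats import invwishart, matrix_normal

        def gibbs_mvreg(Y, X, n_iter=2000, seed=0):
            """Gibbs sampler: B | Sigma ~ MatrixNormal(B_hat, (X'X)^-1, Sigma),
            Sigma | B ~ InverseWishart(n, (Y - XB)'(Y - XB))."""
            rng = np.random.default_rng(seed)
            n, q = Y.shape
            xtx_inv = np.linalg.inv(X.T @ X)
            b_hat = xtx_inv @ X.T @ Y
            resid = Y - X @ b_hat
            sigma = resid.T @ resid / n            # starting value
            b_draws, s_draws = [], []
            for _ in range(n_iter):
                b = matrix_normal.rvs(mean=b_hat, rowcov=xtx_inv, colcov=sigma,
                                      random_state=rng)
                e = Y - X @ b
                sigma = invwishart.rvs(df=n, scale=e.T @ e, random_state=rng)
                b_draws.append(b)
                s_draws.append(sigma)
            return np.array(b_draws), np.array(s_draws)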

  18. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Smith, Donald L.; Capote, Roberto

    2013-01-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue
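
    The transformation itself is compact enough to state directly: if X_i = exp(Y_i) with (Y_1, Y_2) bivariate normal, normal-space correlation rho_N and log-space standard deviations s_1, s_2, then the lognormal-space correlation is rho_LN = (exp(rho_N s_1 s_2) - 1) / sqrt((exp(s_1^2) - 1)(exp(s_2^2) - 1)). A small sketch of this standard result (not code from the paper):

        import numpy as np

        def lognormal_corr(rho_n, s1, s2):
            """Correlation of (exp(Y1), exp(Y2)) given the normal-space
            correlation rho_n and log-space standard deviations s1, s2."""
            return np.expm1(rho_n * s1 * s2) / np.sqrt(np.expm1(s1**2) * np.expm1(s2**2))

        # The attainable minimum, lognormal_corr(-1.0, s1, s2), approaches 0 as
        # s1 or s2 grows: strong anti-correlations are indeed forbidden for
        # lognormal variables with large relative uncertainties.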

  19. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, Gašper, E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, Andrej, E-mail: andrej.trkov@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Smith, Donald L., E-mail: donald.l.smith@anl.gov [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States); Capote, Roberto, E-mail: roberto.capotenoy@iaea.org [NAPC–Nuclear Data Section, International Atomic Energy Agency, PO Box 100, Vienna-A-1400 (Austria)

    2013-11-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.

  20. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    Full Text Available In cities’ policy-making, grasping the determinants of carbon dioxide emissions in Chinese cities is a pressing issue, and the common method is the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emissions. However, less work discusses estimation accuracy, especially in a framework of non-normal distributions and heterogeneity among cities’ emissions. To improve estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of asymmetric Laplace distributions (ALDs) to approximate the true distribution of the error term. Meanwhile, a purpose-designed two-layer EM algorithm is used to obtain the estimators. We test robustness by comparing the results of five different models and find that the ALD mixture model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.

  1. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response
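
    For context, the Box-Cox family generalizes the log-transform: y(λ) = (y^λ - 1)/λ for λ ≠ 0 and log y for λ = 0, so the lognormal model is the special case λ = 0. A quick illustration on synthetic response times (not the MCAT data) using SciPy's maximum-likelihood choice of λ:

        import numpy as np
        from scipy.stats import boxcox

        rng = np.random.default_rng(1)
        rt = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # synthetic response times
        transformed, lam = boxcox(rt)   # lam near 0 recovers the log-transform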

  2. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provide insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS Surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric satellite radar interferometry (InSAR) for the Kozani Grevena earthquake with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  3. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    Science.gov (United States)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus time-varying mesh stiffness (TVMS) can be treated as an indication of gear health conditions. This study is devoted to investigating the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, the influences of tooth pitting on TVMS are analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for the TVMS evaluation of pitting gears.
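
    A minimal sketch of the pit-placement part of the model, placing pits uniformly across the tooth width and normally in the tooth-height direction as the abstract describes; the parameter names and the circular-pit assumption are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_pits(n_pits, tooth_width, pitch_line_y, height_sigma,
                        radius_mu, radius_sigma):
            """Return centers (x, y) and radii r of n_pits circular pits:
            x ~ Uniform over the tooth width, y ~ Normal about the pitch line."""
            x = rng.uniform(0.0, tooth_width, n_pits)
            y = rng.normal(pitch_line_y, height_sigma, n_pits)
            r = np.abs(rng.normal(radius_mu, radius_sigma, n_pits))
            return x, y, r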

  4. Modeling the distribution of Schistosoma mansoni and host snails in Uganda using satellite sensor data and Geographical Information Systems

    DEFF Research Database (Denmark)

    Stensgaard, Anna-Sofie; Jørgensen, A; Kabatereine, N B

    2005-01-01

    The potential value of MODIS satellite sensor data on Normalized Difference Vegetation Index (NDVI) and land surface temperatures (LST) for describing the distribution of the Schistosoma mansoni-Biomphalaria pfeifferi/Biomphalaria sudanica parasite-snail system in inland Uganda was tested by developing annual and seasonal composite models and iteratively analysing their relationship with parasite and snail distribution. The dry season composite model predicted an endemic area that produced the best fit with the distribution of schools with ≥5% prevalence. NDVI values of 151-174, day...

  5. Modeling of parallel-plate regenerators with non-uniform plate distributions

    DEFF Research Database (Denmark)

    Jensen, Jesper Buch; Engelbrecht, Kurt; Bahl, Christian Robert Haffenden

    2010-01-01

    plate spacing distributions are presented in order to understand the impact of spacing non-uniformity. Simulations of more realistic distributions where the plate spacings follow normal distributions are then discussed in order to describe the deviation of the performance of a regenerator relative...

  6. The analysis of annual dose distributions for radiation workers

    International Nuclear Information System (INIS)

    Mill, A.J.

    1984-05-01

    The system of dose limitation recommended by the ICRP includes the requirement that no worker shall exceed the current dose limit of 50 mSv/a. Continuous exposure at this limit would correspond to an annual death rate comparable with that of 'high-risk' industries if all workers were continuously exposed at the dose limit. In practice, there is a distribution of doses with an arithmetic mean lower than the dose limit. In its 1977 report UNSCEAR defined a reference dose distribution for the purposes of comparison. However, this two-parameter distribution does not show the departure from log-normality normally observed in actual distributions at doses which are a significant proportion of the annual limit. In this report an alternative model is suggested, based on a three-parameter log-normal distribution. The third parameter is an "effective dose limit", and such a model fits very well the departure from log-normality observed in actual dose distributions. (author)
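
    One plausible reading of the third parameter is a transform that forces the fitted density to zero at an "effective dose limit" E; the parameterization below is an assumption for illustration, not necessarily the report's exact form:

        import numpy as np

        def sample_capped_lognormal(mu, sigma, dose_limit, size, seed=0):
            """Draw annual doses from a log-normal in u = d / (1 - d/E), so the
            resulting doses d lie in (0, E) and the density vanishes at E.
            NOTE: this bounded transform is an assumed parameterization."""
            rng = np.random.default_rng(seed)
            u = rng.lognormal(mu, sigma, size)
            return u / (1.0 + u / dose_limit)   # maps (0, inf) onto (0, E)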

  7. Angular momentum dependence of the distribution of shell model eigenenergies

    International Nuclear Information System (INIS)

    Yen, M.K.

    1974-01-01

    In the conventional shell model calculation the many-particle energy matrices are constructed and diagonalized for definite angular momentum and parity. However, the resulting set of eigenvalues possesses near-normal behavior, so a simple statistical description is possible. Usually one needs only about four parameters, essentially moments of the distribution, to capture the average level densities if the size of the set is not too small. The difficulty lies in the as yet unsolved problem of calculating moments in the fixed angular momentum subspace. We have derived a formula to approximate the angular-momentum-projection dependence of any operator averaged in a shell model basis. This approximate formula, a truncated series in Hermite polynomials, has proved very accurate numerically and has been justified analytically for large systems. Applying this formula to seven physical cases, we have found that the fixed-angular-momentum-projection energy centroid, width and higher central moments can be obtained accurately, provided that for even-even nuclei the even and odd angular momentum projections are treated separately. Using this information one can construct the energy distribution for fixed angular momentum projection assuming normal behavior. The fixed angular momentum level densities are then deduced and spectra are extracted. Results are in reasonably good agreement with the exact values, although not as good as those obtained using exact fixed angular momentum moments. (Diss. Abstr. Int., B)

  8. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...

  9. Physics of collisionless scrape-off-layer plasma during normal and off-normal Tokamak operating conditions

    International Nuclear Information System (INIS)

    Hassanein, A.; Konkashbaev, I.

    1999-01-01

    The structure of a collisionless scrape-off-layer (SOL) plasma in tokamak reactors is studied to define the electron distribution function and the corresponding sheath potential between the divertor plate and the edge plasma. The collisionless model is shown to be valid during the thermal phase of a plasma disruption, as well as during the newly desired low-recycling normal phase of operation with low-density, high-temperature edge plasma conditions. An analytical solution is developed by solving the Fokker-Planck equation for electron distribution and balance in the SOL. The solution is in good agreement with numerical studies using Monte Carlo methods. The analytical solutions provide insight into the role of different physical and geometrical processes in a collisionless SOL during disruptions and during the enhanced phase of normal operation over a wide range of parameters

  10. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, maximum likelihood estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted-gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
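
    A minimal sketch of a penalized (MAP) EM for a one-dimensional normal mixture, where an inverted-gamma(alpha, beta) prior on each variance keeps the estimates away from the singular sigma -> 0 solutions; the hyperparameter values are illustrative and the code is not the authors' implementation:

        import numpy as np

        def penalized_em_gmm(x, k=2, alpha=2.0, beta=0.1, n_iter=200, seed=0):
            """MAP-EM for a 1-D k-component normal mixture with an
            inverted-gamma(alpha, beta) prior on each component variance."""
            rng = np.random.default_rng(seed)
            n = x.size
            w = np.full(k, 1.0 / k)
            mu = rng.choice(x, k, replace=False)
            var = np.full(k, x.var())
            for _ in range(n_iter):
                # E-step: responsibilities r[i, j] proportional to w_j N(x_i; mu_j, var_j)
                logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                        - 0.5 * (x[:, None] - mu) ** 2 / var)
                r = np.exp(logp - logp.max(axis=1, keepdims=True))
                r /= r.sum(axis=1, keepdims=True)
                nk = r.sum(axis=0)
                # M-step: the prior adds 2*beta to the scatter and 2*(alpha + 1)
                # to the effective count, so the variances stay strictly positive
                w = nk / n
                mu = (r * x[:, None]).sum(axis=0) / nk
                ss = (r * (x[:, None] - mu) ** 2).sum(axis=0)
                var = (ss + 2.0 * beta) / (nk + 2.0 * (alpha + 1.0))
            return w, mu, var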

  11. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p ...), and subgroup-adjusted normalization performed better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
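
    An illustrative reconstruction of the idea behind SAN (not the published algorithm's exact formula): standardize each laboratory value within its clinico-epidemiologic subgroup, then rescale every subgroup to a common target mean and standard deviation:

        import pandas as pd

        def subgroup_adjusted_normalize(df, value_col, group_cols,
                                        target_mean=0.0, target_sd=1.0):
            """Render subgroup means/SDs identical across sites: z-score within
            each subgroup (e.g., age band x gender), then map to the target."""
            g = df.groupby(group_cols)[value_col]
            z = (df[value_col] - g.transform("mean")) / g.transform("std")
            return z * target_sd + target_mean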

  12. Multivariate extended skew-t distributions and related families

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2010-12-01

    A class of multivariate extended skew-t (EST) distributions is introduced and studied in detail, along with closely related families such as the subclass of extended skew-normal distributions. Besides mathematical tractability and modeling flexibility in terms of both skewness and heavier tails than the normal distribution, the most relevant properties of the EST distribution include closure under conditioning and ability to model lighter tails as well. The first part of the present paper examines probabilistic properties of the EST distribution, such as various stochastic representations, marginal and conditional distributions, linear transformations, moments and in particular Mardia’s measures of multivariate skewness and kurtosis. The second part of the paper studies statistical properties of the EST distribution, such as likelihood inference, behavior of the profile log-likelihood, the score vector and the Fisher information matrix. Especially, unlike the extended skew-normal distribution, the Fisher information matrix of the univariate EST distribution is shown to be non-singular when the skewness is set to zero. Finally, a numerical application of the conditional EST distribution is presented in the context of confidential data perturbation.

  13. Multivariate extended skew-t distributions and related families

    KAUST Repository

    Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2010-01-01

    A class of multivariate extended skew-t (EST) distributions is introduced and studied in detail, along with closely related families such as the subclass of extended skew-normal distributions. Besides mathematical tractability and modeling flexibility in terms of both skewness and heavier tails than the normal distribution, the most relevant properties of the EST distribution include closure under conditioning and ability to model lighter tails as well. The first part of the present paper examines probabilistic properties of the EST distribution, such as various stochastic representations, marginal and conditional distributions, linear transformations, moments and in particular Mardia’s measures of multivariate skewness and kurtosis. The second part of the paper studies statistical properties of the EST distribution, such as likelihood inference, behavior of the profile log-likelihood, the score vector and the Fisher information matrix. Especially, unlike the extended skew-normal distribution, the Fisher information matrix of the univariate EST distribution is shown to be non-singular when the skewness is set to zero. Finally, a numerical application of the conditional EST distribution is presented in the context of confidential data perturbation.

  14. Group normalization for genomic data.

    Science.gov (United States)

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.

  15. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM index values that are comparable to those of the block-matching 3D transformation (BM3D method.

  16. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    Full Text Available In this paper we study the problem of schema design and normalization in XML databases model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research works, in which we presented the functional dependencies and normal forms of XML Schema, we present the decomposition algorithm for converting any XML Schema into normalized one, that satisfies X-BCNF.

  17. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM).

    Science.gov (United States)

    West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-10-11

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), the generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
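
    The two vegetation indices named above have standard definitions, sketched here for Landsat 5 TM reflectances (band 4 = near-infrared, band 3 = red; L = 0.5 is the usual soil-brightness correction):

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index."""
            return (nir - red) / (nir + red)

        def savi(nir, red, L=0.5):
            """Soil-Adjusted Vegetation Index."""
            return (1.0 + L) * (nir - red) / (nir + red + L)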

  18. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    Science.gov (United States)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), the generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  19. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    Science.gov (United States)

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
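
    For reference, the normal approximation in question replaces the exact tail area with a Gaussian one; for the two-sample rank-sum statistic W (sum of ranks in sample 1, no ties) it reads as below. This is the textbook approximation, not code from the article:

        from math import sqrt
        from scipy.stats import norm

        def rank_sum_normal_approx(w, n1, n2):
            """Two-sided p-value for the Wilcoxon rank-sum statistic W under
            the normal approximation: mean n1(n1+n2+1)/2, var n1*n2*(n1+n2+1)/12."""
            mean = n1 * (n1 + n2 + 1) / 2.0
            sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
            z = (w - mean) / sd
            return 2.0 * norm.sf(abs(z))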

  20. Transforming between discrete and continuous angle distribution models: application to protein χ1 torsions

    International Nuclear Information System (INIS)

    Schmidt, Jürgen M.

    2012-01-01

    Two commonly employed angular-mobility models for describing amino-acid side-chain χ 1 torsion conformation, the staggered-rotamer jump and the normal probability density, are discussed and performance differences in applications to scalar-coupling data interpretation highlighted. Both models differ in their distinct statistical concepts, representing discrete and continuous angle distributions, respectively. Circular statistics, introduced for describing torsion-angle distributions by using a universal circular order parameter central to all models, suggest another distribution of the continuous class, here referred to as the elliptic model. Characteristic of the elliptic model is that order parameter and circular variance form complementary moduli. Transformations between the parameter sets that describe the probability density functions underlying the different models are provided. Numerical aspects of parameter optimization are considered. The issues are typified by using a set of χ 1 related 3 J coupling constants available for FK506-binding protein. The discrete staggered-rotamer model is found generally to produce lower order parameters, implying elevated rotatory variability in the amino-acid side chains, whereas continuous models tend to give higher order parameters that suggest comparatively less variation in angle conformations. The differences perceived regarding angular mobility are attributed to conceptually different features inherent to the models.
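
    The circular order parameter central to the discussion is computed from the torsion-angle sample as the modulus of the mean resultant vector, with the circular variance as its complement; a brief sketch (names illustrative):

        import numpy as np

        def circular_stats(angles_deg):
            """Circular mean (deg), order parameter R, and circular variance 1 - R."""
            theta = np.deg2rad(np.asarray(angles_deg))
            z = np.exp(1j * theta).mean()          # mean resultant vector
            return np.rad2deg(np.angle(z)), np.abs(z), 1.0 - np.abs(z)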

  1. Modeling of Bacillus cereus distribution in pasteurized milk at the time of consumption

    Directory of Open Access Journals (Sweden)

    Ľubomír Valík

    2013-02-01

    Full Text Available Modelling of the Bacillus cereus distribution at the time of consumption, using data from pasteurized milk produced in Slovakia, was performed in this study. The Modular Process Risk Model (MPRM) methodology was applied over all the consecutive steps in the food chain. The main factors involved in the risk of being exposed to unacceptable levels of B. cereus (the model output) were the initial density of B. cereus after milk pasteurization and the storage temperatures and times (the model inputs). Monte Carlo simulations were used to calculate the probability distribution of B. cereus density. By applying sensitivity analysis, the influence of the input factors and their threshold values on the final count of B. cereus was determined. The results of the general-case exposure assessment indicated that almost 14% of Tetra Brik cartons can contain >10^4 cfu/ml of B. cereus given the temperature distribution taken into account and the time of pasteurized milk consumption. doi:10.5219/264

  2. The distribution of YKL-40 in osteoarthritic and normal human articular cartilage

    DEFF Research Database (Denmark)

    Volck, B; Ostergaard, K; Johansen, J S

    1999-01-01

    YKL-40, also called human cartilage glycoprotein-39, is a major secretory protein of human chondrocytes in cell culture. YKL-40 mRNA is expressed by cartilage from patients with rheumatoid arthritis, but is not detectable in normal human cartilage. The aim was to investigate the distribution of YKL-40 ... in chondrocytes of osteoarthritic cartilage mainly in the superficial and middle zones of the cartilage rather than the deep zone. There was a tendency towards a high number of YKL-40-positive chondrocytes in areas of the femoral head with a considerable biomechanical load. The number of chondrocytes with a positive...

  3. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows an inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
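
    A minimal sketch of the generative model described above: the variance is a random variable following an inverse-gamma distribution, and the EMG signal is conditionally zero-mean Gaussian white noise (hyperparameter values are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_emg(n_samples, alpha=3.0, beta=2.0):
            """Draw a variance from inverse-gamma(alpha, beta), then draw the
            EMG samples as zero-mean Gaussian noise with that variance."""
            variance = 1.0 / rng.gamma(alpha, 1.0 / beta)   # inverse-gamma draw
            emg = rng.normal(0.0, np.sqrt(variance), n_samples)
            return emg, variance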

  4. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  5. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower

  6. Differentiation in boron distribution in adult male and female rats' normal brain: A BNCT approach

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Samereh, E-mail: samere.g@gmail.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of); Pazirandeh, Ali, E-mail: paziran@yahoo.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of); Jameie, Seyed Behnamedin, E-mail: behnamjameie@tums.ac.ir [Basic Science Department, Faculty of Allied Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Anatomy, Faculty of Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Baghban Khojasteh, Nasrin, E-mail: khojasteh_n@yahoo.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of)

    2012-06-15

    Boron distribution in adult male and female rats' normal brain after boron carrier injection (0.005 g Boric Acid+0.005 g Borax+10 ml distilled water, pH: 7.4) was studied in this research. Coronal sections of control and trial animal tissue samples were irradiated with thermal neutrons. Using alpha autoradiography, significant differences in boron concentration were seen in forebrain, midbrain and hindbrain sections of male and female animal groups with the highest value, four hours after boron compound injection. - Highlights: ► Boron distribution in male and female rats' normal brain was studied in this research. ► Coronal sections of animal tissue samples were irradiated with thermal neutrons. ► Alpha and Lithium tracks were counted using alpha autoradiography. ► Different boron concentration was seen in brain sections of male and female rats. ► The highest boron concentration was seen in 4 h after boron compound injection.

  7. BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS

    Directory of Open Access Journals (Sweden)

    Bartolo de Jesús Villar-Hernández

    2015-08-01

    Full Text Available One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacan-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with presence-only records. For the simulated dataset, MaxBayes recovered the prevalence estimate even when the number of records was small. In the real dataset example, both models predict potential distributions similar to those of Maxent.
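
    As an aside, the IPP likelihood at the core of IPPBayes is easy to state: the log-likelihood of the observed presence points is the sum of log-intensities at those points minus the intensity integrated over the region. A minimal numerical sketch follows (simulated data, log-linear intensity, grid approximation of the integral; all names and values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

# Study region discretized into equal-area grid cells with one covariate each.
n_cells = 1000
x = rng.normal(size=n_cells)          # environmental covariate per cell
area = 1.0 / n_cells                  # total region area normalized to 1

beta0, beta1 = -1.0, 0.8              # hypothetical intensity parameters
lam = np.exp(beta0 + beta1 * x)       # log-linear intensity per unit area

# Simulate presence-only records: cell i contributes Poisson(lam_i * area) points.
counts = rng.poisson(lam * area)
presence_x = np.repeat(x, counts)     # covariate values at the observed points

def ipp_loglik(b0, b1):
    """IPP log-likelihood: log-intensity summed over points minus its integral."""
    integral = np.sum(np.exp(b0 + b1 * x) * area)
    return np.sum(b0 + b1 * presence_x) - integral

print(f"log-likelihood at the true parameters: {ipp_loglik(beta0, beta1):.2f}")
```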

  8. Interlayer material transport during layer-normal shortening. Part I. The model

    NARCIS (Netherlands)

    Molen, I. van der

    1985-01-01

    To analyse mass transfer during deformation, we consider the case of a multilayer experiencing layer-normal shortening that is volume-constant on the scale of many layers. Strain rate is homogeneously distributed on the layer scale if diffusion is absent; when transport of matter between the

  9. Bounding species distribution models

    Directory of Open Access Journals (Sweden)

    Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE

    2011-10-01

    Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].

  10. Bounding Species Distribution Models

    Science.gov (United States)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
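
    The "clamping" alteration tested in the paper amounts to truncating each predictor to the range seen in model development before the fitted SDM is evaluated. A minimal sketch under that reading (array names and data are illustrative; a stricter variant would instead mask cells falling outside the bounds as unsuitable):

```python
import numpy as np

def clamp_predictors(region_env, train_env):
    """Clip each predictor column in `region_env` to its min/max over the
    training data, so the fitted SDM is never evaluated outside the
    environmental bounds of the data used in its development."""
    lo = train_env.min(axis=0)
    hi = train_env.max(axis=0)
    return np.clip(region_env, lo, hi)

rng = np.random.default_rng(2)
train_env = rng.uniform([0.0, 10.0], [1.0, 30.0], size=(200, 2))    # e.g. NDVI, temp
region_env = rng.uniform([-0.5, 0.0], [1.5, 45.0], size=(5000, 2))  # wider projection area

bounded = clamp_predictors(region_env, train_env)
# `bounded` (rather than `region_env`) is then passed to the fitted CART/Maxent model.
```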

  11. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
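
    The "explosion" step is the only data manipulation the approach needs: each subject's follow-up is split at the piece boundaries, and every piece contributes a pseudo-observation with an event indicator and a log-exposure offset. A minimal sketch in Python (the paper's %PCFrailty macro does this in SAS; column names here are illustrative):

```python
import numpy as np
import pandas as pd

def explode(df, cuts):
    """Split each subject's follow-up at `cuts`: one row per piece with its
    exposure time and event indicator, ready for a Poisson (GLMM) fit."""
    edges = np.concatenate([[0.0], np.asarray(cuts, dtype=float)])
    rows = []
    for _, r in df.iterrows():
        for j in range(len(edges) - 1):
            lo, hi = edges[j], edges[j + 1]
            if r["time"] <= lo:
                break                                   # follow-up ended earlier
            exposure = min(r["time"], hi) - lo
            event = int(bool(r["event"]) and r["time"] <= hi)
            rows.append(dict(id=r["id"], cluster=r["cluster"], piece=j,
                             event=event, log_expo=np.log(exposure)))
    return pd.DataFrame(rows)

# Illustrative clustered survival data: (id, cluster, time, event)
df = pd.DataFrame(dict(id=[1, 2, 3], cluster=[1, 1, 2],
                       time=[2.5, 7.0, 4.2], event=[1, 0, 1]))
long = explode(df, cuts=[3.0, 6.0, 9.0])
# Fit: event ~ piece indicators (+ covariates) + random intercept per cluster,
# Poisson likelihood with `log_expo` as offset; the exponentiated random
# intercept plays the role of the log-normal frailty.
```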

  12. Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.

    Science.gov (United States)

    Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy

    2016-01-01

    This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures are fused early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult without any predictive model to assist surgical planning. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth, respectively. On the basis of our previous model, we further specified the suture model into fibrous and cartilaginous sutures and developed an algorithm for skull extension. We evaluate the resulting simulation by comparison with datasets of cases and normal growth.

  13. Determining Normal-Distribution Tolerance Bounds Graphically

    Science.gov (United States)

    Mezzacappa, M. A.

    1983-01-01

    Unlike conventional methods that require extensive calculations and table lookups, this graphical method establishes the distribution from only three points: the upper and lower confidence bounds of the mean and the lower confidence bound of the standard deviation. Only a few calculations with simple equations are required. The graphical procedure establishes a best-fit line for the measured data and bounds for the selected confidence level and any distribution percentile.
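
    For readers who want a numerical cross-check of such bounds, a one-sided normal tolerance limit can be computed exactly from the noncentral t distribution; a minimal sketch (assuming scipy is available; data simulated):

```python
import numpy as np
from scipy import stats

def lower_tolerance_bound(x, coverage=0.90, confidence=0.95):
    """Lower limit exceeded, with the stated confidence, by at least
    `coverage` of a normal population (one-sided tolerance bound)."""
    n = len(x)
    z_p = stats.norm.ppf(coverage)
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return np.mean(x) - k * np.std(x, ddof=1)

rng = np.random.default_rng(3)
x = rng.normal(100.0, 5.0, size=30)
print(f"90%/95% lower tolerance bound: {lower_tolerance_bound(x):.2f}")
```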

  14. Group normalization for genomic data.

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghandi

    Full Text Available Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.

  15. Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.

    Science.gov (United States)

    Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel

    2014-01-01

    Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
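
    In the univariate case the recipe is short: pick a high threshold on the novelty scores of "normal" data, fit the GPD to the exceedances, and place the decision boundary at an extreme quantile. A minimal sketch (simulated scores; the paper's contribution is extending this idea to high-dimensional models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
scores = rng.gamma(2.0, 1.0, size=5000)       # novelty scores of "normal" data

u = np.quantile(scores, 0.95)                 # tail threshold
exceed = scores[scores > u] - u

# Fit the GPD to the threshold exceedances (location fixed at 0).
shape, _, scale = stats.genpareto.fit(exceed, floc=0.0)

# Decision boundary: the score exceeded by normal data with probability 1e-4.
p_cond = 1e-4 / (exceed.size / scores.size)   # conditional tail probability
boundary = u + stats.genpareto.ppf(1.0 - p_cond, shape, loc=0.0, scale=scale)
print(f"flag observations above {boundary:.2f} as abnormal")
```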

  16. Reply to: Are There More Gifted People than Would Be Expected on a Normal Distribution?

    Science.gov (United States)

    Gallagher, James J.

    2014-01-01

    The author responds to the article by Warne, Godwin, and Smith (2013) on the question of whether there are more gifted people than would be expected in a Gaussian normal distribution. He asserts that the answer to this question is yes, based on (a) data that he and his colleagues have collected, (b) data that are already available and quoted by…

  17. Nonstationary ARCH and GARCH with t-distributed Innovations

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    Consistency and asymptotic normality are established for the maximum likelihood estimators in the nonstationary ARCH and GARCH models with general t-distributed innovations. The results hold for joint estimation of (G)ARCH effects and the degrees of freedom parameter parametrizing the t-distribution. With T denoting sample size, classic square-root-T convergence is shown to hold, with closed-form expressions for the multivariate covariances.

  18. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, gamma and Weibull, in search of the best-fit distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
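
    The model-selection step can be mimicked in a few lines: fit each candidate to the positive readings and rank by a goodness-of-fit score. The sketch below uses AIC alone for brevity, whereas the paper combines five criteria by a weight-of-ranks method (data simulated, not the Kuala Lumpur records):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
api = rng.gamma(4.0, 12.0, size=365)       # illustrative daily API readings

candidates = {
    "log-normal": stats.lognorm,
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(api, floc=0.0)       # location fixed at 0 for positive data
    ll = np.sum(dist.logpdf(api, *params))
    k = len(params) - 1                    # free parameters (loc is fixed)
    print(f"{name:12s} AIC = {2 * k - 2 * ll:8.1f}")
```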

  19. A direct comparison of popular models of normal memory loss and Alzheimer's disease in samples of African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union.

    Science.gov (United States)

    Schrauf, Robert W; Iris, Madelyn

    2011-04-01

    To understand how people differentiate normal memory loss from Alzheimer's disease (AD) by investigating cultural models of these conditions. Ethnographic interviews followed by a survey. Cultural consensus analysis was used to test for the presence of group models, derive the "culturally correct" set of beliefs, and compare models of normal memory loss and AD. Chicago, Illinois. One hundred eight individuals from local neighborhoods: African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union. Participants responded to yes-or-no questions about the nature and causes of normal memory loss and AD and provided information on ethnicity, age, sex, acculturation, and experience with AD. Groups held a common model of AD as a brain-based disease reflecting irreversible cognitive decline. Higher levels of acculturation predicted greater knowledge of AD. Russian speakers favored biological over psychological models of the disease. Groups also held a common model of normal memory loss, including the important belief that "normal" forgetting involves eventual recall of the forgotten material. Popular models of memory loss and AD confirm that patients and clinicians are speaking the same "language" in their discussions of memory loss and AD. Nevertheless, the presence of coherent models of memory loss and AD, and the unequal distribution of that knowledge across groups, suggests that clinicians should include wider circles of patients' families and friends in their consultations. These results frame knowledge as distributed across social groups rather than simply the possession of individual minds. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.

  20. Distributions with given marginals and statistical modelling

    CERN Document Server

    Fortiana, Josep; Rodriguez-Lallena, José

    2002-01-01

    This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.

  1. Exploring the Potential Use of the Birnbaum-Saunders Distribution in Inventory Management

    Directory of Open Access Journals (Sweden)

    Peter Wanke

    2015-01-01

    Full Text Available Choosing a suitable demand distribution during lead time is an important issue in inventory models. Much research has explored the advantage of following a distributional assumption different from normality. The Birnbaum-Saunders (BS) distribution is a probabilistic model that has its genesis in engineering but is also being widely applied to other fields including business, industry, and management. We conduct numeric experiments using the R statistical software to assess the adequacy of the BS distribution against the normal and gamma distributions in light of the traditional lot size-reorder point inventory model, known as (Q, r). The BS distribution is well known to be robust to extreme values; indeed, results indicate that it is a more adequate assumption under higher values of the lead-time demand coefficient of variation, thus outperforming the gamma and the normal assumptions.
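
    A small numerical comparison shows what is at stake: the reorder point implied by each lead-time demand assumption at a common service level. scipy's fatiguelife distribution is the Birnbaum-Saunders; the moment matching below is a sketch, with all demand figures invented:

```python
from scipy import stats
from scipy.optimize import brentq

mu, cv, service = 100.0, 0.8, 0.95       # lead-time demand mean, CV, service level
sigma = cv * mu

# Normal and gamma reorder points, matching mean and standard deviation.
r_normal = stats.norm.ppf(service, loc=mu, scale=sigma)
r_gamma = stats.gamma.ppf(service, a=1.0 / cv**2, scale=mu * cv**2)

# Birnbaum-Saunders (fatiguelife): mean = s*(1 + c^2/2),
# variance = (c*s)^2 * (1 + 5*c^2/4); solve the shape c for the target CV.
def bs_cv(c):
    return (c**2 * (1.0 + 1.25 * c**2)) ** 0.5 / (1.0 + 0.5 * c**2)

c = brentq(lambda z: bs_cv(z) - cv, 1e-6, 10.0)
r_bs = stats.fatiguelife.ppf(service, c, scale=mu / (1.0 + 0.5 * c**2))

print(f"reorder point: normal {r_normal:.1f}, gamma {r_gamma:.1f}, BS {r_bs:.1f}")
```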

  2. Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles

    Science.gov (United States)

    Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya

    2018-03-01

    A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.

  3. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
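
    The phenomenon is simple to reproduce in miniature: regress null traits on a rare simulated SNV and count rejections. A hedged sketch (much smaller than the GAW 19 analysis; all settings illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, maf, reps, alpha = 1000, 0.005, 10000, 1e-3
rejections = {"normal": 0, "gamma": 0}

for _ in range(reps):
    g = rng.binomial(2, maf, size=n)             # rare SNV genotypes (0/1/2)
    if g.sum() == 0:
        continue                                 # monomorphic draw, no test possible
    traits = {"normal": rng.normal(size=n),      # null traits, independent of g
              "gamma": rng.gamma(1.0, size=n)}   # strongly skewed null trait
    for name, y in traits.items():
        if stats.linregress(g, y).pvalue < alpha:
            rejections[name] += 1

for name, r in rejections.items():
    print(f"{name:6s} trait: empirical type I error {r / reps:.4f} (nominal {alpha})")
```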

  4. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  5. Preparation, distribution, stability and tumor imaging properties of [62Zn] Bleomycin complex in normal and tumor-bearing mice

    International Nuclear Information System (INIS)

    Jalilian, A.R.; Fateh, B.; Ghergherehchi, M.; Karimian, A.; Matloobi, M.; Moradkhani, S.; Kamalidehghan, M.; Tabeie, F.

    2003-01-01

    Backgrounds: Bleomycin (BLM) has been labeled with radioisotopes and widely used in therapy and diagnosis. In this study BLM was labeled with [62Zn] zinc chloride for oncologic PET studies. Materials and methods: The complex was obtained at pH 2 in normal saline at 90 °C in 60 min. Radio-TLC showed an overall radiochemical yield of 95-97% (radiochemical purity >97%). Stability of the complex was checked in vitro in mouse and human plasma/urine. Results: Preliminary in vitro studies were performed to determine complex stability, and the distribution of [62Zn]BLM in normal and fibrosarcoma tumor-bearing mice was determined by bio-distribution/imaging studies. Conclusion: [62Zn]BLM can be used in PET oncology studies due to its suitable physico-chemical properties and its behavior as a diagnostic complex in higher animals

  6. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution, alternative distributions are at hand and well understood, many of these belonging… and statistical inference, is the multivariate normal distribution. Unfortunately only little is known about the general class of multivariate phase type distributions. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate… projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel

  7. Neuronal variability during handwriting: lognormal distribution.

    Directory of Open Access Journals (Sweden)

    Valery I Rupasov

    Full Text Available We examined time-dependent statistical properties of electromyographic (EMG) signals recorded from intrinsic hand muscles during handwriting. Our analysis showed that trial-to-trial neuronal variability of EMG signals is well described by the lognormal distribution, clearly distinguished from the Gaussian (normal) distribution. This finding indicates that EMG formation cannot be described by a conventional model where the signal is normally distributed because it is composed of a summation of many random sources. We found that the variability of temporal parameters of handwriting (handwriting duration and response time) is also well described by a lognormal distribution. Although the exact mechanism of lognormal statistics remains an open question, the results obtained should significantly impact experimental research, theoretical modeling and bioengineering applications of motor networks. In particular, our results suggest that accounting for the lognormal distribution of EMGs can improve biomimetic systems that strive to reproduce EMG signals in artificial actuators.

  8. Modeled ground water age distributions

    Science.gov (United States)

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.

  9. Software Application Profile: RVPedigree: a suite of family-based rare variant association tests for normally and non-normally distributed quantitative traits.

    Science.gov (United States)

    Oualkacha, Karim; Lakhal-Chaieb, Lajmi; Greenwood, Celia Mt

    2016-04-01

    RVPedigree (Rare Variant association tests in Pedigrees) implements a suite of programs facilitating genome-wide analysis of association between a quantitative trait and autosomal region-based genetic variation. The main features here are the ability to appropriately test for association of rare variants with non-normally distributed quantitative traits, and also to appropriately adjust for related individuals, either from families or from population structure and cryptic relatedness. RVPedigree is available as an R package. The package includes calculation of kinship matrices, various options for coping with non-normality, three different ways of estimating statistical significance incorporating triaging to enable efficient use of the most computationally-intensive calculations, and a parallelization option for genome-wide analysis. The software is available from the Comprehensive R Archive Network [CRAN.R-project.org] under the name 'RVPedigree' and at [https://github.com/GreenwoodLab]. It has been published under General Public License (GPL) version 3 or newer. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  10. New trends in species distribution modelling

    Science.gov (United States)

    Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian

    2010-01-01

    Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.

  11. Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks

    Science.gov (United States)

    Krahe, Barbara; Krause, Christina

    2010-01-01

    Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…

  12. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

    Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem. (original abstract)
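
    The serial-connection result has a one-line computational core: the composite lag distribution of models in series is the convolution of the component lag-weight sequences, and chaining many components pushes the shape toward normality, mirroring the central limit theorem. A minimal sketch with invented weights:

```python
import numpy as np
from functools import reduce

# Component lag-weight sequences (each nonnegative and summing to 1).
w1 = np.array([0.5, 0.3, 0.2])
w2 = np.array([0.6, 0.4])
w3 = np.array([0.25, 0.5, 0.25])

# Models in series: the composite lag distribution is the convolution.
composite = reduce(np.convolve, [w1, w2, w3])
print(composite, composite.sum())        # weights still sum to 1

# Chaining many identical components drives the shape toward a normal curve.
many = reduce(np.convolve, [w1] * 20)
lags = np.arange(many.size)
mean = (lags * many).sum()
sd = (((lags - mean) ** 2 * many).sum()) ** 0.5
print(f"mean lag ~ {mean:.2f}, sd ~ {sd:.2f}")
```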

  13. A scan statistic for continuous data based on the normal probability model

    Directory of Open Access Journals (Sweden)

    Huang Lan

    2009-10-01

    Full Text Available Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
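
    The likelihood-ratio core of the continuous scan statistic is compact for a single candidate window: separate means inside and outside, and a common maximum-likelihood variance under each hypothesis. A minimal sketch (in practice the statistic is maximized over many windows and its significance calibrated by Monte Carlo):

```python
import numpy as np

def normal_scan_llr(values, in_window):
    """Log-likelihood ratio for one candidate cluster under the normal model:
    separate means inside/outside the window, common unknown variance."""
    z = np.asarray(values, dtype=float)
    inside, outside = z[in_window], z[~in_window]
    var_alt = (np.sum((inside - inside.mean()) ** 2)
               + np.sum((outside - outside.mean()) ** 2)) / z.size
    var_null = np.mean((z - z.mean()) ** 2)
    return 0.5 * z.size * (np.log(var_null) - np.log(var_alt))

rng = np.random.default_rng(7)
birth_weight = rng.normal(3300.0, 500.0, size=500)
birth_weight[:40] -= 400.0                     # implant a low-weight cluster
window = np.zeros(500, dtype=bool)
window[:40] = True
print(f"LLR for this window: {normal_scan_llr(birth_weight, window):.1f}")
```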

  14. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    Science.gov (United States)

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    To predict the potential geographic distribution of Lyme disease in Qinghai by using the Maximum Entropy model (MaxEnt). The sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and environmental and anthropogenic data including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature in Qinghai province since 1990 were collected. Using the data of Huzhu, Zeku and Tongde, the prediction of the potential distribution of Lyme disease in Qinghai was conducted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties in Qinghai. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. Furthermore, NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai. Xunhua was in hot spot area Ⅱ, Datong was close to the north of hot spot area Ⅲ, while Qilian, with the lowest sero-prevalence of Lyme disease, was not in the hot spot areas. The data were well modeled in MaxEnt (Area Under Curve=0.980). The actual distribution of Lyme disease in Qinghai was consistent with the results of the model prediction. MaxEnt could be used in predicting the potential distribution patterns of Lyme disease. The distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.

  15. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the 'choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson-Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377

  16. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  17. Perhitungan Iuran Normal Program Pensiun dengan Asumsi Suku Bunga Mengikuti Model Vasicek

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Full Text Available Labor has a very important role in national development. One way to optimize workers' productivity is to guarantee an income after retirement. Therefore the government and the private sector must have a program that can ensure the sustainability of this financial support. One option is a pension plan. The purpose of this study is to calculate the normal cost with the interest rate assumed to follow the Vasicek model, and to analyze the normal contributions of the pension program participants. The Vasicek model is used to match actual conditions. The methods used in this research are the Projected Unit Credit method and the Entry Age Normal method. The data source of this research is lecturers of FMIPA Unud. In addition, secondary data are also used in the form of Bank Indonesia interest rates for the period January 2006-December 2015. The results of this study indicate that the older the age of a participant when entering the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. Further, the normal cost under a constant interest rate is greater than the normal cost under the Vasicek rate. This occurs because the Vasicek model predicts interest rates between 4.8879% and 6.8384%, while the constant rate is only 4.25%. In addition, using a normal cost proportional to salary, it is found that the older the age of the participant, the greater the proportion of salary devoted to the normal cost.
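
    For orientation, the Vasicek dynamics used for the interest-rate assumption are dr = a(b − r)dt + σ dW; a minimal Euler-scheme sketch follows (parameter values are placeholders, not the estimates from the Bank Indonesia data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Vasicek short-rate dynamics: dr = a*(b - r)*dt + sigma*dW (placeholder values)
a, b, sigma, r0 = 0.20, 0.055, 0.01, 0.0425
years, steps = 30, 360
dt = years / steps

r = np.empty(steps + 1)
r[0] = r0
for t in range(steps):
    r[t + 1] = r[t] + a * (b - r[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Path-wise discount factors used when valuing future pension benefits.
discount = np.exp(-np.cumsum(r[1:]) * dt)
print(f"rate after 30y: {r[-1]:.4f}, 30y discount factor: {discount[-1]:.4f}")
```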

  18. A two-compartment model of VEGF distribution in the mouse.

    Directory of Open Access Journals (Sweden)

    Phillip Yen

    Full Text Available Vascular endothelial growth factor (VEGF) is a key regulator of angiogenesis, the growth of new microvessels from existing microvasculature. Angiogenesis is a complex process involving numerous molecular species, and to better understand it, a systems biology approach is necessary. In vivo preclinical experiments in the area of angiogenesis are typically performed in mouse models; this includes drug development targeting VEGF. Thus, to quantitatively interpret such experimental results, a computational model of VEGF distribution in the mouse can be beneficial. In this paper, we present an in silico model of VEGF distribution in mice, determine model parameters from existing experimental data, conduct sensitivity analysis, and test the validity of the model. The multiscale model is comprised of two compartments: blood and tissue. The model accounts for interactions between two major VEGF isoforms (VEGF120 and VEGF164) and their endothelial cell receptors VEGFR-1, VEGFR-2, and co-receptor neuropilin-1. Neuropilin-1 is also expressed on the surface of parenchymal cells. The model includes transcapillary macromolecular permeability, lymphatic transport, and macromolecular plasma clearance. Simulations predict that the concentration of unbound VEGF in the tissue is approximately 50-fold greater than in the blood. These concentrations are highly dependent on the VEGF secretion rate. Parameter estimation was performed to fit the simulation results to available experimental data, and permitted the estimation of the VEGF secretion rate in healthy tissue, which is difficult to measure experimentally. The model can provide quantitative interpretation of preclinical animal data and may be used in conjunction with experimental studies in the development of pro- and anti-angiogenic agents. The model approximates the normal tissue as skeletal muscle and includes endothelial cells to represent the vasculature. As the VEGF system becomes better characterized in

  19. Probabilistic analysis in normal operation of distribution system with distributed generation

    DEFF Research Database (Denmark)

    Villafafila-Robles, R.; Sumper, A.; Bak-Jensen, B.

    2011-01-01

    Nowadays, the incorporation of high levels of small-scale non-dispatchable distributed generation is leading to the transition from the traditional 'vertical' power system structure to a 'horizontally-operated' power system, where the distribution networks contain both stochastic generation and load. This fact increases the number of stochastic inputs, and dependence structures between them need to be considered. Deterministic analysis is not enough to cope with these issues and a new approach is needed. Probabilistic analysis provides a better approach. Moreover, as distribution systems

  20. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  2. A skewed distribution with asset pricing applications

    NARCIS (Netherlands)

    de Roon, Frans; Karehnke, P.

    2017-01-01

    Recent research has identified skewness and downside risk as one of the most important features of risk. We present a new distribution which makes modeling skewed risks no more difficult than normally distributed (symmetric) risks. Our distribution is a combination of the “downside” and “upside”

  3. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Contents: Introduction (Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation); Model Fitting (Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables); Exponential Family and Generalized Linear Models (Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples); Estimation (Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example); Inference (Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing); Normal Linear Models (Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models); Binary Variables and Logistic Regression; Probability Distributions ...

  4. Three-dimensional finite analysis of acetabular contact pressure and contact area during normal walking.

    Science.gov (United States)

    Wang, Guangye; Huang, Wenjun; Song, Qi; Liang, Jinfeng

    2017-11-01

    This study aims to analyze the contact areas and pressure distributions between the femoral head and the acetabulum during normal walking using a three-dimensional finite element model (3D-FEM). Computed tomography (CT) scanning technology and a computer image processing system were used to establish the 3D-FEM. The acetabular model was used to simulate the pressures during 32 consecutive normal walking phases, and the contact areas at different phases were calculated. The distribution of the pressure peak values during the 32 consecutive normal walking phases was bimodal, reaching its peak (4.2 MPa) at the initial phase, where the contact area was significantly higher than that at the stepping phase. The sites that always kept contact were concentrated on the acetabular top and leaned inwards, while the anterior and posterior acetabular horns had no pressure concentration. The pressure distributions of the acetabular cartilage at different phases were significantly different: the zone of increased pressure at the support phase was distributed at the acetabular top area, while that at the stepping phase was distributed on the inside of the acetabular cartilage. The zones of increased contact pressure and the distributions of acetabular contact areas have important clinical significance and may indicate inductive factors of acetabular osteoarthritis. Copyright © 2016. Published by Elsevier Taiwan.

  5. A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency

    OpenAIRE

    Christine Amsler; Peter Schmidt; Wen-Jen Tsay

    2013-01-01

    In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^+ (μ,σ^2). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We will distinguish the pre-truncation mean (μ) and variance (σ^2) from the post-truncation mean μ_*=E(u) and var...
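
    The pre- versus post-truncation distinction is easy to see numerically with scipy's truncnorm; a minimal illustration (values arbitrary):

```python
from scipy import stats

mu, sigma = 0.5, 1.0                # pre-truncation parameters of N+(mu, sigma^2)
a = (0.0 - mu) / sigma              # truncation point 0, standardized

u = stats.truncnorm(a, float("inf"), loc=mu, scale=sigma)
print(f"pre-truncation mean  mu     = {mu}")
print(f"post-truncation mean E(u)   = {u.mean():.4f}")
print(f"post-truncation var  Var(u) = {u.var():.4f} (pre-truncation: {sigma**2})")
```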

  6. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  7. Mathematical models of tumour and normal tissue response

    International Nuclear Information System (INIS)

    Jones, B.; Dale, R.G.; Charing Cross Group of Hospitals, London

    1999-01-01

    The historical application of mathematics in the natural sciences and in radiotherapy is compared. The various forms of mathematical models and their limitations are discussed. The Linear Quadratic (LQ) model can be modified to include (i) radiobiological parameter changes that occur during fractionated radiotherapy, (ii) situations such as focal forms of radiotherapy, (iii) normal tissue responses, and (iv) to allow for the process of optimization. The inclusion of a variable cell loss factor in the LQ model repopulation term produces a more flexible clonogenic doubling time, which can simulate the phenomenon of 'accelerated repopulation'. Differential calculus can be applied to the LQ model after elimination of the fraction number integers. The optimum dose per fraction (maximum cell kill relative to a given normal tissue fractionation sensitivity) is then estimated from the clonogen doubling times and the radiosensitivity parameters (or α/β ratios). Economic treatment optimization is described. Tumour volume studies during or following teletherapy are used to optimize brachytherapy. The radiation responses of both individual tumours and tumour populations (by random sampling 'Monte-Carlo' techniques from statistical ranges of radiobiological and physical parameters) can be estimated. Computerized preclinical trials can be used to guide choice of dose fractionation scheduling in clinical trials. The potential impact of gene and other biological therapies on the results of radical radiotherapy are testable. New and experimentally testable hypotheses are generated from limited clinical data by exploratory modelling exercises. (orig.)
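
    The LQ bookkeeping sketched in the abstract reduces to a few formulas; a hedged numerical example with invented parameters (surviving fraction with repopulation, and the corresponding biologically effective dose):

```python
import numpy as np

# Illustrative LQ parameters, not taken from the paper
alpha, beta = 0.3, 0.03        # Gy^-1, Gy^-2  ->  alpha/beta = 10 Gy
d, n = 2.0, 30                 # dose per fraction (Gy), number of fractions
T, Tk, Tp = 40.0, 21.0, 3.0    # overall time, repopulation kick-off, doubling time (days)

# Log surviving fraction with repopulation after day Tk
log_sf = -n * (alpha * d + beta * d**2) + np.log(2) * max(T - Tk, 0.0) / Tp
print(f"surviving fraction: {np.exp(log_sf):.3e}")

# Biologically effective dose with the repopulation correction
bed = n * d * (1.0 + d / (alpha / beta)) - np.log(2) * max(T - Tk, 0.0) / (alpha * Tp)
print(f"BED: {bed:.1f} Gy")
```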

  8. Data Normalization to Accelerate Training for Linear Neural Net to Predict Tropical Cyclone Tracks

    Directory of Open Access Journals (Sweden)

    Jian Jin

    2015-01-01

    Full Text Available When a pure linear neural network (PLNN) is used to predict tropical cyclone tracks (TCTs) in the South China Sea, whether the data are normalized or not greatly affects the training process. In this paper, the min.-max. method and the normal distribution method, instead of the standard normal distribution, are applied to TCT data before modeling. We propose experimental schemes in which, with the min.-max. method, the min.-max. value pair of each variable is mapped to (−1, 1) and (0, 1); with the normal distribution method, each variable's mean and standard deviation pair is set to (0, 1) and (100, 1). We present the following results: (1) data scaled to similar intervals have similar effects, no matter whether the min.-max. or normal distribution method is used; (2) mapping data to around 0 gains much faster training speed than mapping them to intervals far away from 0 or using unnormalized raw data, although all of them can approach the same lower level after certain steps, as seen from their training error curves. This could be useful for deciding on a data normalization method when PLNN is used individually.
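
    The four schemes compared in the paper are two target intervals for min.-max. scaling and two target (mean, sd) pairs for normal scaling; a minimal sketch (column-wise, with invented predictor data):

```python
import numpy as np

def min_max(x, lo=-1.0, hi=1.0):
    """Map each column's min/max to (lo, hi)."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin)

def normal_scale(x, mean=0.0, sd=1.0):
    """Map each column to the target (mean, sd) pair."""
    return mean + sd * (x - x.mean(axis=0)) / x.std(axis=0)

rng = np.random.default_rng(9)
tct = rng.uniform(100.0, 120.0, size=(50, 4))   # illustrative TCT predictors

for name, z in [("(-1, 1)", min_max(tct)), ("(0, 1)", min_max(tct, 0.0, 1.0)),
                ("N(0, 1)", normal_scale(tct)), ("N(100, 1)", normal_scale(tct, 100.0, 1.0))]:
    print(f"{name:9s} -> range [{z.min():.2f}, {z.max():.2f}], mean {z.mean():.2f}")
```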

  9. Pharmacodynamic and pharmacokinetic studies and prostatic tissue distribution of fosfomycin tromethamine in bacterial prostatitis or normal rats.

    Science.gov (United States)

    Fan, L; Shang, X; Zhu, J; Ma, B; Zhang, Q

    2018-05-02

    In this study, we assessed the therapeutic effects of fosfomycin tromethamine (FT) in a bacterial prostatitis (BP) rat model. The BP model was induced by Escherichia coli and was confirmed microbiologically and histologically after 7 days. Then, 25 selected BP rats were randomly divided into five treatment groups: model group, positive group, FT-3 day group, FT-7 day group and FT-14 day group. Ventral lobes of the prostate were removed from all animals, and serum samples were collected at the end of the experiments. Microbiological cultures and histological findings of the prostate samples demonstrated reduced bacterial growth and improved inflammatory responses in the FT-treatment groups compared with the model group, indicating that FT showed good antibacterial effects against prostatic infection induced by E. coli. Moreover, the plasma pharmacokinetics and prostatic distribution of fosfomycin were studied and compared in BP and normal rats. The concentrations of fosfomycin in samples were analysed by liquid chromatography-tandem mass spectrometry. There were no differences in plasma pharmacokinetic parameters between the two groups, but significantly higher penetration of fosfomycin into prostatic tissue was found in BP rats. We therefore suggest that FT has a good therapeutic effect on BP and might be used in treating diseases of the male reproductive system. © 2018 Blackwell Verlag GmbH.

  10. Distribution Log Normal of 222 Rn in the state of Zacatecas, Mexico

    International Nuclear Information System (INIS)

    Garcia, M.L.; Mireles, F.; Quirino, L.; Davila, I.; Rios, C.; Pinedo, J.L.

    2006-01-01

    In this work the evaluation of the concentration of 222Rn in air for Zacatecas is presented. Solid state nuclear track detectors (cellulose nitrate LR-115, type 2, in open 222Rn chambers) were used to carry out the measurements on a large scale. The measurements were carried out over three months at different times of the year. The results present the log-normal distribution, arithmetic mean and geometric mean of the concentrations indoors and outdoors of residential constructions, indoors of occupational constructions, and in the 57 municipal seats of the state of Zacatecas. The statistics of the concentration values varied with the time of year, with higher values obtained in the winter season in both cases. The distribution of the 222Rn concentration is presented on the state map for each of the municipalities, representing the measurement places across the entire state of Zacatecas. Finally, the places where the 222Rn concentrations in air are near the EPA limit of 148 Bq/m³ are presented. (Author)

  11. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models, or statistical smoothing based on the log-normal model, are introduced, which may solve the SMR problem. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using the WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method; it can overcome the SMR problem when no bladder cancer is observed in an area.
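
    The two estimators can be contrasted in a few lines: the raw SMR O/E per area, and a log-normal shrinkage of log(SMR) toward the overall mean. The sketch below uses a simple empirical-Bayes weighting as a stand-in for the paper's fully Bayesian WinBUGS fit (data simulated):

```python
import numpy as np

rng = np.random.default_rng(10)

# Illustrative areas: expected counts E and observed counts O.
E = rng.uniform(0.5, 20.0, size=30)
O = rng.poisson(E * rng.lognormal(0.0, 0.4, size=30))

smr = O / E                                 # unstable where E is small

ok = O > 0                                  # log(SMR) undefined at O = 0: the "SMR problem"
log_smr = np.log(smr[ok])
samp_var = 1.0 / O[ok]                      # delta-method variance of log(O/E)
tau2 = max(log_smr.var() - samp_var.mean(), 1e-6)   # between-area variance
w = tau2 / (tau2 + samp_var)                # shrinkage weight per area
smoothed = np.exp(w * log_smr + (1.0 - w) * log_smr.mean())

print(np.c_[E[ok][:5], smr[ok][:5], smoothed[:5]])
```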

  12. Online modelling of water distribution systems: a UK case study

    Directory of Open Access Journals (Sweden)

    J. Machell

    2010-03-01

    Full Text Available Hydraulic simulation models of water distribution networks are routinely used for operational investigations and network design purposes. However, their full potential is often never realised because, in the majority of cases, they have been calibrated with data collected manually from the field during a single historic time period and, as such, reflect the network operational conditions that were prevalent at that time, and they are then applied as part of a reactive, desktop investigation. In order to use a hydraulic model to assist proactive distribution network management its element asset information must be up to date and it should be able to access current network information to drive simulations. Historically this advance has been restricted by the high cost of collecting and transferring the necessary field measurements. However, recent innovation and cost reductions associated with data transfer is resulting in collection of data from increasing numbers of sensors in water supply systems, and automatic transfer of the data to point of use. This means engineers potentially have access to a constant stream of current network data that enables a new era of "on-line" modelling that can be used to continually assess standards of service compliance for pressure and reduce the impact of network events, such as mains bursts, on customers. A case study is presented here that shows how an online modelling system can give timely warning of changes from normal network operation, providing capacity to minimise customer impact.

  13. Eliciting hyperparameters of prior distributions for the parameters of paired comparison models

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2013-02-01

    Full Text Available In the study of paired comparisons (PC), items may be ranked or issues may be prioritized through the subjective assessment of certain judges. PC models are developed and then used to serve the purpose of ranking. PC models may be studied through a classical or a Bayesian approach. Bayesian inference is a modern statistical technique used to draw conclusions about population parameters. Its beauty lies in incorporating prior information about the parameters into the analysis, in addition to the current information (i.e. the data). The prior and current information are formally combined to yield a posterior distribution of the population parameters, which is the workbench of the Bayesian statistician. However, the problems Bayesians face concern the selection and formal utilization of the prior distribution. Once the type of prior distribution has been decided, the problem of estimating the parameters of the prior distribution (i.e. elicitation) still persists. Different methods have been devised to serve this purpose. In this study an attempt is made to use Minimum Chi-square (henceforth MCS) for the elicitation purpose. Though it is a classical estimation technique, it is used here for elicitation. The entire elicitation procedure is illustrated through a numerical data set.
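
    The MCS idea can be sketched generically (this is not the paper's paired-comparison setting): choose hyperparameters so that the prior's implied probabilities are as close as possible, in the chi-square sense, to probabilities elicited from a judge. The Beta prior and the elicited interval masses below are assumptions for illustration:

        # Python sketch: eliciting Beta(a, b) hyperparameters by Minimum Chi-square.
        import numpy as np
        from scipy import stats, optimize

        edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
        elicited = np.array([0.1, 0.3, 0.4, 0.2])   # judge's prior mass per interval

        def chi_square(params):
            a, b = params
            if a <= 0 or b <= 0:
                return np.inf                        # keep the search in the valid region
            implied = np.diff(stats.beta.cdf(edges, a, b))
            return np.sum((elicited - implied) ** 2 / implied)

        res = optimize.minimize(chi_square, x0=[2.0, 2.0], method="Nelder-Mead")
        print(res.x)                                 # elicited hyperparameters (a, b)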

  14. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and also to compare the two growth functions in describing the body weight changes across age in Nigerian indigenous normal feather chicken. Each chick was wing-tagged at day old and body weights were ...
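
    The two growth functions named above have standard closed forms; a minimal non-Bayesian curve-fitting sketch (the ages, weights and starting values are hypothetical, not the study's data):

        # Python sketch: fitting Gompertz and logistic growth curves to body weight.
        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, b, k):
            return A * np.exp(-b * np.exp(-k * t))   # asymptote A, shape b, rate k

        def logistic(t, A, b, k):
            return A / (1.0 + b * np.exp(-k * t))

        age = np.arange(0, 21, 2, dtype=float)       # weeks
        weight = np.array([40, 120, 300, 560, 850, 1120, 1350,
                           1520, 1640, 1710, 1750], dtype=float)  # grams

        pg, _ = curve_fit(gompertz, age, weight, p0=[1800, 4, 0.2])
        pl, _ = curve_fit(logistic, age, weight, p0=[1800, 20, 0.4])
        print(pg, pl)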

  15. Score distributions in information retrieval

    NARCIS (Netherlands)

    Arampatzis, A.; Robertson, S.; Kamps, J.

    2009-01-01

    We review the history of modeling score distributions, focusing on the normal-exponential mixture, by investigating the theoretical as well as the empirical evidence supporting its use. We discuss previously suggested conditions that valid binary mixture models should satisfy, such as the

  16. Topographical Distribution of Arsenic, Manganese, and Selenium in the Normal Human Brain

    DEFF Research Database (Denmark)

    Larsen, Niels Agersnap; Pakkenberg, H.; Damsgaard, Else

    1979-01-01

    The concentrations of arsenic, manganese and selenium per gram wet tissue weight were determined in samples from 24 areas of normal human brains from 5 persons with ages ranging from 15 to 81 years. The concentrations of the 3 elements were determined for each sample by means of neutron activation analysis with radiochemical separation. Distinct patterns of distribution were shown for each of the 3 elements. Variations between individuals were found for some but not all brain areas, resulting in coefficients of variation between individuals of about 30% for arsenic, 10% for manganese and 20% for selenium. The results seem to indicate that arsenic is associated with the lipid phase, manganese with the dry matter and selenium with the aqueous phase of brain tissue.

  17. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared with discrete time processes in the setting of the present paper as well as other spatial-temporal situations.

  18. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    Science.gov (United States)

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance being almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of the advancement in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity to invoke an instant increase in
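
    The curvature argument can be made concrete with a small sketch (the exponent and cutoff are illustrative, not fitted values): an exponential finite-size cutoff makes the local log-log slope drift, whereas a pure power law keeps it constant:

        # Python sketch: pure power law vs. exponentially truncated power law.
        import numpy as np

        rank = np.arange(1, 10001, dtype=float)
        alpha, cutoff = 1.0, 2000.0

        pure = rank ** (-alpha)                                # straight in log-log
        truncated = rank ** (-alpha) * np.exp(-rank / cutoff)  # curved tail

        # Local slope d(log y)/d(log x): constant vs. drifting exponent.
        slope_pure = np.gradient(np.log(pure), np.log(rank))
        slope_trunc = np.gradient(np.log(truncated), np.log(rank))
        print(slope_pure[[10, 1000, 9000]])
        print(slope_trunc[[10, 1000, 9000]])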

  19. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  20. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  2. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which approaches different distributions.
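
    Such comparisons usually work through the hazard function h(t) = f(t)/S(t); a brief sketch follows (not the paper's simulation code; the candidate distributions and parameter values are illustrative):

        # Python sketch: hazard functions h(t) = f(t)/S(t) of candidate lifetime models.
        import numpy as np
        from scipy import stats

        t = np.linspace(0.1, 5.0, 50)

        def hazard(dist, t):
            return dist.pdf(t) / dist.sf(t)   # sf = survival function, 1 - CDF

        h_exp = hazard(stats.expon(scale=1.0), t)        # constant hazard
        h_weib_inc = hazard(stats.weibull_min(2.0), t)   # increasing hazard
        h_weib_dec = hazard(stats.weibull_min(0.7), t)   # decreasing hazard
        h_lognorm = hazard(stats.lognorm(0.8), t)        # rises, then falls
        print(h_exp[:3], h_weib_inc[:3], h_weib_dec[:3], h_lognorm[:3])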

  3. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  4. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    Science.gov (United States)

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.

  5. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui,Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    between the subendocardial and transmural infarcts manifest themselves in myocardial SPECT images. The normal FE model produced strain distributions that were consistent with those reported in the literature and a motion consistent with that defined in the normal 4D NCAT beating heart model based on tagged MRI data. The addition of a subendocardial ischemic region changed the average transmural circumferential strain from a contractile value of 0.19 to a tensile value of 0.03. The addition of a transmural ischemic region changed average circumferential strain to a value of 0.16, which is consistent with data reported in the literature. Model results demonstrated differences in contractile function between subendocardial and transmural infarcts and how these differences in function are documented in simulated myocardial SPECT images produced using the 4D NCAT phantom. In comparison to the original NCAT beating heart model, the FE mechanical model produced a more accurate simulation for the cardiac motion abnormalities. Such a model, when incorporated into the 4D NCAT phantom, has great potential for use in cardiac imaging research. With its enhanced physiologically-based cardiac model, the 4D NCAT phantom can be used to simulate realistic, predictive imaging data of a patient population with varying whole-body anatomy and with varying healthy and diseased states of the heart that will provide a known truth from which to evaluate and improve existing and emerging 4D imaging techniques used in the diagnosis of cardiac disease.

  6. Mathematical Models for Room Air Distribution

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models on the air distribution in rooms are introduced. This includes the throw model, a model on penetration length of a cold wall jet and a model for maximum velocity in the dimensioning of an air distribution system in highly loaded rooms and shows that the amount of heat removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single duct systems are given and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room.

  7. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    International Nuclear Information System (INIS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate the power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate the CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at k ≅ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.

  8. Individual vision and peak distribution in collective actions

    Science.gov (United States)

    Lu, Peng

    2017-06-01

    People make decisions on whether to participate in collective actions or to stay out as free riders, with heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks based on the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. Besides, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.

  9. Distributed Generation Market Demand Model (dGen): Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Sigrin, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Preus, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-02-01

    The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.

  10. Reflectance spectrometry of normal and bruised human skins: experiments and modeling

    International Nuclear Information System (INIS)

    Kim, Oleg; Alber, Mark; McMurdy, John; Lines, Collin; Crawford, Gregory; Duffy, Susan

    2012-01-01

    A stochastic photon transport model in multilayer skin tissue combined with reflectance spectroscopy measurements is used to study normal and bruised skin. The model is shown to provide a very good approximation to both normal and bruised real skin tissues by comparing experimental and simulated reflectance spectra. A sensitivity analysis of the skin reflectance spectrum to variations in skin layer thicknesses, the blood oxygenation parameter and the concentrations of the main chromophores is performed to optimize the model parameters. The reflectance spectrum of a developing bruise in a healthy adult is simulated, and the concentrations of bilirubin, the blood volume fraction and the blood oxygenation parameter are determined at different times as the bruise progresses. It is shown that bilirubin and the blood volume fraction reach their peak values at 80 and 55 h after contusion, respectively, and that the oxygenation parameter remains below its normal value during the 80 h after the contusion occurred. The obtained time correlations of chromophore concentrations in developing contusions are shown to be consistent with previous studies. The developed model uses a detailed seven-layer skin approximation for contusion and allows one to obtain more biologically relevant results than those obtained with previous models using one- to three-layer skin approximations. A combination of modeling with spectroscopy measurements provides a new tool for detailed biomedical studies of human skin tissue and for the age determination of contusions. (paper)

  12. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  13. The aerosol distribution in Europe derived with the Community Multiscale Air Quality (CMAQ) model: comparison to near surface in situ and sunphotometer measurements

    Directory of Open Access Journals (Sweden)

    V. Matthias

    2008-09-01

    Full Text Available The aerosol distribution in Europe was simulated with the Community Multiscale Air Quality (CMAQ) model system version 4.5 for the years 2000 and 2001. The results were compared with daily averages of PM10 measurements taken in the framework of EMEP and with aerosol optical depth (AOD) values measured within AERONET. The modelled total aerosol mass is typically about 30–60% lower than the corresponding measurements. However, a comparison of the chemical composition of the aerosol revealed a considerably better agreement between the modelled and the measured aerosol components for ammonium, nitrate and sulfate, which are on average only 15–20% underestimated. Slightly worse agreement was found for sea salt, which was only available at two sites. The largest discrepancies result from the aerosol mass which was not chemically specified by the measurements. The agreement between measurements and model is better in winter than in summer. The modelled organic aerosol mass is higher in summer than in winter but it is significantly underestimated by the model. This could be one of the main reasons for the discrepancies between measurements and model results. The other is that primary coarse particles are underestimated in the emissions. The probability distribution function of the PM10 measurements follows a log-normal distribution at most sites. The model is only able to reproduce this distribution function at non-coastal low altitude stations. The AOD derived from the model results is 20–70% lower than the values observed within AERONET. This is mainly attributed to the missing aerosol mass in the model. The day-to-day variability of the AOD and the log-normal distribution functions are quite well reproduced by the model. The seasonality, on the other hand, is underestimated by the model results because better agreement is achieved in winter.

  14. On the fission gas release from oxide fuels during normal grain growth

    International Nuclear Information System (INIS)

    Paraschiv, M.C.; Paraschiv, A.; Glodeanu, F.

    1997-01-01

    A mathematical formalism is proposed for calculating the fission gas release from oxide fuels considering an arbitrary distribution of fuel grain size, with a zero boundary condition for gas diffusion at the grain boundary. It has also been proved that it becomes unnecessary to consider the grain volume distribution function for fission product diffusion when grain boundary gas resolution is considered, if the thermodynamic forces on grain boundaries are only time dependent. In order to highlight the effect of normal grain growth on fission gas release from oxide fuels, Hillert's and Lifshitz and Slyozov's theories have been selected. The latter was used to give an adequate treatment of normal grain growth for diffusion-controlled grain boundary movement in oxide fuels. It has been shown that during fuel irradiation, the asymptotic form of the grain volume distribution functions given by the Hillert and Lifshitz-Slyozov models can be maintained, but the grain growth rate constant becomes time dependent itself. Experimental results have been used to correlate the two theoretical models of normal grain growth with the fission gas release from oxide fuels. (orig.)

  15. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Science.gov (United States)

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density-dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample skewness and kurtosis and decreasing mean kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let
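
    As a small illustration of the AR(1) route (a sketch only; the colour parameter, series length and replicate count are arbitrary), coloured series can be generated and the spread of their sample skewness and kurtosis inspected across replicates:

        # Python sketch: AR(1) coloured noise, x[t] = kappa*x[t-1] + eps[t].
        # kappa > 0 gives red noise, kappa < 0 blue, kappa = 0 white.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def ar1_series(kappa, n):
            eps = rng.normal(0.0, np.sqrt(1.0 - kappa**2), n)  # unit-variance target
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = kappa * x[t - 1] + eps[t]
            return x

        for kappa in (-0.7, 0.0, 0.7):
            reps = [ar1_series(kappa, 100) for _ in range(200)]
            skew_sd = np.std([stats.skew(r) for r in reps])
            kurt_sd = np.std([stats.kurtosis(r) for r in reps])
            print(kappa, skew_sd, kurt_sd)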

  16. Pharmacokinetics and tissue distribution of five active ingredients of Eucommiae cortex in normal and ovariectomized mice by UHPLC-MS/MS.

    Science.gov (United States)

    An, Jing; Hu, Fangdi; Wang, Changhong; Zhang, Zijia; Yang, Li; Wang, Zhengtao

    2016-09-01

    1. Pinoresinol di-O-β-d-glucopyranoside (PDG), geniposide (GE), geniposidic acid (GA), aucubin (AN) and chlorogenic acid (CA) are the representative active ingredients in Eucommiae cortex (EC), which may be estrogenic. 2. The ultra high-performance liquid chromatography/tandem mass spectrometry (UHPLC-MS/MS) method for simultaneous determination of the five ingredients showed good linearity, low limits of quantification and high extraction recoveries, as well as acceptable precision, accuracy and stability in mice plasma and tissue samples (liver, spleen, kidney and uterus). It was successfully applied to a comparative study of the pharmacokinetics and tissue distribution of PDG, GE, GA, AN and CA between normal and ovariectomized (OVX) mice. 3. The results indicated that, except for CA, the plasma and tissue concentrations of PDG, GE and GA in OVX mice were all greater than those in normal mice. AN could only be detected in the plasma and liver homogenate of normal mice; it was poorly absorbed in OVX mice and low in the other measured tissues. PDG, GE and GA appear to be better absorbed in OVX mice than in normal mice, as shown by the markedly increased values of AUC(0-∞) and Cmax. It is beneficial that PDG, GE and GA have better plasma absorption and tissue distribution in the pathological state.

  17. Comparison of the procedures of Fleishman and Ramberg et al. for generating non-normal data in simulation studies

    Directory of Open Access Journals (Sweden)

    Rebecca Bendayan

    2014-01-01

    Full Text Available Simulation techniques must be able to generate the types of distributions most commonly encountered in real data, for example, non-normal distributions. Two recognized procedures for generating non-normal data are Fleishman's linear transformation method and the method proposed by Ramberg et al. that is based on a generalization of the Tukey lambda distribution. This study compares these procedures in terms of the extent to which the distributions they generate fit their respective theoretical models, and it also examines the number of simulations needed to achieve this fit. To this end, the paper considers, in addition to the normal distribution, a series of non-normal distributions that are commonly found in real data, and then analyses fit according to the extent to which normality is violated and the number of simulations performed. The results show that the two data generation procedures behave similarly. As the degree of contamination of the theoretical distribution increases, so does the number of simulations required to ensure a good fit to the generated data. The two procedures generate more accurate normal and non-normal distributions when at least 7000 simulations are performed, although when the degree of contamination is severe (with values of skewness and kurtosis of 2 and 6, respectively) it is advisable to perform 15000 simulations.
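
    Fleishman's method transforms a standard normal Z into Y = a + bZ + cZ^2 + dZ^3, with coefficients solving his moment equations for the target skewness and kurtosis. A minimal sketch, assuming the targets lie in the method's feasible region (the severe case above is used for illustration):

        # Python sketch: Fleishman's power transformation Y = a + b*Z + c*Z^2 + d*Z^3.
        import numpy as np
        from scipy.optimize import fsolve

        g1, g2 = 2.0, 6.0   # target skewness and excess kurtosis (severe case above)

        def fleishman_eqs(p):
            b, c, d = p
            var = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1.0
            skew = 2*c*(b**2 + 24*b*d + 105*d**2 + 2) - g1
            kurt = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
                       + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - g2
            return [var, skew, kurt]

        b, c, d = fsolve(fleishman_eqs, [1.0, 0.1, 0.1])
        a = -c   # forces zero mean

        z = np.random.default_rng(0).normal(size=100_000)
        y = a + b*z + c*z**2 + d*z**3
        print(y.mean(), y.std())   # approximately 0 and 1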

  18. Determining prescription durations based on the parametric waiting time distribution

    DEFF Research Database (Denmark)

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-01-01

    two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies ... (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide

  19. Unpolarized structure functions and the parton distributions for nucleon in an independent quark model

    International Nuclear Information System (INIS)

    Barik, N.; Mishra, R.N.

    2001-01-01

    Considering the nucleon as consisting entirely of its valence quarks confined independently in a scalar-vector harmonic potential, unpolarized structure functions F_1(x, μ^2) and F_2(x, μ^2) are derived in the Bjorken limit under certain simplifying assumptions, from which valence quark distribution functions u_v(x, μ^2) and d_v(x, μ^2) are appropriately extracted satisfying the normalization constraints. QCD evolution of these input distributions from a model scale of μ^2 = 0.07 GeV^2 to a higher Q^2 scale of Q_0^2 = 15 GeV^2 yields xu_v(x, Q_0^2) and xd_v(x, Q_0^2) in good agreement with experimental data. The gluon and sea-quark distributions such as G(x, Q_0^2) and q_s(x, Q_0^2) are dynamically generated with a reasonable qualitative agreement with the available data, using the leading order renormalization group equations with appropriate valence-quark distributions as the input. (author)

  20. A multilayer electro-thermal model of pouch battery during normal discharge and internal short circuit process

    International Nuclear Information System (INIS)

    Chen, Mingbiao; Bai, Fanfei; Song, Wenji; Lv, Jie; Lin, Shili

    2017-01-01

    Highlights: • A 2D network equivalent circuit considers the interplay of cell units. • The temperature non-uniformity Φ of the multilayer model is bigger than that of the lumped model. • The temperature non-uniformity is quantified and the reason for the non-uniformity is analyzed. • Increasing the thermal conductivity of the separator can effectively relieve the hot spot effect of ISC. - Abstract: As the electrical and thermal characteristics affect a battery's safety, performance, calendar life and capacity fading, an electro-thermal coupled model for a LiFePO4/C pouch battery is developed for the normal discharge and internal short circuit processes. The battery is discretized into many cell elements which are united as a 2D network equivalent circuit. The electro-thermal model is solved with a finite difference method. The non-uniformity of the current and temperature distributions is simulated and the result is validated with experimental data at various discharge rates. Comparison of the lumped model and the multilayer model shows that the temperature non-uniformity Φ of the multilayer model is bigger than that of the lumped model, and the multilayer model is more precise. The temperature non-uniformity is quantified and the reason for the non-uniformity is analyzed. The electro-thermal model can also be used to guide the safety design of the battery. The temperature of an ISC element near the tabs is the highest because the equivalent resistance of the external circuit (not including the ISC element) is the smallest when the resistance of the cell units is small. It is found that increasing the thermal conductivity of the integrated layer can effectively relieve the hot spot effect of an ISC.

  1. Normal modes of vibration in nickel

    Energy Technology Data Exchange (ETDEWEB)

    Birgeneau, R J [Yale Univ., New Haven, Connecticut (United States); Cordes, J [Cambridge Univ., Cambridge (United Kingdom); Dolling, G; Woods, A D B

    1964-07-01

    The frequency-wave-vector dispersion relation, ν(q), for the normal vibrations of a nickel single crystal at 296°K has been measured for the [ζ00], [ζζ0], [ζζζ], and [0ζ1] symmetric directions using inelastic neutron scattering. The results can be described in terms of the Born-von Karman theory of lattice dynamics with interactions out to fourth-nearest neighbors. The shapes of the dispersion curves are very similar to those of copper, the normal mode frequencies in nickel being about 1.24 times the corresponding frequencies in copper. The fourth-neighbor model was used to calculate the frequency distribution function g(ν) and related thermodynamic properties. (author)

  2. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    Science.gov (United States)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show their dependence on the shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values, and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
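
    The general Monte Carlo recipe for such critical values can be sketched compactly (this is not the paper's GNO code; a log-normal fitted by maximum likelihood stands in for the GNO and its L-moment estimation, and n and nsim are arbitrary):

        # Python sketch: Monte Carlo critical values of an EDF statistic (here
        # Kolmogorov-Smirnov) when parameters are re-estimated from each sample.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, nsim = 50, 2000

        def ks_with_estimation(sample):
            shape, loc, scale = stats.lognorm.fit(sample, floc=0)  # re-estimate
            return stats.kstest(sample, stats.lognorm(shape, loc, scale).cdf).statistic

        sims = np.array([
            ks_with_estimation(stats.lognorm(0.5).rvs(n, random_state=rng))
            for _ in range(nsim)
        ])
        print(np.quantile(sims, [0.90, 0.95, 0.99]))   # 10%, 5%, 1% critical values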

  3. A Bayesian Nonparametric Meta-Analysis Model

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  4. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)
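
    As a sketch of the copula construction (the Normal-copula case named above, with Weibull-2P marginals; the correlation and marginal parameters are illustrative, and the paper's Plackett fits are not reproduced here):

        # Python sketch: bivariate (diameter, height) sample from a Normal (Gaussian)
        # copula with Weibull marginals.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        rho = 0.8   # copula correlation between diameter and height

        # 1) Correlated standard normals -> uniforms via the normal CDF.
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=1000)
        u = stats.norm.cdf(z)

        # 2) Uniforms -> marginal scales via inverse CDFs (Weibull-2P here).
        diameter = stats.weibull_min(2.5, scale=20.0).ppf(u[:, 0])   # cm
        height = stats.weibull_min(3.0, scale=18.0).ppf(u[:, 1])     # m

        print(np.corrcoef(diameter, height)[0, 1])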

  5. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  6. Distribution characteristics of stock market liquidity

    Science.gov (United States)

    Luo, Jiawen; Chen, Langnan; Liu, Hao

    2013-12-01

    We examine the distribution characteristics of stock market liquidity by employing the generalized additive models for location, scale and shape (GAMLSS) model and three-minute frequency data from Chinese stock markets. We find that the BCPE distribution within the GAMLSS framework fits the distributions of stock market liquidity well, according to the diagnostic tests. We also find that the stock market index exhibits a significant impact on the distributions of stock market liquidity. The stock market liquidity usually exhibits a positive skewness, but a normal distribution at a low level of the stock market index and a high-peak and fat-tail shape at a high level of the stock market index.

  7. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    differences in contractile function between the subendocardial and transmural infarcts manifest themselves in myocardial SPECT images. The normal FE model produced strain distributions that were consistent with those reported in the literature and a motion consistent with that defined in the normal 4D NCAT beating heart model based on tagged MRI data. The addition of a subendocardial ischemic region changed the average transmural circumferential strain from a contractile value of 0.19 to a tensile value of 0.03. The addition of a transmural ischemic region changed average circumferential strain to a value of 0.16, which is consistent with data reported in the literature. Model results demonstrated differences in contractile function between subendocardial and transmural infarcts and how these differences in function are documented in simulated myocardial SPECT images produced using the 4D NCAT phantom. In comparison to the original NCAT beating heart model, the FE mechanical model produced a more accurate simulation for the cardiac motion abnormalities. Such a model, when incorporated into the 4D NCAT phantom, has great potential for use in cardiac imaging research. With its enhanced physiologically-based cardiac model, the 4D NCAT phantom can be used to simulate realistic, predictive imaging data of a patient population with varying whole-body anatomy and with varying healthy and diseased states of the heart that will provide a known truth from which to evaluate and improve existing and emerging 4D imaging techniques used in the diagnosis of cardiac disease.

  8. From explicit to implicit normal mode initialization of a limited-area model

    Energy Technology Data Exchange (ETDEWEB)

    Bijlsma, S.J.

    2013-02-15

    In this note the implicit normal mode initialization of a limited-area model is discussed from a different point of view. To that end it is shown that the equations describing the explicit normal mode initialization applied to the shallow water equations in differentiated form on the sphere can readily be derived in normal mode space if the model equations are separable, but only in the case of stationary Rossby modes can be transformed into the implicit equations in physical space. This is a consequence of the simple relations between the components of the different modes in that case. In addition a simple eigenvalue problem is given for the frequencies of the gravity waves. (orig.)

  9. Water Distribution and Removal Model

    International Nuclear Information System (INIS)

    Y. Deng; N. Chipman; E.L. Hardin

    2005-01-01

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD and R) Model; (2) EBS Physical and Chemical Environment (P and CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD and R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment.

  10. Water Distribution and Removal Model

    Energy Technology Data Exchange (ETDEWEB)

    Y. Deng; N. Chipman; E.L. Hardin

    2005-08-26

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes

  11. Distribution of erlotinib in rash and normal skin in cancer patients receiving erlotinib visualized by matrix assisted laser desorption/ionization mass spectrometry imaging.

    Science.gov (United States)

    Nishimura, Meiko; Hayashi, Mitsuhiro; Mizutani, Yu; Takenaka, Kei; Imamura, Yoshinori; Chayahara, Naoko; Toyoda, Masanori; Kiyota, Naomi; Mukohara, Toru; Aikawa, Hiroaki; Fujiwara, Yasuhiro; Hamada, Akinobu; Minami, Hironobu

    2018-04-06

    The development of skin rashes is the most common adverse event observed in cancer patients treated with epidermal growth factor receptor-tyrosine kinase inhibitors such as erlotinib. However, the pharmacological evidence has not been fully revealed. Erlotinib distribution in the rashes was more heterogeneous than that in the normal skin, and the rashes contained statistically higher concentrations of erlotinib than adjacent normal skin in the superficial skin layer (229 ± 192 vs. 120 ± 103 ions/mm^2; P = 0.009 in paired t-test). LC-MS/MS confirmed that the concentration of erlotinib in the skin rashes was higher than that in normal skin in the superficial skin layer (1946 ± 1258 vs. 1174 ± 662 ng/cm^3; P = 0.028 in paired t-test). The results of MALDI-MSI and LC-MS/MS were well correlated (coefficient of correlation 0.879, P < ...). The distribution of erlotinib in the skin tissue was visualized using non-labeled MALDI-MSI. Erlotinib concentration in the superficial layer of the skin rashes was higher than that in the adjacent normal skin. We examined patients with advanced pancreatic cancer who developed skin rashes after treatment with erlotinib and gemcitabine. We biopsied both the rash and adjacent normal skin tissues, and visualized and compared the distribution of erlotinib within the skin using matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI). The tissue concentration of erlotinib was also measured by liquid chromatography-tandem mass spectrometry (LC-MS/MS) with laser microdissection.

  12. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    Science.gov (United States)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
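
    The mixture argument can be reproduced in a few lines (the age-group means and standard deviations are hypothetical, chosen only so that the mean rises while the variation shrinks in late puberty, and the reverse in early puberty):

        # Python sketch: pooling heights across age groups mimics the apparent
        # transition; rising mean with shrinking SD gives negative skew.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        late = np.concatenate([rng.normal(m, s, 5000) for m, s in
                               [(150, 8), (160, 6), (168, 4), (172, 3)]])
        early = np.concatenate([rng.normal(m, s, 5000) for m, s in
                                [(120, 3), (126, 4), (133, 6), (141, 8)]])

        print(stats.skew(late), stats.skew(early))   # negative vs. positive skew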

  13. Visualizing Tensor Normal Distributions at Multiple Levels of Detail.

    Science.gov (United States)

    Abbasloo, Amin; Wiens, Vitalis; Hermann, Max; Schultz, Thomas

    2016-01-01

    Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.

  14. Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions

    Science.gov (United States)

    Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.

    2014-12-01

    The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected new size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that the overall direction of ostracod evolution follows a model that directionally pushes mean ostracod size down, rather than a neutral model. Our model was able to match the magnitude of the size decrease, although it produced a constant linear decrease while the observed data showed a much more rapid initial decline followed by a constant size. The nuance of the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
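
    A stochastic branching model of the kind described above can be sketched in a few lines. The speciation and extinction rates, the size-step spread and the drift value below are illustrative assumptions, not the rates calibrated from the Treatise data; drift = 0 gives the neutral model, a negative drift the directional one.

    import numpy as np

    rng = np.random.default_rng(1)

    speciation, extinction, drift = 0.3, 0.25, -0.05   # per lineage, per timestep
    sizes = [1.0]                                      # log-size of founding lineage
    for _ in range(150):
        nxt = []
        for s in sizes:
            if rng.random() > extinction:       # lineage survives this step
                nxt.append(s)
            if rng.random() < speciation:       # lineage speciates
                # child size drawn from a normal around the parent; the
                # negative mean ('drift') makes this the directional model
                nxt.append(s + rng.normal(drift, 0.1))
        if not nxt:                             # the whole clade went extinct
            break
        sizes = nxt
    print(f"{len(sizes)} lineages, mean log-size {np.mean(sizes):+.2f}")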

  15. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with the exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of damped oscillations is found to be slower for the model with the strong delay kernel.
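
    The reduction to coupled SDEs lends itself to a short simulation. The following Euler-Maruyama sketch illustrates the extended-state-space idea only and is not the authors' model: an exponential (weak) kernel is represented by an auxiliary memory variable u coupled to the membrane potential v, and the ISI coefficient of variation is read off the spike train. Every parameter value here is an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(9)

    dt, t_end = 1e-3, 50.0
    tau_v, eta, k = 0.02, 0.1, 5.0    # membrane and kernel time constants, coupling
    mu, sigma = 1.2, 0.5              # mean drive and noise intensity
    v_th, v_reset = 1.0, 0.0          # firing threshold and reset value
    v, u, spikes = 0.0, 0.0, []
    for i in range(int(t_end / dt)):
        u += (v - u) / eta * dt       # exponential memory of the potential
        v += ((mu - v) / tau_v + k * u) * dt + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:                 # threshold crossing: spike, then reset
            spikes.append(i * dt)
            v = v_reset
    isi = np.diff(spikes)
    print(f"{len(spikes)} spikes, ISI CV = {isi.std() / isi.mean():.2f}")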

  16. A Complex Network Approach to Distributional Semantic Models.

    Directory of Open Access Journals (Sweden)

    Akira Utsumi

    Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.

  17. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and modelling performance of spatial snow distribution is improved.

  18. The Czech Wage Distribution and the Minimum Wage Impacts: the Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Kateřina Duspivová

    2013-06-01

    Full Text Available A well-fitting wage distribution is a crucial precondition for economic modeling of the labour market processes. In the first part, this paper provides the evidence that – as for wages in the Czech Republic – the most often used log-normal distribution failed and the best-fitting one is the Dagum distribution. Then we investigate the role of wage distribution in the process of the economic modeling. By way of an example of the minimum wage impacts on the Czech labour market, we examine the response of Meyer and Wise's (1983) model to the Dagum and log-normal distributions. The results suggest that the wage distribution has important implications for the effects of the minimum wage on the shape of the lower tail of the measured wage distribution and is thus an important feature for interpreting the effects of minimum wages.
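
    For readers wanting to reproduce this kind of comparison: the Dagum distribution is the Burr Type III family, available in SciPy as stats.burr. The sketch below fits Dagum and log-normal models to synthetic stand-in wages (not the Czech microdata; the shape and scale values are invented) and compares them by log-likelihood.

    import numpy as np
    from scipy import stats

    # Synthetic stand-in for observed wages, drawn from a Dagum (Burr III) law.
    wages = stats.burr.rvs(c=4.0, d=0.6, scale=25_000, size=5_000, random_state=2)

    c, d, loc, scale = stats.burr.fit(wages, floc=0)            # Dagum fit
    s_ln, loc_ln, scale_ln = stats.lognorm.fit(wages, floc=0)   # log-normal fit

    # The higher log-likelihood indicates the better-fitting family.
    ll_dagum = stats.burr.logpdf(wages, c, d, loc, scale).sum()
    ll_lognorm = stats.lognorm.logpdf(wages, s_ln, loc_ln, scale_ln).sum()
    print(f"log-likelihood  Dagum: {ll_dagum:.1f}  log-normal: {ll_lognorm:.1f}")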

  19. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter α through σ_α can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ_α. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs

  20. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter α through σ_α can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ_α. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.
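
    The role of the heterogeneity term σ_α is easy to illustrate numerically. The sketch below is not the author's implementation: it marginalises a plain Poisson TCP with surviving fraction exp(-αD) over a normal inter-patient distribution of α, and every parameter value (mean α, σ_α, clonogen number) is chosen purely for illustration. Raising σ_α visibly flattens the slope of the dose-response curve.

    import numpy as np

    def tcp_population(dose, n_clonogens=1e7, alpha_bar=0.30,
                       sigma_alpha=0.08, n_patients=20_000, seed=0):
        """Poisson TCP averaged over normally distributed radiosensitivity."""
        rng = np.random.default_rng(seed)
        alpha = rng.normal(alpha_bar, sigma_alpha, n_patients)
        alpha = np.clip(alpha, 0.0, None)   # radiosensitivity cannot be negative
        survivors = n_clonogens * np.exp(-alpha * dose)
        return np.exp(-survivors).mean()    # per-patient TCP, population-averaged

    for d in (50, 60, 70, 80):
        print(f"D = {d} Gy: TCP = {tcp_population(d):.3f}")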

  1. Population Synthesis Models for Normal Galaxies with Dusty Disks

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2003-09-01

    Full Text Available To investigate the SEDs of galaxies considering the dust extinction processes in the galactic disks, we present population synthesis models for normal galaxies with dusty disks. We use PEGASE (Fioc & Rocca-Volmerange 1997) to model them with standard input parameters for stars and new dust parameters. We find that the model results are strongly dependent on the dust parameters as well as other parameters (e.g. star formation history). We compare the model results with observations and discuss possible explanations. We find that the dust opacity functions derived from studies of asymptotic giant branch stars are useful for modeling a galaxy with a dusty disk.

  2. Normal Spin Asymmetries in Elastic Electron-Proton Scattering

    International Nuclear Information System (INIS)

    M. Gorchtein; P.A.M. Guichon; M. Vanderhaeghen

    2004-01-01

    We discuss the two-photon exchange contribution to observables which involve lepton helicity flip in elastic lepton-nucleon scattering. This contribution is accessed through the single spin asymmetry for a lepton beam polarized normal to the scattering plane. We estimate this beam normal spin asymmetry at large momentum transfer using a parton model and we express the corresponding amplitude in terms of generalized parton distributions. We further discuss this observable in the quasi-RCS kinematics which may be dominant at certain kinematical conditions and find it to be governed by the photon helicity-flip RCS amplitudes

  3. Normal Spin Asymmetries in Elastic Electron-Proton Scattering

    International Nuclear Information System (INIS)

    Gorchtein, M.; Guichon, P.A.M.; Vanderhaeghen, M.

    2005-01-01

    We discuss the two-photon exchange contribution to observables which involve lepton helicity flip in elastic lepton-nucleon scattering. This contribution is accessed through the single spin asymmetry for a lepton beam polarized normal to the scattering plane. We estimate this beam normal spin asymmetry at large momentum transfer using a parton model and we express the corresponding amplitude in terms of generalized parton distributions. We further discuss this observable in the quasi-RCS kinematics which may be dominant at certain kinematical conditions and find it to be governed by the photon helicity-flip RCS amplitudes

  4. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
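
    A lightweight way to experiment with this idea is scikit-learn's truncated Dirichlet-process mixture, which uses variational inference rather than the MCMC estimation employed in the study. The travel-time numbers below are synthetic stand-ins, and the component cap of six mirrors the study's choice.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(3)
    # Stand-in corridor travel times (minutes): free flow plus congestion.
    tt = np.concatenate([rng.lognormal(1.6, 0.10, 800),
                         rng.lognormal(2.1, 0.25, 300)])

    # Truncated Dirichlet-process mixture over log travel times; unused
    # components receive weights that shrink toward zero automatically.
    dpmm = BayesianGaussianMixture(
        n_components=6,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(np.log(tt).reshape(-1, 1))
    print(np.round(dpmm.weights_, 3))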

  5. Modeling the cool down of the primary heat transport system using shut down cooling system in normal operation and after events such as LOCA

    International Nuclear Information System (INIS)

    Icleanu, D.L.; Prisecaru, I.

    2015-01-01

    This paper aims at modeling the cool down of the primary heat transport system using the shutdown cooling system (SDCS) for a CANDU 6 NPP in all operating modes, normal and abnormal (particularly in case of a LOCA), using the Flowmaster calculation code. The heavy water flow through the shutdown cooling system and the primary heat transport system was modelled to determine the distribution of flows, the pressure in various areas of the hydraulic circuit and the pressure losses in the components, as well as to perform the thermal calculation of the system's heat exchangers. The results of the thermo-hydraulic analysis show that in all cases analyzed, both normal operation and the LOCA regime, the performance requirements are confirmed by the analysis.

  6. Multiple-parameter bifurcation analysis in a Kuramoto model with time delay and distributed shear

    Science.gov (United States)

    Niu, Ben; Zhang, Jiaming; Wei, Junjie

    2018-05-01

    In this paper, time delay effect and distributed shear are considered in the Kuramoto model. On the Ott-Antonsen manifold, through analyzing the associated characteristic equation of the reduced functional differential equation, the stability boundary of the incoherent state is derived in multiple-parameter space. Moreover, very rich dynamical behavior, such as stability switches inducing synchronization switches, can occur in this equation. With the loss of stability, Hopf bifurcating coherent states arise, and the criticality of the Hopf bifurcations is determined by applying the normal form theory and the center manifold theorem. On the one hand, theoretical analysis indicates that the width of the shear distribution and the time delay can both eliminate synchronization and lead the Kuramoto model to incoherence. On the other hand, time delay can induce several coexisting coherent states. Finally, some numerical simulations are given to support the obtained results, where several bifurcation diagrams are drawn, and the effect of time delay and shear is discussed.

  7. Normal tissue complication probabilities: dependence on choice of biological model and dose-volume histogram reduction scheme

    International Nuclear Information System (INIS)

    Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake

    2000-01-01

    Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use
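
    For reference, a minimal sketch of the Lyman model combined with the 'effective volume' DVH reduction discussed above; the organ parameters TD50, m and n are illustrative assumptions, not values fitted in the study.

    import numpy as np
    from scipy.stats import norm

    def ntcp_lkb(doses, volumes, td50=40.0, m=0.15, n=0.7):
        """Doses (Gy) and fractional volumes from a differential DVH."""
        d_max = doses.max()
        v_eff = np.sum(volumes * (doses / d_max) ** (1.0 / n))  # effective volume
        td50_v = td50 * v_eff ** (-n)                           # volume-scaled TD50
        t = (d_max - td50_v) / (m * td50_v)
        return norm.cdf(t)

    doses = np.array([10.0, 25.0, 40.0, 55.0])
    volumes = np.array([0.4, 0.3, 0.2, 0.1])    # fractions of the organ volume
    print(f"NTCP = {ntcp_lkb(doses, volumes):.3f}")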

  8. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends the delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.

  9. A NEW STATISTICAL PERSPECTIVE TO THE COSMIC VOID DISTRIBUTION

    International Nuclear Information System (INIS)

    Pycke, J-R; Russell, E.

    2016-01-01

    In this study, we obtain the size distribution of voids as a three-parameter redshift-independent log-normal void probability function (VPF) directly from the Cosmic Void Catalog (CVC). Although many statistical models of void distributions are based on the counts in randomly placed cells, the log-normal VPF that we obtain here is independent of the shape of the voids due to the parameter-free void finder of the CVC. We use three void populations drawn from the CVC generated by the Halo Occupation Distribution (HOD) Mocks, which are tuned to three mock SDSS samples, to investigate the void distribution statistically and the effects of the environments on the size distribution. As a result, it is shown that the void size distributions obtained from the HOD Mock samples are well described by the three-parameter log-normal distribution. In addition, we find that there may be a relation between the hierarchical formation, skewness, and kurtosis of the log-normal distribution for each catalog. We also show that the shape of the three-parameter distribution from the samples is strikingly similar to the galaxy log-normal mass distribution obtained from numerical studies. This similarity between void size and galaxy mass distributions may possibly indicate evidence of nonlinear mechanisms affecting both voids and galaxies, such as large-scale accretion and tidal effects. Considering the fact that in this study all voids are generated by galaxy mocks and show hierarchical structures at different levels, it may be possible that the same nonlinear mechanisms of mass distribution affect the void size distribution.
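
    The three parameters of such a log-normal VPF map directly onto SciPy's lognorm parameterisation (shape, location, scale), so a fit can be sketched as follows; the void radii here are synthetic stand-ins, not values from the CVC.

    import numpy as np
    from scipy import stats

    # Stand-in effective void radii; the generating parameters are assumed.
    radii = stats.lognorm.rvs(s=0.45, loc=2.0, scale=12.0, size=2_000,
                              random_state=4)

    sigma, shift, scale = stats.lognorm.fit(radii)   # three-parameter fit
    print(f"sigma={sigma:.3f}  shift={shift:.2f}  scale={scale:.2f}")
    print(f"skewness={stats.skew(radii):.2f}  kurtosis={stats.kurtosis(radii):.2f}")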

  10. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  11. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  12. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration. Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  13. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  14. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    Research of existing literature reveals some models (sequence of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these

  15. A log-sinh transformation for data normalization and variance stabilization

    Science.gov (United States)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
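
    A minimal sketch of the transformation in the form z = (1/b) log(sinh(a + b y)) given by Wang et al. (2012), with its analytic inverse; the parameter values below are illustrative, as in practice a and b are estimated from the data. For large y the transform becomes nearly linear, while smaller values are compressed, matching the error pattern described above.

    import numpy as np

    def log_sinh(y, a, b):
        return np.log(np.sinh(a + b * y)) / b

    def log_sinh_inverse(z, a, b):
        return (np.arcsinh(np.exp(b * z)) - a) / b

    a, b = 0.1, 0.02                    # assumed transformation parameters
    y = np.array([1.0, 10.0, 100.0, 500.0])
    z = log_sinh(y, a, b)
    print(np.allclose(log_sinh_inverse(z, a, b), y))   # round-trip check: True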

  16. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    Science.gov (United States)

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and heavy tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  17. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Directory of Open Access Journals (Sweden)

    Mike S Fowler

    Full Text Available The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical) feedback models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We
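
    The AR(1) generation step is simple to reproduce. The sketch below draws coloured series with unit marginal variance and prints their sample skewness and kurtosis, the distribution-shape statistics whose drift the study identifies; the series length and colours are illustrative choices.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def ar1_series(kappa, n, seed=0):
        """kappa < 0: blue, kappa = 0: white, kappa > 0: red environment."""
        rng = np.random.default_rng(seed)
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            # scale innovations so the marginal variance stays at one
            x[t] = kappa * x[t - 1] + np.sqrt(1 - kappa**2) * rng.normal()
        return x

    for kappa in (-0.5, 0.0, 0.5):
        s = ar1_series(kappa, 50)
        print(f"kappa={kappa:+.1f}  skew={skew(s):+.2f}  kurtosis={kurtosis(s):+.2f}")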

  18. Distribution of Basement Membrane Molecules, Laminin and Collagen Type IV, in Normal and Degenerated Cartilage Tissues.

    Science.gov (United States)

    Foldager, Casper Bindzus; Toh, Wei Seong; Gomoll, Andreas H; Olsen, Bjørn Reino; Spector, Myron

    2014-04-01

    The objective of the present study was to investigate the presence and distribution of 2 basement membrane (BM) molecules, laminin and collagen type IV, in healthy and degenerative cartilage tissues. Normal and degenerated tissues were obtained from goats and humans, including articular knee cartilage, the intervertebral disc, and meniscus. Normal tissue was also obtained from patella-tibial enthesis in goats. Immunohistochemical analysis was performed using anti-laminin and anti-collagen type IV antibodies. Human and goat skin were used as positive controls. The percentage of cells displaying the pericellular presence of the protein was graded semiquantitatively. When present, laminin and collagen type IV were exclusively found in the pericellular matrix, and in a discrete layer on the articulating surface of normal articular cartilage. In normal articular (hyaline) cartilage in the human and goat, the proteins were found co-localized pericellularly. In contrast, in human osteoarthritic articular cartilage, collagen type IV but not laminin was found in the pericellular region. Nonpathological fibrocartilaginous tissues from the goat, including the menisci and the enthesis, were also positive for both laminin and collagen type IV pericellularly. In degenerated fibrocartilage, including intervertebral disc, as in degenerated hyaline cartilage only collagen type IV was found pericellularly around chondrocytes but with less intense staining than in non-degenerated tissue. In calcified cartilage, some cells were positive for laminin but not type IV collagen. We report differences in expression of the BM molecules, laminin and collagen type IV, in normal and degenerative cartilaginous tissues from adult humans and goats. In degenerative tissues laminin is depleted from the pericellular matrix before collagen type IV. The findings may inform future studies of the processes underlying cartilage degeneration and the functional roles of these 2 extracellular matrix proteins

  19. Distribution of Basement Membrane Molecules, Laminin and Collagen Type IV, in Normal and Degenerated Cartilage Tissues

    Science.gov (United States)

    Toh, Wei Seong; Gomoll, Andreas H.; Olsen, Bjørn Reino; Spector, Myron

    2014-01-01

    Objective: The objective of the present study was to investigate the presence and distribution of 2 basement membrane (BM) molecules, laminin and collagen type IV, in healthy and degenerative cartilage tissues. Design: Normal and degenerated tissues were obtained from goats and humans, including articular knee cartilage, the intervertebral disc, and meniscus. Normal tissue was also obtained from patella-tibial enthesis in goats. Immunohistochemical analysis was performed using anti-laminin and anti–collagen type IV antibodies. Human and goat skin were used as positive controls. The percentage of cells displaying the pericellular presence of the protein was graded semiquantitatively. Results: When present, laminin and collagen type IV were exclusively found in the pericellular matrix, and in a discrete layer on the articulating surface of normal articular cartilage. In normal articular (hyaline) cartilage in the human and goat, the proteins were found co-localized pericellularly. In contrast, in human osteoarthritic articular cartilage, collagen type IV but not laminin was found in the pericellular region. Nonpathological fibrocartilaginous tissues from the goat, including the menisci and the enthesis, were also positive for both laminin and collagen type IV pericellularly. In degenerated fibrocartilage, including intervertebral disc, as in degenerated hyaline cartilage only collagen type IV was found pericellularly around chondrocytes but with less intense staining than in non-degenerated tissue. In calcified cartilage, some cells were positive for laminin but not type IV collagen. Conclusions: We report differences in expression of the BM molecules, laminin and collagen type IV, in normal and degenerative cartilaginous tissues from adult humans and goats. In degenerative tissues laminin is depleted from the pericellular matrix before collagen type IV. The findings may inform future studies of the processes underlying cartilage degeneration and the functional

  20. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available Abstract To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
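
    This model class is directly available in modern software. For instance, a Poisson GEE with exchangeable within-animal correlation can be sketched with statsmodels on hypothetical litter-size data; the variable names and effect sizes below are invented for illustration, and this is not the authors' code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    # Hypothetical repeated litter records: several parities per animal.
    n_animals, n_parities = 50, 4
    animal = np.repeat(np.arange(n_animals), n_parities)
    parity = np.tile(np.arange(n_parities), n_animals)
    genotype = np.repeat(rng.integers(0, 2, n_animals), n_parities)  # QTL marker
    litter = rng.poisson(np.exp(1.8 + 0.15 * genotype + 0.05 * parity))

    df = pd.DataFrame(dict(litter=litter, parity=parity, genotype=genotype))
    X = sm.add_constant(df[["genotype", "parity"]])
    # Poisson GEE with exchangeable correlation for repeated records per
    # animal; the scale parameter absorbs under- or over-dispersion.
    model = sm.GEE(df["litter"], X, groups=animal,
                   family=sm.families.Poisson(),
                   cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())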

  1. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    Science.gov (United States)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
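
    For a one-dimensional sample the maximum-likelihood search over the Box-Cox family is built into SciPy; the sketch below Gaussianizes a synthetic skewed posterior sample (the multivariate generalisation described above is not reproduced here).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Stand-in for a skewed one-dimensional posterior sample, e.g. one
    # parameter of a Markov chain.
    chain = rng.gamma(shape=2.0, scale=1.5, size=10_000)

    transformed, lam = stats.boxcox(chain)   # lambda chosen by maximum likelihood
    print(f"optimal Box-Cox lambda: {lam:.3f}")
    print(f"normality p-value before: {stats.normaltest(chain).pvalue:.2e}")
    print(f"normality p-value after:  {stats.normaltest(transformed).pvalue:.2e}")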

  2. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The

  3. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and

  4. Mathematical Models for Room Air Distribution - Addendum

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models of the air distribution in rooms are introduced. These include the throw model, a model for the penetration length of a cold wall jet, and a model for the maximum velocity used in the dimensioning of an air distribution system in highly loaded rooms. It is shown that the amount of heat removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single duct systems are given, and the paper is concluded by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room.

  5. Therapeutic analysis of high-dose-rate ¹⁹²Ir vaginal cuff brachytherapy for endometrial cancer using a cylindrical target volume model and varied cancer cell distributions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B. [Department of Radiation Oncology, Robert H. Lurie Comprehensive Cancer Center, Northwestern University Feinberg School of Medicine, Northwestern Memorial Hospital, Chicago, Illinois 60611 (United States); Qi, Yujin [Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)

    2016-01-15

    Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either varied or constant cancer cell distributions, using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue. With a half-Gaussian distribution of cancer cells, which were most populous at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs were in the range from (5.0)⁴ to (13

  6. Condition monitoring with wind turbine SCADA data using Neuro-Fuzzy normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2012-01-01

    This paper presents the latest research results of a project that focuses on normal behavior models for condition monitoring of wind turbines and their components, via ordinary Supervisory Control And Data Acquisition (SCADA) data. In this machine learning approach, Adaptive Neuro-Fuzzy Inference System (ANFIS) models are employed to learn the normal behavior in a training phase, where the component condition can be considered healthy. In the application phase the trained models are applied to predict the target signals, e.g. temperatures, pressures, currents, power output, etc. The behavior of the prediction error is used as an indicator for normal and abnormal behavior, with respect to the learned behavior. The advantage of this approach is that the prediction error is widely decoupled from the typical fluctuations of the SCADA data caused by the different turbine operational modes. To classify

  7. A Distributional Representation Model For Collaborative Filtering

    OpenAIRE

    Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue

    2015-01-01

    In this paper, we propose a very concise deep learning approach for collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance when compared against current state-of-the-art algorithms, making the distributional representation model a promising direction for further research in collaborative filtering.

  8. Rapid Prototyping of Formally Modelled Distributed Systems

    OpenAIRE

    Buchs, Didier; Buffo, Mathieu; Titsworth, Frances M.

    1999-01-01

    This paper presents various kinds of prototypes, used in the prototyping of formally modelled distributed systems. It presents the notions of prototyping techniques and prototype evolution, and shows how to relate them to the software life-cycle. It is illustrated through the use of the formal modelling language for distributed systems CO-OPN/2.

  9. Modified model of neutron resonance widths distribution. Results of total gamma-widths approximation

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.

    2011-01-01

    Functional dependences of the probability of observing a given Γ_n^0 value, together with algorithms for determining the most probable magnitudes, within the modified model of resonance parameter distributions were used to analyse the experimental data on the total radiative widths of neutron resonances. As in the case of neutron widths, precise description of the Γ_γ spectra requires a superposition of three or more probability distributions for squares of random normally distributed values with different nonzero averages and non-unit dispersions. This result confirms the preliminary conclusion, obtained earlier in the analysis of Γ_n^0, that in practically all 56 tested sets of total gamma widths there are several groups noticeably differing from each other in the structure of their wave functions. In addition, it was determined that radiative widths are much more sensitive than neutron widths to the resonance wave function structure. An analysis of previously obtained neutron reduced width distribution parameters for 157 resonance sets in the mass region 35 ≤ A ≤ 249 was also performed. It was shown that the experimental values of the widths can correspond with high probability to a superposition of several expected independent distributions with nonzero mean values and non-unit dispersion.

  10. A micromechanical study of porous composites under longitudinal shear and transverse normal loading

    DEFF Research Database (Denmark)

    Ashouri Vajari, Danial

    2015-01-01

    The mechanical response of porous unidirectional composites under transverse normal and longitudinal shear loading is studied using the finite element analysis. The 3D model includes discrete and random distribution of fibers and voids. The micromechanical failure mechanisms are taken into account....... Finally, the computational prediction of the porous composite in the transverse normal-longitudinal shear stress space is obtained and compared with Puck's model. The results show that both interfaces with low fracture toughness and microvoids with even small void volume fraction can significantly reduce...

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
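
    A compact two-component version of such an EM iteration can be sketched as follows, on synthetic data; in line with the local convergence result above, the starting values are assumed to lie in the basin of the consistent maximum-likelihood estimate.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.5, 600)])

    w = np.array([0.5, 0.5])            # mixing weights
    mu = np.array([-1.0, 1.0])          # component means (starting values)
    sd = np.array([1.0, 1.0])           # component standard deviations
    for _ in range(200):
        # E-step: posterior responsibilities of each component for each point
        dens = w * norm.pdf(x[:, None], mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    print(np.round(w, 3), np.round(mu, 3), np.round(sd, 3))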

  12. Molecular dynamics study of lipid bilayers modeling the plasma membranes of normal murine thymocytes and leukemic GRSL cells.

    Science.gov (United States)

    Andoh, Yoshimichi; Okazaki, Susumu; Ueoka, Ryuichi

    2013-04-01

    Molecular dynamics (MD) calculations for the plasma membranes of normal murine thymocytes and thymus-derived leukemic GRSL cells in water have been performed under physiological isothermal-isobaric conditions (310.15 K and 1 atm) to investigate changes in membrane properties induced by canceration. The model membranes used in our calculations for normal and leukemic thymocytes comprised 23 and 25 kinds of lipids, respectively, including phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, lysophospholipids, and cholesterol. The mole fractions of the lipids adopted here were based on previously published experimental values. Our calculations clearly showed that the membrane area was increased in leukemic cells, and that the isothermal area compressibility of the leukemic plasma membranes was double that of normal cells. The calculated membranes of leukemic cells were thus considerably bulkier and softer in the lateral direction compared with those of normal cells. The tilt angle of the cholesterol and the conformation of the phospholipid fatty acid tails both showed a lower level of order in leukemic cell membranes compared with normal cell membranes. The lateral radial distribution function of the lipids also showed a more disordered structure in leukemic cell membranes than in normal cell membranes. These observations all show that, for the present thymocytes, the lateral structure of the membrane is considerably disordered by canceration. Furthermore, the calculated lateral self-diffusion coefficient of the lipid molecules in leukemic cell membranes was almost double that in normal cell membranes. The calculated rotational and wobbling autocorrelation functions also indicated that the molecular motion of the lipids was enhanced in leukemic cell membranes. Thus, here we have demonstrated that the membranes of thymocyte leukemic cells are more disordered and more fluid than normal cell membranes.

  13. Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA

    Directory of Open Access Journals (Sweden)

    Lara Dutra Silva

    2017-12-01

    Full Text Available Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world's ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species in the face of their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (respectively, an invasive and a native tree), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presences data set. The addition of the GRF increased model adjustment (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.

  14. Retention and subcellular distribution of 67Ga in normal organs

    International Nuclear Information System (INIS)

    Ando, A.; Ando, I.; Hiraki, T.

    1986-01-01

    Using normal rats, retention values and subcellular distribution of 67Ga in each organ were investigated. At 10 min after administration of 67Ga-citrate the retention value of 67Ga in blood was 6.77% dose/g, and this value decreased with time. The values for skeletal muscle, lung, pancreas, adrenal, heart muscle, brain, small intestine, large intestine and spinal cord were the highest at 10 min after administration, and they decreased with time. Conversely, this value in bone increased until 10 days after injection. But in the liver, kidney, and stomach, these values increased with time after administration and were highest 24 h or 48 h after injection. After that, they decreased with time. The value in spleen reached a plateau 48 h after administration, and hardly varied for 10 days. From the results of subcellular fractionation, it was deduced that the lysosome plays quite an important role in the concentration of 67Ga in small intestine, stomach, lung, kidney and pancreas; a lesser role in its concentration in heart muscle, and hardly any role in the 67Ga accumulation in skeletal muscle. In spleen, the contents in nuclear, mitochondrial, microsomal, and supernatant fractions all contributed to the accumulation of 67Ga. (orig.)

  15. Best Statistical Distribution of flood variables for Johor River in Malaysia

    Science.gov (United States)

    Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.

    2012-12-01

    A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July - June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. The Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggested that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. (Figure: comparison between the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
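
    A minimal sketch of the fitting-and-testing loop described above, using scipy: each candidate distribution is fitted by maximum likelihood and ranked by the Kolmogorov-Smirnov statistic. The peakflow series is synthetic stand-in data, not the Johor River record.

```python
# Hedged sketch: fit candidate distributions to annual peakflow and
# rank them with the Kolmogorov-Smirnov statistic, as in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
peakflow = stats.genextreme.rvs(c=-0.1, loc=300, scale=80, size=45,
                                random_state=rng)  # synthetic stand-in data

candidates = {
    "GEV": stats.genextreme,
    "Generalized Pareto": stats.genpareto,
    "Log Normal": stats.lognorm,
    "Normal": stats.norm,
}
for name, dist in candidates.items():
    params = dist.fit(peakflow)                      # maximum-likelihood fit
    ks = stats.kstest(peakflow, dist.cdf, args=params)
    print(f"{name:>18}: KS D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```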

  16. Predicting Spatial Distribution of Key Honeybee Pests in Kenya Using Remotely Sensed and Bioclimatic Variables: Key Honeybee Pests Distribution Models

    Directory of Open Access Journals (Sweden)

    David M. Makori

    2017-02-01

    Beekeeping is indispensable to global food production. It is an alternate income source, especially in rural underdeveloped African settlements, and an important forest conservation incentive. However, dwindling honeybee colonies around the world are attributed to pests and diseases whose spatial distribution and influences are not well established. In this study, we used remotely sensed data to improve the reliability of pest ecological niche (EN) models to attain reliable pest distribution maps. Occurrence data on four pests (Aethina tumida, Galleria mellonella, Oplostomus haroldi and Varroa destructor) were collected from apiaries within four main agro-ecological regions responsible for over 80% of Kenya’s beekeeping. Africlim bioclimatic and derived normalized difference vegetation index (NDVI) variables were used to model their ecological niches using Maximum Entropy (MaxEnt). Combined precipitation variables had a high positive logit influence on the performance of all remotely sensed and biotic models. Remotely sensed vegetation variables had a substantial effect on the model, contributing up to 40.8% for G. mellonella, and regions with high rainfall seasonality were predicted to be high-risk areas. Projections (to 2055) indicated that, with the current climate change trend, these regions will experience increased honeybee pest risk. We conclude that honeybee pests could be modelled using bioclimatic data and remotely sensed variables in MaxEnt. Although the bioclimatic data were most relevant in all model results, incorporating vegetation seasonality variables to improve mapping of the ‘actual’ habitat of key honeybee pests and to identify risk and containment zones needs to be further investigated.

  17. PHARMACOKINETIC VARIATIONS OF OFLOXACIN IN NORMAL AND FEBRILE RABBITS

    Directory of Open Access Journals (Sweden)

    M. AHMAD, H. RAZA, G. MURTAZA AND N. AKHTAR

    2008-12-01

    The influence of experimentally Escherichia coli-induced fever (EEIF) on the pharmacokinetics of ofloxacin was evaluated. Ofloxacin was administered at 20 mg/kg body weight intravenously to a group of eight healthy rabbits, and these results were compared to values in the same eight rabbits with EEIF. Pharmacokinetic parameters of ofloxacin in normal and febrile rabbits were determined using a two-compartment open kinetic model. Peak plasma level (Cmax) and area under the plasma concentration-time curve (AUC0-α) in normal and febrile rabbits did not differ (P>0.05). However, the area under the first moment of the plasma concentration-time curve (AUMC0-α) in febrile rabbits was significantly (P<0.05) higher than that in normal rabbits. Mean values for the elimination rate constant (Ke), elimination half-life (t1/2β) and apparent volume of distribution (Vd) were significantly (P<0.05) lower in febrile rabbits compared to normal rabbits, while mean residence time (MRT) and total body clearance (Cl) of ofloxacin did not show any significant difference between the normal and febrile rabbits. The clinical significance of the above results relates to the changes in the volume of distribution and elimination half-life that illustrate an altered steady state in the febrile condition; hence, an adjustment of the dosage regimen in EEIF is required.
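
    For readers unfamiliar with the two-compartment open model, the sketch below evaluates the standard biexponential plasma curve after an IV bolus and derives the elimination half-life and AUC from it. All macro-constants are invented for illustration; they are not the ofloxacin estimates from this study.

```python
# Illustrative sketch of the two-compartment open model after an IV
# bolus: plasma concentration follows a biexponential decline
# C(t) = A*exp(-alpha*t) + B*exp(-beta*t). Values are hypothetical.
import numpy as np

A, alpha = 12.0, 1.8   # distribution phase (ug/mL, 1/h), hypothetical
B, beta = 4.0, 0.25    # elimination phase (ug/mL, 1/h), hypothetical

t = np.linspace(0, 12, 49)                      # hours post-dose
C = A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t_half_beta = np.log(2) / beta                  # elimination half-life
auc = A / alpha + B / beta                      # AUC(0-inf), biexponential
print(f"t1/2(beta) = {t_half_beta:.2f} h, AUC = {auc:.1f} ug*h/mL")
```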

  18. Normal values of regional left ventricular myocardial thickness, mass and distribution-assessed by 320-detector computed tomography angiography in the Copenhagen General Population Study

    DEFF Research Database (Denmark)

    Hindsø, Louise; Fuchs, Andreas; Kühl, Jørgen Tobias

    2017-01-01

    regional normal reference values of the left ventricle. The aim of this study was to derive reference values of regional LV myocardial thickness (LVMT) and mass (LVMM) from a healthy study group of the general population using cardiac computed tomography angiography (CCTA). We wanted to introduce LV...... myocardial distribution (LVMD) as a measure of regional variation of the LVMT. Moreover, we wanted to determine whether these parameters varied between men and women. We studied 568 (181 men; 32%) adults, free of cardiovascular disease and risk factors, who underwent 320-detector CCTA. Mean age was 55 (range...... 40-84) years. Regional LVMT and LVMM were measured, according to the American Heart Association's 17 segment model, using semi-automatic software. Mean LVMT was 6.6 mm for men and 5.4 mm for women (p ...). The normal LV was thickest in the basal septum (segment 3; men = 8.3 mm; women = 7.2 mm...

  19. Are your covariates under control? How normalization can re-introduce covariate effects.

    Science.gov (United States)

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
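
    A minimal sketch of the recommended order of operations, assuming the common Blom offset for the rank-based INT: normalize the raw outcome first, then regress out the covariate; the residual-covariate correlation then stays at zero.

```python
# Minimal sketch of rank-based inverse normal transformation (INT)
# with the Blom offset, applied to the raw dependent variable *before*
# regressing out covariates, the order the study recommends.
import numpy as np
from scipy.stats import norm, rankdata

def rank_based_int(y, c=3.0 / 8):
    """Map values to normal quantiles via their ranks (Blom offset)."""
    ranks = rankdata(y)
    return norm.ppf((ranks - c) / (len(y) - 2 * c + 1))

rng = np.random.default_rng(1)
covariate = rng.normal(size=500)
y_raw = np.exp(rng.normal(size=500)) + 0.5 * covariate  # skewed outcome

y_int = rank_based_int(y_raw)                        # 1) normalize first
slope, intercept = np.polyfit(covariate, y_int, 1)
residuals = y_int - (intercept + slope * covariate)  # 2) then adjust
print(f"residual-covariate corr: {np.corrcoef(residuals, covariate)[0, 1]:.3f}")
```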

  20. Are There More Gifted People Than Would Be Expected in a Normal Distribution? An Investigation of the Overabundance Hypothesis

    Science.gov (United States)

    Warne, Russell T.; Godwin, Lindsey R.; Smith, Kyle V.

    2013-01-01

    Among some gifted education researchers, advocates, and practitioners, it is sometimes believed that there is a larger number of gifted people in the general population than would be predicted from a normal distribution (e.g., Gallagher, 2008; N. M. Robinson, Zigler, & Gallagher, 2000; Silverman, 1995, 2009), a belief that we termed the overabundance hypothesis.

  1. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
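
    As a hedged illustration of the approach (not the authors' code), the sketch below runs a random-walk Metropolis sampler over the mean and width of a log-normal profile in log-semimajor-axis, scored against invented point estimates with Gaussian errors.

```python
# Hedged sketch of the MCMC idea: random-walk Metropolis exploring the
# (mu, sigma) parameters of a log-normal surface-density profile. The
# "data" are invented stand-ins, not the survey constraints of the paper.
import numpy as np

a = np.array([0.1, 1.0, 10.0, 100.0])          # semimajor-axis bins (AU)
obs = np.array([0.002, 0.010, 0.008, 0.001])   # hypothetical dN/dlog(a)
err = 0.3 * obs + 1e-4                         # assumed Gaussian errors

def model(a, mu, sigma):
    x = np.log(a)
    return 0.01 * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def log_like(theta):
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    return -0.5 * np.sum(((obs - model(a, mu, sigma)) / err) ** 2)

rng = np.random.default_rng(2)
theta = np.array([1.0, 1.0])
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.1, 0.1])     # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop                                # Metropolis accept
    chain.append(theta)
chain = np.array(chain)
print("posterior mean (mu, sigma):", chain[5000:].mean(axis=0).round(2))
```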

  2. Species Distribution modeling as a tool to unravel determinants of palm distribution in Thailand

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Balslev, Henrik

    2011-01-01

    As a consequence of the decimation of the forest cover in Thailand from 50% to ca. 20% since the 1950s, it is difficult to gain insight into the drivers behind past, present and future distribution ranges of plant species. Species distribution modeling allows visualization of potential species...... distribution under specific sets of assumptions. In this study we used maximum entropy to map potential distributions of 103 species of palms for which more than 5 herbarium records exist. Palms constitute a keystone plant group from ecological, economic and conservation perspectives. The models were......) and the Area Under the Curve (AUC). All models performed well with AUC scores above 0.95. The predicted distribution ranges showed high suitability for palms in the southern region of Thailand. It also shows that spatial predictor variables are important in cases where historical processes may explain extant

  3. A distributed snow-evolution modeling system (SnowModel)

    Science.gov (United States)

    Glen E. Liston; Kelly. Elder

    2006-01-01

    SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...

  4. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this helps popularize the Weibull distribution model in engineering applications.
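
    A hedged sketch of the core idea: a two-component Weibull mixture with a weight coefficient, fitted by maximum likelihood. The failure-time data and starting values are invented, and the paper's correlation-coefficient optimization step is not reproduced here.

```python
# Illustrative sketch: maximum-likelihood fit of a two-component
# Weibull mixture, the kind of model the paper argues captures
# multiple failure modes. Data and starting values are invented.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
ttf = np.concatenate([                       # synthetic times to failure
    stats.weibull_min.rvs(1.2, scale=200, size=60, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=900, size=40, random_state=rng),
])

def neg_log_lik(p):
    w, c1, s1, c2, s2 = p                    # weight, shapes, scales
    if not (0 < w < 1) or min(c1, s1, c2, s2) <= 0:
        return np.inf
    pdf = (w * stats.weibull_min.pdf(ttf, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(ttf, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

res = optimize.minimize(neg_log_lik, x0=[0.5, 1.0, 300.0, 3.0, 800.0],
                        method="Nelder-Mead")
print("w, c1, s1, c2, s2 =", res.x.round(2))
```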

  5. Species distribution model transferability and model grain size - finer may not always be better.

    Science.gov (United States)

    Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin

    2018-05-08

    Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. The grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. The choice of grain size is often dependent on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K. and tested model performance and transferability by varying grain size (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and selection of variables. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size. However, successful model transferability may require optimization of model grain size.

  6. Distributed Photovoltaics in the Swedish Energy System. Model Development and Simulations

    International Nuclear Information System (INIS)

    Widen, Joakim

    2009-06-01

    Application of photovoltaics (PV) is increasing worldwide, mainly due to extensive subsidy schemes for introducing CO2-free power generation. A majority of newly installed systems are distributed small-scale systems located in distribution grids, often at residential customers. Recent developments suggest that such distributed PV generation (PV-DG) could gain more interest in Sweden in the near future. With prospects of decreasing module prices, an extensive integration could be possible. This licentiate thesis presents the first part of a PhD project with the aim to determine the potential for domestic PV-DG in Sweden. Two aspects are treated in detail in the thesis: (1) the ability of PV to match a local domestic power demand and (2) impacts of extensive integration of PV-DG on power flow in low-voltage (LV) distribution grids. To make realistic studies for high-latitude conditions, there is a need for representative demand and PV generation data. As there is a lack of detailed domestic load data in Sweden, a major part of the work has been devoted to development of a stochastic load model. Interdisciplinary studies of household activities were performed to get insight into how domestic electricity use is embedded in the structure of everyday life. It was found that time-use (TU) data, normally used in the social sciences, can be used to model domestic power demand. Both a conversion model for estimating power demand from empirical TU data and a stochastic Markov-chain model for generating synthetic activity patterns and power demand were developed and extensively validated against measurements. Importantly, a realistic model of domestic lighting demand from occupancy patterns and irradiation data was developed, that preserves the negative correlation between irradiation and lighting demand. The models provide a basis for load matching studies and power-flow simulations, but can be used for other purposes as well. Case studies of individual households showed

  7. Distributed Photovoltaics in the Swedish Energy System. Model Development and Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Widen, Joakim

    2009-06-15

    Application of photovoltaics (PV) is increasing worldwide, mainly due to extensive subsidy schemes for introducing CO2-free power generation. A majority of newly installed systems are distributed small-scale systems located in distribution grids, often at residential customers. Recent developments suggest that such distributed PV generation (PV-DG) could gain more interest in Sweden in the near future. With prospects of decreasing module prices, an extensive integration could be possible. This licentiate thesis presents the first part of a PhD project with the aim to determine the potential for domestic PV-DG in Sweden. Two aspects are treated in detail in the thesis: (1) the ability of PV to match a local domestic power demand and (2) impacts of extensive integration of PV-DG on power flow in low-voltage (LV) distribution grids. To make realistic studies for high-latitude conditions, there is a need for representative demand and PV generation data. As there is a lack of detailed domestic load data in Sweden, a major part of the work has been devoted to development of a stochastic load model. Interdisciplinary studies of household activities were performed to get insight into how domestic electricity use is embedded in the structure of everyday life. It was found that time-use (TU) data, normally used in the social sciences, can be used to model domestic power demand. Both a conversion model for estimating power demand from empirical TU data and a stochastic Markov-chain model for generating synthetic activity patterns and power demand were developed and extensively validated against measurements. Importantly, a realistic model of domestic lighting demand from occupancy patterns and irradiation data was developed, that preserves the negative correlation between irradiation and lighting demand. The models provide a basis for load matching studies and power-flow simulations, but can be used for other purposes as well. Case studies of individual households

  8. Normal bone and soft tissue distribution of fluorine-18-sodium fluoride and artifacts on 18F-NaF PET/CT bone scan: a pictorial review.

    Science.gov (United States)

    Sarikaya, Ismet; Elgazzar, Abdelhamid H; Sarikaya, Ali; Alfeeli, Mahmoud

    2017-10-01

    Fluorine-18 sodium fluoride (18F-NaF) PET/CT is a relatively new and high-resolution bone imaging modality. Since the use of 18F-NaF PET/CT has been increasing, it is important to accurately assess the images and be aware of normal distribution and major artifacts. In this pictorial review article, we will describe the normal uptake patterns of 18F-NaF in the bone tissues, particularly in complex structures, as well as its physiologic soft tissue distribution and certain artifacts seen on 18F-NaF PET/CT images.

  9. Normal and abnormal distribution of the adrenomedullary imaging agent m-[I-131]iodobenzylguanidine (I-131 MIBG) in man; evaluation by scintigraphy

    International Nuclear Information System (INIS)

    Nakajo, M.; Shapiro, B.; Copp, J.; Kalff, V.; Gross, M.D.; Sisson, J.C.; Beierwaltes, W.H.

    1983-01-01

    The scintigraphic distribution of m-[131I]iodobenzylguanidine (I-131 MIBG), an adrenal medullary imaging agent, was studied to determine the patterns of uptake of this agent in man. The normal distribution of I-131 MIBG includes clear portrayal of the salivary glands, liver, spleen, and urinary bladder. The heart, middle and lower lung zones, and colon were less frequently or less clearly seen. The upper lung zones and kidneys were seldom visualized. The thyroid appeared only in cases of inadequate thyroidal blockade. The normal adrenal glands were seldom seen and faintly imaged in 2% at 24 h after injection and in 16% at 48 h, in patients shown not to have pheochromocytomas, whereas intra-adrenal, extra-adrenal, and malignant pheochromocytomas usually appeared as intense focal areas of I-131 MIBG uptake at 24 through 72 h

  10. Power laws in citation distributions: evidence from Scopus.

    Science.gov (United States)

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
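
    A sketch of this style of analysis, assuming the third-party Python `powerlaw` package (which implements Clauset-style fitting and the likelihood-ratio comparison used here); the citation counts are synthetic, not the Scopus data.

```python
# Hedged sketch using the third-party `powerlaw` package
# (pip install powerlaw): pure power law vs. log-normal via a
# likelihood-ratio test on a synthetic citation-count sample.
import numpy as np
import powerlaw

rng = np.random.default_rng(4)
citations = np.round(rng.lognormal(mean=2.0, sigma=1.2, size=5000)).astype(int) + 1

fit = powerlaw.Fit(citations, discrete=True)        # estimates xmin and alpha
R, p = fit.distribution_compare("power_law", "lognormal",
                                normalized_ratio=True)
print(f"alpha = {fit.power_law.alpha:.2f}, xmin = {fit.power_law.xmin}")
print(f"log-likelihood ratio R = {R:.2f} (R<0 favours log-normal), p = {p:.3f}")
```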

  11. Charge distribution in a two-chain dual model

    International Nuclear Information System (INIS)

    Fialkowski, K.; Kotanski, A.

    1983-01-01

    Charge distributions in multiple production processes are analysed using the dual chain model. A parametrisation of charge distributions for single dual chains, based on νp and anti-νp data, is proposed. The rapidity charge distributions are then calculated for pp and anti-pp collisions and compared with the previous calculations based on the recursive cascade model of single chains. The results differ at the SPS collider energies and in the energy dependence of the net forward charge, supplying useful tests of the dual chain model. (orig.)

  12. A coupled classification - evolutionary optimization model for contamination event detection in water distribution systems.

    Science.gov (United States)

    Oliker, Nurit; Ostfeld, Avi

    2014-03-15

    This study describes a decision support system that raises alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, followed by a sequence analysis for the classification of contamination events. The contribution of this study is an improvement in contamination event detection ability and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analysis conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular measurements than event-time measurements), and incorporating the time factor through a time-decay coefficient, ascribing higher importance to recent observations when classifying a time-step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is prominent in its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability compared to previous modeling attempts of contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
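
    A minimal sketch of the two weighting ideas, using scikit-learn's SVC as a stand-in for the paper's weighted SVM: a class weight offsets the rarity of event samples, and per-sample time-decay weights favour recent observations. Data, decay rate, and weights are invented.

```python
# Minimal sketch: class weights for the rare "event" class plus
# per-sample time-decay weights, as described above. All values invented.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 4))                    # 4 water-quality parameters
y = (rng.uniform(size=n) < 0.03).astype(int)   # rare "event" class
X[y == 1] += 1.5                               # events shift the parameters

age = np.arange(n)[::-1]                       # 0 = most recent sample
time_decay = np.exp(-age / 300.0)              # newer samples weigh more

clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 10.0})
clf.fit(X, y, sample_weight=time_decay)
print("training accuracy:", clf.score(X, y))
```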

  13. Therapeutic analysis of high-dose-rate 192Ir vaginal cuff brachytherapy for endometrial cancer using a cylindrical target volume model and varied cancer cell distributions

    International Nuclear Information System (INIS)

    Zhang, Hualin; Donnelly, Eric D.; Strauss, Jonathan B.; Qi, Yujin

    2016-01-01

    Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into the MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform-dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice, assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and the prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, types of cancer cell distributions, and the sensitivity of normal tissue. With a half-Gaussian distribution of cancer cells, most densely populated at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs were in the range from (5.0)^4 to (13.4)^4 for

  14. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models.

    Science.gov (United States)

    Hazarika, Subhashis; Biswas, Ayan; Shen, Han-Wei

    2018-01-01

    Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in the visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, histogram, KDE, etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criteria to help users in the task of univariate model selection.
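
    A hedged sketch of the copula idea with scipy: a Gaussian copula supplies the correlation structure, while each location keeps its own marginal (here one Gaussian, one gamma). The correlation and marginal parameters are invented.

```python
# Hedged sketch: Gaussian copula coupling two different marginals,
# the separation of dependency and marginals described above.
import numpy as np
from scipy import stats

rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
rng = np.random.default_rng(6)

z = rng.multivariate_normal(mean=[0, 0], cov=cov, size=10000)
u = stats.norm.cdf(z)                        # correlated uniforms (the copula)

x0 = stats.norm.ppf(u[:, 0], loc=5.0, scale=2.0)   # marginal 1: Gaussian
x1 = stats.gamma.ppf(u[:, 1], a=2.0, scale=1.5)    # marginal 2: gamma

print("sample rank correlation:",
      np.round(stats.spearmanr(x0, x1)[0], 3))
```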

  15. The κ-generalized distribution: A new descriptive model for the size distribution of incomes

    Science.gov (United States)

    Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.

    2008-05-01

    This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement, such as the Lorenz curve and the Gini coefficient, are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.
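
    As a pointer to the functional form (a hedged reconstruction from the Kaniadakis literature, not copied from this paper), the model is built on the κ-exponential, which replaces the ordinary exponential in a Weibull-type survival function:

```latex
% Hedged sketch: the kappa-exponential and the kappa-generalized
% survival function, as usually defined in the Kaniadakis literature.
\[
  \exp_\kappa(x) = \left(\sqrt{1+\kappa^2 x^2} + \kappa x\right)^{1/\kappa},
  \qquad 0 \le \kappa < 1,
\]
\[
  S(x) = 1 - F(x) = \exp_\kappa\!\left(-\beta x^{\alpha}\right), \qquad x > 0.
\]
% For kappa -> 0 this reduces to the Weibull survival function
% exp(-beta x^alpha); for large x it develops a Pareto-like power-law
% tail, which is why the model can fit both the bulk and the upper
% tail of income data.
```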

  16. Competition between clonal plasma cells and normal cells for potentially overlapping bone marrow niches is associated with a progressively altered cellular distribution in MGUS vs myeloma.

    Science.gov (United States)

    Paiva, B; Pérez-Andrés, M; Vídriales, M-B; Almeida, J; de las Heras, N; Mateos, M-V; López-Corral, L; Gutiérrez, N C; Blanco, J; Oriol, A; Hernández, M T; de Arriba, F; de Coca, A G; Terol, M-J; de la Rubia, J; González, Y; Martín, A; Sureda, A; Schmidt-Hieber, M; Schmitz, A; Johnsen, H E; Lahuerta, J-J; Bladé, J; San-Miguel, J F; Orfao, A

    2011-04-01

    Disappearance of normal bone marrow (BM) plasma cells (PC) predicts malignant transformation of monoclonal gammopathy of undetermined significance (MGUS) and smoldering myeloma (SMM) into symptomatic multiple myeloma (MM). The homing, behavior and survival of normal PC, but also CD34(+) hematopoietic stem cells (HSC), B-cell precursors, and clonal PC largely depends on their interaction with stromal cell-derived factor-1 (SDF-1) expressing, potentially overlapping BM stromal cell niches. Here, we investigate the distribution, phenotypic characteristics and competitive migration capacity of these cell populations in patients with MGUS, SMM and MM vs healthy adults (HA) aged >60 years. Our results show that BM and peripheral blood (PB) clonal PC progressively increase from MGUS to MM, the latter showing a slightly more immature immunophenotype. Of note, such increased number of clonal PC is associated with progressive depletion of normal PC, B-cell precursors and CD34(+) HSC in the BM, also with a parallel increase in PB. In an ex vivo model, normal PC, B-cell precursors and CD34(+) HSC from MGUS and SMM, but not MM patients, were able to abrogate the migration of clonal PC into serial concentrations of SDF-1. Overall, our results show that progressive competition and replacement of normal BM cells by clonal PC is associated with more advanced disease in patients with MGUS, SMM and MM.

  17. Studying the Impact of Distributed Solar PV on Power Systems using Integrated Transmission and Distribution Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Himanshu [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krad, Ibrahim [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-24

    This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment, in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.

  18. Designing the Distributed Model Integration Framework – DMIF

    NARCIS (Netherlands)

    Belete, Getachew F.; Voinov, Alexey; Morales, Javier

    2017-01-01

    We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF) that links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of

  19. The Analysis of Bankruptcy Risk Using the Normal Distribution Gauss-Laplace in Case of a Company is the Most Modern Romanian Sea-River Port on the Danube

    Directory of Open Access Journals (Sweden)

    Rodica Pripoaie

    2015-08-01

    This work presents an application of the normal (Gauss-Laplace) distribution to a company that is the most modern Romanian sea-river port on the Danube, a specialized service provider with a handling capacity of approx. 20,000,000 tons/year. The Gauss-Laplace normal distribution is the best known and most widely used probability distribution, because it captures the evolution of economic and financial phenomena particularly well. Around the average, which has the greatest frequency, gravitate values more or less distant from the average, but with the same standard deviation. It should be noted that break-even analysis, although used in forecasting calculations, ignores the risk of operational decisions (deviations between forecasts and achievements), which may, in certain circumstances, strongly influence the activity of the company. This risk can be taken into account by carefully studying whether the evolution of turnover follows a law of probability. When there is no information on the law of probability of turnover and no reason to expect one outcome to appear more often than another, then, following Laplace's principle, we consider these outcomes uniformly distributed and therefore model turnover with a normal distribution.

  20. Distribution of intravenously administered acetylcholinesterase inhibitor and acetylcholinesterase activity in the adrenal gland: 11C-donepezil PET study in the normal rat.

    Science.gov (United States)

    Watabe, Tadashi; Naka, Sadahiro; Ikeda, Hayato; Horitsugi, Genki; Kanai, Yasukazu; Isohashi, Kayako; Ishibashi, Mana; Kato, Hiroki; Shimosegawa, Eku; Watabe, Hiroshi; Hatazawa, Jun

    2014-01-01

    Acetylcholinesterase (AChE) inhibitors have been used for patients with Alzheimer's disease. However, its pharmacokinetics in non-target organs other than the brain has not been clarified yet. The purpose of this study was to evaluate the relationship between the whole-body distribution of intravenously administered (11)C-Donepezil (DNP) and the AChE activity in the normal rat, with special focus on the adrenal glands. The distribution of (11)C-DNP was investigated by PET/CT in 6 normal male Wistar rats (8 weeks old, body weight  = 220 ± 8.9 g). A 30-min dynamic scan was started simultaneously with an intravenous bolus injection of (11)C-DNP (45.0 ± 10.7 MBq). The whole-body distribution of the (11)C-DNP PET was evaluated based on the Vt (total distribution volume) by Logan-plot analysis. A fluorometric assay was performed to quantify the AChE activity in homogenized tissue solutions of the major organs. The PET analysis using Vt showed that the adrenal glands had the 2nd highest level of (11)C-DNP in the body (following the liver) (13.33 ± 1.08 and 19.43 ± 1.29 ml/cm(3), respectively), indicating that the distribution of (11)C-DNP was the highest in the adrenal glands, except for that in the excretory organs. The AChE activity was the third highest in the adrenal glands (following the small intestine and the stomach) (24.9 ± 1.6, 83.1 ± 3.0, and 38.5 ± 8.1 mU/mg, respectively), indicating high activity of AChE in the adrenal glands. We demonstrated the whole-body distribution of (11)C-DNP by PET and the AChE activity in the major organs by fluorometric assay in the normal rat. High accumulation of (11)C-DNP was observed in the adrenal glands, which suggested the risk of enhanced cholinergic synaptic transmission by the use of AChE inhibitors.
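
    A minimal sketch of Logan graphical analysis, the method used here to obtain Vt: after an equilibration time t*, the plot of the normalized tissue integral against the normalized plasma integral becomes linear with slope Vt. The input function and rate constants below are synthetic, not the rat data.

```python
# Minimal sketch of Logan graphical analysis for total distribution
# volume (Vt). A one-tissue compartment is simulated by Euler
# integration; the Logan plot's late-time slope recovers Vt = K1/k2.
import numpy as np

dt = 0.05
t = np.arange(0, 60, dt)                              # minutes
c_p = 50 * np.exp(-0.4 * t) + 5 * np.exp(-0.02 * t)   # plasma input (a.u.)

K1, k2 = 0.6, 0.05                                    # hypothetical rates; Vt = 12
c_t = np.zeros_like(t)
for i in range(1, len(t)):                            # dCt/dt = K1*Cp - k2*Ct
    c_t[i] = c_t[i - 1] + dt * (K1 * c_p[i - 1] - k2 * c_t[i - 1])

int_ct = np.cumsum(c_t) * dt
int_cp = np.cumsum(c_p) * dt
mask = t > 20                                         # linear phase after t* = 20 min
x = int_cp[mask] / c_t[mask]
y = int_ct[mask] / c_t[mask]
vt_est, intercept = np.polyfit(x, y, 1)
print(f"Logan Vt estimate: {vt_est:.2f} (true K1/k2 = {K1 / k2:.1f})")
```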

  1. Distribution of intravenously administered acetylcholinesterase inhibitor and acetylcholinesterase activity in the adrenal gland: 11C-donepezil PET study in the normal rat.

    Directory of Open Access Journals (Sweden)

    Tadashi Watabe

    PURPOSE: Acetylcholinesterase (AChE) inhibitors have been used for patients with Alzheimer's disease. However, its pharmacokinetics in non-target organs other than the brain has not been clarified yet. The purpose of this study was to evaluate the relationship between the whole-body distribution of intravenously administered (11)C-Donepezil (DNP) and the AChE activity in the normal rat, with special focus on the adrenal glands. METHODS: The distribution of (11)C-DNP was investigated by PET/CT in 6 normal male Wistar rats (8 weeks old, body weight = 220 ± 8.9 g). A 30-min dynamic scan was started simultaneously with an intravenous bolus injection of (11)C-DNP (45.0 ± 10.7 MBq). The whole-body distribution of the (11)C-DNP PET was evaluated based on the Vt (total distribution volume) by Logan-plot analysis. A fluorometric assay was performed to quantify the AChE activity in homogenized tissue solutions of the major organs. RESULTS: The PET analysis using Vt showed that the adrenal glands had the 2nd highest level of (11)C-DNP in the body (following the liver) (13.33 ± 1.08 and 19.43 ± 1.29 ml/cm(3), respectively), indicating that the distribution of (11)C-DNP was the highest in the adrenal glands, except for that in the excretory organs. The AChE activity was the third highest in the adrenal glands (following the small intestine and the stomach) (24.9 ± 1.6, 83.1 ± 3.0, and 38.5 ± 8.1 mU/mg, respectively), indicating high activity of AChE in the adrenal glands. CONCLUSIONS: We demonstrated the whole-body distribution of (11)C-DNP by PET and the AChE activity in the major organs by fluorometric assay in the normal rat. High accumulation of (11)C-DNP was observed in the adrenal glands, which suggested the risk of enhanced cholinergic synaptic transmission by the use of AChE inhibitors.

  2. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  3. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  4. Spin fluctuations in liquid 3He: a strong-coupling calculation of Tc and the normal-state distribution function

    International Nuclear Information System (INIS)

    Fay, D.; Layzer, A.

    1975-01-01

    The Berk-Schrieffer method of strong-coupling superconductivity for nearly ferromagnetic systems is generalized to arbitrary L-state pairing and realistic (hard-core) potentials. Application to 3He yields a P-state transition but very low values for Tc and an unsatisfactory normal-state momentum distribution

  5. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity are dependent on rainfall occurrence on the previous day and current day. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
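
    A minimal sketch of the precipitation component described above: a first-order Markov chain decides wet/dry days and a gamma law draws amounts on wet days. The transition probabilities and gamma parameters are invented, not the Geneva, New York estimates.

```python
# Hedged sketch: first-order Markov chain for rain occurrence plus a
# gamma distribution for rain amounts on wet days. Values are invented.
import numpy as np

rng = np.random.default_rng(7)
p_wet_after_dry, p_wet_after_wet = 0.25, 0.60   # Markov transition probs
shape, scale = 0.8, 8.0                         # gamma params for rain (mm)

wet = False
rain = []
for _ in range(365):
    p = p_wet_after_wet if wet else p_wet_after_dry
    wet = rng.uniform() < p                     # today's wet/dry state
    rain.append(rng.gamma(shape, scale) if wet else 0.0)

rain = np.array(rain)
print(f"wet days: {(rain > 0).sum()}, annual total: {rain.sum():.0f} mm")
```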

  6. Normal and Abnormal Scenario Modeling with GoldSim for Radioactive Waste Disposal System

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae

    2010-08-01

    A modeling study and the development of a total system performance assessment (TSPA) template program, by which the safety and performance of a radioactive waste repository under normal and/or abnormal nuclide release cases can be assessed, have been carried out using the commercial development tool GoldSim. Scenarios associated with the various FEPs involved in the performance of the proposed repository, in view of nuclide transport and transfer in both the geosphere and biosphere, have also been developed. Selected normal and abnormal scenarios that could alter the groundwater flow scheme and thus nuclide transport are modeled with the template program. To this end, in-depth system models for the normal and abnormal well and earthquake scenarios, described conceptually and practically and ready for implementation in the GoldSim TSPA template program, are introduced with conceptual schemes for each repository system. Illustrative evaluations with currently available data are also shown

  7. Modelling and validation of robust partial thawing of frozen convenience foods during distribution in the cold chain

    DEFF Research Database (Denmark)

    Adler-Nissen, Jens; Zammit, Gine Ørnholt

    2011-01-01

    with small blocks of a frozen model food (23 pct. Tylose® gel) and equipped with temperature loggers were distributed by trucks operating in the cold chain. In addition, controlled storage and temperature abuse experiments were conducted. To predict the product temperature–time relationship we developed a new...... frozen even after two days or more of distribution at +5°C, and that the temperatures inside the product and in the middle of the box were quite stable against the normal oscillations of the ambient temperature in the cold chain. The product temperature was also robust against temperature abuse......In collaboration with two commercial distributors we have tested a new concept for distribution, where convenience products for the food service industry are prepared, frozen and packed in cardboard boxes, but distributed in the chill chain at +5°C instead of in the frost chain. This will lead

  8. Working toward integrated models of alpine plant distribution.

    Science.gov (United States)

    Carlson, Bradley Z; Randin, Christophe F; Boulangeat, Isabelle; Lavergne, Sébastien; Thuiller, Wilfried; Choler, Philippe

    2013-10-01

    Species distribution models (SDMs) have been frequently employed to forecast the response of alpine plants to global changes. Efforts to model alpine plant distribution have thus far been primarily based on a correlative approach, in which ecological processes are implicitly addressed through a statistical relationship between observed species occurrences and environmental predictors. Recent evidence, however, highlights the shortcomings of correlative SDMs, especially in alpine landscapes where plant species tend to be decoupled from atmospheric conditions in micro-topographic habitats and are particularly exposed to geomorphic disturbances. While alpine plants respond to the same limiting factors as plants found at lower elevations, alpine environments impose a particular set of scale-dependent and hierarchical drivers that shape the realized niche of species and that require explicit consideration in a modelling context. Several recent studies in the European Alps have successfully integrated both correlative and process-based elements into distribution models of alpine plants, but for the time being a single integrative modelling framework that includes all key drivers remains elusive. As a first step in working toward a comprehensive integrated model applicable to alpine plant communities, we propose a conceptual framework that structures the primary mechanisms affecting alpine plant distributions. We group processes into four categories, including multi-scalar abiotic drivers, gradient dependent species interactions, dispersal and spatial-temporal plant responses to disturbance. Finally, we propose a methodological framework aimed at developing an integrated model to better predict alpine plant distribution.

  9. The stochastic distribution of available coefficient of friction on quarry tiles for human locomotion.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2012-01-01

    The available coefficient of friction (ACOF) for human locomotion is the maximum coefficient of friction that can be supported without a slip at the shoe and floor interface. A statistical model was introduced to estimate the probability of slip by comparing the ACOF with the required coefficient of friction, assuming that both coefficients have stochastic distributions. This paper presents an investigation of the stochastic distributions of the ACOF of quarry tiles under dry, water and glycerol conditions. One hundred friction measurements were performed on a walkway under the surface conditions of dry, water and 45% glycerol concentration. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF appears to fit the normal and log-normal distributions better than the Weibull distribution for the water and glycerol conditions. However, no match was found between the distribution of ACOF under the dry condition and any of the three continuous distributions evaluated. Based on limited data, a normal distribution might be more appropriate due to its simplicity, practicality and familiarity among the three distributions evaluated.
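
    A hedged sketch of the slip-probability idea behind the paper's statistical model: with stochastic available (ACOF) and required (RCOF) coefficients of friction, the probability of slip per step can be estimated by Monte Carlo as P(ACOF < RCOF). All distribution parameters are invented for illustration.

```python
# Illustrative Monte Carlo estimate of P(slip) = P(ACOF < RCOF) when
# both coefficients of friction are stochastic. Parameters invented.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
acof = rng.lognormal(mean=np.log(0.35), sigma=0.15, size=n)  # wet-tile ACOF
rcof = rng.normal(loc=0.22, scale=0.05, size=n)              # gait demand

p_slip = np.mean(acof < rcof)
print(f"estimated probability of slip per step: {p_slip:.4f}")
```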

  10. Fitting the Statistical Distribution for Daily Rainfall in Ibadan, Based ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    This paper presents several types of statistical distributions to describe rainfall distribution in Ibadan metropolis over a period of 30 years. The exponential, gamma, normal and Poisson distributions are compared to identify the optimal model for daily rainfall amount based on data recorded at rain ...

  11. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  12. Electric Power Distribution System Model Simplification Using Segment Substitution

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2018-05-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  13. Electric Power Distribution System Model Simplification Using Segment Substitution

    International Nuclear Information System (INIS)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2017-01-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  14. Distributing the Corporate Income Tax: Revised U.S. Treasury Methodology

    OpenAIRE

    Cronin, Julie Anne; Lin, Emily Y.; Power, Laura; Cooper, Michael

    2013-01-01

    The purpose of this analysis is to improve the U.S. Department of the Treasury’s distributional model and methodology by defining new model parameters. We compute the percentage of capital income attributable to normal versus supernormal return, the percentage of normal return attributable to the "cash flow tax" portion of the tax that does not impose a tax burden, and the portion of the burdensome tax on the normal return to capital borne by capital income versus labor income. In summary, 82...

  15. Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo

    Science.gov (United States)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2013-04-01

    We present preliminary results of a population statistics study of normal pulsars (NP) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from the comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.

  16. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    Science.gov (United States)

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each line of response (LOR) based on the measured, expected, and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time.

  17. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    Science.gov (United States)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as hydraulic fracturing and the understanding of induced seismicity, among others. Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial slip asperities along a nominally flat frictional interface. The shear behavior of our partial slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose locations and sizes were directly measured using a pressure sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities that were randomly distributed in space. Synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. Traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to the theoretical estimates for multi-contact interfaces with non-interacting asperities.

  18. Overhead distribution line models for harmonics studies

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, M.; Xu, W.; Dommel, H.W.

    1994-01-01

    Carson's formulae and Maxwell's potential coefficients are used to calculate the per-unit-length series impedances and shunt capacitances of overhead lines. The per-unit-length values are then used to build the nominal pi-circuit and equivalent pi-circuit models at the harmonic frequencies. This paper studies the accuracy of these models for representing overhead distribution lines in steady-state harmonic solutions at frequencies up to 5 kHz. The models are verified with a field test on a 25 kV distribution line, and the sensitivity of the models to ground resistivity, skin effect, and multiple grounding is reported.
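
    The nominal-pi versus equivalent-pi comparison can be sketched as follows; the per-kilometre parameters and line length are assumed round numbers, not the 25 kV field-test data, and skin effect is ignored.

      # Sketch of nominal-pi vs. equivalent-pi line models at harmonic h.
      import numpy as np

      R, X1, B1 = 0.2, 0.4, 3e-6   # ohm/km, ohm/km, S/km at fundamental (assumed)
      LENGTH = 40.0                # km

      def pi_models(h):
          """Return (Z_series, Y_shunt_half) for nominal and equivalent pi at harmonic h."""
          z = R + 1j * h * X1            # series impedance per km (skin effect ignored)
          y = 1j * h * B1                # shunt admittance per km
          # Nominal pi: lumped total impedance / admittance
          z_nom, y_nom = z * LENGTH, y * LENGTH / 2
          # Equivalent pi: exact long-line correction
          gamma_l = np.sqrt(z * y) * LENGTH
          zc = np.sqrt(z / y)
          z_eq = zc * np.sinh(gamma_l)
          y_eq = np.tanh(gamma_l / 2) / zc
          return (z_nom, y_nom), (z_eq, y_eq)

      for h in (1, 25, 50):  # fundamental, 1.25 kHz, 2.5 kHz on a 50 Hz system
          (zn, _), (ze, _) = pi_models(h)
          print(f"h={h:3d}: |Z| nominal={abs(zn):8.2f}  equivalent={abs(ze):8.2f} ohm")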

  19. Modelling refrigerant distribution in minichannel evaporators

    DEFF Research Database (Denmark)

    Brix, Wiebke

    This thesis is concerned with numerical modelling of flow distribution in a minichannel evaporator for air-conditioning. The study investigates the impact of non-uniform airflow and non-uniform distribution of the liquid and vapour phases in the inlet manifold on the refrigerant mass flow distribution of the liquid and vapour in the inlet manifold. Combining non-uniform airflow and non-uniform liquid and vapour distribution shows that a non-uniform airflow distribution can to some degree be compensated by a suitable liquid and vapour distribution. Controlling the superheat out of the individual channels to be equal results in a cooling capacity very close to the optimum. A sensitivity study considering parameter changes shows that the course of the pressure gradient in the channel is significant, considering the magnitude of the capacity reductions due to non-uniform liquid and vapour distribution and non-uniform airflow.

  20. Rationalisation of distribution functions for models of nanoparticle magnetism

    International Nuclear Information System (INIS)

    El-Hilo, M.; Chantrell, R.W.

    2012-01-01

    A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from the misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
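
    The transformation described is consistent with the standard lognormal weighting identity, sketched below in the notation of a number-weighted density f_N(D) with median D_m and width σ; the volume-to-number direction simply reverses the sign of the shift.

      f_N(D) = \frac{1}{D \sigma \sqrt{2\pi}}
               \exp\!\left[-\frac{\ln^2(D/D_m)}{2\sigma^2}\right],
      \qquad
      f_V(D) \propto D^3 f_N(D)
      \;\Longrightarrow\;
      D_m^{(V)} = D_m\, e^{3\sigma^2}, \quad \sigma^{(V)} = \sigma .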

  1. Pell-Sim - dynamic model for forecasting storage and distribution of wood pellets

    International Nuclear Information System (INIS)

    Vinterbaeck, Johan

    2004-01-01

    This study examined the system of wood pellet distribution to residential consumers. The distribution cost for a residential pellet consumer typically represents 30% of the per tonne price and of this share, the inventory cost could be more than 50%. Important administrative activities in physical distribution are forecasting demand and inventory control. One way to improve distribution systems would be to optimise inventory management for pellet distributors. The aim of this study was to propose improvements in pellet distribution management by using tools from systems analysis. The ultimate goal was to present an optimised storage level curve adapted to the mid-Swedish community of Avesta. An internal model for optimising inventory management, Pell-Sim, was constructed, composed of two integrated parts: a simulation unit to forecast residential wood pellet demand and a spreadsheet unit with inventory-related functions. Daily outdoor temperatures basically regulated the simulation unit. An order point system was chosen for reordering. The residential customers of a distribution company were divided into two groups, delivery and collecting customers, which were statistically treated separately. When collecting and delivery customer input inventories were normally distributed in the intervals from 0 to 3500 kg and 6500 kg, respectively, their annual means of total delivery were both about 7000 kg/customer, which was the desired and empirical level. The expected pellet customer orders were negatively correlated to mean daily temperatures, lagging behind about 1 month. Sensitivity analyses showed that monthly results for ordered quantity and total cost were particularly sensitive to ordering and carrying costs. The Pell-Sim programme can easily be adapted for distributors in other geographical regions. (Author)
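
    A minimal sketch of the order-point logic at the core of such a model, with temperature-driven daily demand; all quantities (order point, order quantity, lead time, demand model) are placeholders rather than the Avesta case data.

      # Toy order-point replenishment simulation with climate-driven demand.
      import numpy as np

      rng = np.random.default_rng(1)
      ORDER_POINT, ORDER_QTY = 20.0, 60.0     # tonnes (assumed)
      LEAD_TIME = 3                           # days (assumed)

      inventory, on_order = 50.0, []          # on_order: list of days-to-arrival
      for day in range(365):
          temp = 8.0 - 15.0 * np.cos(2 * np.pi * day / 365)     # synthetic climate
          demand = max(0.0, rng.normal(2.0 - 0.08 * temp, 0.4)) # colder -> more pellets
          inventory -= demand
          on_order = [d - 1 for d in on_order]
          inventory += on_order.count(0) * ORDER_QTY            # deliveries arrive
          on_order = [d for d in on_order if d > 0]
          # Reorder when the inventory position falls below the order point
          if inventory + ORDER_QTY * len(on_order) < ORDER_POINT:
              on_order.append(LEAD_TIME)

      print(f"end-of-year inventory: {inventory:.1f} t")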

  2. Pell-Sim - dynamic model for forecasting storage and distribution of wood pellets

    Energy Technology Data Exchange (ETDEWEB)

    Vinterbaeck, Johan [Swedish Univ. of Agricultural Sciences, Dept. of Forest Management and Products, Uppsala (Sweden)

    2004-12-01

    This study examined the system of wood pellet distribution to residential consumers. The distribution cost for a residential pellet consumer typically represents 30% of the per tonne price and of this share, the inventory cost could be more than 50%. Important administrative activities in physical distribution are forecasting demand and inventory control. One way to improve distribution systems would be to optimise inventory management for pellet distributors. The aim of this study was to propose improvements in pellet distribution management by using tools from systems analysis. The ultimate goal was to present an optimised storage level curve adapted to the mid-Swedish community of Avesta. An internal model for optimising inventory management, Pell-Sim, was constructed, composed of two integrated parts: a simulation unit to forecast residential wood pellet demand and a spreadsheet unit with inventory-related functions. Daily outdoor temperatures basically regulated the simulation unit. An order point system was chosen for reordering. The residential customers of a distribution company were divided into two groups, delivery and collecting customers, which were statistically treated separately. When collecting and delivery customer input inventories were normally distributed in the intervals from 0 to 3500 kg and 6500 kg, respectively, their annual means of total delivery were both about 7000 kg/customer, which was the desired and empirical level. The expected pellet customer orders were negatively correlated to mean daily temperatures, lagging behind about 1 month. Sensitivity analyses showed that monthly results for ordered quantity and total cost were particularly sensitive to ordering and carrying costs. The Pell-Sim programme can easily be adapted for distributors in other geographical regions. (Author)

  3. Unpolarized structure functions and the parton distributions for nucleon in an independent quark model

    Energy Technology Data Exchange (ETDEWEB)

    Barik, N [Dept. of Physics, Utkal Univ., Bhubaneswar (India); Mishra, R N [Dept. of Physics, Dhenkanal College, Dhenkanal (India)

    2001-04-01

    Considering the nucleon as consisting entirely of its valence quarks confined independently in a scalar-vector harmonic potential, unpolarized structure functions F₁(x, μ²) and F₂(x, μ²) are derived in the Bjorken limit under certain simplifying assumptions, from which valence quark distribution functions u_v(x, μ²) and d_v(x, μ²) are appropriately extracted satisfying the normalization constraints. QCD evolution of these input distributions from a model scale of μ² = 0.07 GeV² to a higher scale of Q₀² = 15 GeV² yields xu_v(x, Q₀²) and xd_v(x, Q₀²) in good agreement with experimental data. The gluon and sea-quark distributions G(x, Q₀²) and q_s(x, Q₀²) are dynamically generated in reasonable qualitative agreement with the available data, using the leading-order renormalization group equations with the appropriate valence-quark distributions as input. (author)

  4. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  5. Business Models and Regulation | Distributed Generation Interconnection

    Science.gov (United States)

    The growing role of distributed resources in the electricity system is leading to a shift in business models and regulation for electric utilities.

  6. Tempered stable distributions stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

    This brief is concerned with tempered stable distributions and their associated Levy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.

  7. [Calbindin and parvalbumin distribution in spinal cord of normal and rabies-infected mice].

    Science.gov (United States)

    Monroy-Gómez, Jeison; Torres-Fernández, Orlando

    2013-01-01

    Rabies is a fatal infectious disease of the nervous system; however, knowledge about the pathogenic neural mechanisms in rabies is scarce. In addition, there are few studies of rabies pathology of the spinal cord. OBJECTIVE: To study the distribution of the calcium-binding proteins calbindin and parvalbumin and to assess the effect of rabies virus infection on their expression in the spinal cord of mice. MATERIALS AND METHODS: Mice were inoculated with rabies virus by the intracerebral or intramuscular route. The spinal cord was extracted and cross-sectioned, and the sections were treated by immunohistochemistry with monoclonal antibodies to reveal the presence of the two proteins in normal and rabies-infected mice. We performed qualitative and quantitative analyses of the immunoreactivity of the two proteins. RESULTS: Calbindin and parvalbumin showed differential distribution across the Rexed laminae. Rabies infection produced a decrease in the expression of calbindin; on the contrary, the infection caused an increased expression of parvalbumin. The effect of rabies infection on the expression of the two proteins was similar for both routes of inoculation. CONCLUSIONS: The differential effect of rabies virus infection on the expression of calbindin and parvalbumin in the spinal cord of mice was similar to that previously reported for brain areas. This result suggests uniformity in the response to rabies infection throughout the central nervous system and is an important contribution to the understanding of the pathogenesis of rabies.

  8. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  9. Simulation of reactive nanolaminates using reduced models: II. Normal propagation

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher; Knio, Omar M. [Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States)

    2010-03-15

    Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially-tailored integration scheme, that combines extended-stability, Runge-Kutta-Chebychev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter. (author)

  10. Modelling of tension stiffening for normal and high strength concrete

    DEFF Research Database (Denmark)

    Christiansen, Morten Bo; Nielsen, Mogens Peter

    1998-01-01

    … form the model is extended to apply to biaxial stress fields as well. To determine the biaxial stress field, the theorem of minimum complementary elastic energy is used. The theory has been compared with tests on rods, disks, and beams of both normal and high strength concrete, and very good results…

  11. Probabilistic Load Models for Simulating the Impact of Load Management

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Bak-Jensen, Birgitte; Chen, Zhe

    2009-01-01

    This paper analyzes a distribution system load time series through the autocorrelation coefficient, power spectral density, probabilistic distribution and quantile values. Two probabilistic load models, i.e. the joint-normal model and the autoregressive model of order 12 (AR(12)), are proposed to simulate the impact of load management. The joint-normal model is superior in modeling the tail region of the hourly load distribution and in implementing the change of hourly standard deviation, whereas the AR(12) model requires far fewer parameters and is superior in modeling the autocorrelation. It is concluded that the AR(12) model is favored with limited measurement data and that the joint-normal model may provide better results with a large data set. Both models can be applied in general to model load time series and used in time-sequential simulation of distribution system planning.
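
    A minimal sketch of the AR(12) side of this comparison, assuming a synthetic hourly series in place of measurements: coefficients are fitted by least squares on twelve lagged values and the model is then simulated forward.

      # Hedged AR(12) load-model sketch; the "measured" series is synthetic.
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(24 * 200)
      load = 1.0 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)

      P = 12
      # Design matrix: column k holds lag-(k+1) values aligned with y = load[P:]
      X = np.column_stack([load[P - k - 1 : -k - 1] for k in range(P)])
      y = load[P:]
      A = np.column_stack([np.ones(len(y)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      resid_std = np.std(y - A @ coef)

      # Simulate 48 hours forward from the last 12 observed hours
      sim = list(load[-P:])
      for _ in range(48):
          lags = sim[-1 : -P - 1 : -1]               # most recent lag first
          sim.append(coef[0] + np.dot(coef[1:], lags) + rng.normal(0, resid_std))
      print(f"AR(12) residual std: {resid_std:.4f} pu")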

  12. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
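
    The reweighting idea behind the resulting iteratively reweighted least squares can be sketched for a plain regression with independent t-distributed errors (the AR part and the estimation of the degree of freedom are omitted); the weights w_i = (nu + 1)/(nu + r_i^2/sigma^2) are the standard EM weights for the t-distribution, which downweight outliers.

      # Minimal IRLS sketch for t-distributed regression errors (nu known).
      import numpy as np

      rng = np.random.default_rng(3)
      n, nu = 500, 4.0
      x = np.linspace(0, 1, n)
      A = np.column_stack([np.ones(n), x])             # straight-line regression
      beta_true = np.array([1.0, 2.0])
      y = A @ beta_true + 0.1 * rng.standard_t(nu, n)  # heavy-tailed noise

      beta = np.linalg.lstsq(A, y, rcond=None)[0]      # OLS start
      sigma2 = np.mean((y - A @ beta) ** 2)
      for _ in range(50):
          r = y - A @ beta
          w = (nu + 1.0) / (nu + r**2 / sigma2)        # E-step: expected weights
          sw = np.sqrt(w)
          beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]  # M-step
          sigma2 = np.mean(w * (y - A @ beta) ** 2)    # scale update
      print("robust estimate:", beta)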

  13. Distributed MAP in the SpinJa Model Checker

    Directory of Open Access Journals (Sweden)

    Stefan Vijzelaar

    2011-10-01

    Spin in Java (SpinJa) is an explicit-state model checker for the Promela modelling language, also used by the SPIN model checker. Designed to be extensible and reusable, the implementation of SpinJa follows a layered approach in which each new layer extends the functionality of the previous one. While SpinJa has preliminary support for shared-memory model checking, it did not yet support distributed-memory model checking. This tool paper presents a distributed implementation of a maximal accepting predecessors (MAP) search algorithm on top of SpinJa.

  14. Distribution and ultrastructure of pigment cells in the skins of normal and albino adult turbot, Scophthalmus Maximus

    Institute of Scientific and Technical Information of China (English)

    GUO Huarong; HUANG Bing; QI Fei; ZHANG Shicui

    2007-01-01

    The distribution and ultrastructure of pigment cells in the skins of normal and albino adult turbots were examined with transmission electron microscopy (TEM). Three types of pigment cells, melanophore, iridophore and xanthophore, have been recognized in adult turbot skins. The skin color depends mainly on the amount and distribution of melanophore and iridophore, as xanthophore is quite rare. No pigment cells can be found in the epidermis of the skins. In the pigmented ocular skin of the turbot, melanophore and iridophore are usually co-localized in the dermis. This is quite different from the distribution in larval skin. In albino and white blind skins of adult turbots, however, only the iridophore monolayer still exists, while the melanophore monolayer disappears. This cytological evidence explains why the albino adult turbot, unlike its larvae, could never resume its body color no matter what environmental and nutritional conditions were provided. Endocytosis is quite active in the cellular membrane of the iridophore. This might be related to the formation of reflective platelets and the stability of the iridophore.

  15. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
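
    A rough sketch of how such intervals can be attached to the plot: under normality the probability transform of the i-th order statistic follows a Beta(i, n-i+1) law, whose quantiles give pointwise intervals; the paper's simultaneous intervals require a calibrated (wider) level, which is not reproduced here.

      # Pointwise acceptance intervals for a normal probability plot.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      x = np.sort(rng.normal(10, 2, 50))
      n, alpha = len(x), 0.05
      i = np.arange(1, n + 1)

      # F(X_(i)) ~ Beta(i, n - i + 1) under the null; map its quantiles through
      # the fitted normal (plug-in estimates make this approximate).
      lo_u = stats.beta.ppf(alpha / 2, i, n - i + 1)
      hi_u = stats.beta.ppf(1 - alpha / 2, i, n - i + 1)
      mu_hat, sd_hat = x.mean(), x.std(ddof=1)
      lo = stats.norm.ppf(lo_u, mu_hat, sd_hat)
      hi = stats.norm.ppf(hi_u, mu_hat, sd_hat)

      outside = np.sum((x < lo) | (x > hi))
      print(f"points outside their pointwise intervals: {outside} of {n}")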

  16. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)

  17. TOTAL NUMBER, DISTRIBUTION, AND PHENOTYPE OF CELLS EXPRESSING CHONDROITIN SULPHATE PROTEOGLYCANS IN THE NORMAL HUMAN AMYGDALA

    Science.gov (United States)

    Pantazopoulos, Harry; Murray, Elisabeth A.; Berretta, Sabina

    2009-01-01

    Chondroitin sulphate proteoglycans (CSPGs) are a key structural component of the brain extracellular matrix. They are involved in critical neurodevelopmental functions and are one of the main components of pericellular aggregates known as perineuronal nets. As a step toward investigating their functional and pathophysiological roles in the human amygdala, we assessed the pattern of CSPG expression in the normal human amygdala using wisteria floribunda agglutinin (WFA) lectin-histochemistry. Total numbers of WFA-labeled elements were measured in the lateral (LN), basal (BN), accessory basal (ABN) and cortical (CO) nuclei of the amygdala from 15 normal adult human subjects. For interspecies qualitative comparison, we also investigated the pattern of WFA labeling in the amygdala of naïve rats (n=32) and rhesus monkeys (Macaca mulatta; n=6). In human amygdala, WFA lectin-histochemistry resulted in labeling of perineuronal nets and cells with clear glial morphology, while neurons did not show WFA-labeling. Total numbers of WFA-labeled glial cells showed high interindividual variability. These cells aggregated in clusters with a consistent between-subjects spatial distribution. In a subset of human subjects (n=5), dual color fluorescence using an antibody raised against glial fibrillary acidic protein (GFAP) and WFA showed that the majority (93.7%) of WFA-labeled glial cells correspond to astrocytes. In rat and monkey amygdala, WFA histochemistry labeled perineuronal nets, but not glial cells. These results suggest that astrocytes are the main cell type expressing CSPGs in the adult human amygdala. Their highly segregated distribution pattern suggests that these cells serve specialized functions within human amygdalar nuclei. PMID:18374308

  18. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Science.gov (United States)

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

    Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km², we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential…

  19. Orbital angular momentum parton distributions in quark models

    International Nuclear Information System (INIS)

    Scopetta, S.; Vento, V.

    2000-01-01

    At the low-energy, hadronic scale we calculate orbital angular momentum (OAM) twist-two parton distributions for the relativistic MIT bag model and for nonrelativistic quark models. We reach the scale of the data by leading-order evolution in perturbative QCD. We confirm that the contribution of quark and gluon OAM to the nucleon spin grows with Q², and it can be relevant at the experimental scale, even if it is negligible at the hadronic scale, irrespective of the model used. The sign and shape of the quark OAM distribution at high Q² may depend strongly on the relative size of the OAM and spin distributions at the hadronic scale. Sizeable quark OAM distributions at the hadronic scale, as proposed by several authors, can produce the dominant contribution to the nucleon spin at high Q². (author)

  20. Temporal Statistical Analysis of Degree Distributions in an Undirected Landline Phone Call Network Graph Series

    Directory of Open Access Journals (Sweden)

    Orgeta Gjermëni

    2017-10-01

    This article aims to provide new results about the intraday degree sequence distribution considering phone call network graph evolution in time. More specifically, it tackles the following problem: given a large amount of landline phone call data records, what is the best way to summarize the distinct number of calling partners per client per day? In order to answer this question, a series of undirected phone call network graphs is constructed based on data from a local telecommunication source in Albania. All network graphs of the series are simplified. Further, a longitudinal temporal study of the degree distributions is made on this series of network graphs. Power-law and log-normal distribution fittings on the degree sequence are compared on each of the network graphs of the series. The maximum likelihood method is used to estimate the parameters of the distributions, and a Kolmogorov–Smirnov test associated with a p-value is used to define the plausible models. A direct distribution comparison is made through a Vuong test in the case that both distributions are plausible. Another goal was to describe the shape of the parameters' distributions. A Shapiro-Wilk test is used to test the normality of the data, and measures of shape are used to define the distributions' shape. Study findings suggest that the log-normal distribution better models the intraday degree sequence data of the network graphs. It is not possible to say that the distributions of the log-normal parameters are normal.
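
    A hedged sketch of the fitting-and-testing step, with synthetic degrees standing in for the call records: log-normal and continuous power-law models are fitted by maximum likelihood and compared through their Kolmogorov-Smirnov statistics (bootstrap p-values and the Vuong test are omitted).

      # Compare log-normal and power-law fits to a degree sequence.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      degrees = np.ceil(rng.lognormal(1.0, 0.9, 5000)).astype(int)  # synthetic degrees

      # Log-normal fit (MLE on log-degrees)
      logd = np.log(degrees)
      mu, sigma = logd.mean(), logd.std()
      ks_ln = stats.kstest(degrees,
                           lambda v: stats.lognorm.cdf(v, sigma, scale=np.exp(mu)))

      # Continuous power-law fit above x_min (Hill/MLE estimator)
      x_min = 1.0
      alpha = 1.0 + len(degrees) / np.sum(np.log(degrees / x_min))
      ks_pl = stats.kstest(degrees,
                           lambda v: 1.0 - (v / x_min) ** (1.0 - alpha))

      print(f"log-normal KS D = {ks_ln.statistic:.3f}")
      print(f"power-law  KS D = {ks_pl.statistic:.3f} (alpha = {alpha:.2f})")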

  1. The normal and abnormal distribution of the adrenomedullary imaging agent m-[I-131]iodobenzylguanidine (I-131 MIBG) in man: evaluation by scintigraphy

    International Nuclear Information System (INIS)

    Nakajo, M.; Shapiro, B.; Copp, J.; Kalff, V.; Gross, M.D.; Sisson, J.C.; Beierwaltes, W.H.

    1983-01-01

    The scintigraphic distribution of m-[I-131]iodobenzylguanidine (I-131 MIBG), an adrenal medullary imaging agent, was studied to determine the patterns of uptake of this agent in man. The normal distribution of I-131 MIBG includes clear portrayal of the salivary glands, liver, spleen, and urinary bladder. The heart, middle and lower lung zones, and colon were less frequently or less clearly seen. The upper lung zones and kidneys were seldom visualized. The thyroid appeared only in cases of inadequate thyroidal blockade. The "normal" adrenal glands were seldom seen, faintly imaged in 2% of patients at 24 hr after injection and in 16% at 48 hr among patients shown not to have pheochromocytomas, whereas intra-adrenal, extra-adrenal, and malignant pheochromocytomas usually appeared as intense focal areas of I-131 MIBG uptake at 24 through 72 hr.

  2. The Transmuted Geometric-Weibull distribution: Properties, Characterizations and Regression Models

    Directory of Open Access Journals (Sweden)

    Zohdy M Nofal

    2017-06-01

    We propose a new lifetime model called the transmuted geometric-Weibull distribution. Some of its structural properties, including ordinary and incomplete moments, quantile and generating functions, probability weighted moments, Rényi and q-entropies, and order statistics, are derived. The model parameters are estimated by the maximum likelihood method, and the estimator's performance is assessed by means of a Monte Carlo simulation study. A new location-scale regression model is introduced based on the proposed distribution. The new distribution is applied to two real data sets to illustrate its flexibility. Empirical results indicate that the proposed distribution can be an alternative to other lifetime models available in the literature for modeling real data in many areas.

  3. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.

  4. Distributed modelling of shallow landslides triggered by intense rainfall

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Hazard assessment of shallow landslides represents an important aspect of land management in mountainous areas. Among all the methods proposed in the literature, physically based methods are the only ones that explicitly include the dynamic factors that control landslide triggering (rainfall pattern, land use). For this reason, they allow forecasting of both the temporal and the spatial distribution of shallow landslides. Physically based methods for shallow landslides are based on the coupling of the infinite slope stability analysis with hydrological models. Three different grid-based distributed hydrological models are presented in this paper: a steady-state model, a transient "piston-flow" wetting front model, and a transient diffusive model. A comparative test of these models was performed by simulating the landslides that occurred during a rainfall event (27–28 June 1997) which triggered hundreds of shallow landslides within Lecco province (central Southern Alps, Italy). In order to test the potential of a completely distributed model for rainfall-triggered landslides, radar-detected rainfall intensity was used. A new procedure for quantitative evaluation of distributed model performance is presented and applied. The diffusive model proves to be the best of the three for simulating shallow landslide triggering after a rainfall event like the one analysed. Finally, the radar data available for the June 1997 event greatly improved the simulation; in particular, they helped explain the non-uniform distribution of landslides within the study area.
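
    As a rough illustration of the infinite-slope core shared by all three models, the sketch below computes a factor-of-safety grid from an assumed wetness field; all parameter values and the wetness field itself are placeholders, not taken from the paper.

      # Infinite-slope factor of safety on a grid, given a wetness field
      # (saturated-thickness ratio h/z) from some hydrological model.
      import numpy as np

      rng = np.random.default_rng(6)
      slope = np.deg2rad(rng.uniform(20, 40, (50, 50)))  # slope angle grid (rad)
      wetness = rng.uniform(0.0, 1.0, (50, 50))          # h/z, assumed input

      C = 2000.0                          # effective cohesion (Pa)
      PHI = np.deg2rad(33.0)              # friction angle
      GAMMA, GAMMA_W = 18000.0, 9810.0    # soil and water unit weights (N/m^3)
      Z = 1.5                             # soil depth (m)

      # FS = [c' + (gamma*z - gamma_w*h) cos^2(b) tan(phi)] / [gamma*z sin(b) cos(b)]
      num = C + (GAMMA * Z - GAMMA_W * wetness * Z) * np.cos(slope) ** 2 * np.tan(PHI)
      den = GAMMA * Z * np.sin(slope) * np.cos(slope)
      fs = num / den
      print(f"unstable cells (FS < 1): {(fs < 1).sum()} of {fs.size}")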

  5. A Cyber Physical Model Based on a Hybrid System for Flexible Load Control in an Active Distribution Network

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2017-02-01

    To strengthen the integration of the primary and secondary systems, the concept of Cyber Physical Systems (CPS) is introduced to construct a CPS in power systems (Power CPS). The most basic task in building a Power CPS is an integration model that combines both a continuous process and a discrete process. The advanced form of the smart grid, the Active Distribution Network (ADN), is a typical example of a Power CPS. After designing the Power CPS model architecture and its application in the ADN, a Hybrid System based model and control method for the Power CPS is proposed in this paper. As an application example, ADN flexible load is modeled and controlled together with ADN feeder power control by a control strategy that covers both the normal condition and the underpowered condition. In this model and strategy, factors like load power consumption and load functional demand are considered and optimized. To make up for some of the deficiencies of centralized control, a distributed control method is presented to reduce model complexity and improve calculation speed. The effectiveness of all the models and methods is demonstrated in the case study.

  6. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  7. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research

    International Nuclear Information System (INIS)

    Currie, L.A.

    2001-01-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Mueller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test - for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom. (orig.)

  8. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    Science.gov (United States)

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.

  9. Modeling wind speed and wind power distributions in Rwanda

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Bonfils [Department of Physics, National University of Rwanda, P.O. Box 117, Huye District, South Province (Rwanda)

    2011-02-15

    Utilization of wind energy as an alternative energy source may offer many environmental and economic advantages compared to fossil-fuel-based energy sources, which pollute the lower atmosphere. Wind energy, like other forms of alternative energy, offers promise for meeting energy demand in direct, grid-connected modes as well as in stand-alone and remote applications. Wind speed is the most significant parameter of wind energy; hence, an accurate determination of the probability distribution of wind speed values is very important in estimating the wind energy potential over a region. In the present study, parameters of five probability density functions (Weibull, Rayleigh, lognormal, normal and gamma) were calculated from long-term hourly observed data at four meteorological stations in Rwanda for the period of the year with fairly useful wind energy potential (monthly hourly mean wind speed v̄ ≥ 2 m s⁻¹). In order to select well-fitting probability density functions, graphical comparisons to the empirical distributions were made. In addition, RMSE and MBE were computed for each distribution and the magnitudes of the errors were compared. Residuals of the theoretical distributions were analyzed graphically. Finally, three distributions fitting the empirical distribution of the measured wind speed data well were selected for each station with the aid of a χ² goodness-of-fit test. (author)
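
    A minimal sketch of the fit-and-score procedure for one candidate distribution (Weibull), with synthetic wind speeds in place of the station data: parameters are estimated by maximum likelihood, then RMSE and MBE against the empirical density and a χ² statistic on binned counts are computed.

      # Fit a Weibull distribution to wind speeds and score the fit.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      v = rng.weibull(2.0, 2000) * 4.0              # synthetic wind speeds (m/s)

      k, loc, c = stats.weibull_min.fit(v, floc=0)  # shape k, scale c (MLE)
      edges = np.linspace(0, v.max(), 21)
      obs_pdf, _ = np.histogram(v, edges, density=True)
      mid = 0.5 * (edges[:-1] + edges[1:])
      fit_pdf = stats.weibull_min.pdf(mid, k, loc, c)

      rmse = np.sqrt(np.mean((fit_pdf - obs_pdf) ** 2))
      mbe = np.mean(fit_pdf - obs_pdf)
      obs_n, _ = np.histogram(v, edges)
      exp_n = len(v) * np.diff(stats.weibull_min.cdf(edges, k, loc, c))
      chi2 = np.sum((obs_n - exp_n) ** 2 / np.maximum(exp_n, 1e-9))
      print(f"k={k:.2f} c={c:.2f}  RMSE={rmse:.4f}  MBE={mbe:+.4f}  chi2={chi2:.1f}")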

  10. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, Andrey; Dall' Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.

  11. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  12. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  13. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subjects variability. This makes mixed-effects models a natural choice for estimating population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations), devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetic models and compare it with FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Distributed hydrological modelling of the Senegal river basin - model construction and validation

    DEFF Research Database (Denmark)

    Andersen, J.; Refsgaard, J.C.; Jensen, Karsten Høgh

    2001-01-01

    A modified version of the physically-based distributed MIKE SHE model code was applied to the 375,000 km² Senegal River Basin. On the basis of conventional data from meteorological stations and readily accessible databases on topography, soil types, vegetation type, etc., three models with diffe…

  15. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
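
    A toy simulation in the spirit of this model (not the letter's master-equation analysis): clusters started at a common size receive additive kicks whose amplitude scales with size, so log-sizes diffuse and the transient size distribution is close to log-normal.

      # Size-dependent additive growth noise; transient log-normality check.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      x = np.full(20000, 10.0)            # identical initial cluster sizes
      for _ in range(200):
          x += 0.05 * x * rng.normal(size=x.size)   # size-dependent additive kick
          x = np.maximum(x, 1e-6)                   # guard against non-positive sizes

      # Shapiro-Wilk on log sizes: a large p-value is consistent with log-normal
      stat, p = stats.shapiro(np.log(x[:500]))
      print(f"Shapiro-Wilk on log sizes: W={stat:.3f}, p={p:.3f}")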

  16. Time-independent models of asset returns revisited

    Science.gov (United States)

    Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.

    2000-07-01

    In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data of the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than they are. This phenomenon is especially evident in the artificial data that have been generated by an inflated random walk process.
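
    Two of the candidate models (the simple normal and the Student t) can be compared on a return series in a few lines; the synthetic returns below stand in for the index data, and the comparison uses maximized log-likelihood and AIC.

      # Fit normal and Student t models to returns and compare by AIC.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      r = 0.01 * rng.standard_t(4, 2500)          # synthetic heavy-tailed returns

      mu, sd = stats.norm.fit(r)
      ll_norm = stats.norm.logpdf(r, mu, sd).sum()
      df, loc, scale = stats.t.fit(r)
      ll_t = stats.t.logpdf(r, df, loc, scale).sum()

      aic_norm = 2 * 2 - 2 * ll_norm              # 2 parameters
      aic_t = 2 * 3 - 2 * ll_t                    # 3 parameters
      print(f"normal:    logL={ll_norm:.1f}  AIC={aic_norm:.1f}")
      print(f"t(df={df:.1f}): logL={ll_t:.1f}  AIC={aic_t:.1f}")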

  17. A generalized statistical model for the size distribution of wealth

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2012-01-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)

  18. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.

  19. Similar distributions of repaired sites in chromatin of normal and xeroderma pigmentosum variant cells damaged by ultraviolet light

    International Nuclear Information System (INIS)

    Cleaver, J.E.

    1979-01-01

    Excision repair of damage from ultraviolet light in both normal and xeroderma pigmentosum variant fibroblasts at early times after irradiation occurred preferentially in regions of DNA accessible to micrococcal nuclease digestion. These regions are predominantly the linker regions between nucleosomes in chromatin. The alterations reported at polymerization and ligation steps of excision repair in the variant are therefore not associated with changes in the relative distributions of repair sites in linker and core particle regions of DNA.

  20. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2013-01-01

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chamber, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as the heart maximum dose or the cardiac chamber V30 increases. They also increase with larger volumes of the heart or cardiac chambers and decrease when the lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only the heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes statistical evidence of the indirect effect of lung size on radiation-induced heart toxicity.
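
    A schematic version of this workflow, a multivariate logistic NTCP model with bootstrap-based variable assessment, is sketched below on synthetic stand-in data. The covariates, thresholds, and selection rule are illustrative only, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical stand-ins: three dosimetric/volume covariates and a binary RVD outcome.
n = 56
X = rng.normal(size=(n, 3))  # e.g. heart Dmax, heart volume, lung volume (standardized)
y = (X @ np.array([1.0, 0.8, -0.6]) + rng.normal(scale=1.5, size=n) > 0).astype(int)

# Bootstrap: refit on resampled patients and count how often each covariate
# carries a non-negligible coefficient (a crude proxy for a selection rule).
picks = np.zeros(X.shape[1])
for _ in range(200):
    idx = rng.integers(0, n, n)
    coef = LogisticRegression().fit(X[idx], y[idx]).coef_[0]
    picks += np.abs(coef) > 0.2

model = LogisticRegression().fit(X, y)
print("bootstrap selection frequency:", picks / 200)
print("apparent AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```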

  1. Confronting species distribution model predictions with species functional traits.

    Science.gov (United States)

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day⁻¹. Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.
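
    The core statistical step, correlating predicted suitability with observed growth rates and attaching a 95% confidence interval, can be reproduced in a few lines. The data here are synthetic placeholders, and the Fisher z-interval is one common choice rather than necessarily the authors' method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical stand-ins for Maxent habitat suitability and observed growth rate.
suitability = rng.uniform(0, 1, 17)
growth = 5 + 10 * suitability + rng.normal(scale=4, size=17)

r, p = stats.pearsonr(suitability, growth)
# 95% CI via the Fisher z-transform, as commonly reported for correlations.
z = np.arctanh(r)
se = 1 / np.sqrt(len(growth) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {p:.3f}")
```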

  2. Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies

    Energy Technology Data Exchange (ETDEWEB)

    Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint

    2012-07-01

    Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty-eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue Dye extravasation in tumor was significantly reduced (indicating a reduction in the aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization with a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to research, with the potential benefit of vascular normalization in head and neck cancer therapy.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
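
    A compact sketch of the procedure for a two-component univariate mixture is given below: one EM-style update plus an over-relaxation step size ω, with ω = 1 recovering the standard successive-approximations procedure. Applying the blend uniformly to all parameters (including the standard deviations) is a simplification relative to the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def em_step(x, w, mu, sigma):
    """One EM update for a two-component univariate normal mixture."""
    resp = np.stack([w[k] * stats.norm.pdf(x, mu[k], sigma[k]) for k in range(2)])
    resp /= resp.sum(axis=0)                      # posterior component responsibilities
    w_new = resp.mean(axis=1)
    mu_new = (resp * x).sum(axis=1) / resp.sum(axis=1)
    var_new = (resp * (x - mu_new[:, None]) ** 2).sum(axis=1) / resp.sum(axis=1)
    return w_new, mu_new, np.sqrt(var_new)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
omega = 1.0  # step size; the paper shows local convergence for 0 < omega < 2
for _ in range(100):
    w_em, mu_em, sigma_em = em_step(x, w, mu, sigma)
    w = (1 - omega) * w + omega * w_em
    mu = (1 - omega) * mu + omega * mu_em
    sigma = (1 - omega) * sigma + omega * sigma_em
print("weights:", w, "means:", mu, "stds:", sigma)
```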

  4. Modelling and analysis of distributed simulation protocols with distributed graph transformation

    OpenAIRE

    Lara, Juan de; Taentzer, Gabriele

    2005-01-01

    J. de Lara and G. Taentzer, "Modelling and analysis of distributed simulation protocols with distributed graph transformation...

  5. Surface topography effects on energy-resolved polar angular distributions of electrons induced in heavy ion-Al collisions: experiments and models

    International Nuclear Information System (INIS)

    Mischler, J.; Banouni, M.; Banazeth, C.; Negre, M.; Benazeth, N.

    1986-01-01

    The influence of surface topography on the polar angular distributions of secondary electrons emitted in Ar+ (and Xe-)-Al collisions was studied. After each set of experiments, the target surface was viewed by scanning electron microscope. Under normal incidence, the polar angular distributions of the continuum background and of Al L23VV Auger electrons were not modified by the topography and closely followed a cosine law. For Al L23MM Auger electrons, the experimental angular distributions as a function of the emission polar angle θ either were nearly constant or followed a decreasing law, depending on the irradiation conditions. The N(θ) curves calculated from the models showed that the isotropic angular distributions obtained for electrons generated outside the crystal from a flat surface could be strongly modified by the surface topography.

  6. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroskedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows one to easily compute risk-neutral predictive price densities which take parameter uncertainty into account. In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different ...
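
    To make the mixture ingredient concrete, the sketch below simulates terminal index levels under a hypothetical two-component normal mixture for daily log-returns and evaluates a call payoff by Monte Carlo. Proper option pricing additionally requires the risk-neutral dynamics and discounting developed in the paper, which are omitted here; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical two-component normal mixture for daily log-returns:
# a calm regime (weight 0.8) and a volatile regime (weight 0.2).
w = 0.8
mu = np.array([0.0005, -0.002])
sigma = np.array([0.008, 0.025])

S0, K, days, n_paths = 100.0, 100.0, 60, 50_000
comp = (rng.uniform(size=(n_paths, days)) > w).astype(int)  # 1 = volatile regime
r_daily = rng.normal(mu[comp], sigma[comp])                 # mixture draws per day
ST = S0 * np.exp(r_daily.sum(axis=1))                       # terminal index levels

call = np.maximum(ST - K, 0.0).mean()  # undiscounted expected payoff
print(f"Monte Carlo call value (no discounting, physical measure): {call:.3f}")
```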

  7. Comparative pharmacokinetics and tissue distribution profiles of lignan components in normal and hepatic fibrosis rats after oral administration of Fuzheng Huayu recipe.

    Science.gov (United States)

    Yang, Tao; Liu, Shan; Zheng, Tian-Hui; Tao, Yan-Yan; Liu, Cheng-Hai

    2015-05-26

    Fuzheng Huayu recipe (FZHY) is formulated on the basis of Chinese medicine theory for treating liver fibrosis. The aim of this work was to illuminate the influence of the pathological state of liver fibrosis on the pharmacokinetics and tissue distribution profiles of lignan components from FZHY. Male Wistar rats were randomly divided into a normal group and a hepatic fibrosis group (induced by dimethylnitrosamine). Six lignan components were detected and quantified by ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) in the plasma and tissues of normal and hepatic fibrosis rats. A rapid, sensitive and convenient UHPLC-MS/MS method was successfully developed for the simultaneous determination of the six lignan components in different rat biological samples. After oral administration of FZHY at a dose of 15 g/kg, the pharmacokinetic behaviors of schizandrin A (SIA), schizandrin B (SIB), schizandrin C (SIC), schisandrol A (SOA), schisandrol B (SOB) and schisantherin A (STA) were significantly changed in hepatic fibrosis rats compared with normal rats, their AUC(0-t) values being increased by 235.09%, 388.44%, 223.30%, 669.30%, 295.08% and 267.63%, respectively (P < 0.05). Tissue distribution results showed that the amounts of SIA, SIB, SOA and SOB were significantly increased in the heart, lung, spleen and kidney of hepatic fibrosis rats compared with normal rats at most time points (P < 0.05), characterizing the comparative tissue distribution of lignan components in normal and hepatic fibrosis rats. Hepatic fibrosis could thus alter the pharmacokinetic and tissue distribution properties of lignan components in rats after administration of FZHY. The results may help guide the clinical application of this medicine.

  8. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Directory of Open Access Journals (Sweden)

    Frieda Beauregard

    Full Text Available Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km², we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents distribution is governed largely by climate. We further hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth form, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study

  9. Spatial distribution of emissions to air - the SPREAD model

    Energy Technology Data Exchange (ETDEWEB)

    Plejdrup, M S; Gyldenkaerne, S

    2011-04-15

    The National Environmental Research Institute (NERI), Aarhus University, completes the annual national emission inventories for greenhouse gases and air pollutants according to Denmark's obligations under international conventions, e.g. the climate convention (UNFCCC) and the convention on long-range transboundary air pollution (CLRTAP). NERI has developed a model to distribute emissions from the national emission inventories onto a 1×1 km grid covering the Danish land and sea territory. The new spatial high-resolution distribution model for emissions to air (SPREAD) has been developed according to the requirements for reporting gridded emissions to CLRTAP. Spatial emission data are used, e.g., as input for air quality modelling, which in turn serves as input for the assessment and evaluation of health effects. For these purposes, distributions with higher spatial resolution have been requested. Previously, a distribution on the 17×17 km EMEP grid was set up and used in research projects, combined with detailed distributions for a few sectors or sub-sectors, e.g. a 1×1 km distribution of emissions from road traffic. SPREAD was developed to generate improved spatial emission data for, e.g., air quality modelling in exposure studies. SPREAD includes emission distributions for each sector in the Danish inventory system: stationary combustion, mobile sources, fugitive emissions from fuels, industrial processes, solvents and other product use, agriculture, and waste. The model also enables the generation of distributions for single sectors, sub-sectors, and single sources. This report documents the methodologies in this first version of SPREAD and presents selected results. Further, a number of potential improvements for later versions of SPREAD are addressed and discussed.

  10. MODELING COLLISIONAL CASCADES IN DEBRIS DISKS: STEEP DUST-SIZE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Gáspár, András; Psaltis, Dimitrios; Rieke, George H.; Özel, Feryal

    2012-01-01

    We explore the evolution of the mass distribution of dust in collision-dominated debris disks, using the collisional code introduced in our previous paper. We analyze the equilibrium distribution and its dependence on model parameters by evolving over 100 models to 10 Gyr. With our numerical models, we confirm that systems reach collisional equilibrium with a mass distribution that is steeper than the traditional solution by Dohnanyi. Our model yields a quasi-steady-state slope of n(m) ∼ m^(−1.88) [n(a) ∼ a^(−3.65)] as a robust solution for a wide range of possible model parameters. We also show that a simple power-law function can be an appropriate approximation for the mass distribution of particles in certain regimes. The steeper solution has observable effects in the submillimeter and millimeter wavelength regimes of the electromagnetic spectrum. We assemble data for nine debris disks that have been observed at these wavelengths and, using a simplified absorption efficiency model, show that the predicted slope of the particle-mass distribution generates spectral energy distributions that are in agreement with the observed ones.
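
    The two quoted slopes are consistent with each other: for m ∝ a³, a mass distribution n(m) ∝ m^(−α) maps to a size distribution n(a) ∝ a^(2−3α), as the one-line check below confirms.

```python
# Consistency check between the quoted mass- and size-distribution slopes.
# With m ∝ a**3, n(a) = n(m) |dm/da| ∝ a**(-3*alpha_m) * a**2 = a**(2 - 3*alpha_m).
alpha_m = 1.88                # n(m) ∝ m**(-alpha_m), the quasi-steady-state slope
alpha_a = 3 * alpha_m - 2     # exponent magnitude for n(a) ∝ a**(-alpha_a)
print(alpha_a)                # 3.64, matching the quoted a**(-3.65) within rounding
```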

  11. An equal force theory for network models of soft materials with arbitrary molecular weight distribution

    Science.gov (United States)

    Verron, E.; Gros, A.

    2017-09-01

    Most network models for soft materials, e.g. elastomers and gels, are dedicated to idealized materials: all chains admit the same number of Kuhn segments. Nevertheless, such standard models are not appropriate for materials involving multiple networks, and some specific constitutive equations devoted to these materials have been derived in the last few years. In nearly all cases, idealized networks of different chain lengths are assembled following an equal strain assumption; only a few papers adopt an equal stress assumption, although some authors argue that such a hypothesis would reflect the equilibrium of the different networks in contact. In this work, a full-network model with an arbitrary chain length distribution is derived by considering that chains of different lengths satisfy the equal force assumption in each direction of the unit sphere. The derivation is restricted to non-Gaussian freely jointed chains and to affine deformation of the sphere. Firstly, after a proper definition of the undeformed configuration of the network, we demonstrate that the equal force assumption leads to the equality of a normalized stretch in chains of different lengths. Secondly, we establish that the network with a chain length distribution behaves as an idealized full-network whose chain length and chain density are both provided by the chain length distribution. This approach is finally illustrated with two examples: the derivation of a new expression for the Young's modulus of bimodal interpenetrated polymer networks, and the prediction of the change in fluorescence during deformation of mechanochemically responsive elastomers.

  12. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
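
    The LOD score in this setting is the base-10 log-likelihood ratio of a two-component normal mixture against a single normal fit. A bare-bones illustration follows, with the mixing proportion fixed at 1/2 (as for a backcross at a fully informative marker) and synthetic phenotype data.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
pheno = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.2, 1, 100)])

def neg_ll_mixture(params, x):
    """Negative log-likelihood of a two-component normal mixture
    with mixing proportion fixed at 1/2 and a common standard deviation."""
    mu1, mu2, log_sd = params
    sd = np.exp(log_sd)  # log-parameterized to keep sd positive
    pdf = 0.5 * stats.norm.pdf(x, mu1, sd) + 0.5 * stats.norm.pdf(x, mu2, sd)
    return -np.sum(np.log(pdf))

fit = minimize(neg_ll_mixture, x0=[-0.5, 0.5, 0.0], args=(pheno,),
               method="Nelder-Mead")
ll_mix = -fit.fun
ll_null = stats.norm.logpdf(pheno, pheno.mean(), pheno.std()).sum()
lod = (ll_mix - ll_null) / np.log(10)  # LOD = log10 likelihood ratio
print(f"LOD = {lod:.2f}")  # a mixture always fits at least as well as one normal
```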

  13. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

    Full Text Available Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student t, and multivariate exponential power distributions, together with their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy tails and skewness in the distributional assumption, with the skew-Student outperforming the others across all tests and confidence levels.
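
    The distributional effect on VaR is easy to see in a univariate sketch: fitting a normal and a Student t to the same fat-tailed series gives visibly different 99% VaR estimates. The series below is synthetic, and the paper's GARCH dynamics are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Placeholder fat-tailed returns standing in for a real asset series.
returns = stats.t.rvs(df=4, scale=0.01, size=3000, random_state=rng)

alpha = 0.01  # 99% VaR
mu, sd = stats.norm.fit(returns)
df, loc, scale = stats.t.fit(returns)

# VaR reported as a positive loss quantile.
var_norm = -stats.norm.ppf(alpha, mu, sd)
var_t = -stats.t.ppf(alpha, df, loc, scale)
print(f"99% VaR  normal: {var_norm:.4f}   Student t: {var_t:.4f}")
```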

  14. Development of vortex model with realistic axial velocity distribution

    International Nuclear Information System (INIS)

    Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki

    2014-01-01

    A vortex is considered one of the significant phenomena which may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was assumed to be approximated by the well-known Burgers vortex model. However, the Burgers vortex model makes the simple but unrealistic assumption that the axial velocity component is horizontally constant, whereas a real free-surface vortex has an axial velocity distribution with a large radial gradient near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as is the Burgers vortex model, but it accounts for a realistic radial distribution of axial velocity, defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free-surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the calculation accuracy of the Burgers vortex model can be brought close to that of the new model by using an effective axial velocity, calculated as the average value only in the vicinity of the vortex center.
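
    For reference, the circumferential velocity of the classical Burgers vortex, the baseline the new model improves on, can be evaluated as below; the parameter values are arbitrary illustrations.

```python
import numpy as np

def burgers_v_theta(r, gamma=1.0, a=1.0, nu=1e-2):
    """Circumferential velocity of the classical Burgers vortex:

        v_theta(r) = Gamma / (2 pi r) * (1 - exp(-a r^2 / (2 nu)))

    with strain rate a and kinematic viscosity nu. Its axial component
    w = 2 a z is independent of r, the assumption the paper relaxes.
    """
    return gamma / (2 * np.pi * r) * (1 - np.exp(-a * r**2 / (2 * nu)))

r = np.linspace(0.01, 1.0, 5)  # radial positions (arbitrary units)
print(burgers_v_theta(r))
```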

  15. Modeling of the wind turbine with doubly fed induction machine and its dynamic behavior in distribution networks

    International Nuclear Information System (INIS)

    Mendez Rodriguez, Christian; Badilla Solorzano, Jorge Adrian

    2014-01-01

    Wind turbines equipped with doubly fed induction generators (DFIG) are described. A model is constructed to represent the behavior of these wind turbines when connected to distribution networks. The main systems composing a DFIG wind turbine are specified in order to develop a mathematical model of each of them. The behavior of the wind turbine in steady-state and transient regimes is investigated to explain its dynamics during nominal operation and contingency situations when connected to distribution networks. In addition, strategies to mitigate the negative effects of such situations, and control strategies that contribute to the dynamics of the network, are included. An integrated model of the wind turbine subsystems is built in MATLAB® SIMULINK® to validate the models of the systems and to obtain a tool that allows their simulation. The wind turbine model is simulated in order to evaluate and analyze its dynamic behavior under different operating conditions. The validation results reveal adequate model behavior under normal operating conditions. Regarding behavior in contingency situations, the study is limited to the response to three-phase faults and to voltage and frequency variations under balanced conditions in the power system.

  16. A DISTRIBUTED HYPERMAP MODEL FOR INTERNET GIS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The rapid development of Internet technology makes it possible to integrate GIS with the Internet, forming Internet GIS. Internet GIS is based on a distributed client/server architecture and on TCP/IP and IIOP. When constructing and designing Internet GIS, we face the problem of how to express its information units. To solve this problem, this paper presents a distributed hypermap model for Internet GIS. The model provides a way to organize and manage Internet GIS information units, and it describes the relations between information units, as well as within a single information unit, on both clients and servers. On the basis of this model, the paper formulates hypermap relations and hypermap operations. The use of the model is demonstrated in the implementation of a prototype system.

  17. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

    Computer modeling of the thermo-mechanical behavior of coated particles under both normal and off-normal operating conditions plays a significant role, particularly at the development stage of new reactors. In Russia, extensive experience has been accumulated in the fabrication and reactor testing of coated particles (CP) and fuel elements with UO₂ kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR, because of the very deep burn-up of its plutonium oxide based fuel (up to 70% FIMA). Therefore, mathematical modeling of CP thermo-mechanical behavior and failure prediction becomes particularly important. The authors clearly understand that the serviceability of fuel at high burn-up is determined not only by thermo-mechanics but also by structural changes in the coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. This report presents the first steps in the development of an integrated code for the numerical modeling of coated particle behavior, together with calculation results concerning the influence of various design parameters on the endurance of fuel coated particles under GT-MHR normal operating conditions. A failure model is developed to predict the failure fraction of TRISO-coated particles. In this model it is assumed that the failure of a CP depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as a unified structure.

  18. Modelling the potential distribution of Betula utilis in the Himalaya

    Directory of Open Access Journals (Sweden)

    Maria Bobrowski

    2017-07-01

    Full Text Available Developing sustainable adaptation pathways under climate change conditions in mountain regions requires accurate predictions of treeline shifts and future distribution ranges of treeline species. Here, we model for the first time the potential distribution of Betula utilis, a principal Himalayan treeline species, to provide a basis for the analysis of future range shifts. Our target species, Betula utilis, is widespread at alpine treelines in the Himalayan mountains; its distribution range extends across the entire Himalayan mountain range. Our objective is to model the potential distribution of B. utilis in relation to current climate conditions. We generated a dataset of 590 occurrence records and used 24 variables for ecological niche modelling. We calibrated Generalized Linear Models using the Akaike Information Criterion (AIC) and evaluated model performance using threshold-independent (AUC, Area Under the Curve) and threshold-dependent (TSS, True Skill Statistic) characteristics as well as visual assessments of projected distribution maps. We found two temperature-related variables (Mean Temperature of the Wettest Quarter, Temperature Annual Range) and three precipitation-related variables (Precipitation of the Coldest Quarter, Average Precipitation of March, April and May, and Precipitation Seasonality) to be useful for predicting the potential distribution of B. utilis. All models had high predictive power (AUC ≥ 0.98 and TSS ≥ 0.89). The projected suitable area in the Himalayan mountains varies considerably, with the most extensive distribution in the western and central Himalayan region. A substantial difference between potential and actual distribution in the eastern Himalaya points to decreasing competitiveness of B. utilis under more oceanic conditions in the eastern part of the mountain system. A comparison between the vegetation map of Schweinfurth (1957) and our current predictions suggests that B. utilis does not reach the upper elevational limit in
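
    The model-selection step, fitting GLMs over candidate predictor subsets and ranking them by AIC, can be sketched with statsmodels as follows. The predictors and response here are synthetic stand-ins for the bioclimatic variables and occurrence records, and the exhaustive subset search is an illustrative choice rather than the authors' exact protocol.

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(8)
# Hypothetical stand-ins for bioclimatic predictors and presence/absence data.
n = 600
X = rng.normal(size=(n, 4))
y = (sm.add_constant(X[:, :2]) @ np.array([-0.5, 1.2, -0.9])
     + rng.logistic(size=n) > 0).astype(int)

# Fit a logistic GLM for every non-empty predictor subset; keep the lowest AIC.
best = None
for k in range(1, 5):
    for cols in combinations(range(4), k):
        model = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
        if best is None or model.aic < best[0]:
            best = (model.aic, cols)
print("lowest AIC:", round(best[0], 1), "with predictors", best[1])
```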

  19. Species distribution models of tropical deep-sea snappers.

    Directory of Open Access Journals (Sweden)

    Céline Gomez

    Full Text Available Deep-sea fisheries provide an important source of protein to Pacific Island countries and territories that are highly dependent on fish for food security. However, spatial management of these deep-sea habitats is hindered by insufficient data. We developed species distribution models using spatially limited presence data for the main harvested species in the Western Central Pacific Ocean. We used bathymetric and water temperature data to develop presence-only species distribution models for the commercially exploited deep-sea snappers Etelis Cuvier 1828, Pristipomoides Valenciennes 1830, and Aphareus Cuvier 1830. We evaluated the performance of four different algorithms (CTA, GLM, MARS, and MAXENT) within the BIOMOD framework to obtain an ensemble of predicted distributions. We projected these predictions across the Western Central Pacific Ocean to produce maps of potential deep-sea snapper distributions in 32 countries and territories. Depth was consistently the best predictor of presence for all species groups across all models. Bathymetric slope was consistently the poorest predictor. Temperature at depth was a good predictor of presence for GLM only. Model precision was highest for MAXENT and CTA. There were strong regional patterns in the predicted distribution of suitable habitat, with the largest areas of suitable habitat (> 35% of the Exclusive Economic Zone) predicted in seven South Pacific countries and territories (Fiji, Matthew & Hunter, Nauru, New Caledonia, Tonga, Vanuatu and Wallis & Futuna). Predicted habitat also varied among species, with the proportion of predicted habitat highest for Aphareus and lowest for Etelis. Despite data paucity, the relationship between deep-sea snapper presence and their environments was sufficiently strong to predict their distribution across a large area of the Pacific Ocean. Our results therefore provide a strong baseline for designing monitoring programs that balance resource exploitation and

  20. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Full Text Available Design floods are key to sizing new water works and to reviewing the hydrological safety of existing ones. The most reliable method for estimating their magnitudes for given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since the appropriate model is unknown a priori, several models need to be tested in order to select the most suitable one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and their application has therefore been established as a norm or precept. The Johnson system comprises three families of distributions, one of which is the log-normal model with three fit parameters; it marks the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained through that model. Having contrasted the three probability distributions established by precept against 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three precept distributions. The predictions of the SJU distribution were found to be similar to those obtained with the other models for low return periods, but to diverge for high return periods (> 1000 years). Because of its theoretical support, the SJU model is recommended for flood estimation.
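
    As a practical illustration, scipy ships the unbounded Johnson distribution as johnsonsu, so the fit-and-predict workflow can be sketched directly. The annual maxima below are synthetic, and the fitting method (scipy's generic maximum likelihood) need not match the moments-based procedures often used in hydrology.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Placeholder annual maximum flows; a real study would use gauged records.
ams = stats.gumbel_r.rvs(loc=500, scale=150, size=60, random_state=rng)

# Four-parameter unbounded Johnson (SJU) fit: shape a, shape b, loc, scale.
params = stats.johnsonsu.fit(ams)

# Design flood = quantile with non-exceedance probability 1 - 1/T.
for T in (10, 100, 1000):  # return periods in years
    q = stats.johnsonsu.ppf(1 - 1 / T, *params)
    print(f"T = {T:>4} yr  design flood ≈ {q:,.0f}")
```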