WorldWideScience

Sample records for normal distribution model

  1. Modified Normal Demand Distributions in (R,S)-Inventory Models

    NARCIS (Netherlands)

    Strijbosch, L.W.G.; Moors, J.J.A.

    2003-01-01

To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,

  2. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected in real software projects.
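
    The record above fits by EM; as a rough illustration of the same normal-failure-time SRGM, the sketch below instead fits the NHPP mean value function m(t) = ω·Φ((t − μ)/σ) by direct numerical maximum likelihood. The failure times, observation horizon, and starting values are invented for the example.

    ```python
    # Sketch: direct MLE for an NHPP software reliability growth model whose
    # mean value function is m(t) = omega * Phi((t - mu) / sigma), i.e. the
    # failure time follows a normal distribution. (The paper uses an EM
    # algorithm; plain numerical MLE is shown here only for illustration.)
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_likelihood(params, times, t_end):
        omega, mu, sigma = params
        if omega <= 0 or sigma <= 0:
            return np.inf
        # NHPP log-likelihood: sum of log intensities minus m(t_end)
        log_intensity = np.log(omega) + norm.logpdf(times, mu, sigma)
        return -(log_intensity.sum() - omega * norm.cdf(t_end, mu, sigma))

    # Hypothetical failure times (days) observed up to t_end = 60
    rng = np.random.default_rng(1)
    times = np.sort(rng.normal(25.0, 10.0, size=40))
    times = times[(times > 0) & (times < 60.0)]

    res = minimize(neg_log_likelihood, x0=[50.0, 20.0, 5.0],
                   args=(times, 60.0), method="Nelder-Mead")
    omega, mu, sigma = res.x
    print(f"expected total faults omega={omega:.1f}, mu={mu:.1f}, sigma={sigma:.1f}")
    ```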

  3. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit it to our real data. Second, we present its application in risk analysis, where we use the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
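
    A minimal sketch of the VaR/CVaR step described above, assuming a two-component normal mixture has already been fitted. The mixture parameters below are hypothetical, not the paper's FBMKLCI estimates; the CVaR uses the closed-form tail expectation of a normal mixture.

    ```python
    # Sketch: VaR and CVaR from a two-component normal mixture of returns.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    w  = np.array([0.8, 0.2])        # component weights (hypothetical)
    mu = np.array([0.01, -0.03])     # component means
    sd = np.array([0.04, 0.10])      # component standard deviations

    def mixture_cdf(x):
        return np.sum(w * norm.cdf((x - mu) / sd))

    def var_cvar(alpha=0.05):
        # VaR: the alpha-quantile of the return distribution (left tail)
        q = brentq(lambda x: mixture_cdf(x) - alpha, -5.0, 5.0)
        # CVaR: E[X | X <= q], closed form for a normal mixture
        z = (q - mu) / sd
        tail_mass = w * norm.cdf(z)                       # per-component tail prob.
        tail_mean = w * (mu * norm.cdf(z) - sd * norm.pdf(z))
        return q, tail_mean.sum() / tail_mass.sum()

    var5, cvar5 = var_cvar(0.05)
    print(f"5% VaR = {var5:.4f}, 5% CVaR = {cvar5:.4f}")
    ```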

  4. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index...

  5. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution for predicting clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with a single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP, calculated using the predicted parameters, to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity describe clinical TCP data more closely than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
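
    A minimal numerical sketch of the TCP model described above: the Poisson TCP, exp(-N·exp(-αD)), averaged over a log-normal interpatient distribution of α. The clonogen number, doses and α moments are illustrative, not the paper's fitted values.

    ```python
    # Sketch: population TCP with interpatient log-normal radiosensitivity.
    # Per patient, Poisson TCP = exp(-N * exp(-alpha * D)); averaging over a
    # log-normal distribution of alpha gives the clinical dose-response curve.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1e7                       # clonogen number (illustrative)
    D = np.linspace(40, 90, 6)    # total dose (Gy)

    # log-normal alpha with mean 0.3 Gy^-1 and SD 0.1 Gy^-1 (illustrative)
    mean_a, sd_a = 0.3, 0.1
    sigma = np.sqrt(np.log(1 + (sd_a / mean_a) ** 2))
    mu = np.log(mean_a) - 0.5 * sigma ** 2
    alpha = rng.lognormal(mu, sigma, size=100_000)

    for d in D:
        tcp = np.mean(np.exp(-N * np.exp(-alpha * d)))
        print(f"D = {d:4.0f} Gy  ->  TCP = {tcp:.3f}")
    ```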

  6. Thermal modelling of normal distributed nanoparticles through thickness in an inorganic material matrix

    Science.gov (United States)

    Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.

    2017-10-01

Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion so as to reduce or even eliminate thermal bridges, making the process industrially relevant enough to balance the high material cost against the superior thermal insulation properties. A methodology is therefore required to measure and describe these nanoscale particle distributions within the inorganic matrix material; the distributions are either random or normally distributed through the thickness of the matrix. We show that the influence of these distributions is meaningful and modifies the thermal conductivity of the building material. This strategy yields a thermal model that predicts the thermal behaviour of the nanoscale particles and their distributions; the model is validated by the hot-wire technique. So far, a good correlation has been found between the numerical results and experimental data for nanoparticles randomly distributed in all directions.

  7. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Office of Mine Safety & Health Research

    2011-01-15

Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing methane within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to the top of the coal bed should be designed based on the potential gas sources and their locations, as well as on the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determining the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from the coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
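
    As a sketch of the conditional-probability step described above, assuming a bivariate normal (depth, displacement) model has already been fitted; the parameters are hypothetical, not the paper's borehole data.

    ```python
    # Sketch: conditional tail probability from a bivariate normal model,
    # e.g. P(displacement > d0 | depth = z0). Parameters are hypothetical.
    from scipy.stats import norm

    # fitted bivariate normal over (depth, displacement)
    mu_z, mu_d = 120.0, 0.40          # means
    sd_z, sd_d = 35.0, 0.15           # standard deviations
    rho = -0.6                        # correlation

    def p_displacement_exceeds(d0, z0):
        # the conditional distribution of displacement given depth is normal:
        cond_mean = mu_d + rho * sd_d / sd_z * (z0 - mu_z)
        cond_sd = sd_d * (1.0 - rho ** 2) ** 0.5
        return norm.sf(d0, cond_mean, cond_sd)

    print(p_displacement_exceeds(0.5, z0=80.0))
    ```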

  8. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these

  9. A Platoon Dispersion Model Based on a Truncated Normal Distribution of Speed

    Directory of Open Access Journals (Sweden)

    Ming Wei

    2012-01-01

Understanding platoon dispersion is critical for the coordination of traffic signal control in an urban traffic network. Assuming that platoon speed follows a truncated normal distribution, ranging from a minimum to a maximum speed, this paper develops a piecewise density function that describes platoon dispersion characteristics as the platoon moves from an upstream to a downstream intersection. Based on this density function, the expected number of cars in the platoon that pass the downstream intersection, and the expected number that do not, are calculated. To facilitate coordination in a traffic signal control system, dispersion models for the front and the rear of the platoon are also derived. Finally, a numeric computation for the coordination of successive signals is presented to illustrate the validity of the proposed model.
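
    A minimal sketch of the central assumption above (platoon speed as a normal distribution truncated to [v_min, v_max]), computing the expected fraction of the platoon past the downstream intersection by time t. The geometry and parameters are invented.

    ```python
    # Sketch: fraction of a platoon reaching a downstream intersection by time t
    # when speed follows a normal distribution truncated to [v_min, v_max].
    from scipy.stats import truncnorm

    v_min, v_max = 5.0, 20.0      # m/s (hypothetical)
    mu, sigma = 12.0, 3.0         # underlying normal mean / SD
    L = 600.0                     # distance to downstream intersection (m)

    a, b = (v_min - mu) / sigma, (v_max - mu) / sigma
    speed = truncnorm(a, b, loc=mu, scale=sigma)

    def fraction_arrived(t):
        # a car (at constant speed v) arrives by time t iff v >= L / t
        return speed.sf(L / t)

    for t in (35.0, 50.0, 80.0):
        print(f"t = {t:5.1f} s: {fraction_arrived(t):.2%} of the platoon has arrived")
    ```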

  10. A New Distribution-Random Limit Normal Distribution

    OpenAIRE

    Gong, Xiaolin; Yang, Shuzhen

    2013-01-01

    This paper introduces a new distribution to improve tail risk modeling. Based on the classical normal distribution, we define a new distribution by a series of heat equations. Then, we use market data to verify our model.

  11. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest replacing this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density ... entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models ...

  12. The Normal Distribution

    Indian Academy of Sciences (India)

    An optimal way of choosing sample size in an opinion poll is indicated using the normal distribution. Introduction. In this article, the ubiquitous normal distribution is introduced as a convenient approximation for computing binomial probabilities for large values of n. Stirling's formula and the DeMoivre-Laplace theorem ...
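
    For reference, the standard normal-approximation sample-size calculation that underlies such opinion-poll discussions is sketched below; the article's optimal choice refines this kind of computation. The function name and numbers are illustrative.

    ```python
    # Sketch: normal-approximation sample size for an opinion poll,
    # n = z^2 p (1 - p) / e^2; p = 0.5 is the conservative worst case.
    from scipy.stats import norm

    def poll_sample_size(margin_of_error, confidence=0.95, p=0.5):
        z = norm.ppf(0.5 + confidence / 2.0)   # two-sided critical value
        return int(round(z ** 2 * p * (1 - p) / margin_of_error ** 2))

    print(poll_sample_size(0.03))   # about 1067 respondents for a 3% margin
    ```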

  13. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.

  14. Understanding a Normal Distribution of Data.

    Science.gov (United States)

    Maltenfort, Mitchell G

    2015-12-01

    Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?

  15. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Science.gov (United States)

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2013-01-01

Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit performed better in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early and late phases; the surviving fraction was sensitive to the long-lasting component (probability over 5 generations), whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.

  16. An approach to normal forms of Kuramoto model with distributed delays and the effect of minimal delay

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Ben, E-mail: niubenhit@163.com [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Guo, Yuxiao [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Jiang, Weihua [Department of Mathematics, Harbin Institute of Technology, Harbin 150001 (China)

    2015-09-25

Heterogeneous delays with a positive lower bound (gap) are taken into consideration in the Kuramoto model. On the Ott-Antonsen manifold, the dynamical transition from incoherence to coherence is mediated by a Hopf bifurcation. We establish a perturbation technique on the complex domain, by which universal normal forms and the stability and criticality of the Hopf bifurcation are obtained. Theoretically, a hysteresis loop is found near the subcritically bifurcated coherent state. For Gamma-distributed delays with fixed mean and variance, we find that a large gap decreases the Hopf bifurcation value, induces supercritical bifurcations, avoids the hysteresis loop and significantly increases the number of coexisting coherent states. The effect of the gap is finally interpreted from the viewpoint of the excess kurtosis of the Gamma distribution. - Highlights: • Heterogeneously delay-coupled Kuramoto model with minimal delay is considered. • Perturbation technique on complex domain is established for bifurcation analysis. • Hysteresis phenomenon is investigated in a theoretical way. • The effect of excess kurtosis of distributed delays is discussed.

  17. Quantiles for Finite Mixtures of Normal Distributions

    Science.gov (United States)

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)

  18. Ultrasound-mediated delivery and distribution of polymeric nanoparticles in the normal brain parenchyma of a metastatic brain tumour model.

    Directory of Open Access Journals (Sweden)

    Habib Baghirov

The treatment of brain diseases is hindered by the blood-brain barrier (BBB) preventing most drugs from entering the brain. Focused ultrasound (FUS) with microbubbles can open the BBB safely and reversibly. Systemic drug injection might induce toxicity, but encapsulation into nanoparticles reduces accumulation in normal tissue. Here we used a novel platform based on poly(2-ethyl-butyl cyanoacrylate) nanoparticle-stabilized microbubbles to permeabilize the BBB in a melanoma brain metastasis model. With a dual-frequency ultrasound transducer generating FUS at 1.1 MHz and 7.8 MHz, we opened the BBB using nanoparticle-microbubbles and low-frequency FUS, and applied high-frequency FUS to generate acoustic radiation force and push nanoparticles through the extracellular matrix. Using confocal microscopy and image analysis, we quantified nanoparticle extravasation and distribution in the brain parenchyma. We also evaluated haemorrhage, as well as the expression of P-glycoprotein, a key BBB component. FUS and microbubbles distributed nanoparticles in the brain parenchyma, and the distribution depended on the extent of BBB opening. The results from acoustic radiation force were not conclusive, but in a few animals some effect could be detected. P-glycoprotein was not significantly altered immediately after sonication. In summary, FUS with our nanoparticle-stabilized microbubbles can achieve accumulation and displacement of nanoparticles in the brain parenchyma.

  19. Ultrasound-mediated delivery and distribution of polymeric nanoparticles in the normal brain parenchyma of a metastatic brain tumour model

    Science.gov (United States)

    Baghirov, Habib; Snipstad, Sofie; Sulheim, Einar; Berg, Sigrid; Hansen, Rune; Thorsen, Frits; Mørch, Yrr; Åslund, Andreas K. O.

    2018-01-01

    The treatment of brain diseases is hindered by the blood-brain barrier (BBB) preventing most drugs from entering the brain. Focused ultrasound (FUS) with microbubbles can open the BBB safely and reversibly. Systemic drug injection might induce toxicity, but encapsulation into nanoparticles reduces accumulation in normal tissue. Here we used a novel platform based on poly(2-ethyl-butyl cyanoacrylate) nanoparticle-stabilized microbubbles to permeabilize the BBB in a melanoma brain metastasis model. With a dual-frequency ultrasound transducer generating FUS at 1.1 MHz and 7.8 MHz, we opened the BBB using nanoparticle-microbubbles and low-frequency FUS, and applied high-frequency FUS to generate acoustic radiation force and push nanoparticles through the extracellular matrix. Using confocal microscopy and image analysis, we quantified nanoparticle extravasation and distribution in the brain parenchyma. We also evaluated haemorrhage, as well as the expression of P-glycoprotein, a key BBB component. FUS and microbubbles distributed nanoparticles in the brain parenchyma, and the distribution depended on the extent of BBB opening. The results from acoustic radiation force were not conclusive, but in a few animals some effect could be detected. P-glycoprotein was not significantly altered immediately after sonication. In summary, FUS with our nanoparticle-stabilized microbubbles can achieve accumulation and displacement of nanoparticles in the brain parenchyma. PMID:29338016

  20. About normal distribution on SO(3) group in texture analysis

    Science.gov (United States)

    Savyolova, T. I.; Filatov, S. V.

    2017-12-01

This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. These NDs are: the Fisher normal distribution (FND), the Bunge normal distribution (BND), the central normal distribution (CND) and the wrapped normal distribution (WND). All of the aforementioned NDs are central functions on the SO(3) group. The CND is a special case of the normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). The WND is motivated by the CLT in R³ and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both the CND and the WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
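
    A minimal Monte Carlo sketch in the spirit of the WND construction mentioned above: draw a rotation vector from N(0, σ²I₃) in R³ and map it to SO(3) by the exponential map (Rodrigues' formula). This is a simplified reading of that motivation, not the authors' exact procedure.

    ```python
    # Sketch: sampling approximately normally distributed rotations by drawing
    # a rotation vector from N(0, sigma^2 I_3) in R^3 and mapping it to SO(3)
    # via the exponential map (Rodrigues' formula).
    import numpy as np

    def exp_so3(v):
        """Rodrigues' formula: rotation vector -> rotation matrix."""
        theta = np.linalg.norm(v)
        if theta < 1e-12:
            return np.eye(3)
        k = v / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    rng = np.random.default_rng(0)
    sigma = 0.2  # rad; spread of misorientation about the identity (illustrative)
    for _ in range(5):
        R = exp_so3(rng.normal(0.0, sigma, size=3))
        angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
        print(f"misorientation angle = {np.degrees(angle):6.2f} deg")
    ```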

  1. Externally studentized normal midrange distribution

    Directory of Open Access Journals (Sweden)

    Ben Dêivide de Oliveira Batista

The distribution of the externally studentized midrange was created based on the original studentization procedures of Student and was inspired by the distribution of the externally studentized range. The wide use of the externally studentized range in multiple comparisons was also a motivation for developing this new distribution. This work aimed to derive analytic equations for the distribution of the externally studentized midrange, obtaining the cumulative distribution, probability density and quantile functions and generating random values. This appears to be a new distribution, of which the authors could not find any report in the literature. A second objective was to build an R package for obtaining numerically the probability density, cumulative distribution and quantile functions and to make it available to the scientific community. The algorithms were proposed and implemented using Gauss-Legendre quadrature and the Newton-Raphson method in the R software, resulting in the SMR package, available for download from the CRAN site. The implemented routines showed high accuracy, proved by using Monte Carlo simulations and by comparing results with different numbers of quadrature points. Regarding the precision of the quantiles for cases where the degrees of freedom are close to 1 and the percentiles are close to 100%, it is recommended to use more than 64 quadrature points.
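
    The SMR package computes these functions by Gauss-Legendre quadrature; as an illustrative cross-check only, the distribution can also be approximated by simulation. The sketch below (sample size and seed arbitrary) estimates a quantile of the externally studentized midrange.

    ```python
    # Sketch: Monte Carlo approximation of the externally studentized midrange
    # distribution (the SMR R package instead computes it by quadrature).
    import numpy as np

    def simulate_smr(n, df, size=200_000, rng=None):
        rng = rng or np.random.default_rng(0)
        x = rng.standard_normal((size, n))
        midrange = (x.min(axis=1) + x.max(axis=1)) / 2.0
        # externally studentized: divide by an independent estimate of sigma
        s = np.sqrt(rng.chisquare(df, size) / df)
        return midrange / s

    q = simulate_smr(n=5, df=10)
    print("95th percentile of the studentized midrange:", np.quantile(q, 0.95))
    ```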

  2. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts

    International Nuclear Information System (INIS)

    Sakashita, Tetsuya; Kobayashi, Yasuhiko; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Saito, Kimiaki

    2014-01-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/μm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log–log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. All together, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE. (author)

  3. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for modeling Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
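
    A minimal simulation sketch of the normal-gamma convolution model described above (gamma-distributed signal plus normally distributed background noise); the parameters are illustrative, not fitted BeadArray values.

    ```python
    # Sketch: the normal-gamma model for observed intensities, illustrated by
    # simulation: observed = gamma signal + normal background noise.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.gamma(shape=2.0, scale=150.0, size=100_000)   # illustrative
    noise = rng.normal(loc=100.0, scale=20.0, size=100_000)    # illustrative
    observed = signal + noise

    # A background correction would estimate E[signal | observed]; here we
    # simply check the additive structure through the first moments.
    print(f"mean observed = {observed.mean():.1f} "
          f"(signal {signal.mean():.1f} + noise {noise.mean():.1f})")
    ```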

  4. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each density into subregions and, for each of these, developing a sampling method usually based on an acceptance-rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms.
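
    The report's subregion scheme is not reproduced here, but the underlying acceptance-rejection idea can be illustrated with the classic one-region sampler of the standard normal from an exponential envelope (the subregion approach improves the acceptance rate):

    ```python
    # Sketch: acceptance-rejection sampling of the standard normal using an
    # Exp(1) envelope for the half-normal, then a random sign.
    import math
    import random

    def normal_by_rejection(rng=random):
        while True:
            y = rng.expovariate(1.0)              # candidate from Exp(1)
            # accept with probability f(y) / (M g(y)) = exp(-(y - 1)^2 / 2)
            if rng.random() <= math.exp(-0.5 * (y - 1.0) ** 2):
                return y if rng.random() < 0.5 else -y

    samples = [normal_by_rejection() for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    var = sum(s * s for s in samples) / len(samples) - mean ** 2
    print(f"mean = {mean:.3f}, variance = {var:.3f}")  # approx. 0 and 1
    ```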

  5. Mast cell distribution in normal adult skin

    NARCIS (Netherlands)

    A.S. Janssens (Artiena Soe); R. Heide (Rogier); J.C. den Hollander (Jan); P.G.M. Mulder (P. G M); B. Tank (Bhupendra); A.P. Oranje (Arnold)

    2005-01-01

AIMS: To investigate mast cell distribution in normal adult skin to provide a reference range for comparison with mastocytosis. METHODS: Mast cells (MCs) were counted in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders in adults.

  6. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2018-02-26

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down the theoretical foundations for subsequent inference with this model. In particular, we study linear transformations, marginal distributions, selection representations, stochastic representations and hierarchical representations. We also describe an EM-type algorithm for maximum likelihood estimation of the parameters of the model and demonstrate its implementation on a wind dataset. Our family of multivariate distributions unifies and extends many existing models of the literature that can be seen as submodels of our proposal.

  7. Ventilation-perfusion distribution in normal subjects.

    Science.gov (United States)

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.

  8. Evaluation of the Weibull and log normal distribution functions as survival models of Escherichia coli under isothermal and non isothermal conditions.

    Science.gov (United States)

    Aragao, Glaucia M F; Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2007-11-01

Published survival curves of Escherichia coli in two growth media, with and without the presence of salt, at various temperatures and in a Greek eggplant salad having various levels of essential oil, all had a characteristic downward concavity when plotted on semi-logarithmic coordinates. Some also exhibited what appeared to be a 'shoulder' of considerable length. Regardless of whether a shoulder was noticed, the survival pattern could be considered a manifestation of an underlying unimodal distribution of the cells' death times. Mathematically, the data could be described equally well by the Weibull and log-normal distribution functions, which had similar modes, means, standard deviations and coefficients of skewness. When plotted in their probability density function (PDF) form, the curves also appeared very similar visually. This enabled us to quantify and compare the effect of temperature or essential oil concentration on the organism's survival in terms of these temporal distributions' characteristics. Increased lethality was generally expressed in a shorter mean and mode, a smaller standard deviation and increased overall symmetry as judged by the distributions' degree of skewness. The 'shoulder', as expected, simply indicated that the distribution's standard deviation was much smaller than its mode. Rate models based on the two distribution functions could be used to predict non-isothermal survival patterns. They were derived on the assumption that the momentary inactivation rate is the isothermal rate, at the momentary temperature, at the time that corresponds to the momentary survival ratio. In this application, however, the Weibullian model with a fixed power was not only simpler and more convenient mathematically than the one based on the log-normal distribution, but it also provided more accurate estimates of the dynamic inactivation patterns.
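
    A minimal sketch of the model comparison described above: fitting Weibullian and log-normal survival functions to a downward-concave semi-logarithmic survival curve. The data points below are synthetic, not the paper's E. coli measurements.

    ```python
    # Sketch: fitting Weibull and log-normal survival models to log10 survival
    # data, as in the paper's comparison. The "data" here are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def weibull_logS(t, b, n):          # log10 S(t) = -(t/b)^n / ln(10)
        return -((t / b) ** n) / np.log(10)

    def lognormal_logS(t, mu, sigma):   # S(t) = 1 - Phi((ln t - mu) / sigma)
        return np.log10(norm.sf((np.log(t) - mu) / sigma))

    t = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)              # minutes
    logS = np.array([-0.05, -0.15, -0.55, -1.1, -1.8, -2.6, -3.5])  # synthetic

    (pw, _), (pl, _) = (curve_fit(weibull_logS, t, logS, p0=[5, 1.5]),
                        curve_fit(lognormal_logS, t, logS, p0=[1.5, 0.5]))
    print("Weibull   b=%.2f n=%.2f" % tuple(pw))
    print("Lognormal mu=%.2f sigma=%.2f" % tuple(pl))
    ```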

  9. Mast cell distribution in normal adult skin.

    Science.gov (United States)

    Janssens, A S; Heide, R; den Hollander, J C; Mulder, P G M; Tank, B; Oranje, A P

    2005-03-01

To investigate mast cell distribution in normal adult skin to provide a reference range for comparison with mastocytosis. Mast cells (MCs) were counted in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders in adults. There was an uneven distribution of MCs in different body sites using the anti-tryptase monoclonal antibody technique. Numbers of MCs on the trunk, upper arm, and upper leg were similar, but were significantly different from those found on the lower leg and forearm. Two distinct groups were formed--proximal and distal. There were 77.0 MCs/mm² at proximal body sites and 108.2 MCs/mm² at distal sites. Adjusted for the adjacent diagnosis and age, this difference was consistent. The numbers of MCs in uninvolved skin adjacent to basal cell carcinomas and other dermatological disorders were not different from those in the control group. Differences in the numbers of MCs between the distal and the proximal body sites must be considered when MCs are counted for a reliable diagnosis of mastocytosis. A pilot study in patients with mastocytosis underlined the variation in the numbers of MCs in mastocytosis and normal skin, but showed a considerable overlap. The observed numbers of MCs in adults cannot be extrapolated to children. MC numbers varied significantly between proximal and distal body sites and these differences must be considered when MCs are counted for a reliable diagnosis of mastocytosis. There was a considerable overlap between the numbers of MCs in mastocytosis and normal skin.

  10. Radiation distribution sensing with normal optical fiber

    Energy Technology Data Exchange (ETDEWEB)

    Kawarabayashi, Jun; Mizuno, Ryoji; Naka, Ryotaro; Uritani, Akira; Watanabe, Ken-ichi; Iguchi, Tetsuo [Nagoya Univ., Dept. of Nuclear Engineering, Nagoya, Aichi (Japan); Tsujimura, Norio [Japan Nuclear Cycle Development Inst., Tokai Works, Tokai, Ibaraki (Japan)

    2002-12-01

The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of the position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles (⁹⁰Sr-⁹⁰Y), gamma rays (¹³⁷Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and D-T neutrons were 0.11%, 1.6×10⁻⁵% and 5.4×10⁻⁴%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depended on the position of the irradiated point. This shows that radiation distributions can be calculated from the spectrum by a mathematical deconvolution technique. (author)

  11. Radiation distribution sensing with normal optical fiber

    International Nuclear Information System (INIS)

    Kawarabayashi, Jun; Mizuno, Ryoji; Naka, Ryotaro; Uritani, Akira; Watanabe, Ken-ichi; Iguchi, Tetsuo; Tsujimura, Norio

    2002-01-01

The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of the position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles (⁹⁰Sr-⁹⁰Y), gamma rays (¹³⁷Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and D-T neutrons were 0.11%, 1.6×10⁻⁵% and 5.4×10⁻⁴%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depended on the position of the irradiated point. This shows that radiation distributions can be calculated from the spectrum by a mathematical deconvolution technique. (author)

  12. Radiation distribution sensing with normal optical fiber

    CERN Document Server

    Kawarabayashi, J; Naka, R; Uritani, A; Watanabe, K I; Iguchi, T; Tsujimura, N

    2002-01-01

The purpose of this study is to develop a radiation distribution monitor using a normal plastic optical fiber. The monitor has a long operating length (10 m-100 m) and can obtain continuous radiation distributions. The principle of the position sensing is based on a time-of-flight technique. The characteristics of this monitor for beta particles, gamma rays and fast neutrons were obtained. The spatial resolutions for beta particles (⁹⁰Sr-⁹⁰Y), gamma rays (¹³⁷Cs) and D-T neutrons were 30 cm, 37 cm and 13 cm, respectively. The detection efficiencies for the beta rays, the gamma rays and D-T neutrons were 0.11%, 1.6×10⁻⁵% and 5.4×10⁻⁴%, respectively. The effective attenuation length of the detection efficiency was 18 m. A new principle of position sensing based on spectroscopic analysis was also proposed. A preliminary test showed that the spectrum observed at the end of the fiber depended on the position of the irradiated point. This shows that radiation distributions can be calculated from the spectrum by a mathematical deconvolution technique.

  13. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

A method for correlated random sampling is presented. Representative samples for the multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include the sampling of resonance parameters which are used for reactor calculations.
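
    A minimal sketch of the core idea, assuming correlations are specified on the underlying normal (log) scale; the paper's method additionally controls the accuracy of the induced correlations.

    ```python
    # Sketch: correlated sampling for jointly normal and log-normal variables
    # via a Cholesky factor of the covariance on the normal (log) scale.
    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([0.0, 1.0, -0.5])
    cov = np.array([[1.0, 0.6, 0.2],
                    [0.6, 2.0, 0.4],
                    [0.2, 0.4, 0.5]])

    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((100_000, 3))
    x = mu + z @ L.T            # correlated multivariate normal sample
    y = np.exp(x)               # exponentiate any coordinate to get a log-normal

    print("sample correlation (normal scale):\n",
          np.corrcoef(x, rowvar=False).round(2))
    ```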

  14. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

Arellano-Valle, Reinaldo B.; Ferreira, Clécio S.; Genton, Marc G.

    2018-01-01

We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down the theoretical foundations for subsequent inference with this model.

  15. Determining Normal-Distribution Tolerance Bounds Graphically

    Science.gov (United States)

    Mezzacappa, M. A.

    1983-01-01

The graphical method requires calculations and a table lookup. The distribution is established from only three points: the mean's upper and lower confidence bounds and the lower confidence bound of the standard deviation. The method requires only a few calculations with simple equations. The graphical procedure establishes a best-fit line for the measured data and bounds for the selected confidence level and any distribution percentile.

  16. The exp-normal distribution is infinitely divisible

    OpenAIRE

    Pinelis, Iosif

    2018-01-01

Let $Z$ be a standard normal random variable (r.v.). It is shown that the distribution of the r.v. $\ln|Z|$ is infinitely divisible; equivalently, the standard normal distribution considered as the distribution on the multiplicative group over $\mathbb{R}\setminus\{0\}$ is infinitely divisible.

  17. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

The paper discusses the nature of concentration distributions of trace elements in biomedical samples, which were measured using X-ray fluorescence techniques (XRF, TXRF). Our earlier observation that the lognormal distribution describes the measured concentration distributions well is explained here on more general grounds. In particular, the role of the random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the lognormal distribution, which appears when the multiplicative process is driven by a normal distribution, can be generalized to the so-called log-stable distribution. Such a distribution describes a random multiplicative process which is driven, instead of by a normal distribution, by the more general stable distribution, known as Levy flights. The presented ideas are exemplified by the results of a study of trace element concentration distributions in selected biomedical samples, obtained using the conventional (XRF) and total-reflection (TXRF) X-ray fluorescence methods. In particular, the first observation of a log-stable concentration distribution of trace elements is reported and discussed here in detail.
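
    As a sketch of the log-stable generalization mentioned above, one can exponentiate an alpha-stable variate (the lognormal case corresponds to alpha = 2). The parameters below are illustrative.

    ```python
    # Sketch: sampling a log-stable concentration distribution as the
    # exponential of an alpha-stable variate.
    import numpy as np
    from scipy.stats import levy_stable

    alpha, beta = 1.7, 1.0          # stability and skewness (illustrative)
    stable = levy_stable.rvs(alpha, beta, loc=0.0, scale=0.3,
                             size=10_000, random_state=1)
    concentrations = np.exp(stable)     # log-stable sample
    print("median concentration:", np.median(concentrations))
    ```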

  18. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    Science.gov (United States)

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…

  19. Statistical properties of the normalized ice particle size distribution

    Science.gov (United States)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation imply that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N0* and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N0* and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N0* and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean maximum dimension diameter-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000

  20. Normal distribution of standing balance for healthy Danish children

    DEFF Research Database (Denmark)

    Pedersen, Line Kjeldgaard; Ghasemi, Habib; Rahbek, Ole

    2013-01-01

Title: Normal distribution of standing balance for healthy Danish children – Reproducibility of parameters of balance. Authors: Line Kjeldgaard Pedersen, Habib Ghasemi, Ole Rahbek, Bjarne Møller-Madsen. Background: Pedobarographic measurements are increasingly used in children...

  1. The retest distribution of the visual field summary index mean deviation is close to normal.

    Science.gov (United States)

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed.
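
    A minimal sketch mirroring the paper's normality test, on simulated rather than real Humphrey Field Analyzer MD values:

    ```python
    # Sketch: Shapiro-Wilk test for normality of retest values of a summary
    # index. The MD values are simulated, not real visual field data.
    import numpy as np
    from scipy.stats import shapiro, kurtosis

    rng = np.random.default_rng(42)
    md = rng.normal(loc=-1.5, scale=0.4, size=40)   # 40 retest MD values (dB)

    stat, p = shapiro(md)
    print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")  # p > 0.05: no evidence
    print(f"excess kurtosis = {kurtosis(md):.2f}")      # against normality
    ```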

  2. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

The truncated normal distribution is applied as a time-to-failure distribution in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
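
    The age dependence that the truncated normal brings to reliability work is visible in its hazard rate h(t) = f(t)/S(t); a sketch with illustrative parameters:

    ```python
    # Sketch: age-dependent hazard rate of a truncated normal time-to-failure
    # distribution (truncated at t = 0). Parameters are illustrative.
    import numpy as np
    from scipy.stats import truncnorm

    mu, sigma = 1000.0, 300.0          # hours
    a = (0.0 - mu) / sigma             # lower truncation at t = 0, no upper bound
    life = truncnorm(a, np.inf, loc=mu, scale=sigma)

    for t in (200.0, 600.0, 1000.0, 1400.0):
        hazard = life.pdf(t) / life.sf(t)   # h(t) = f(t) / S(t), increases with age
        print(f"t = {t:6.0f} h: hazard = {hazard:.5f} per hour")
    ```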

  3. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

When the pertinent parameter involved in a reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior of (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior and the sampling data. Computing methods are described and examples are presented to give demonstrations.
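
    The conjugate update described above is standard; a minimal sketch with h the precision of the normal model follows. The prior hyperparameters are illustrative; the paper derives them by maximum entropy and moment equivalence.

    ```python
    # Sketch: conjugate Normal-Gamma posterior update for (mu, h), where h is
    # the precision of a normal model. Hyperparameters are illustrative.
    import numpy as np

    def normal_gamma_update(data, mu0, kappa0, alpha0, beta0):
        n, xbar = len(data), np.mean(data)
        ss = np.sum((np.asarray(data) - xbar) ** 2)
        kappa_n = kappa0 + n
        mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
        alpha_n = alpha0 + n / 2.0
        beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
        return mu_n, kappa_n, alpha_n, beta_n

    data = [9.8, 10.4, 10.1, 9.6, 10.2]
    mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(data, 10.0, 1.0, 2.0, 1.0)
    print(f"posterior mean of mu = {mu_n:.3f}")
    print(f"posterior mean of precision h = {alpha_n / beta_n:.3f}")
    ```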

  4. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

Unlike the uniform-density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities, which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining the normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
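
    For reference, one conventional choice of full normalization factor for associated Legendre functions in gravity-field work is sketched below; the paper develops a general method for defining such parameters per algorithm.

    ```python
    # Sketch: the full normalization factor commonly applied to associated
    # Legendre functions in gravity-field models,
    # N(l, m) = sqrt((2 - delta_{0m}) (2l + 1) (l - m)! / (l + m)!).
    # This is one conventional choice, not the paper's general method.
    import math

    def alf_normalization(l, m):
        delta = 1.0 if m == 0 else 0.0
        return math.sqrt((2.0 - delta) * (2 * l + 1)
                         * math.factorial(l - m) / math.factorial(l + m))

    for l, m in [(2, 0), (2, 1), (2, 2), (10, 5)]:
        print(f"N({l},{m}) = {alf_normalization(l, m):.6e}")
    ```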

  5. Modelling normal tissue isoeffect distribution in conformal radiotherapy of glioblastoma provides an alternative dose escalation pattern through hypofractionation without reducing the total dose

    International Nuclear Information System (INIS)

    Mangel, L.; Skriba, Z.; Major, T.; Polgar, C.; Fodor, J.; Somogyi, A.; Nemeth, G.

    2002-01-01

The purpose of this study was to prove that with conformal external beam radiotherapy (RT), normal brain structures can be protected even when applying an alternative approach to biological dose escalation: hypofractionation (HOF) without total dose reduction (TDR). Traditional 2-dimensional (2D) and conformal 3-dimensional (3D) treatment plans were prepared for 10 gliomas representing the subanatomical sites of the supratentorial brain. Isoeffect distributions were generated with the biologically effective dose (BED) formula to analyse the effect of conventionally fractionated (CF) and HOF schedules on both the spatial biological dose distribution and biological dose-volume histograms. A comparison was made between 2D-CF (2.0 Gy/day) and 3D-HOF (2.5 Gy/day) regimens, applying the same 60 Gy total dose. The integral biologically effective dose (IBED) and the volumes receiving a biologically effective dose of 54 Gy or more (V-BED54) were calculated for the lower and upper brain stem as organs at risk. The IBED values were lower with the 3D-HOF than with the 2D-CF schedule in each tumour location, with means of 22.7±17.1 Gy and 40.4±16.9 Gy, respectively (p<0.0001). The V-BED54 values were also smaller or equal in 90% of the cases, favouring the 3D-HOF scheme; the means were 2.7±4.8 ccm for 3D-HOF and 10.7±12.7 ccm for 2D-CF (p=0.0006). Our results suggest that with conformal RT, fraction size can gradually be increased. HOF radiotherapy regimens without TDR shorten the treatment time and seem to be an alternative way of dose escalation in the treatment of glioblastoma.
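
    The isoeffect comparison above can be illustrated arithmetically with the BED formula, BED = n·d·(1 + d/(α/β)), applied to the two 60 Gy schedules; the α/β values below are common textbook assumptions, not taken from the paper.

    ```python
    # Sketch: BED for the two schedules, both delivering 60 Gy total:
    # CF 30 x 2.0 Gy vs. HOF 24 x 2.5 Gy. alpha/beta values are assumptions.
    def bed(n_fractions, dose_per_fraction, alpha_beta):
        return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

    for ab, tissue in ((10.0, "tumour"), (2.0, "late-responding normal tissue")):
        print(f"alpha/beta = {ab:4.1f} Gy ({tissue}):"
              f"  CF 30 x 2.0 Gy -> BED {bed(30, 2.0, ab):5.1f} Gy,"
              f"  HOF 24 x 2.5 Gy -> BED {bed(24, 2.5, ab):5.1f} Gy")
    ```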

  6. Modelling normal tissue isoeffect distribution in conformal radiotherapy of glioblastoma provides an alternative dose escalation pattern through hypofractionation without reducing the total dose

    Energy Technology Data Exchange (ETDEWEB)

    Mangel, L.; Skriba, Z.; Major, T.; Polgar, C.; Fodor, J.; Somogyi, A.; Nemeth, G. [National Research Inst. for Radiobiology and Radiohygiene, Budapest (Hungary)

    2002-04-01

The purpose of this study was to prove that with conformal external beam radiotherapy (RT), normal brain structures can be protected even when applying an alternative approach to biological dose escalation: hypofractionation (HOF) without total dose reduction (TDR). Traditional 2-dimensional (2D) and conformal 3-dimensional (3D) treatment plans were prepared for 10 gliomas representing the subanatomical sites of the supratentorial brain. Isoeffect distributions were generated with the biologically effective dose (BED) formula to analyse the effect of conventionally fractionated (CF) and HOF schedules on both the spatial biological dose distribution and biological dose-volume histograms. A comparison was made between 2D-CF (2.0 Gy/day) and 3D-HOF (2.5 Gy/day) regimens, applying the same 60 Gy total dose. The integral biologically effective dose (IBED) and the volumes receiving a biologically effective dose of 54 Gy or more (V-BED54) were calculated for the lower and upper brain stem as organs at risk. The IBED values were lower with the 3D-HOF than with the 2D-CF schedule in each tumour location, with means of 22.7±17.1 Gy and 40.4±16.9 Gy, respectively (p<0.0001). The V-BED54 values were also smaller or equal in 90% of the cases, favouring the 3D-HOF scheme; the means were 2.7±4.8 ccm for 3D-HOF and 10.7±12.7 ccm for 2D-CF (p=0.0006). Our results suggest that with conformal RT, fraction size can gradually be increased. HOF radiotherapy regimens without TDR shorten the treatment time and seem to be an alternative way of dose escalation in the treatment of glioblastoma.

  7. A novel generalized normal distribution for human longevity and other negatively skewed data.

    Science.gov (United States)

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach: a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.

  8. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  9. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  10. Multivariate stochastic simulation with subjective multivariate normal distributions

    Science.gov (United States)

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
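
    The standard way to implement such correlated sampling is the Cholesky construction; the sketch below uses hypothetical means and a hypothetical covariance matrix, not figures from the report.

        import numpy as np

        rng = np.random.default_rng(1)

        mean = np.array([100.0, 50.0])          # hypothetical input means
        cov = np.array([[25.0, 15.0],
                        [15.0, 16.0]])          # correlation 15/(5*4) = 0.75

        # If z ~ N(0, I) and L is the Cholesky factor of cov,
        # then mean + L z ~ N(mean, cov).
        L = np.linalg.cholesky(cov)
        z = rng.standard_normal((10_000, 2))
        samples = mean + z @ L.T

        print(np.corrcoef(samples, rowvar=False))   # off-diagonal ~ 0.75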

  11. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…
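
    The geometric facts a precise sketch must respect are easy to check numerically: the peak height is 1/(σ√(2π)), and the inflection points sit at μ ± σ, where the height has fallen to exp(-1/2) ≈ 0.61 of the peak. A short sketch with illustrative parameters:

        import numpy as np

        mu, sigma = 10.0, 2.0                    # illustrative parameters

        def pdf(x):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        print("peak height:", pdf(mu))                                      # 1/(sigma*sqrt(2*pi))
        print("height ratio at mu +/- sigma:", pdf(mu + sigma) / pdf(mu))   # ~0.607

        # Curvature changes sign at the inflection point mu + sigma
        x = np.array([mu + 0.9 * sigma, mu + 1.1 * sigma])
        h = 1e-4
        second = (pdf(x + h) - 2 * pdf(x) + pdf(x - h)) / h ** 2
        print("second derivative just inside/outside mu+sigma:", second)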

  12. Improved Root Normal Size Distributions for Liquid Atomization

    Science.gov (United States)

    2015-11-01


  13. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  14. Evaluating Transfer Entropy for Normal and y-Order Normal Distributions

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková-Schindler, Kateřina; Toulias, T. L.; Kitsos, C. P.

    2016-01-01

    Vol. 17, No. 5 (2016), pp. 1-20. ISSN 2231-0851. Institutional support: RVO:67985556. Keywords: Transfer entropy * time series * Kullback-Leibler divergence * causality * generalized normal distribution. Subject RIV: BC - Control Systems Theory. http://library.utia.cas.cz/separaty/2016/AS/hlavackova-schindler-0461261.pdf

  15. The Weight of Euro Coins: Its Distribution Might Not Be as Normal as You Would Expect

    Science.gov (United States)

    Shkedy, Ziv; Aerts, Marc; Callaert, Herman

    2006-01-01

    Classical regression models, ANOVA models and linear mixed models are just three examples (out of many) in which the normal distribution of the response is an essential assumption of the model. In this paper we use a dataset of 2000 euro coins containing information (up to the milligram) about the weight of each coin, to illustrate that the…
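
    A first diagnostic for such a dataset is a formal normality test on the pooled weights. The sketch below uses synthetic stand-in data (a mixture of two mints with slightly different means), not the actual euro-coin dataset, to show how a pooled sample can fail a normality test even when each subgroup is normal.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical two-mint mixture; each subgroup is itself normal.
        weights = np.concatenate([rng.normal(7.52, 0.03, 1000),
                                  rng.normal(7.48, 0.03, 1000)])

        w_stat, p_value = stats.shapiro(weights)
        print(f"Shapiro-Wilk W = {w_stat:.4f}, p = {p_value:.3g}")
        # A small p rejects normality for the pooled sample.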

  16. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
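
    The consequence in case (1) is easy to reproduce. The sketch below, with arbitrary lognormal parameters, compares the exact 97.5th percentile of a lognormal population with the "mean + 1.96 standard deviations" estimate that a normality assumption would produce.

        import numpy as np

        rng = np.random.default_rng(42)
        x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # truly lognormal data

        true_p975 = np.exp(0.0 + 1.96 * 1.0)      # exact lognormal 97.5th percentile
        normal_p975 = x.mean() + 1.96 * x.std()   # normal-assumption estimate

        print("true percentile:            ", round(true_p975, 2))    # ~7.10
        print("normal-assumption estimate: ", round(normal_p975, 2))  # ~5.9, biased low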

  17. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  18. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained by the backward transformation of the standard normal ...
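
    A minimal sketch of the forward direction, fitting a Johnson curve and transforming to approximate normality, using SciPy's Johnson SU family on synthetic skewed data. The transform z = a + b·asinh((x − loc)/scale) is SciPy's parameterization; the article's percentile-based parameter estimation is not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        x = rng.gamma(shape=2.0, scale=3.0, size=5000)   # skewed "empirical" data

        # Fit a Johnson SU curve; its defining transform maps x toward N(0, 1).
        a, b, loc, scale = stats.johnsonsu.fit(x)
        z = a + b * np.arcsinh((x - loc) / scale)

        print("skewness before/after:", stats.skew(x), stats.skew(z))
        print("Shapiro-Wilk p after transform:", stats.shapiro(z[:500])[1])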

  19. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management field. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approach in the literature, which only uses centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  20. Distributive justice and cognitive enhancement in lower, normal intelligence.

    Science.gov (United States)

    Dunlop, Mikael; Savulescu, Julian

    2014-01-01

    There exists a significant disparity within society between individuals in terms of intelligence. While intelligence varies naturally throughout society, the extent to which this impacts on the life opportunities it affords to each individual is greatly undervalued. Intelligence appears to have a prominent effect over a broad range of social and economic life outcomes. Many key determinants of well-being correlate highly with the results of IQ tests, and other measures of intelligence, and an IQ of 75 is generally accepted as the most important threshold in modern life. The ability to enhance our cognitive capacities offers an exciting opportunity to correct disabling natural variation and inequality in intelligence. Pharmaceutical cognitive enhancers, such as modafinil and methylphenidate, have been shown to have the capacity to enhance cognition in normal, healthy individuals. Perhaps of most relevance is the presence of an 'inverted U effect' for most pharmaceutical cognitive enhancers, whereby the degree of enhancement increases as intelligence levels deviate further below the mean. Although enhancement, including cognitive enhancement, has been much debated recently, we argue that there are egalitarian reasons to enhance individuals with low but normal intelligence. Under egalitarianism, cognitive enhancement has the potential to reduce opportunity inequality and contribute to relative income and welfare equality in the lower, normal intelligence subgroup. Cognitive enhancement use is justifiable under prioritarianism through various means of distribution; selective access to the lower, normal intelligence subgroup, universal access, or paradoxically through access primarily to the average and above average intelligence subgroups. Similarly, an aggregate increase in social well-being is achieved through similar means of distribution under utilitarianism. In addition, the use of cognitive enhancement within the lower, normal intelligence subgroup negates, or at

  1. A general approach to double-moment normalization of drop size distributions

    Science.gov (United States)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of scaling normalization using one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization using two moments as parameters of the normalization. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. Thus, a unified view of the question of DSD normalization and a good model representation of DSDs is given. Data analysis shows that, from the point of view of moment estimation, least squares regression is slightly more effective than moment estimation from the normalized average DSD.
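
    The Testud et al. (2001) special case can be written down directly: scale the diameter by Dm = M4/M3 and the concentration by Nw = (4⁴/6)·M3/Dm⁴, where Mn is the nth moment of the DSD. The sketch below applies this to a hypothetical gamma-shaped spectrum; the spectrum itself is illustrative.

        import numpy as np

        D = np.linspace(0.1, 6.0, 600)          # drop diameter, mm
        N = 8000.0 * D**2 * np.exp(-2.5 * D)    # hypothetical DSD, m^-3 mm^-1
        dD = D[1] - D[0]

        def moment(n):
            return np.sum(D**n * N) * dD        # simple rectangle-rule moment

        Dm = moment(4) / moment(3)               # mass-weighted mean diameter
        Nw = (4.0**4 / 6.0) * moment(3) / Dm**4  # normalized intercept parameter

        x = D / Dm                              # scaled diameter
        h = N / Nw                              # normalized spectrum h(x)
        print("Dm =", round(Dm, 3), "mm; Nw =", round(Nw, 1),
              "; h(1) =", round(float(np.interp(1.0, x, h)), 4))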

  2. Computer modeling the boron compound factor in normal brain tissue

    International Nuclear Information System (INIS)

    Gavin, P.R.; Huiskamp, R.; Wheeler, F.J.; Griebenow, M.L.

    1993-01-01

    The macroscopic distribution of borocaptate sodium (Na₂B₁₂H₁₁SH or BSH) in normal tissues has been determined and can be accurately predicted from the blood concentration. The compound para-borono-phenylalanine (p-BPA) has also been studied in dogs and normal tissue distribution has been determined. The total physical dose required to reach a biological isoeffect appears to increase directly as the proportion of boron capture dose increases. This effect, together with knowledge of the macrodistribution, led to estimates of the influence of the microdistribution of the BSH compound. This paper reports a computer model that was used to predict the compound factor for BSH and p-BPA and, hence, the equivalent radiation in normal tissues. The compound factor would need to be calculated for other compounds with different distributions. This information is needed to design appropriate normal tissue tolerance studies for different organ systems and/or different boron compounds.

  3. Bounding species distribution models

    Directory of Open Access Journals (Sweden)

    Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE

    2011-10-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57(5): 642-647, 2011].

  4. Bounding Species Distribution Models

    Science.gov (United States)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
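
    Operationally, the bounding alteration tested in these two records amounts to clipping each environmental predictor to its training-data range before projecting the model. A minimal sketch with hypothetical predictor values (the most conservative variant discussed would instead mask such cells as unsuitable):

        import numpy as np

        # Hypothetical training data for two predictors, e.g. temperature (C)
        # and annual precipitation (mm).
        train = np.array([[12.0,  200.0],
                          [30.0, 1500.0],
                          [22.0,  800.0]])
        lo, hi = train.min(axis=0), train.max(axis=0)

        # Cells to project onto, some outside the training envelope.
        landscape = np.array([[ 8.0,  250.0],    # temperature below training min
                              [25.0, 2100.0],    # precipitation above training max
                              [20.0,  900.0]])

        clamped = np.clip(landscape, lo, hi)     # bounded extrapolation inputs
        print(clamped)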

  5. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering Based on the Newly Developed Self-consistent RC/EMIC Waves Model by Khazanov et al. [2006]

    Science.gov (United States)

    Khazanov, G. V.; Gallagher, D. L.; Gamayunov, K.

    2007-01-01

    It is well known that the effects of EMIC waves on RC ion and RB electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wave-normal angle, wave energy, and the form of wave spectral energy density. Therefore, realistic characteristics of EMIC waves should be properly determined by modeling the RC-EMIC wave evolution self-consistently. Such a self-consistent model has been progressively developed by Khazanov et al. [2002-2006]. It solves a system of two coupled kinetic equations: one equation describes the RC ion dynamics and the other describes the energy density evolution of EMIC waves. Using this model, we present the effectiveness of relativistic electron scattering and compare our results with previous work in this area of research.

  6. Distribution of normal superficial ocular vessels in digital images.

    Science.gov (United States)

    Banaee, Touka; Ehsaei, Asieh; Pourreza, Hamidreza; Khajedaluee, Mohammad; Abrishami, Mojtaba; Basiri, Mohsen; Daneshvar Kakhki, Ramin; Pourreza, Reza

    2014-02-01

    To investigate the distribution of different-sized vessels in the digital images of the ocular surface, an endeavor which may provide useful information for future studies. This study included 295 healthy individuals. From each participant, four digital photographs of the superior and inferior conjunctivae of both eyes, with a fixed succession of photography (right upper, right lower, left upper, left lower), were taken with a slit lamp mounted camera. Photographs were then analyzed by a previously described algorithm for vessel detection in the digital images. The area (of the image) occupied by vessels (AOV) of different sizes was measured. Height, weight, fasting blood sugar (FBS) and hemoglobin levels were also measured and the relationship between these parameters and the AOV was investigated. These findings indicated a statistically significant difference in the distribution of the AOV among the four conjunctival areas. No significant correlations were noted between the AOV of each conjunctival area and the different demographic and biometric factors. Medium-sized vessels were the most abundant vessels in the photographs of the four investigated conjunctival areas. The AOV of the different sizes of vessels follows a normal distribution curve in the four areas of the conjunctiva. The distribution of the vessels in successive photographs changes in a specific manner, with the mean AOV becoming larger as the photos were taken from the right upper to the left lower area. The AOV of vessel sizes has a normal distribution curve and medium-sized vessels occupy the largest area of the photograph. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  7. The normal distribution of thoracoabdominal aorta small branch artery ostia

    International Nuclear Information System (INIS)

    Cronin, Paul; Williams, David M.; Vellody, Ranjith; Kelly, Aine Marie; Kazerooni, Ella A.; Carlos, Ruth C.

    2011-01-01

    The purpose of this study was to determine the normal distribution of aortic branch artery ostia. CT scans of 100 subjects were retrospectively reviewed. The angular distributions of the aorta with respect to the center of the T3 to L4 vertebral bodies, and of branch artery origins with respect to the center of the aorta were measured. At each vertebral body level the distribution of intercostal/lumbar arteries and other branch arteries were calculated. The proximal descending aorta is posteriorly placed becoming a midline structure, at the thoracolumbar junction, and remains anterior to the vertebral bodies within the abdomen. The intercostal and lumbar artery ostia have a distinct distribution. At each vertebral level from T3 caudally, one intercostal artery originates from the posterior wall of the aorta throughout the thoracic aorta, while the other intercostal artery originates from the medial wall of the descending thoracic aorta high in the chest, posteromedially from the mid-thoracic aorta, and from the posterior wall of the aorta low in the chest. Mediastinal branches of the thoracic aorta originate from the medial and anterior wall. Lumbar branches originate only from the posterior wall of the abdominal aorta. Aortic branch artery origins arise with a bimodal distribution and have a characteristic location. Mediastinal branches of the thoracic aorta originate from the medial and anterior wall. Knowing the location of aortic branch artery ostia may help distinguish branch artery pseudoaneurysms from penetrating ulcers.

  8. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Vaginal drug distribution modeling.

    Science.gov (United States)

    Katz, David F; Yuan, Andrew; Gao, Yajing

    2015-09-15

    This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. New Riemannian Priors on the Univariate Normal Model

    Directory of Open Access Journals (Sweden)

    Salem Said

    2014-07-01

    The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {p_θ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.

  11. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon

    2011-08-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew-normal distributions. In particular, we describe the characteristic function of skew-normal, skew-t, and other related distributions. © 2011 Elsevier Inc.

  12. The PDF of fluid particle acceleration in turbulent flow with underlying normal distribution of velocity fluctuations

    International Nuclear Information System (INIS)

    Aringazin, A.K.; Mazhitov, M.I.

    2003-01-01

    We describe a formal procedure to obtain and specify the general form of a marginal distribution for the Lagrangian acceleration of a fluid particle in developed turbulent flow, using a Langevin-type equation and the assumption that the velocity fluctuation u follows a normal distribution with zero mean, in accord with the Heisenberg-Yaglom picture. For a particular representation, β = exp[u], of the fluctuating parameter β, we reproduce the underlying log-normal distribution and the associated marginal distribution, which was found to be in very good agreement with the new experimental data by Crawford, Mordant, and Bodenschatz on the acceleration statistics. We discuss possibilities for refining the log-normal model.

  13. Validation of MCDS by comparison of predicted with experimental velocity distribution functions in rarefied normal shocks

    Science.gov (United States)

    Pham-Van-diep, Gerald C.; Erwin, Daniel A.

    1989-01-01

    Velocity distribution functions in normal shock waves in argon and helium are calculated using Monte Carlo direct simulation. These are compared with experimental results for argon at M = 7.18 and for helium at M = 1.59 and 20. For both argon and helium, the variable-hard-sphere (VHS) model is used for the elastic scattering cross section, with the velocity dependence derived from a viscosity-temperature power-law relationship in the way normally used by Bird (1976).

  14. Basic study on radiation distribution sensing with normal optical fiber

    International Nuclear Information System (INIS)

    Naka, R.; Kawarabayashi, J.; Uritani, A.; Iguchi, T.; Kaneko, J.; Takeuchi, H.; Kakuta, T.

    2000-01-01

    Recently, some methods of radiation distribution sensing with optical fibers have been proposed. These methods employ scintillating fibers or scintillators with wavelength-shifting fibers. The positions of radiation interactions are detected by applying a time-of-flight (TOF) technique to the scintillation photon propagation. In the former method, the attenuation length for the scintillation photons in the scintillating fiber is relatively short, so that the operating length of the sensor is limited to several meters. In the latter method, a radiation distribution cannot be obtained continuously, only discretely. To improve these shortcomings, a normal optical fiber made of polymethyl methacrylate (PMMA) is used in this study. Although the scintillation efficiency of PMMA is very low, several photons are emitted through interaction with a radiation. The fiber is transparent to the emitted photons, allowing a relatively long operating length, and a radiation distribution can be obtained continuously. This paper describes the principle of the position sensing method based on the time-of-flight technique and preliminary results obtained for ⁹⁰Sr-⁹⁰Y beta rays, ¹³⁷Cs gamma rays, and 14 MeV neutrons. The spatial resolutions for the above three kinds of radiation are 0.30 m, 0.37 m, and 0.13 m, and the detection efficiencies are 1.1 × 10⁻³, 1.6 × 10⁻⁷, and 5.4 × 10⁻⁶, respectively, with a 10 m operating length. The results of a spectroscopic study on the optical properties of the fiber are also described. (author)

  15. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
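
    As a minimal sketch of one reviewed strategy, the Box-Cox route transforms the data and the specification limits with the same λ and then computes the conventional indices on the transformed scale. Data and limits below are synthetic; this is not the Burr-percentile method the paper favours.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        x = rng.gamma(4.0, 0.5, 500)         # skewed quality characteristic (synthetic)
        LSL, USL = 0.2, 5.0                  # hypothetical specification limits

        def cp_cpk(data, lsl, usl):
            mu, s = data.mean(), data.std(ddof=1)
            return (usl - lsl) / (6 * s), min(usl - mu, mu - lsl) / (3 * s)

        print("raw (normality assumed): Cp=%.2f, Cpk=%.2f" % cp_cpk(x, LSL, USL))

        y, lam = stats.boxcox(x)             # transform data, estimate lambda
        t = lambda v: np.log(v) if lam == 0 else (v**lam - 1) / lam
        print("after Box-Cox:           Cp=%.2f, Cpk=%.2f" % cp_cpk(y, t(LSL), t(USL)))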

  16. Normal distribution of ¹¹¹In chloride on scintigram

    Energy Technology Data Exchange (ETDEWEB)

    Oyama, K; Machida, K; Hayashi, S; Watari, T; Akaike, A

    1977-05-01

    Indium-111 chloride (¹¹¹InCl₃) was used as a bone marrow imaging and tumor-localizing agent in 38 patients (46 scintigrams), who were suspected of, or diagnosed as, having malignant disease, and who were irradiated for malignant disease. The regions of suspected malignant disease, of abnormal accumulation on scintigrams, and the irradiated target were excluded to estimate the normal distribution of ¹¹¹InCl₃. Scintigrams were taken 48 h after intravenous injection of 1 to 3 mCi of ¹¹¹InCl₃. The percent and score distributions of ¹¹¹InCl₃ were noted in 23 regions. As the liver showed the highest accumulation of ¹¹¹In on all scintigrams, the liver was designated as 2+. Compared with the radioactivity in the liver, other regions had similar (2+), moderately decreased (+), or severely decreased (-) accumulation on the scintigram. The score given is one for 2+, 0.5 for +, and 0 for -. The score and percentage distributions were: liver 100 (100%), lumbar vertebra 58.5 (100%), mediastinum 55 (100%), nasopharynx 50 (100%), testis 47.5 (59%), heart 44.5 (89%), and pelvis 43.5 (78%). Comparing this study with a previous study of ¹¹¹In-BLM, the score distributions in the lumbar vertebra, pelvis, and skull were similar. ¹¹¹In-BLM is excreted rapidly after injection, but little ¹¹¹InCl₃ is excreted. Accumulation of ¹¹¹In in bone marrow depends upon the amount of ¹¹¹In-transferrin in blood. High accumulation in the lumbar vertebra and pelvis shows that ¹¹¹InCl₃ would be effective as a bone marrow imaging agent.

  17. Dobinski-type relations and the log-normal distribution

    International Nuclear Information System (INIS)

    Blasiak, P; Penson, K A; Solomon, A I

    2003-01-01

    We consider sequences of generalized Bell numbers B(n), n = 1, 2, ..., which can be represented by Dobinski-type summation formulae, i.e. B(n) = (1/C) Σ_{k=0}^{∞} [P(k)]^n / D(k), with P(k) a polynomial, D(k) a function of k, and C = const. They include the standard Bell numbers (P(k) = k, D(k) = k!, C = e), their generalizations B_{r,r}(n), r = 2, 3, ..., appearing in the normal ordering of powers of boson monomials (P(k) = (k+r)!/k!, D(k) = k!, C = e), variants of 'ordered' Bell numbers B_o^(p)(n) (P(k) = k, D(k) = ((p+1)/p)^k, C = 1 + p, p = 1, 2, ...), etc. We demonstrate that for α, β, γ, t positive integers (α, t ≠ 0), [B(αn² + βn + γ)]^t is the nth moment of a positive function on (0, ∞) which is a weighted infinite sum of log-normal distributions. (letter to the editor)

  18. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is solved by penalizing the likelihood function. In the Bayesian framework, it amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test.
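
    A sketch of a penalized EM of this general kind: with an inverted gamma prior on each component variance, the M-step becomes a MAP update that keeps variances strictly positive. The hyperparameters a and b below are illustrative, and the update may differ in detail from the authors' derivation.

        import numpy as np

        def penalized_em(x, k=2, a=2.0, b=0.1, iters=200, seed=0):
            """EM for a univariate k-component normal mixture with an
            inverted-gamma penalty on each variance (MAP variance update)."""
            rng = np.random.default_rng(seed)
            n = len(x)
            w = np.full(k, 1.0 / k)
            mu = rng.choice(x, k, replace=False)
            var = np.full(k, x.var())
            for _ in range(iters):
                # E-step: responsibilities r[i, j] = P(component j | x_i)
                dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                       / np.sqrt(2 * np.pi * var)
                r = w * dens
                r /= r.sum(axis=1, keepdims=True)
                # M-step: the +2b / +2(a+1) terms are the penalty; with b > 0
                # the variance update can never reach zero (no singularities).
                nk = r.sum(axis=0)
                w = nk / n
                mu = (r * x[:, None]).sum(axis=0) / nk
                var = ((r * (x[:, None] - mu) ** 2).sum(axis=0) + 2 * b) \
                      / (nk + 2 * (a + 1))
            return w, mu, var

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
        print(penalized_em(x))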

  19. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  20. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  1. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.

  2. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook has analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, the test statistics of Cook are oversized. It has been found by researchers that using conventional tests is dangerous, though the best performance among these is an HCCME. The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and the results are reported for various sample sizes, in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained through nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken from a t-distribution for various sample sizes.

  3. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    Science.gov (United States)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when
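
    The λ-scan described here is straightforward to reproduce: apply the Box-Cox transform over a grid of λ values and keep the one maximizing the Shapiro-Wilk P-value. The SUVs below are synthetic stand-ins, not the study's data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        suv = rng.lognormal(1.0, 0.6, 80)          # synthetic stand-in for SUVmax

        def boxcox(x, lam):
            return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

        lambdas = np.linspace(-2.0, 2.0, 401)
        pvals = [stats.shapiro(boxcox(suv, lam))[1] for lam in lambdas]

        best = lambdas[int(np.argmax(pvals))]
        print("optimal lambda ~", round(best, 2),
              "| max Shapiro-Wilk p =", round(max(pvals), 3))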

  4. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time varying higher order moments of the risk neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared...

  5. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

    A simplified biokinetic model for 137Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with quite a lot of bioassay data. The distribution is found to be a multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin Hypercube sampling with a uniform distribution of parameters, and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling method-dependent differences are perceptible for the uniform distribution methods. (authors)
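
    The importance-sampling step can be illustrated generically: draw from the lognormal approximation and reweight by the ratio of target to proposal densities. The target and proposal below are arbitrary stand-ins for a one-dimensional slice of such a posterior, not the paper's distributions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)

        target = stats.gamma(a=3.0, scale=0.5)        # stand-in skewed "posterior"
        proposal = stats.lognorm(s=0.5, scale=1.4)    # lognormal approximation

        theta = proposal.rvs(5000, random_state=rng)
        w = target.pdf(theta) / proposal.pdf(theta)   # importance weights
        w /= w.sum()

        post_mean = np.sum(w * theta)
        ess = 1.0 / np.sum(w ** 2)                    # effective sample size
        print("posterior mean ~", round(post_mean, 3), "| ESS ~", round(ess))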

  6. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon; Genton, Marc G.

    2011-01-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew

  7. Visualizing Tensor Normal Distributions at Multiple Levels of Detail.

    Science.gov (United States)

    Abbasloo, Amin; Wiens, Vitalis; Hermann, Max; Schultz, Thomas

    2016-01-01

    Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.

  8. Retention and subcellular distribution of 67Ga in normal organs

    International Nuclear Information System (INIS)

    Ando, A.; Ando, I.; Hiraki, T.

    1986-01-01

    Using normal rats, retention values and the subcellular distribution of 67Ga in each organ were investigated. At 10 min after administration of 67Ga-citrate, the retention value of 67Ga in blood was 6.77% dose/g, and this value decreased with time. The values for skeletal muscle, lung, pancreas, adrenal, heart muscle, brain, small intestine, large intestine and spinal cord were highest at 10 min after administration, and they decreased with time. Conversely, the value in bone increased until 10 days after injection. But in the liver, kidney, and stomach, these values increased with time after administration and were highest 24 h or 48 h after injection; after that, they decreased with time. The value in the spleen reached a plateau 48 h after administration and hardly varied for 10 days. From the results of subcellular fractionation, it was deduced that the lysosome plays quite an important role in the concentration of 67Ga in the small intestine, stomach, lung, kidney and pancreas; a lesser role in its concentration in heart muscle; and hardly any role in the 67Ga accumulation in skeletal muscle. In the spleen, the contents of the nuclear, mitochondrial, microsomal, and supernatant fractions all contributed to the accumulation of 67Ga. (orig.)

  9. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    CERN Document Server

    Smolyar, V A; Eremin, V V

    2002-01-01

    In terms of a kinetic equation diffusion model for a beam of electrons falling on a target along the normal, analytical formulae are derived for the distributions of separated energy and injected charge. In this case, no empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for an electron plate directed source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the Spencer data derived from the accurate solution of the Bethe equation, which is the starting equation in the diffusion-model approximation.

  10. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    International Nuclear Information System (INIS)

    Smolyar, V.A.; Eremin, A.V.; Eremin, V.V.

    2002-01-01

    In terms of a kinetic equation diffusion model for a beam of electrons falling on a target along the normal, analytical formulae are derived for the distributions of separated energy and injected charge. In this case, no empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for an electron plate directed source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the Spencer data derived from the accurate solution of the Bethe equation, which is the starting equation in the diffusion-model approximation.

  11. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  12. Optimization of b-value distribution for biexponential diffusion-weighted MR imaging of normal prostate.

    Science.gov (United States)

    Jambor, Ivan; Merisaari, Harri; Aronen, Hannu J; Järvinen, Jukka; Saunavaara, Jani; Kauko, Tommi; Borra, Ronald; Pesola, Marko

    2014-05-01

    To determine the optimal b-value distribution for biexponential diffusion-weighted imaging (DWI) of normal prostate using both a computer modeling approach and in vivo measurements. Optimal b-value distributions for the fit of three parameters (fast diffusion Df, slow diffusion Ds, and fraction of fast diffusion f) were determined using Monte-Carlo simulations. The optimal b-value distribution was calculated using four individual optimization methods. Eight healthy volunteers underwent four repeated 3 Tesla prostate DWI scans using both 16 equally distributed b-values and an optimized b-value distribution obtained from the simulations. The b-value distributions were compared in terms of measurement reliability and repeatability using Shrout-Fleiss analysis. Using low noise levels, the optimal b-value distribution formed three separate clusters at low (0-400 s/mm2), mid-range (650-1200 s/mm2), and high b-values (1700-2000 s/mm2). Higher noise levels resulted into less pronounced clustering of b-values. The clustered optimized b-value distribution demonstrated better measurement reliability and repeatability in Shrout-Fleiss analysis compared with 16 equally distributed b-values. The optimal b-value distribution was found to be a clustered distribution with b-values concentrated in the low, mid, and high ranges and was shown to improve the estimation quality of biexponential DWI parameters of in vivo experiments. Copyright © 2013 Wiley Periodicals, Inc.
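
    The signal model being fitted is the biexponential S(b)/S0 = f·exp(−b·Df) + (1 − f)·exp(−b·Ds). The sketch below fits it on a clustered b-value set of the kind the optimization favours; the ground-truth parameter values are synthetic, not the study's estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(b, f, Df, Ds):
            return f * np.exp(-b * Df) + (1 - f) * np.exp(-b * Ds)

        # Clustered b-values (s/mm^2): low, mid-range, and high groups.
        b = np.array([0, 100, 200, 300, 400, 650, 900, 1200, 1700, 2000], float)

        rng = np.random.default_rng(9)
        signal = biexp(b, 0.25, 2.5e-3, 0.35e-3) + rng.normal(0, 0.01, b.size)

        popt, _ = curve_fit(biexp, b, signal, p0=[0.3, 2e-3, 0.5e-3],
                            bounds=([0, 1e-4, 1e-5], [1, 1e-2, 2e-3]))
        print("fitted f, Df, Ds:", popt)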

  13. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    Science.gov (United States)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.

  14. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
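
    The exact construction alluded to is standard: generate two independent standard normals and form the second variate from ρ·z1 plus an independent remainder, so the pair has exactly the requested moments. A self-contained sketch (pure Python, not the report's FORTRAN routine):

        import math
        import random

        def bivariate_normal(mu1, mu2, s1, s2, rho, rng=random):
            """One (x1, x2) draw from a bivariate normal with the given
            means, standard deviations, and correlation coefficient."""
            z1 = rng.gauss(0.0, 1.0)
            z2 = rng.gauss(0.0, 1.0)
            x1 = mu1 + s1 * z1
            x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
            return x1, x2

        pairs = [bivariate_normal(0.0, 0.0, 1.0, 2.0, 0.8) for _ in range(20_000)]
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
        sx = (sum((a - mx) ** 2 for a in xs) / (len(xs) - 1)) ** 0.5
        sy = (sum((b - my) ** 2 for b in ys) / (len(ys) - 1)) ** 0.5
        print("sample correlation ~", round(cov / (sx * sy), 3))   # ~0.8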

  15. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived by fitting an exponential curve to simulation results, and a scaling function is added to adjust for the conditions of the experimental system. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions. (paper)

  16. A general approach to double-moment normalization of drop size distributions

    NARCIS (Netherlands)

    Lee, G.W.; Zawadzki, I.; Szyrmer, W.; Sempere Torres, D.; Uijlenhoet, R.

    2004-01-01

    Normalization of drop size distributions (DSDs) is reexamined here. First, an extension of the scaling normalization that uses one moment of the DSD as a scaling parameter to a more general scaling normalization that uses two moments as scaling parameters of the normalization is presented. In

  17. Modeling pore corrosion in normally open gold- plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

    The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H₂S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product, which blooms through defects in the gold layer, and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  18. Comparing of Normal Stress Distribution in Static and Dynamic Soil-Structure Interaction Analyses

    International Nuclear Information System (INIS)

    Kholdebarin, Alireza; Massumi, Ali; Davoodi, Mohammad; Tabatabaiefar, Hamid Reza

    2008-01-01

    It is important to consider the vertical component of earthquake loading and inertia force in soil-structure interaction analyses. In most circumstances, design engineers are primarily concerned with the analysis of the behavior of foundations subjected to earthquake-induced forces transmitted from the bedrock. In this research, a single rigid foundation with designated geometrical parameters located on sandy-clay soil has been modeled in FLAC software with the Finite Difference Method and subjected to three different vertical components of earthquake records. In these cases, it is important to evaluate the effect of the footing on the underlying soil and to consider the normal stress in the soil with and without the footing. The distribution of normal stress under the footing in static and dynamic states has been studied and compared. This comparison indicated that the increase in normal stress under the footing caused by the vertical component of ground excitation decreases the dynamic vertical settlement in comparison with the static state

  19. Non-linear learning in online tutorial to enhance students’ knowledge on normal distribution application topic

    Science.gov (United States)

    Kartono; Suryadi, D.; Herman, T.

    2018-01-01

    This study aimed to analyze the enhancement of non-linear learning (NLL) in online tutorial (OT) content on students' knowledge of normal distribution application (KONDA). KONDA is a competence expected to be achieved after students have studied the topic of normal distribution application in the course named Education Statistics. The analysis was performed with a quasi-experimental study design. The subjects of the study were divided into an experimental class, which was given OT content in the NLL model, and a control class, which was given OT content in the conventional learning (CL) model. The data used in this study were the results of online objective tests measuring students' statistical prior knowledge (SPK) and students' pre- and post-test KONDA. Statistical analysis of the KONDA gain scores showed that, for students with low and moderate SPK scores, the KONDA of students who learned OT content with the NLL model was better than that of students who learned OT content with the CL model. Meanwhile, for students with high SPK scores, the gain scores of the two groups were relatively similar. Based on these findings, it can be concluded that the NLL model applied to OT content can enhance the KONDA of students at low and moderate SPK levels. An extra, more challenging didactical situation is needed for students at the high SPK level to achieve a significant gain score.

  20. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming; Jin, Ick-Hoon

    2013-01-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.
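
    The key move, replacing the intractable normalizing constant ratio in the Metropolis-Hastings acceptance probability with an importance-sampling estimate from auxiliary draws, can be sketched on a toy exponential-family model. The discrete target below is hypothetical and chosen so that exact sampling at the current parameter is easy; the paper's real applications are autologistic and random-graph models.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy exponential family on a small discrete support:
    #   f(x | theta) = exp(theta * x) / Z(theta),  x in {0, ..., 9},
    # where Z(theta) is treated as if it were intractable.
    support = np.arange(10)

    def sample_model(theta, m):
        w = np.exp(theta * support)
        return rng.choice(support, size=m, p=w / w.sum())

    data = sample_model(0.4, 200)   # synthetic observations
    s_obs = data.sum()              # sufficient statistic
    n = len(data)

    theta, chain, m_aux = 0.0, [], 500
    for _ in range(5000):
        theta_new = theta + 0.1 * rng.standard_normal()
        # Monte Carlo estimate of Z(theta_new) / Z(theta) via importance
        # sampling with auxiliary draws from the *current* model.
        aux = sample_model(theta, m_aux)
        z_ratio_est = np.mean(np.exp((theta_new - theta) * aux))
        # MH log-ratio with a flat prior on theta.
        log_alpha = (theta_new - theta) * s_obs - n * np.log(z_ratio_est)
        if np.log(rng.uniform()) < log_alpha:
            theta = theta_new
        chain.append(theta)

    print("posterior mean of theta:", np.mean(chain[1000:]).round(3))
    ```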

  2. From Logical to Distributional Models

    Directory of Open Access Journals (Sweden)

    Anne Preller

    2014-12-01

    The paper relates two variants of semantic models for natural language, logical functional models and compositional distributional vector space models, by transferring the logic and reasoning from the logical to the distributional models. The geometrical operations of quantum logic are reformulated as algebraic operations on vectors. A map from functional models to vector space models makes it possible to compare the meaning of sentences word by word.

  3. Exact, time-independent estimation of clone size distributions in normal and mutated cells.

    Science.gov (United States)

    Roshan, A; Jones, P H; Greenman, C D

    2014-10-06

    Biological tools such as genetic lineage tracing, three-dimensional confocal microscopy and next-generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Understanding population-wide clone size distributions in vivo is complicated by multiple cell types within observed tissues, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parametrize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors, this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, alternative, exact, formulations of classic Luria-Delbrück-type problems emerge. This approach can be extended beyond neutral models of mutant clonal evolution. Applications of these approaches are twofold. First, we resolve the probability of progenitor cells generating proliferating or differentiating progeny in clonal lineage tracing experiments in vivo or cell culture assays where clone age is not known. Second, we model mutation frequency distributions that deep sequencing of subclonal samples produce.

  4. Stellar Distributions and NIR Colours of Normal Galaxies

    NARCIS (Netherlands)

    Peletier, R. F.; Grijs, R. de

    1997-01-01

    Abstract: We discuss some results of a morphological study of edge-on galaxies, based on optical and especially near-infrared surface photometry. We find that the vertical surface brightness distributions of galaxies are fitted very well by exponential profiles, much better than by isothermal

  5. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
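
    The phenomenon is easy to reproduce numerically: sum a modest number of highly skewed positive random variables and compare how well Gaussian and log-normal distributions fit the sums. The summand distribution below is an arbitrary illustrative choice.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Sums of n positive, highly skewed summands (log-normal, large sigma).
    n, n_sums = 30, 20_000
    sums = rng.lognormal(mean=0.0, sigma=1.5, size=(n_sums, n)).sum(axis=1)

    # Compare goodness of fit for the *sums*: Gaussian vs log-normal.
    ks_norm = stats.kstest(sums, "norm", args=(sums.mean(), sums.std()))
    mu, sd = np.log(sums).mean(), np.log(sums).std()
    ks_lognorm = stats.kstest(np.log(sums), "norm", args=(mu, sd))

    print("KS distance, normal fit:    ", round(ks_norm.statistic, 4))
    print("KS distance, log-normal fit:", round(ks_lognorm.statistic, 4))
    ```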

  6. Advection-diffusion model for normal grain growth and the stagnation of normal grain growth in thin films

    International Nuclear Information System (INIS)

    Lou, C.

    2002-01-01

    An advection-diffusion model has been set up to describe normal grain growth. In this model grains are divided into different groups according to their topological classes (number of sides of a grain). Topological transformations are modelled by advective and diffusive flows governed by advective and diffusive coefficients respectively, which are assumed to be proportional to topological classes. The ordinary differential equations governing self-similar time-independent grain size distribution can be derived analytically from continuity equations. It is proved that the time-independent distributions obtained by solving the ordinary differential equations have the same form as the time-dependent distributions obtained by solving the continuity equations. The advection-diffusion model is extended to describe the stagnation of normal grain growth in thin films. Grain boundary grooving prevents grain boundaries from moving, and the correlation between neighbouring grains accelerates the stagnation of normal grain growth. After introducing grain boundary grooving and the correlation between neighbouring grains into the model, the grain size distribution is close to a lognormal distribution, which is usually found in experiments. A vertex computer simulation of normal grain growth has also been carried out to make a cross comparison with the advection-diffusion model. The result from the simulation did not verify the assumption that the advective and diffusive coefficients are proportional to topological classes. Instead, we have observed that topological transformations usually occur on certain topological classes. This suggests that the advection-diffusion model can be improved by making a more realistic assumption on topological transformations. (author)

  7. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...... the steady state, distributed behaviour of a short-path evaporator....

  8. Elk Distributions Relative to Spring Normalized Difference Vegetation Index Values

    International Nuclear Information System (INIS)

    Smallidge, S.T.; Baker, T.T.; VanLeeuwen, D.; Gould, W.R.; Thompson, B.C.

    2010-01-01

    Rocky Mountain elk (Cervus elaphus) that winter near San Antonio Mountain in northern New Mexico provide important recreational and economic benefits while creating management challenges related to temporospatial variation in their spring movements. Our objective was to examine spring distributions of elk in relation to vegetative emergence as it progresses across the landscape as measured by remote sensing. Spring distributions of elk were closely associated with greater photosynthetic activity of spring vegetation in 2 of 3 years as determined using NDVI values derived from AVHRR datasets. Observed elk locations were up to 271% greater than expected in the category representing the most photosynthetic activity. This association was not observed when analyses at a finer geographic scale were conducted. Managers facing challenges involving human-wildlife interactions and land-use issues should consider environmental conditions that may influence variation in elk association with greener portions of the landscape.

  9. Modeling a Distribution of Mortgage Credit Losses

    Czech Academy of Sciences Publication Activity Database

    Gapko, Petr; Šmíd, Martin

    2012-01-01

    Roč. 60, č. 10 (2012), s. 1005-1023 ISSN 0013-3035 R&D Projects: GA ČR GD402/09/H045; GA ČR(CZ) GBP402/12/G097 Grant - others:Univerzita Karlova(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : credit risk * mortgage * delinquency rate * generalized hyperbolic distribution * normal distribution Subject RIV: AH - Economics Impact factor: 0.194, year: 2012 http://library.utia.cas.cz/separaty/2013/E/smid-modeling a distribution of mortgage credit losses.pdf

  10. Modeling a Distribution of Mortgage Credit Losses

    Czech Academy of Sciences Publication Activity Database

    Gapko, Petr; Šmíd, Martin

    2010-01-01

    Roč. 23, č. 23 (2010), s. 1-23 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045 Grant - others:Univerzita Karlova - GAUK(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Keywords : Credit Risk * Mortgage * Delinquency Rate * Generalized Hyperbolic Distribution * Normal Distribution Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/gapko-modeling a distribution of mortgage credit losses-ies wp.pdf

  11. Individual loss reserving with the Multivariate Skew Normal distribution

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2012-01-01

    The evaluation of future cash flows and solvency capital recently gained importance in general insurance. To assist in this process, our paper proposes a novel loss reserving model, designed for individual claims in discrete time. We model the occurrence of claims, as well as their reporting delay,

  12. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    Science.gov (United States)

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the
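
    The test-then-transform workflow of the study maps onto standard SciPy calls, as in the sketch below; the MUAC-like measurements are simulated (hypothetical values in mm), not the survey datasets themselves.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Simulated MUAC-like data (mm): positive and mildly right-skewed.
    muac = 140 + rng.gamma(shape=6.0, scale=4.0, size=900)

    # Step 1: test for departure from normality (Shapiro-Wilk).
    stat, p = stats.shapiro(muac)
    print(f"Shapiro-Wilk p = {p:.4f}")

    # Step 2: if the distribution departs from normality, try a Box-Cox
    # power transformation and re-test.
    if p < 0.05:
        transformed, lam = stats.boxcox(muac)
        stat2, p2 = stats.shapiro(transformed)
        print(f"after Box-Cox (lambda = {lam:.2f}): p = {p2:.4f}")
    ```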

  13. Very short-term probabilistic forecasting of wind power with generalized logit-Normal distributions

    DEFF Research Database (Denmark)

    Pinson, Pierre

    2012-01-01

    Very-short-term probabilistic forecasts, which are essential for an optimal management of wind generation, ought to account for the non-linear and double-bounded nature of that stochastic process. They take here the form of discrete–continuous mixtures of generalized logit–normal distributions and probability masses at the bounds. Both auto-regressive and conditional parametric auto-regressive models are considered for the dynamics of their location and scale parameters. Estimation is performed in a recursive least squares framework with exponential forgetting. The superiority of this proposal over

  14. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems; often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and many num

  15. Annual rainfall statistics for stations in the Top End of Australia: normal and log-normal distribution analysis

    International Nuclear Information System (INIS)

    Vardavas, I.M.

    1992-01-01

    A simple procedure is presented for the statistical analysis of measurement data where the primary concern is the determination of the value corresponding to a specified average exceedance probability. The analysis employs the normal and log-normal frequency distributions together with a χ²-test and an error analysis. The error analysis introduces the concept of a counting error criterion, or ζ-test, to test whether the data are sufficient to make the χ²-test reliable. The procedure is applied to the analysis of annual rainfall data recorded at stations in the tropical Top End of Australia where the Ranger uranium deposit is situated. 9 refs., 12 tabs., 9 figs
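
    The central computation, fitting normal and log-normal distributions to annual totals and reading off the value at a specified average exceedance probability, can be sketched as follows; the rainfall figures are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual rainfall totals (mm) for one station.
    rain = np.array([1450, 1320, 1680, 1510, 990, 1780, 1600, 1230,
                     1490, 1850, 1400, 1570, 1100, 1660, 1380])

    p_exceed = 0.01  # specified average exceedance probability

    # Normal fit: value exceeded with probability p_exceed.
    x_norm = stats.norm.ppf(1 - p_exceed, loc=rain.mean(),
                            scale=rain.std(ddof=1))

    # Log-normal fit: fit on the log scale, transform back.
    mu, sd = np.log(rain).mean(), np.log(rain).std(ddof=1)
    x_lognorm = np.exp(stats.norm.ppf(1 - p_exceed, loc=mu, scale=sd))

    print(f"1%-exceedance rainfall: normal {x_norm:.0f} mm, "
          f"log-normal {x_lognorm:.0f} mm")
    ```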

  16. Normal and Student's t distributions and their applications

    CERN Document Server

    Ahsanullah, Mohammad; Shakil, Mohammad

    2014-01-01

    The most important properties of the normal and Student t-distributions are presented, and a number of applications of these properties are demonstrated. New related results on the distributions of the sum, product and ratio of independent normal and Student t variables are presented. The material will be useful to advanced undergraduate and graduate students and to practitioners in the various fields of science and engineering.

  17. Kullback–Leibler Divergence of the γ–ordered Normal over t–distribution

    OpenAIRE

    Toulias, T-L.; Kitsos, C-P.

    2012-01-01

    The aim of this paper is to evaluate and study the Kullback–Leibler divergence of the γ-ordered normal distribution, a generalization of the normal distribution that emerges from the generalized Fisher information measure, over the scaled t-distribution. We investigate this evaluation through a series of bounds and approximations, and the asymptotic behavior of the divergence is also studied. Moreover, we obtain a generalization of the known Kullback–Leibler information measure betwe

  18. The law of distribution of light beam direction fluctuations in telescopes. [normal density functions

    Science.gov (United States)

    Divinskiy, M. L.; Kolchinskiy, I. G.

    1974-01-01

    The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.

  19. Normal compliance contact models with finite interpenetration

    Czech Academy of Sciences Publication Activity Database

    Eck, Ch.; Jarušek, Jiří; Stará, J.

    2013-01-01

    Roč. 208, č. 1 (2013), s. 25-57 ISSN 0003-9527 R&D Projects: GA AV ČR IAA100750802; GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985840 Keywords : compliance models * approximation Subject RIV: BA - General Mathematics Impact factor: 2.022, year: 2013 http://link.springer.com/article/10.1007%2Fs00205-012-0602-8#

  20. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function; for a certain set of parameters, the density of the derived recurrence gives a good approximation of the bell curve

  1. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.

  2. Options and pitfalls of normal tissues complication probability models

    International Nuclear Information System (INIS)

    Dorr, Wolfgang

    2011-01-01

    Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improvement of the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have significantly decreased, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid-to-late 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram (DVH) parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on assumptions that are not met in routine clinical treatment planning. First, and most important, DVH parameters are usually derived from a single 'snapshot' CT scan, without considering physiological (urinary bladder, intestine) or radiation-induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies for delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution to a 2-dimensional DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be

  3. Modeled ground water age distributions

    Science.gov (United States)

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.

  4. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
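
    A minimal power experiment along these lines simulates a regression with heavy-tailed errors, applies a normality test to the OLS residuals, and records the rejection rate; the specific test and alternative below are illustrative choices, not the RT class itself.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    n, reps, rejections = 50, 2000, 0

    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        eps = rng.standard_t(df=4, size=n)        # non-normal error terms
        y = 1.0 + 2.0 * x + eps
        # OLS fit and residuals.
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        # Normality test applied to the residuals, not the raw response.
        if stats.shapiro(resid).pvalue < 0.05:
            rejections += 1

    print(f"empirical power of Shapiro-Wilk vs t(4) errors: {rejections/reps:.2f}")
    ```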

  6. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    Science.gov (United States)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5(sup 0)x 5(sup 0) x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been
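
    The linearity property is concrete: for a log-normal DSD with total concentration N_T and log-scale parameters μ and σ, the n-th moment is M_n = N_T exp(nμ + n²σ²/2), so ln M_n is linear in (ln N_T, μ, σ²). A short numerical check, with arbitrary illustrative parameter values:

    ```python
    import numpy as np
    from scipy.integrate import quad

    # Log-normal drop size distribution, D in mm, N_T in m^-3.
    N_T, mu, sigma = 8000.0, np.log(1.2), 0.45

    def dsd(D):
        return (N_T / (np.sqrt(2 * np.pi) * sigma * D)
                * np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2)))

    for n in (0, 3, 6):  # total number, ~water content, ~reflectivity
        numeric, _ = quad(lambda D: D**n * dsd(D), 0, np.inf)
        analytic = N_T * np.exp(n * mu + 0.5 * n**2 * sigma**2)
        print(f"M_{n}: numeric {numeric:.4g}, analytic {analytic:.4g}")
    ```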

  7. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  8. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detecting and correcting for publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
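
    A minimal version of situation (1), selection on effect size, fits a normal distribution truncated from below to the "published" effects by maximum likelihood. The cutoff and data are simulated, and a fixed-effects setting with known within-study standard error is assumed for simplicity.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(5)

    # True overall effect 0.2; only studies with observed effect > c appear.
    true_mu, se, c = 0.2, 0.3, 0.25
    effects = rng.normal(true_mu, se, size=2000)
    published = effects[effects > c]            # truncated sample

    def neg_loglik(mu):
        # Normal density renormalized over the observable region (y > c).
        log_f = stats.norm.logpdf(published, mu, se)
        log_tail = stats.norm.logsf(c, mu, se)
        return -(log_f - log_tail).sum()

    fit = optimize.minimize_scalar(neg_loglik, bounds=(-1, 1), method="bounded")
    print(f"naive mean of published effects: {published.mean():.3f}")
    print(f"truncation-corrected MLE of mu:  {fit.x:.3f}   (truth {true_mu})")
    ```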

  9. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    Science.gov (United States)

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher

  10. Dynamic models for distributed generation resources

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S. [BPR Energie, Sherbrooke, PQ (Canada)

    2010-07-01

    Distributed resources can impact the performance of host power systems during both normal and abnormal system conditions. This PowerPoint presentation discussed the use of dynamic models for identifying potential interaction problems between interconnected systems. The models were designed to simulate steady state behaviour as well as transient responses to system disturbances. The distributed generators included directly coupled and electronically coupled generators. The directly coupled generator was driven by wind turbines. Simplified models of grid-side inverters, electronically coupled wind generators and doubly-fed induction generators (DFIGs) were presented. The responses of DFIGs to wind variations were evaluated. Synchronous machine and electronically coupled generator responses were compared. The system model components included load models, generators, protection systems, and system equivalents. Frequency responses to islanding events were reviewed. The study demonstrated that accurate simulations are needed to predict the impact of distributed generation resources on the performance of host systems. Advances in distributed generation technology have outpaced the development of models needed for integration studies. tabs., figs.

  11. Local stem cell depletion model for normal tissue damage

    International Nuclear Information System (INIS)

    Yaes, R.J.; Keland, A.

    1987-01-01

    The hypothesis that radiation causes normal tissue damage by completely depleting local regions of tissue of viable stem cells leads to a simple mathematical model for such damage. In organs like skin and spinal cord where destruction of a small volume of tissue leads to a clinically apparent complication, the complication probability is expressed as a function of dose, volume and stem cell number by a simple triple negative exponential function analogous to the double exponential function of Munro and Gilbert for tumor control. The steep dose response curves for radiation myelitis that are obtained with our model are compared with the experimental data for radiation myelitis in laboratory rats. The model can be generalized to include other types or organs, high LET radiation, fractionated courses of radiation, and cases where an organ with a heterogeneous stem cell population receives an inhomogeneous dose of radiation. In principle it would thus be possible to determine the probability of tumor control and of damage to any organ within the radiation field if the dose distribution in three dimensional space within a patient is known
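
    The functional form such a model implies can be sketched as follows, with all parameter values hypothetical: each of M independent subvolumes holds N stem cells, each cell survives dose D with probability exp(-D/D0), and a complication occurs if any subvolume loses all of its viable stem cells. This yields the triple-exponential shape and the characteristically steep dose response.

    ```python
    import numpy as np

    def complication_probability(D, M, N, D0):
        """Triple-exponential NTCP sketch: M subvolumes with N stem cells
        each; per-cell survival exp(-D/D0); complication if any subvolume
        is fully depleted of viable stem cells."""
        s = np.exp(-D / D0)                    # per-cell surviving fraction
        p_depleted = np.exp(-N * s)            # ~(1 - s)^N for small s
        return 1.0 - np.exp(-M * p_depleted)   # ~1 - (1 - p_depleted)^M

    # Hypothetical parameters; note how steep the dose response is.
    for D in (40, 45, 47, 50):
        print(D, round(complication_probability(D, M=1e4, N=1e3, D0=10.0), 4))
    ```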

  12. Computer program determines exact two-sided tolerance limits for normal distributions

    Science.gov (United States)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines by numerical integration the exact statistical two-sided tolerance limits, such that the proportion of the population between the limits is at least a specified value. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
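
    The exact limits require numerical integration as in the program above, but Howe's closed-form approximation is the common hand calculation and is sketched below (an assumption: this is the standard approximation, not the program's own method). The two-sided factor is k ≈ z_{(1+p)/2} √(ν(1 + 1/n)/χ²_{α,ν}) with ν = n − 1 degrees of freedom and confidence 1 − α.

    ```python
    import numpy as np
    from scipy import stats

    def tolerance_factor(n, p=0.99, conf=0.95):
        """Howe's approximation to the two-sided normal tolerance factor:
        x_bar +/- k*s covers at least a proportion p of the population
        with confidence conf."""
        nu = n - 1
        z = stats.norm.ppf((1 + p) / 2)
        chi2 = stats.chi2.ppf(1 - conf, nu)     # lower chi-square quantile
        return z * np.sqrt(nu * (1 + 1 / n) / chi2)

    rng = np.random.default_rng(2)
    x = rng.normal(100, 15, size=30)
    k = tolerance_factor(len(x))                # ~3.35 for n = 30
    print(f"k = {k:.3f}; limits: {x.mean() - k*x.std(ddof=1):.1f} "
          f"to {x.mean() + k*x.std(ddof=1):.1f}")
    ```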

  13. Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution

    NARCIS (Netherlands)

    Belitser, E.; Ghosal, S.

    2003-01-01

    We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a "smoothness condition," we first derive the convergence rate of the posterior distribution for a prior that

  14. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  15. Comparison of CSF Distribution between Idiopathic Normal Pressure Hydrocephalus and Alzheimer Disease.

    Science.gov (United States)

    Yamada, S; Ishikawa, M; Yamamoto, K

    2016-07-01

    CSF volumes in the basal cistern and Sylvian fissure are increased in both idiopathic normal pressure hydrocephalus and Alzheimer disease, though the differences in these volumes in idiopathic normal pressure hydrocephalus and Alzheimer disease have not been well-described. Using CSF segmentation and volume quantification, we compared the distribution of CSF in idiopathic normal pressure hydrocephalus and Alzheimer disease. CSF volumes were extracted from T2-weighted 3D spin-echo sequences on 3T MR imaging and quantified semi-automatically. We compared the volumes and ratios of the ventricles and subarachnoid spaces after classification in 30 patients diagnosed with idiopathic normal pressure hydrocephalus, 10 with concurrent idiopathic normal pressure hydrocephalus and Alzheimer disease, 18 with Alzheimer disease, and 26 control subjects 60 years of age or older. Brain to ventricle ratios at the anterior and posterior commissure levels and 3D volumetric convexity cistern to ventricle ratios were useful indices for the differential diagnosis of idiopathic normal pressure hydrocephalus or idiopathic normal pressure hydrocephalus with Alzheimer disease from Alzheimer disease, similar to the z-Evans index and callosal angle. The most distinctive characteristics of the CSF distribution in idiopathic normal pressure hydrocephalus were small convexity subarachnoid spaces and the large volume of the basal cistern and Sylvian fissure. The distribution of the subarachnoid spaces in the idiopathic normal pressure hydrocephalus with Alzheimer disease group was the most deformed among these 3 groups, though the mean ventricular volume of the idiopathic normal pressure hydrocephalus with Alzheimer disease group was intermediate between that of the idiopathic normal pressure hydrocephalus and Alzheimer disease groups. The z-axial expansion of the lateral ventricle and compression of the brain just above the ventricle were the common findings in the parameters for differentiating

  16. Skewed Normal Distribution Of Return Assets In Call European Option Pricing

    Directory of Open Access Journals (Sweden)

    Evy Sulistianingsih

    2011-12-01

    An option is a derivative security. In a financial market, an option is a contract that gives its owner the right (not the obligation) to buy or sell a particular asset for a certain price at a certain time. An option can thus provide a guarantee against a risk faced in the market. This paper studies the use of the skewed normal (SN) distribution in European call option pricing. The SN provides a flexible framework that captures the skewness of log returns. We obtain a closed-form solution for the European call option price when log returns follow the SN distribution. We then compare the option prices obtained from the SN and Black-Scholes models with market option prices. Keywords: skewed normal distribution, log return, options.
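
    The paper's closed-form solution is not reproduced here, but the setting can be sketched by Monte Carlo: assume skew-normal log returns, pin the location parameter so the expected gross return matches risk-free growth (using the skew-normal moment generating function), and average the discounted payoff. All parameter values are hypothetical, and the Black-Scholes price with the same scale parameter is shown only as a reference point.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    S0, K, r, T = 100.0, 100.0, 0.03, 1.0
    omega, alpha = 0.2, -3.0                 # scale and skewness of log return
    delta = alpha / np.sqrt(1 + alpha**2)

    # Location xi chosen so E[exp(X)] = exp(rT); the skew-normal MGF at t=1
    # is 2*exp(xi + omega^2/2)*Phi(omega*delta).
    xi = r * T - 0.5 * omega**2 - np.log(2 * stats.norm.cdf(omega * delta))

    # Monte Carlo price of a European call under skew-normal log returns.
    X = stats.skewnorm.rvs(alpha, loc=xi, scale=omega, size=500_000,
                           random_state=rng)
    price = np.exp(-r * T) * np.maximum(S0 * np.exp(X) - K, 0.0).mean()

    # Black-Scholes reference with sigma = omega.
    d1 = (np.log(S0 / K) + (r + omega**2 / 2) * T) / (omega * np.sqrt(T))
    bs = (S0 * stats.norm.cdf(d1)
          - K * np.exp(-r * T) * stats.norm.cdf(d1 - omega * np.sqrt(T)))
    print(f"skew-normal MC price: {price:.3f}   Black-Scholes: {bs:.3f}")
    ```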

  17. The rank of a normally distributed matrix and positive definiteness of a noncentral Wishart distributed matrix

    NARCIS (Netherlands)

    Steerneman, A. G. M.; van Perlo-ten Kleij, Frederieke

    2008-01-01

    If $X \sim N_{n \times k}(M, I_n \otimes \Sigma)$, then $S = X'X$ has the noncentral Wishart distribution $W_k(n, \Sigma; \Lambda)$, where $\Lambda = M'M$. Here $\Sigma$ is allowed to be singular. It is well known that if $\Lambda = 0$, then $S$ has a (central) Wishart distribution and $S$ is positive definite with

  18. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  19. Probabilistic analysis in normal operation of distribution system with distributed generation

    DEFF Research Database (Denmark)

    Villafafila-Robles, R.; Sumper, A.; Bak-Jensen, B.

    2011-01-01

    Nowadays, the incorporation of high levels of small-scale non-dispatchable distributed generation is leading to the transition from the traditional 'vertical' power system structure to a 'horizontally-operated' power system, where the distribution networks contain both stochastic generation...... and load. This fact increases the number of stochastic inputs and dependence structures between them need to be considered. The deterministic analysis is not enough to cope with these issues and a new approach is needed. Probabilistic analysis provides a better approach. Moreover, as distribution systems...

  20. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  1. Video distribution system cost model

    Science.gov (United States)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.

  2. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu [Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi (United Arab Emirates)

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
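
    In SciPy's parameterization, scipy.stats.lognorm already carries three parameters (shape σ, location shift, and scale e^μ), so fitting a three-parameter log-normal to a sample of void radii is direct; the "radii" below are synthetic stand-ins for catalog data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Synthetic "void radii" (Mpc/h) drawn from a shifted log-normal.
    radii = 5.0 + rng.lognormal(mean=2.0, sigma=0.4, size=3000)

    # Three-parameter log-normal fit: shape (sigma), location, scale (exp(mu)).
    shape, loc, scale = stats.lognorm.fit(radii)
    print(f"sigma = {shape:.3f}, shift = {loc:.2f}, exp(mu) = {scale:.2f}")

    # Goodness of fit of the fitted distribution.
    print(stats.kstest(radii, "lognorm", args=(shape, loc, scale)))
    ```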

  3. Generating a normalized geometric liver model with warping

    International Nuclear Information System (INIS)

    Boes, J.L.; Weymouth, T.E.; Meyer, C.R.; Quint, L.E.; Bland, P.H.; Bookstein, F.L.

    1990-01-01

    This paper reports on the automated determination of the liver surface in abdominal CT scans for radiation treatment, surgery planning, and anatomic visualization. The normalized geometric model of the liver is generated by averaging registered outlines from a set of 15 studies of normal livers. The outlines have been registered with the use of thin-plate spline warping based on a set of five homologous landmarks. Thus, the model consists of an average surface and a set of five anatomic landmarks. The accuracy of the model is measured against both the set of studies used in model generation and an alternate set of 15 normal studies, using as an error measure the ratio of non-overlapping model and study volume to total model volume

  4. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for them. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  5. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method, and using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generating log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can help improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used alternative, the Marsaglia polar method
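
    Whatever normal generator sits underneath (NumPy's current Generator documents the Ziggurat method for standard normals), the log-normal step itself is just exponentiation of a normal variate. The median/error-factor parameterization below is commonly used for PSA inputs; it is an assumption here rather than something stated in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def lognormal_from_median_ef(median, error_factor, size):
        """Log-normal variates from the median / error-factor
        parameterization common for PSA inputs: EF = exp(1.645 * sigma)."""
        sigma = np.log(error_factor) / 1.645
        # rng.standard_normal uses the Ziggurat method in NumPy's current
        # Generator; exponentiation turns it into a log-normal draw.
        return np.exp(np.log(median) + sigma * rng.standard_normal(size))

    x = lognormal_from_median_ef(median=1e-3, error_factor=3.0, size=100_000)
    print("median:", np.median(x), " 95th percentile:", np.quantile(x, 0.95))
    ```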

  6. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CLpop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CLpop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CLpop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CLpop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with median weight, a weight outside the observed range is used, the RSE of the CLpop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CLTV) at a relevant weight to evaluate the precision of CL predictions.
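
    The covariate submodel in question is typically a power function of body weight. The sketch below (hypothetical exponent and weights) shows why typical-clearance predictions are invariant to the normalization weight even though the estimated CLpop itself, and hence the context of its standard error, changes.

    ```python
    import numpy as np

    theta = 0.75                     # hypothetical allometric exponent
    wt = np.array([2.1, 2.7, 3.4])   # neonatal body weights (kg)

    def cl_pred(cl_pop, wt_norm):
        """Covariate submodel: CL_i = CLpop * (WT_i / WT_norm)**theta."""
        return cl_pop * (wt / wt_norm) ** theta

    # The same model re-parameterized with different normalization weights:
    cl_pop_median = 1.00                               # CLpop at 2.7 kg (median)
    cl_pop_70kg = cl_pop_median * (70 / 2.7) ** theta  # equivalent CLpop at 70 kg

    print(cl_pred(cl_pop_median, 2.7))   # identical predictions...
    print(cl_pred(cl_pop_70kg, 70.0))    # ...but very different CLpop values
    ```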

  7. Normal and Special Models of Neutrino Masses and Mixings

    CERN Document Server

    Altarelli, Guido

    2005-01-01

    One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.

  8. Neutron importance and the generalized Green function for the conventionally critical reactor with normalized neutron distribution

    International Nuclear Information System (INIS)

    Khromov, V.V.

    1978-01-01

    The notion of neutron importance as applied to nuclear reactor statics problems described by time-independent homogeneous equations of neutron transport, with provision for normalization of the neutron distribution, is considered. An equation has been obtained for the neutron importance function in a conditionally critical reactor with respect to an arbitrary nonlinear functional determined for the normalized neutron distribution. The relation between this function and the generalized Green function of the self-conjugate operator of the reactor equation is determined, and the small-perturbation formula for the functionals of a conditionally critical reactor is deduced

  9. Turboelectric Distributed Propulsion System Modelling

    OpenAIRE

    Liu, Chengyuan

    2013-01-01

    The Blended-Wing-Body is a conceptual aircraft design with rear-mounted, over-wing engines. A turboelectric distributed propulsion system with boundary layer ingestion has been considered for this aircraft. It uses electricity to transmit power from the core turbine to the fans, thereby dramatically increasing the bypass ratio to reduce fuel consumption and noise. This dissertation presents methods for designing the TeDP system, evaluating effects of boundary layer ingestion, modelling engine perfo...

  10. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
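
    The point is easy to demonstrate numerically. In this hedged sketch (synthetic data, not the authors' examples), a two-group "ANOVA" response is strongly bimodal and fails a normality test, while the residuals about the group means pass it:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        g1 = rng.normal(10.0, 1.0, 200)   # group 1 of a two-group "ANOVA"
        g2 = rng.normal(20.0, 1.0, 200)   # group 2, with a far-away mean
        y = np.concatenate([g1, g2])

        # The raw response is bimodal and fails a normality test...
        print("raw data  p =", stats.shapiro(y).pvalue)
        # ...but the residuals about the group means are well behaved
        resid = np.concatenate([g1 - g1.mean(), g2 - g2.mean()])
        print("residuals p =", stats.shapiro(resid).pvalue)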

  11. Austenite Grain Size Estimation from Chord Lengths of Logarithmic-Normal Distribution

    Directory of Open Access Journals (Sweden)

    Adrian H.

    2017-12-01

    Linear section of grains in a polyhedral material microstructure is a system of chords. The mean length of the chords is the linear grain size of the microstructure. For the prior austenite grains of low-alloy structural steels, the chord length is a random variable of gamma or logarithmic-normal distribution. Statistical grain size estimation belongs to the quantitative metallography problems. The so-called point estimation is a well-known procedure. The interval estimation (grain size confidence interval) for the gamma distribution was given elsewhere, but for the logarithmic-normal distribution it is the subject of the present contribution. The statistical analysis is analogous to the one for the gamma distribution.
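
    For readers wanting a concrete starting point, the following sketch computes an approximate confidence interval for the mean chord length under a log-normal model using Cox's method; this is a generic approach under stated assumptions, not necessarily the estimator derived in the paper:

        import numpy as np
        from scipy import stats

        def lognormal_mean_ci(chords, alpha=0.05):
            # Cox's method: CI for exp(mu + sigma^2/2), the log-normal mean
            y = np.log(chords)
            n, s2 = y.size, y.var(ddof=1)
            point = y.mean() + s2 / 2
            se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
            z = stats.norm.ppf(1 - alpha / 2)
            return np.exp(point), np.exp(point - z * se), np.exp(point + z * se)

        rng = np.random.default_rng(3)
        chords = rng.lognormal(mean=2.0, sigma=0.5, size=200)  # synthetic chords (um)
        m, lo, hi = lognormal_mean_ci(chords)
        print(f"mean chord ~ {m:.1f} um, 95% CI ({lo:.1f}, {hi:.1f})")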

  12. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  13. Linking the Value Assessment of Oil and Gas Firms to Ambidexterity Theory Using a Mixture of Normal Distributions

    Directory of Open Access Journals (Sweden)

    Casault Sébastien

    2016-05-01

    Oil and gas exploration and production firms have return profiles that are not easily explained by current financial theory: the variation in their market returns is non-Gaussian. In this paper, the nature and underlying reason for these significant deviations from expected behavior are considered. Understanding these differences in financial market behavior is important for a wide range of reasons, including: assessing investments, investor relations, decisions to raise capital, and assessment of firm and management performance. We show that using a "thicker-tailed" mixture of two normal distributions offers a significantly more accurate model than the traditional Gaussian approach in describing the behavior of the value of oil and gas firms. This mixture of normal distributions is also more effective in bridging the gap between management theory and practice without the need to introduce complex time-sensitive GARCH and/or jump diffusion dynamics. The mixture distribution is consistent with ambidexterity theory, which suggests firms operate in two distinct states driven by the primary focus of the firm: an exploration state with high uncertainty and an exploitation (or production) state with lower uncertainty. The findings have direct implications for improving the accuracy of real option pricing techniques and futures analysis of risk management. Traditional option pricing models assume that commercial returns from these assets are described by a normal random walk. However, a normal random walk model discounts the possibility of large changes to the marketplace from events such as the discovery of important reserves or the introduction of new technology. The mixture distribution proves to be well suited to inherently describe the unusually large risks and opportunities associated with oil and gas production and exploration. A significance testing study of 554 oil and gas exploration and production firms empirically supports using a mixture
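
    A two-component normal mixture of the kind described can be fitted with standard EM tooling. The sketch below is a generic illustration on synthetic returns (the state means, volatilities and weights are invented), not the authors' estimation procedure:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        # Invented two-state returns: calm "exploitation" and volatile "exploration"
        calm = rng.normal(0.0005, 0.01, 8000)
        volatile = rng.normal(0.0, 0.04, 2000)
        returns = np.concatenate([calm, volatile]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(returns)
        for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
            print(f"weight={w:.2f}  mean={mu:+.4f}  sd={np.sqrt(var):.3f}")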

  14. Elastin distribution in the normal uterus, uterine leiomyomas, adenomyosis and adenomyomas: a comparison.

    Science.gov (United States)

    Zheng, Wei-Qiang; Ma, Rong; Zheng, Jian-Ming; Gong, Zhi-Jing

    2006-04-01

    To describe the histologic distribution of elastin in the nonpregnant human uterus, uterine leiomyomas, adenomyosis and adenomyomas. Uteri were obtained from women undergoing hysterectomy for benign conditions, including 26 cases of uterine leiomyomas, 24 cases of adenomyosis, 18 adenomyomas and 6 cases of autopsy specimens. Specific histochemical staining techniques were employed in order to demonstrate the distribution of elastin. The distribution of elastin components in the uterus was markedly uneven and showed a decreasing gradient from outer to inner myometrium. No elastin was present within leiomyomas, adenomyomas or adenomyosis. The distribution of elastin may help explain the normal function of the myometrium in labor. It implies that the uneven distribution of elastin components and absence of elastin within leiomyomas, adenomyomas and adenomyosis could be of some clinical significance. The altered elastin distribution in disease states may help explain such symptoms as dysmenorrhea in uterine endometriosis.

  15. Energy dependence of angular distributions of sputtered particles by ion-beam bombardment at normal incidence

    International Nuclear Information System (INIS)

    Matsuda, Yoshinobu; Ueda, Yasutoshi; Uchino, Kiichiro; Muraoka, Katsunori; Maeda, Mitsuo; Akazaki, Masanori; Yamamura, Yasunori.

    1986-01-01

    The angular distributions of sputtered Fe atoms were measured using the laser fluorescence technique during Ar-ion bombardment at energies of 0.6, 1, 2 and 3 keV at normal incidence. The cosine distribution measured at 0.6 keV progressively deviated to an over-cosine distribution at higher energies, and at 3 keV the angular distribution was over-cosine by about 20%. The experimental results agree qualitatively with calculations by a recent computer simulation code, ACAT. The results are explained by the competition between surface scattering and the effects of primary knock-on atoms, which tend to make the angular distributions over-cosine and under-cosine, respectively. (author)

  16. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
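
    The contrast between the symmetric normal-theory interval and the asymmetric distribution of the product can be shown in a few lines. The path coefficients and standard errors below are hypothetical, and the Monte Carlo draw stands in for the analytic distribution of the product:

        import numpy as np

        rng = np.random.default_rng(11)
        a, se_a = 0.40, 0.15   # hypothetical X -> M path and its SE
        b, se_b = 0.30, 0.12   # hypothetical M -> Y path and its SE
        ab = a * b

        # Normal-theory (Sobel) interval: symmetric around ab by construction
        se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
        print("normal-theory 95% CI:", (ab - 1.96 * se_ab, ab + 1.96 * se_ab))

        # Distribution of the product, approximated by Monte Carlo: asymmetric
        draws = rng.normal(a, se_a, 1_000_000) * rng.normal(b, se_b, 1_000_000)
        print("product 95% CI:      ", tuple(np.percentile(draws, [2.5, 97.5])))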

  17. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our earlier work, in which we presented functional dependencies and normal forms for XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  18. Distribution of the anticancer drugs doxorubicin, mitoxantrone and topotecan in tumors and normal tissues.

    Science.gov (United States)

    Patel, Krupa J; Trédan, Olivier; Tannock, Ian F

    2013-07-01

    Pharmacokinetic analyses estimate the mean concentration of drug within a given tissue as a function of time, but do not give information about the spatial distribution of drugs within that tissue. Here, we compare the time-dependent spatial distribution of three anticancer drugs within tumors, heart, kidney, liver and brain. Mice bearing various xenografts were treated with doxorubicin, mitoxantrone or topotecan. At various times after injection, tumors and samples of heart, kidney, liver and brain were excised. Within solid tumors, the distribution of doxorubicin, mitoxantrone and topotecan was limited to perivascular regions at 10 min after administration and the distance from blood vessels at which drug intensity fell to half was ~25-75 μm. Although drug distribution improved after 3 and 24 h, there remained a significant decrease in drug fluorescence with increasing distance from tumor blood vessels. Drug distribution was relatively uniform in the heart, kidney and liver with substantially greater perivascular drug uptake than in tumors. There was significantly higher total drug fluorescence in the liver than in tumors after 10 min, 3 and 24 h. Little to no drug fluorescence was observed in the brain. There are marked differences in the spatial distributions of three anticancer drugs within tumor tissue and normal tissues over time, with greater exposure to most normal tissues and limited drug distribution to many cells in tumors. Studies of the spatial distribution of drugs are required to complement pharmacokinetic data in order to better understand and predict drug effects and toxicities.

  19. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response
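
    As background, the Box-Cox family contains the log-transform as the limiting case lambda -> 0, which is what makes it a natural broadening of the lognormal response-time model. A minimal sketch with synthetic response times (not the MCAT data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        rt = rng.lognormal(mean=0.0, sigma=0.4, size=500)  # synthetic response times (s)

        # Box-Cox: y = (x**lam - 1)/lam for lam != 0, log(x) in the limit lam -> 0;
        # scipy picks lam by maximum likelihood, so lam near 0 supports the
        # lognormal special case while other values favor the broader family.
        transformed, lam = stats.boxcox(rt)
        print("lambda:", lam)
        print("normality of transformed data, p =", stats.shapiro(transformed).pvalue)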

  20. Water Distribution and Removal Model

    International Nuclear Information System (INIS)

    Y. Deng; N. Chipman; E.L. Hardin

    2005-01-01

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD and R) Model; (2) EBS Physical and Chemical Environment (P and CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD and R model ARM is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment

  2. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.

    2006-01-01

    We calculate the pitch-angle diffusion coefficients using the typical wave normal distributions obtained from our self-consistent ring current-EMIC wave model, and try to quantify the effect of EMIC wave normal angle characteristics on relativistic electron scattering.

  3. Partial LVAD restores ventricular outputs and normalizes LV but not RV stress distributions in the acutely failing heart in silico

    OpenAIRE

    Sack, Kevin L.; Baillargeon, Brian; Acevedo-Bolton, Gabriel; Genet, Martin; Rebelo, Nuno; Kuhl, Ellen; Klein, Liviu; Weiselthaler, Georg M.; Burkhoff, Daniel; Franz, Thomas; Guccione, Julius M.

    2016-01-01

    Purpose: Heart failure is a worldwide epidemic that is unlikely to change as the population ages and life expectancy increases. We sought to detail significant recent improvements to the Dassault Systèmes Living Heart Model (LHM) and use the LHM to compute left ventricular (LV) and right ventricular (RV) myofiber stress distributions under the following 4 conditions: (1) normal cardiac function; (2) acute left heart failure (ALHF); (3) ALHF treated using an LV assist device (LVAD) flow rate o...

  4. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  5. Reply to: Are There More Gifted People than Would Be Expected on a Normal Distribution?

    Science.gov (United States)

    Gallagher, James J.

    2014-01-01

    The author responds to the article by Warne, Godwin, and Smith (2013) on the question of whether there are more gifted people than would be expected in a Gaussian normal distribution. He asserts that the answer to this question is yes, based on (a) data that he and his colleagues have collected, (b) data that are already available and quoted by…

  6. Using an APOS Framework to Understand Teachers' Responses to Questions on the Normal Distribution

    Science.gov (United States)

    Bansilal, Sarah

    2014-01-01

    This study is an exploration of teachers' engagement with concepts embedded in the normal distribution. The participants were a group of 290 in-service teachers enrolled in a teacher development program. The research instrument was an assessment task that can be described as an "unknown percentage" problem, which required the application…

  7. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    Science.gov (United States)

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…

  8. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...

  9. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  10. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
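
    Where closed-form formulas are unavailable, a simulation-based check is a common fallback. The sketch below estimates power for the Wald test of the slope in simple logistic regression by Monte Carlo; it is a generic illustration with invented effect sizes, not the sample-size method proposed in the paper:

        import numpy as np
        import statsmodels.api as sm

        def power_simple_logistic(n, beta0, beta1, n_sim=300, alpha=0.05, seed=0):
            # Monte Carlo power of the Wald test for beta1 with a N(0,1) covariate
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sim):
                x = rng.standard_normal(n)
                p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
                y = (rng.random(n) < p).astype(float)
                fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
                rejections += fit.pvalues[1] < alpha
            return rejections / n_sim

        for n in (100, 200, 400):   # grow n until power reaches the target, e.g. 0.8
            print(n, power_simple_logistic(n, beta0=-1.0, beta1=0.35))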

  11. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function, through which the effect on the tissues during gait is considered, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data of an adult person are used in the simulation; anthropometric data for children can also be used, but it is then necessary to consult existing tables of anthropometric data. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few of them have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
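
    To make the role of the dissipation term concrete, here is a deliberately simplified toy (a single swing-leg pendulum, not the authors' full gait model): a Rayleigh dissipation function R = 0.5*c*omega^2 contributes a generalized force -c*omega to the Lagrangian equation of motion, and varying the factor c crudely mimics normal versus hemiparetic damping. All parameter values are illustrative:

        import numpy as np
        from scipy.integrate import solve_ivp

        m, L, g = 8.0, 0.9, 9.81       # leg mass (kg), length (m); illustrative
        I, lc = m * L**2 / 3, L / 2    # rod inertia about the hip, mass center

        def swing(t, s, c):
            # I*theta'' = -m*g*lc*sin(theta) - c*theta', the -c*theta' term being
            # the generalized force from the Rayleigh function R = 0.5*c*theta'^2
            theta, omega = s
            return [omega, (-m * g * lc * np.sin(theta) - c * omega) / I]

        for label, c in (("normal", 0.5), ("hemiparetic", 3.0)):  # invented factors
            sol = solve_ivp(swing, (0.0, 1.0), [0.35, 0.0], args=(c,), max_step=0.01)
            print(f"{label:12s} angle after 1 s: {sol.y[0, -1]:+.3f} rad")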

  12. MR imaging of the bone marrow using short TI IR, 1. Normal and pathological intensity distribution of the bone marrow

    Energy Technology Data Exchange (ETDEWEB)

    Ishizaka, Hiroshi; Kurihara, Mikiko; Tomioka, Kuniaki; Kobayashi, Kanako; Sato, Noriko; Nagai, Teruo; Heshiki, Atsuko; Amanuma, Makoto; Mizuno, Hitomi.

    1989-02-01

    Normal vertebral bone marrow intensity distribution and its alteration in various anemias were evaluated on short TI IR sequences. The material consisted of 73 individuals: 48 normal subjects and 25 anemic patients, excluding neoplastic conditions. All normal and reactive hypercellular bone marrow revealed a characteristic intensity distribution: marginal high intensity and central low intensity, corresponding well to the normal distribution of red and yellow marrow and the physiological or reactive conversion between them. Aplastic anemia did not reveal the normal intensity distribution, presumably due to its autonomous condition.

  13. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    Directory of Open Access Journals (Sweden)

    Eckhard Limpert

    BACKGROUND: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. METHODOLOGY/PRINCIPAL FINDINGS: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-normal) approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/ ("times-divide"), and corresponding notation. Analogous to mean ± SD, this connects the multiplicative (or geometric) mean mean* and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. CONCLUSIONS/SIGNIFICANCE: The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
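
    The proposed summary is straightforward to compute. This sketch derives the geometric mean mean* and multiplicative standard deviation s* from a skewed synthetic sample and checks that the interval mean* x/ s* covers roughly 68% of the data, as the log-normal model predicts:

        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)  # skewed sample

        logs = np.log(data)
        gmean = np.exp(logs.mean())         # multiplicative (geometric) mean, mean*
        s_star = np.exp(logs.std(ddof=1))   # multiplicative standard deviation, s*

        # mean* x/ s*: about 68% of a log-normal sample falls in this interval
        lo, hi = gmean / s_star, gmean * s_star
        print(f"mean*={gmean:.2f}  s*={s_star:.2f}  "
              f"coverage={np.mean((data >= lo) & (data <= hi)):.2%}")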

  14. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    Science.gov (United States)

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  15. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
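
    As a rough illustration of the component-based idea only (a simplified fan-sum update for detector efficiencies, ignoring the geometric, block and deadtime factors, and not the authors' maximum likelihood formulation):

        import numpy as np

        def fansum_efficiencies(coinc, n_iter=50):
            # counts_ij ~ eps_i * eps_j for i != j; iterate the fan-sum update
            eps = np.ones(coinc.shape[0])
            for _ in range(n_iter):
                eps = coinc.sum(axis=1) / (eps.sum() - eps)
                eps *= eps.size / eps.sum()     # fix the arbitrary overall scale
            return eps

        rng = np.random.default_rng(4)
        true_eps = rng.uniform(0.8, 1.2, 64)          # invented detector efficiencies
        expected = 200.0 * np.outer(true_eps, true_eps)
        np.fill_diagonal(expected, 0.0)
        counts = rng.poisson(expected).astype(float)  # synthetic uniform-source scan
        est = fansum_efficiencies(counts)
        print("correlation with truth:", np.corrcoef(true_eps, est)[0, 1])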

  16. Broadband model of the distribution network

    DEFF Research Database (Denmark)

    Jensen, Martin Høgdahl

    for circular conductors involving Bessel series. The two methods show equal values of resistance, but there is considerable difference in the values of internal inductance. A method for calculation of the proximity effect is derived for a two-conductor configuration. This method is expanded to the use...... of frequency up to 200 kHz. The square wave measurements reveal the complete capacitance matrix at a frequency of approximately 12.5 MHz as well as the series inductance between the four conductors. The influence of non-ideal ground could not be measured due to the high impedance of the grounding device...... measurement and simulation, once the Phase model is used. No explanation is found as to why the new material properties cause error in the Phase model. At the Kyndby 10 kV test site a non-linear load is inserted on the secondary side of a normal distribution transformer and the phase voltage and current

  17. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering in Outer RB

    Science.gov (United States)

    Khazanov, G. V.; Gamayunov, K. V.

    2007-01-01

    We present the equatorial and bounce-averaged pitch angle diffusion coefficients for scattering of relativistic electrons by the H+ mode of EMIC waves. Both model (prescribed) and self-consistent distributions over the wave normal angle are considered. The main results of our calculation can be summarized as follows: First, in comparison with field-aligned waves, the intermediate and highly oblique waves reduce the pitch angle range subject to diffusion, and strongly suppress the scattering rate for low energy electrons (E less than 2 MeV). Second, for electron energies greater than 5 MeV, the |n| = 1 resonances operate only in a narrow region at large pitch angles, and despite their greatest contribution in the case of field-aligned waves, cannot cause electron diffusion into the loss cone. For those energies, oblique waves at |n| greater than 1 resonances are more effective, extending the range of pitch angle diffusion down to the loss cone boundary, and increasing diffusion at small pitch angles by orders of magnitude.

  18. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different-sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different-sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different-sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing significantly in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years). The distribution curves of mean AOV differed significantly between patients and controls (smaller AOV for larger vessels in patients), and diabetic patients showed a significantly different distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  19. Normal-Force and Hinge-Moment Characteristics at Transonic Speeds of Flap-Type Ailerons at Three Spanwise Locations on a 4-Percent-Thick Sweptback-Wing-Body Model and Pressure-Distribution Measurements on an Inboard Aileron

    Science.gov (United States)

    Runckel, Jack F.; Hieser, Gerald

    1961-01-01

    An investigation has been conducted at the Langley 16-foot transonic tunnel to determine the loading characteristics of flap-type ailerons located at inboard, midspan, and outboard positions on a 45 deg. sweptback-wing-body combination. Aileron normal-force and hinge-moment data have been obtained at Mach numbers from 0.80 to 1.03, at angles of attack up to about 27 deg., and at aileron deflections between approximately -15 deg. and 15 deg. Results of the investigation indicate that the loading over the ailerons was established by the wing-flow characteristics, and the loading shapes were irregular in the transonic speed range. The spanwise location of the aileron had little effect on the values of the slope of the curves of hinge-moment coefficient against aileron deflection, but the inboard aileron had the greatest value of the slope of the curves of hinge-moment coefficient against angle of attack and the outboard aileron had the least. Hinge-moment and aileron normal-force data taken with strain-gage instrumentation are compared with data obtained with pressure measurements.

  20. Modelling of tension stiffening for normal and high strength concrete

    DEFF Research Database (Denmark)

    Christiansen, Morten Bo; Nielsen, Mogens Peter

    1998-01-01

    form the model is extended to apply to biaxial stress fields as well. To determine the biaxial stress field, the theorem of minimum complementary elastic energy is used. The theory has been compared with tests on rods, disks, and beams of both normal and high strength concrete, and very good results...

  1. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and also to compare the two growth functions in describing the body weight changes across age in the Nigerian indigenous normal feather chicken. Each chick was wing-tagged at day old and body weights were ...

  2. Size distribution of interstellar particles. III. Peculiar extinctions and normal infrared extinction

    International Nuclear Information System (INIS)

    Mathis, J.S.; Wallenhorst, S.G.

    1981-01-01

    The effect of changing the upper and lower size limits of a distribution of bare graphite and silicate particles with n(a) ∝ a^(-q) is investigated. Mathis, Rumpl, and Nordsieck showed that the normal extinction is matched very well by having the small-size cutoff, a₋, approximately 0.005 or 0.01 μm, the large-size cutoff, a₊, about 0.25 μm, and q = 3.5 for both substances. We consider the progressively peculiar extinctions exhibited by the well-observed stars sigma Sco, rho Oph, and theta 1 Ori C, with values of R_V [≡ A_V/E(B-V)] of 3.4, 4.4, and 5.5 compared to the normal 3.1. Two (sigma Sco, rho Oph) are in a neutral dense cloud; theta 1 Ori C is in the Orion Nebula. We find that sigma Sco has a normal graphite distribution but has had its small silicate particles removed, so that a₋(sil) ≈ 0.04 μm if q = 3.5, or q(sil) = 2.6 if the size limits are fixed. However, the upper size limit on silicates remains normal. In rho Oph, the graphite is still normal, but both a₋(sil) and a₊(sil) are increased, to about 0.04 μm and 0.4 or 0.5 μm, respectively, if q = 3.5, or q(sil) ≈ 1.3 if the size limits are fixed. In theta 1 Ori, the small limit on graphite has increased to about 0.04 μm, or q(gra) ≈ 3, while the silicates are about like those in rho Oph. The calculated λ2175 bump is broader than the observed one, but normal foreground extinction probably contributes appreciably to the observed bump. The absolute amount of extinction per H atom for rho Oph is not explained. The column density of H is so large that systematic effects might be present. Very large graphite particles (a > 3 μm) are required to "hide" the graphite without overly affecting the visual extinction, but a normal (small) graphite size distribution is required by the λ2175 bump. We feel that it is unlikely that such a bimodal distribution exists.

  3. Differentiation in boron distribution in adult male and female rats' normal brain: A BNCT approach

    International Nuclear Information System (INIS)

    Goodarzi, Samereh; Pazirandeh, Ali; Jameie, Seyed Behnamedin; Baghban Khojasteh, Nasrin

    2012-01-01

    Boron distribution in adult male and female rats' normal brain after boron carrier injection (0.005 g boric acid + 0.005 g borax + 10 ml distilled water, pH 7.4) was studied in this research. Coronal sections of control and trial animal tissue samples were irradiated with thermal neutrons. Using alpha autoradiography, significant differences in boron concentration were seen in forebrain, midbrain and hindbrain sections of male and female animal groups, with the highest value four hours after boron compound injection. - Highlights: ► Boron distribution in male and female rats' normal brain was studied in this research. ► Coronal sections of animal tissue samples were irradiated with thermal neutrons. ► Alpha and lithium tracks were counted using alpha autoradiography. ► Different boron concentrations were seen in brain sections of male and female rats. ► The highest boron concentration was seen 4 h after boron compound injection.

  4. On the possible ''normalization'' of experimental curves of 230Th vertical distribution in abyssal oceanic sediments

    International Nuclear Information System (INIS)

    Kuznetsov, Yu.V.; Al'terman, Eh.I.; Lisitsyn, A.P.; AN SSSR, Moscow. Inst. Okeanologii)

    1981-01-01

    The possibilities of the method of normalization of experimental ionium curves with respect to dating of abyssal sediments and establishing their accumulation rates are studied. The method is based on the correlation between ionium curve extrema and variations of Fe, Mn, C org., and P contents in abyssal oceanic sediments. It has been found that the above method can be successfully applied for correction of 230Th vertical distribution data obtained by low-background γ-spectrometry. The method leads to the most reliable results in those cases when the vertical distribution curves of the elements that act as concentrators of 230Th in sediments vary in parallel with one another. The normalization of experimental ionium curves in many cases makes it possible to realize age stratification of the sediment [ru]

  5. Spatial arrangement and size distribution of normal faults, Buckskin detachment upper plate, Western Arizona

    Science.gov (United States)

    Laubach, S. E.; Hundley, T. H.; Hooker, J. N.; Marrett, R. A.

    2018-03-01

    Fault arrays typically include a wide range of fault sizes, and those faults may be randomly located, clustered together, or regularly or periodically located in a rock volume. Here, we investigate the size distribution and spatial arrangement of normal faults using rigorous size-scaling methods and normalized correlation count (NCC). Outcrop data from Miocene sedimentary rocks in the immediate upper plate of the regional Buckskin detachment (a low-angle normal fault) have differing patterns of spatial arrangement as a function of displacement (offset). Using lower size-thresholds of 1, 0.1, 0.01, and 0.001 m, displacements range over 5 orders of magnitude and have power-law frequency distributions spanning ∼4 orders of magnitude, from less than 0.001 m to more than 100 m, with exponents of -0.6 and -0.9. The largest faults, with >1 m displacement, have a shallower size-distribution slope and regular spacing of about 20 m. In contrast, smaller faults have steep size-distribution slopes and irregular spacing, with NCC plateau patterns indicating imposed clustering. Cluster widths are 15 m for the 0.1-m threshold, 14 m for the 0.01-m threshold, and 1 m for the 0.001-m displacement threshold faults. Results demonstrate that normalized correlation count effectively characterizes the spatial arrangement patterns of these faults. Our example from a high-strain fault pattern above a detachment is compatible with size and spatial organization that was influenced primarily by boundary conditions such as fault shape, mechanical unit thickness and internal stratigraphy on a range of scales, rather than purely by interaction among faults during their propagation.
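
    A standard way to estimate such power-law exponents is the continuous maximum-likelihood estimator applied above each lower size-threshold. The sketch below is generic (synthetic displacements, Clauset-style estimator), not the size-scaling workflow used in the study:

        import numpy as np

        def powerlaw_alpha(x, x_min):
            # Continuous power-law MLE above x_min (Clauset-style estimator);
            # the cumulative-distribution exponent is alpha - 1.
            x = np.asarray(x, dtype=float)
            x = x[x >= x_min]
            return 1.0 + x.size / np.log(x / x_min).sum()

        rng = np.random.default_rng(9)
        u = rng.random(5000)
        x = 0.001 * (1.0 - u) ** (-1.0 / 0.9)   # density exponent 1.9, cumulative 0.9

        for xmin in (0.001, 0.01, 0.1, 1.0):    # the paper's lower size-thresholds
            print(f"x_min={xmin:g} m  alpha~{powerlaw_alpha(x, xmin):.2f}")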

  6. Normal cranial bone marrow MR imaging pattern with age-related ADC value distribution

    International Nuclear Information System (INIS)

    Li Qi; Pan Shinong; Yin Yuming; Li Wei; Chen Zhian; Liu Yunhui; Wu Zhenhua; Guo Qiyong

    2011-01-01

    Objective: To determine the MRI appearance of normal age-related cranial bone marrow and the relationship between MRI patterns and apparent diffusion coefficient (ADC) values. Methods: Five hundred subjects were divided into seven groups based on age. Cranial bone marrow MRI patterns were defined based on the thickness of the diploe and the signal intensity distribution characteristics. ADC values of the frontal, parietal, occipital and temporal bones on DWI were measured and calculated. Correlations between age and ADC values, between patterns and ADC values, as well as the distribution of ADC values were analyzed. Results: Normal cranial bone marrow was divided into four types and six subtypes: Types I, II, III and IV, which correlated positively with increasing age (χ² = 266.36, P < 0.05). In addition, there was a significant negative correlation between the ADC values and MRI patterns in the normal parietal and occipital bones (r = -0.691 and -0.750, P < 0.01). Conclusion: The combination of MRI features and ADC value changes in different cranial bones showed significant correlation with increasing age. Familiarity with the MRI appearance of the normal bone marrow conversion pattern in each age group and the corresponding ADC values will aid the diagnosis and differentiation of cranial bone pathology.

  7. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
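
    The interpolation step described in the abstract can be sketched directly. The function below is an assumption-laden toy (static grids, purely linear interpolation in height) rather than the patented streaming system:

        import numpy as np

        def temperature_field(t_floor, t_ceiling, heights, room_height):
            # Linear interpolation in height between the floor and ceiling maps
            w = np.asarray(heights) / room_height   # 0 at floor, 1 at ceiling
            return (1 - w)[:, None, None] * t_floor + w[:, None, None] * t_ceiling

        t_floor = np.full((4, 4), 18.0)     # toy sensor-derived maps, degrees C
        t_ceiling = np.full((4, 4), 26.0)
        field = temperature_field(t_floor, t_ceiling, [0.0, 1.5, 3.0], 3.0)
        print(field[:, 0, 0])               # [18. 22. 26.]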

  8. Distribution and elimination of intravenously administered atrial natriuretic hormone(ANH) to normal and nephrectomized rats

    International Nuclear Information System (INIS)

    Devine, E.; Artman, L.; Budzik, G.; Bush, E.; Holleman, W.

    1986-01-01

    The 24 amino acid peptide, ANH(5-28), was N-terminally labeled with I-125 Bolton-Hunter reagent, iodo-N-succinimidyl 3-(4-hydroxyphenyl)propionate. The I-125 peptide plus 1μg/kg of the I-127 Bolton-Hunter peptide was injected into normal and nephrectomized anesthetized (Nembutal) rats. Blood samples were drawn into a cocktail developed to inhibit plasma induced degradation. Radiolabeled peptides were analyzed by HPLC. A biphasic curve of I-125 ANH(5-28) elimination was obtained, the first phase (t 1/2 = 15 sec) representing in vivo distribution and the second phase (t 1/2 = 7-10 min) a measurement of elimination. This biphasic elimination curve was similar in normal and nephrectomized rats. The apparent volumes of distribution were 15-20 ml for the first phase and > 300 ml for the second phase. In order to examine the tissue distribution of the peptide, animals were sacrificed at 2 minutes and the I-125 tissue contents were quantitated. The majority of the label was located in the liver (50%), kidneys (21%) and the lung (5%). The degradative peptides appearing in the plasma and urine of normal rats were identical. No intact radiolabeled ANH(5-28) was found in the urine. In conclusion, iodinated Bolton-Hunter labeled ANH(5-28) is rapidly removed from the circulation by the liver and to a lesser extent by the kidney, but the rate of elimination is not decreased by nephrectomy

  9. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  10. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  11. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  12. Simulation study of pO2 distribution in induced tumour masses and normal tissues within a microcirculation environment.

    Science.gov (United States)

    Li, Mao; Li, Yan; Wen, Peng Paul

    2014-01-01

    The biological microenvironment is disrupted when tumour masses are introduced because of the strong competition for oxygen. During the period of avascular growth of tumours, pre-existing capillaries play a crucial role in supplying oxygen to both tumourous and healthy cells. Because the oxygen supply from capillaries is limited, healthy cells have to compete for oxygen with tumourous cells. In this study, an improved Krogh's cylinder model, which is more realistic than the previously reported assumption that oxygen is homogeneously distributed in a microenvironment, is proposed to describe the oxygen diffusion from a capillary to its surrounding tissue. The capillary wall permeability is also taken into account. The simulation study shows that when tumour masses are implanted at the upstream part of a capillary, followed by normal tissues, the whole normal tissue suffers from hypoxia. In contrast, when normal tissues are ahead of tumour masses, their pO2 is sufficient. In both situations, the pO2 in the whole normal tissue drops significantly due to the axial diffusion at the interface of normal tissue and tumourous cells. As the axial oxygen diffusion cannot supply the whole tumour mass, only those tumourous cells that are near the interface can be partially supplied and have a small chance to survive.
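
    For orientation, the classical Krogh cylinder has a closed-form steady-state radial profile (the Krogh-Erlang solution) under zero-order consumption and a no-flux outer boundary; the improved model in the paper adds wall permeability and axial diffusion on top of this. The parameter values below are illustrative only:

        import numpy as np

        def krogh_po2(r, p_cap, a, R, M_over_K):
            # Krogh-Erlang profile: zero-order consumption M, Krogh diffusion
            # constant K (only the ratio M/K matters), capillary radius a,
            # tissue radius R, no-flux condition at r = R.
            r = np.asarray(r, dtype=float)
            return (p_cap + (M_over_K / 4) * (r**2 - a**2)
                    - (M_over_K * R**2 / 2) * np.log(r / a))

        r = np.linspace(3e-6, 30e-6, 10)    # 3 um capillary, 30 um tissue cylinder
        p = krogh_po2(r, p_cap=40.0, a=3e-6, R=30e-6, M_over_K=4e10)  # mmHg, m
        print(np.round(p, 1))               # pO2 falls away from the capillary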

  13. Breast cancer subtype distribution is different in normal weight, overweight, and obese women.

    Science.gov (United States)

    Gershuni, Victoria; Li, Yun R; Williams, Austin D; So, Alycia; Steel, Laura; Carrigan, Elena; Tchou, Julia

    2017-06-01

    Obesity is associated with tumor-promoting pathways related to insulin resistance and chronic low-grade inflammation, which have been linked to various disease states, including cancer. Many studies have focused on the relationship between obesity and increased estrogen production, which contributes to the pathogenesis of estrogen receptor-positive breast cancers. The link between obesity and other breast cancer subtypes, such as triple-negative breast cancer (TNBC) and Her2/neu+ (Her2+) breast cancer, is less clear. We hypothesize that obesity may be associated with the pathogenesis of specific breast cancer subtypes, resulting in a subtype distribution different from that of normal weight women. A single-institution, retrospective analysis of tumor characteristics of 848 patients diagnosed with primary operable breast cancer between 2000 and 2013 was performed to evaluate the association between BMI and clinical outcome. Patients were grouped based on their BMI at the time of diagnosis, stratified into three subgroups: normal weight (BMI = 18-24.9), overweight (BMI = 25-29.9), and obese (BMI > 30). The distribution of breast cancer subtypes across the three BMI subgroups was compared. Obese and overweight women were more likely to present with TNBC and normal weight women with Her2+ breast cancer (p = 0.008). We demonstrated, for the first time, that breast cancer subtype distribution varied significantly according to BMI status. Our results suggest that obesity might activate molecular pathways other than the well-known obesity/estrogen circuit in the pathogenesis of breast cancer. Future studies are needed to understand the molecular mechanisms that drive the variation in subtype distribution across BMI subgroups.

  14. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of the supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the log-normal law of distribution. We verify the accordance of empirical distributions of prices with the theoretically obtained log-normal distribution using numerous statistical data on real estate prices from Saint-Petersburg (Russia). For establishing this accordance we essentially apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Basing on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
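
    The final claim follows from elementary properties of the log-normal law: mode = exp(mu - sigma^2) < median = exp(mu) < mean = exp(mu + sigma^2/2). A quick numeric check with illustrative parameters:

        import numpy as np

        mu, sigma = 15.0, 0.4              # illustrative log-price parameters
        mode = np.exp(mu - sigma**2)       # most probable price
        median = np.exp(mu)
        mean = np.exp(mu + sigma**2 / 2)   # expectation, always above the mode
        print(f"mode={mode:,.0f}  median={median:,.0f}  mean={mean:,.0f}  "
              f"mean/mode={mean / mode:.2f}")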

  15. Population Synthesis Models for Normal Galaxies with Dusty Disks

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2003-09-01

    Full Text Available To investigate the SEDs of galaxies considering the dust extinction processes in the galactic disks, we present population synthesis models for normal galaxies with dusty disks. We use PEGASE (Fioc & Rocca-Volmerange 1997) to model them with standard input parameters for stars and new dust parameters. We find that the model results are strongly dependent on the dust parameters as well as other parameters (e.g. star formation history). We compare the model results with observations and discuss possible explanations. We find that the dust opacity functions derived from studies of asymptotic giant branch stars are useful for modeling a galaxy with a dusty disk.

  16. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison of the distributions used in hazard analysis. Simulation techniques are used to study the behavior of hazard distribution models. The fundamentals of hazard analysis are discussed in terms of failure criteria, and we demonstrate the flexibility of a hazard modeling distribution that can approximate several different distributions.

  17. Impact of foot progression angle on the distribution of plantar pressure in normal children.

    Science.gov (United States)

    Lai, Yu-Cheng; Lin, Huey-Shyan; Pan, Hui-Fen; Chang, Wei-Ning; Hsu, Chien-Jen; Renn, Jenn-Huei

    2014-02-01

    Plantar pressure distribution during walking is affected by several gait factors, most especially the foot progression angle, which has been studied in children with neuromuscular diseases. However, this relationship in normal children has only been reported in limited studies. The purpose of this study is to clarify the correlation between foot progression angle and plantar pressure distribution in normal children, as well as the impacts of age and sex on this correlation. This study retrospectively reviewed dynamic pedobarographic data that were included in the gait laboratory database of our institution. In total, 77 normally developed children aged 5-16 years who were treated between 2004 and 2009 were included. Each child's footprint was divided into 5 segments: lateral forefoot, medial forefoot, lateral midfoot, medial midfoot, and heel. The percentages of impulse exerted at the medial foot, forefoot, midfoot, and heel were calculated. The average foot progression angle was 5.03° toe-out. Most of the total impulse was exerted on the forefoot (52.0%). Toe-out gait was positively correlated with high medial (r = 0.274; P ... plantar pressure as part of the treatment of various foot pathologies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Distribution of Basement Membrane Molecules, Laminin and Collagen Type IV, in Normal and Degenerated Cartilage Tissues.

    Science.gov (United States)

    Foldager, Casper Bindzus; Toh, Wei Seong; Gomoll, Andreas H; Olsen, Bjørn Reino; Spector, Myron

    2014-04-01

    The objective of the present study was to investigate the presence and distribution of 2 basement membrane (BM) molecules, laminin and collagen type IV, in healthy and degenerative cartilage tissues. Normal and degenerated tissues were obtained from goats and humans, including articular knee cartilage, the intervertebral disc, and meniscus. Normal tissue was also obtained from patella-tibial enthesis in goats. Immunohistochemical analysis was performed using anti-laminin and anti-collagen type IV antibodies. Human and goat skin were used as positive controls. The percentage of cells displaying the pericellular presence of the protein was graded semiquantitatively. When present, laminin and collagen type IV were exclusively found in the pericellular matrix, and in a discrete layer on the articulating surface of normal articular cartilage. In normal articular (hyaline) cartilage in the human and goat, the proteins were found co-localized pericellularly. In contrast, in human osteoarthritic articular cartilage, collagen type IV but not laminin was found in the pericellular region. Nonpathological fibrocartilaginous tissues from the goat, including the menisci and the enthesis, were also positive for both laminin and collagen type IV pericellularly. In degenerated fibrocartilage, including intervertebral disc, as in degenerated hyaline cartilage only collagen type IV was found pericellularly around chondrocytes but with less intense staining than in non-degenerated tissue. In calcified cartilage, some cells were positive for laminin but not type IV collagen. We report differences in expression of the BM molecules, laminin and collagen type IV, in normal and degenerative cartilaginous tissues from adult humans and goats. In degenerative tissues laminin is depleted from the pericellular matrix before collagen type IV. The findings may inform future studies of the processes underlying cartilage degeneration and the functional roles of these 2 extracellular matrix proteins

  19. Distribution of Basement Membrane Molecules, Laminin and Collagen Type IV, in Normal and Degenerated Cartilage Tissues

    Science.gov (United States)

    Toh, Wei Seong; Gomoll, Andreas H.; Olsen, Bjørn Reino; Spector, Myron

    2014-01-01

    Objective: The objective of the present study was to investigate the presence and distribution of 2 basement membrane (BM) molecules, laminin and collagen type IV, in healthy and degenerative cartilage tissues. Design: Normal and degenerated tissues were obtained from goats and humans, including articular knee cartilage, the intervertebral disc, and meniscus. Normal tissue was also obtained from patella-tibial enthesis in goats. Immunohistochemical analysis was performed using anti-laminin and anti–collagen type IV antibodies. Human and goat skin were used as positive controls. The percentage of cells displaying the pericellular presence of the protein was graded semiquantitatively. Results: When present, laminin and collagen type IV were exclusively found in the pericellular matrix, and in a discrete layer on the articulating surface of normal articular cartilage. In normal articular (hyaline) cartilage in the human and goat, the proteins were found co-localized pericellularly. In contrast, in human osteoarthritic articular cartilage, collagen type IV but not laminin was found in the pericellular region. Nonpathological fibrocartilaginous tissues from the goat, including the menisci and the enthesis, were also positive for both laminin and collagen type IV pericellularly. In degenerated fibrocartilage, including intervertebral disc, as in degenerated hyaline cartilage only collagen type IV was found pericellularly around chondrocytes but with less intense staining than in non-degenerated tissue. In calcified cartilage, some cells were positive for laminin but not type IV collagen. Conclusions: We report differences in expression of the BM molecules, laminin and collagen type IV, in normal and degenerative cartilaginous tissues from adult humans and goats. In degenerative tissues laminin is depleted from the pericellular matrix before collagen type IV. The findings may inform future studies of the processes underlying cartilage degeneration and the functional

  20. Mathematical models of tumour and normal tissue response

    International Nuclear Information System (INIS)

    Jones, B.; Dale, R.G.; Charing Cross Group of Hospitals, London

    1999-01-01

    The historical application of mathematics in the natural sciences and in radiotherapy is compared. The various forms of mathematical models and their limitations are discussed. The Linear Quadratic (LQ) model can be modified to include (i) radiobiological parameter changes that occur during fractionated radiotherapy, (ii) situations such as focal forms of radiotherapy, (iii) normal tissue responses, and (iv) the process of optimization. The inclusion of a variable cell loss factor in the LQ model repopulation term produces a more flexible clonogenic doubling time, which can simulate the phenomenon of 'accelerated repopulation'. Differential calculus can be applied to the LQ model after elimination of the fraction number integers. The optimum dose per fraction (maximum cell kill relative to a given normal tissue fractionation sensitivity) is then estimated from the clonogen doubling times and the radiosensitivity parameters (or α/β ratios). Economic treatment optimization is described. Tumour volume studies during or following teletherapy are used to optimize brachytherapy. The radiation responses of both individual tumours and tumour populations (by random 'Monte-Carlo' sampling from statistical ranges of radiobiological and physical parameters) can be estimated. Computerized preclinical trials can be used to guide the choice of dose fractionation scheduling in clinical trials. The potential impact of gene and other biological therapies on the results of radical radiotherapy is testable. New and experimentally testable hypotheses are generated from limited clinical data by exploratory modelling exercises. (orig.)
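
    A minimal sketch of the kind of optimization described: maximize tumour log cell kill under the LQ model with repopulation while holding the normal-tissue biologically effective dose (BED) fixed. All parameter values are illustrative, not taken from the paper.

      import numpy as np

      # Tumour log kill E = n*alpha*d*(1 + d/(a/b)_tumour) - ln(2)*T/Td, with the
      # normal-tissue constraint n*d*(1 + d/(a/b)_normal) = BED_max fixing n for
      # each trial dose per fraction d (one fraction per day, so T ~ n days).
      alpha, ab_tumour = 0.3, 10.0   # tumour radiosensitivity [Gy^-1], alpha/beta [Gy]
      ab_normal = 3.0                # normal-tissue alpha/beta [Gy]
      Td = 5.0                       # clonogen doubling time [days]
      BED_max = 100.0                # tolerated normal-tissue BED [Gy]

      d = np.linspace(1.0, 6.0, 501)
      n = BED_max / (d * (1 + d / ab_normal))
      E = n * alpha * d * (1 + d / ab_tumour) - np.log(2) * n / Td

      best = np.argmax(E)
      print(f"optimum dose/fraction = {d[best]:.2f} Gy with n = {n[best]:.1f} fractions")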

  1. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, Gašper, E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, Andrej, E-mail: andrej.trkov@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Smith, Donald L., E-mail: donald.l.smith@anl.gov [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States); Capote, Roberto, E-mail: roberto.capotenoy@iaea.org [NAPC–Nuclear Data Section, International Atomic Energy Agency, PO Box 100, Vienna-A-1400 (Austria)

    2013-11-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue.
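
    The exact transformation for a bivariate normal pair (X, Y) with standard deviations sigma_x, sigma_y and correlation rho is corr(e^X, e^Y) = (exp(rho*sigma_x*sigma_y) - 1) / sqrt((exp(sigma_x^2) - 1)(exp(sigma_y^2) - 1)). A short numerical check, including the bounded negative limit at rho = -1 that illustrates why strong anti-correlations are mathematically forbidden:

      import numpy as np

      def lognormal_corr(rho, sx, sy):
          """Correlation of (exp(X), exp(Y)) for bivariate normal (X, Y)."""
          return np.expm1(rho * sx * sy) / np.sqrt(np.expm1(sx**2) * np.expm1(sy**2))

      sx = sy = 1.0   # large relative uncertainty on the log scale
      print(lognormal_corr(-1.0, sx, sy))  # ~ -0.37, not -1
      print(lognormal_corr(+1.0, sx, sy))  # +1 is still attainable

      # Monte Carlo check for rho = -1 (Y = -X)
      rng = np.random.default_rng(0)
      x = rng.standard_normal(200_000)
      print(np.corrcoef(np.exp(x), np.exp(-x))[0, 1])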

  2. Transformation of correlation coefficients between normal and lognormal distribution and implications for nuclear applications

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Smith, Donald L.; Capote, Roberto

    2013-01-01

    Inherently positive parameters with large relative uncertainties (typically ≳30%) are often considered to be governed by the lognormal distribution. This assumption has the practical benefit of avoiding the possibility of sampling negative values in stochastic applications. Furthermore, it is typically assumed that the correlation coefficients for comparable multivariate normal and lognormal distributions are equivalent. However, this ideal situation is approached only in the linear approximation which happens to be applicable just for small uncertainties. This paper derives and discusses the proper transformation of correlation coefficients between both distributions for the most general case which is applicable for arbitrary uncertainties. It is seen that for lognormal distributions with large relative uncertainties strong anti-correlations (negative correlations) are mathematically forbidden. This is due to the asymmetry that is an inherent feature of these distributions. Some implications of these results for practical nuclear applications are discussed and they are illustrated with examples in this paper. Finally, modifications to the ENDF-6 format used for representing uncertainties in evaluated nuclear data libraries are suggested, as needed to deal with this issue

  3. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    International Nuclear Information System (INIS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-01-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume that the spectrum of PMFs is described by a log-normal distribution with a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanism, and our analysis is complementary to previous studies with power-law spectra. We calculate the power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate the CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated by such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at k ≈ 10^(-2.5) Mpc^(-1), with the upper limit B ≲ 3 nG.

  4. The distribution of YKL-40 in osteoarthritic and normal human articular cartilage

    DEFF Research Database (Denmark)

    Volck, B; Ostergaard, K; Johansen, J S

    1999-01-01

    YKL-40, also called human cartilage glycoprotein-39, is a major secretory protein of human chondrocytes in cell culture. YKL-40 mRNA is expressed by cartilage from patients with rheumatoid arthritis, but is not detectable in normal human cartilage. The aim was to investigate the distribution of YKL-40 in osteoarthritic (n=9) and macroscopically normal (n=5) human articular cartilage, collected from 12 pre-selected areas of the femoral head, to discover a potential role for YKL-40 in cartilage remodelling in osteoarthritis. Immunohistochemical analysis showed that YKL-40 staining was found in chondrocytes of osteoarthritic cartilage mainly in the superficial and middle zone of the cartilage rather than the deep zone. There was a tendency for a high number of YKL-40 positive chondrocytes in areas of the femoral head with a considerable biomechanical load. The number of chondrocytes with a positive...

  5. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic circuit focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system; in the current study we extended the model to 63 compartments. Each artery is modeled using electrical elements, including a resistor, capacitor, and inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. First, the normal operation of the system is shown; then stenosis of the cerebral arteries is introduced in the circuit and its effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of its pathologies. PMID:25505747
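
    A toy illustration of the electrical-analogy idea behind such compartment models (not the authors' 63-compartment MATLAB Simulink model): an arterial segment becomes a resistor-inductor pair feeding a compliant node, and a stenosis is simply an increased segment resistance.

      import numpy as np
      from scipy.integrate import solve_ivp

      # One R-L arterial segment feeding a compliant node (C) that drains
      # through a peripheral resistance; all values are illustrative.
      L_seg, C_node, R_out = 0.01, 0.5, 2.0

      def rhs(t, y, r_seg):
          p, q = y                                 # node pressure, segment flow
          pa = 100 + 20 * np.sin(2 * np.pi * t)    # toy pulsatile driving pressure
          dq = (pa - p - r_seg * q) / L_seg        # L dq/dt = (Pa - P) - R*q
          dp = (q - p / R_out) / C_node            # C dp/dt = q_in - q_out
          return [dp, dq]

      for label, r_seg in [("normal", 1.0), ("stenosed", 4.0)]:
          sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], args=(r_seg,), max_step=0.01)
          mean_p = sol.y[0][sol.t > 15].mean()     # distal pressure after transients
          print(f"{label}: mean distal pressure = {mean_p:.1f} (arbitrary units)")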

  6. Simulation of reactive nanolaminates using reduced models: II. Normal propagation

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher; Knio, Omar M. [Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States)

    2010-03-15

    Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially tailored integration scheme that combines extended-stability Runge-Kutta-Chebychev (RKC) integration of the diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter. (author)

  7. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
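
    A minimal sketch of the quantity under scrutiny: channel outputs v = T g for noisy images g, whose normality can then be tested directly. The Gaussian channel templates below are an illustrative stand-in for the anthropomorphic channels used in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      N = 32
      yy, xx = np.mgrid[:N, :N] - N // 2

      # Three radially symmetric Gaussian channel templates (illustrative widths)
      T = np.stack([np.exp(-(xx**2 + yy**2) / (2 * w**2)).ravel()
                    for w in (2.0, 4.0, 8.0)])
      T /= np.linalg.norm(T, axis=1, keepdims=True)

      signal = 0.5 * np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)).ravel()

      # Channel outputs for 1000 signal-present images with Gaussian noise; signal
      # variability (the paper's focus) would replace the fixed signal profile.
      v = np.array([T @ (signal + rng.standard_normal(N * N)) for _ in range(1000)])

      for k in range(v.shape[1]):
          W, p = stats.shapiro(v[:, k])            # per-channel normality test
          print(f"channel {k}: Shapiro-Wilk p = {p:.3f}")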

  8. Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles

    Science.gov (United States)

    Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya

    2018-03-01

    A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.

  9. The distribution of YKL-40 in osteoarthritic and normal human articular cartilage

    DEFF Research Database (Denmark)

    Volck, B; Ostergaard, K; Johansen, J S

    1999-01-01

    YKL-40, also called human cartilage glycoprotein-39, is a major secretory protein of human chondrocytes in cell culture. YKL-40 mRNA is expressed by cartilage from patients with rheumatoid arthritis, but is not detectable in normal human cartilage. The aim was to investigate the distribution of YKL...... in chondrocytes of osteoarthritic cartilage mainly in the superficial and middle zone of the cartilage rather than the deep zone. There was a tendency for high number of YKL-40 positive chondrocytes in areas of the femoral head with a considerable biomechanical load. The number of chondrocytes with a positive...

  10. Comparative pharmacokinetic and tissue distribution profiles of four major bioactive components in normal and hepatic fibrosis rats after oral administration of Fuzheng Huayu recipe.

    Science.gov (United States)

    Yang, Tao; Liu, Shan; Wang, Chang-Hong; Tao, Yan-Yan; Zhou, Hua; Liu, Cheng-Hai

    2015-10-10

    Fuzheng Huayu recipe (FZHY) is a herbal product for the treatment of liver fibrosis approved by the Chinese State Food and Drug Administration (SFDA), but its pharmacokinetics and tissue distribution had not been investigated. In this study, a liver fibrosis model was induced with intraperitoneal injection of dimethylnitrosamine (DMN), and FZHY was given orally to model and normal rats. The plasma pharmacokinetics and tissue distribution profiles of four major bioactive components from FZHY were analyzed in the normal and fibrotic rat groups using an ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method. Results revealed that the bioavailabilities of danshensu (DSS), salvianolic acid B (SAB) and rosmarinic acid (ROS) in liver fibrotic rats increased 1.49-, 3.31- and 2.37-fold, respectively, compared to normal rats. There was no obvious difference in the pharmacokinetics of amygdalin (AMY) between the normal and fibrotic rats. DSS, SAB, and AMY were distributed mostly to the kidney and lung. The distribution of DSS, SAB, and AMY in the liver tissue of the model rats was significantly decreased compared to the normal rats. Significant differences in the pharmacokinetics and tissue distribution profiles of DSS, ROS, SAB and AMY were observed in rats with hepatic fibrosis after oral administration of FZHY. These results provide a meaningful basis for developing a clinical dosage regimen in the treatment of hepatic fibrosis by FZHY. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Model for the angular distribution of sky radiance

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, F C; Brunger, A P

    1979-08-01

    A flexible mathematical model is introduced which describes the radiance of the dome of the sky under various conditions. This three-component continuous distribution (TCCD) model is composed of three superposed terms (isotropic, circumsolar, and horizon brightening), each representing the contribution of a particular sky characteristic. In use, a particular sky condition is characterized by the values of the coefficients of each of these three terms, defining the distribution of the total diffuse component. The TCCD model has been demonstrated to fit both normalized clear-sky data and normalized overcast-sky data with an RMS error of about ten percent of the mean overall sky radiance. By extension the model could describe variable or partly clouded sky conditions. The model can aid in improving the prediction of solar collector performance.
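
    A toy sketch of the three-component structure (the exponential forms and coefficients below are assumptions for illustration, not the fitted TCCD functions from the paper):

      import numpy as np

      def sky_radiance(zenith, sun_angle, a_iso, a_circ, k_circ, a_hor, k_hor):
          """Toy three-component sky radiance: isotropic + circumsolar +
          horizon-brightening terms; angles in radians."""
          iso = a_iso
          circ = a_circ * np.exp(-k_circ * sun_angle)          # peaks toward the sun
          hor = a_hor * np.exp(-k_hor * (np.pi / 2 - zenith))  # peaks at the horizon
          return iso + circ + hor

      # Example: clear-sky-like weighting with a strong circumsolar term
      print(sky_radiance(np.pi / 3, 0.1, a_iso=1.0, a_circ=5.0, k_circ=3.0,
                         a_hor=0.5, k_hor=2.0))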

  12. Modeling the brain morphology distribution in the general aging population

    Science.gov (United States)

    Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.

    2016-03-01

    Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.

  13. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    Science.gov (United States)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R, selecting the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the NIG parameters for JPY/CHF by maximum likelihood (a computational problem), even though CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
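
    The same fit-and-test workflow can be sketched with scipy's norminvgauss (synthetic data stand in for the closing-price series; as the abstract notes, the MLE step can fail to converge for some series):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Synthetic stand-in for a daily closing-price (or return) series
      data = stats.norminvgauss.rvs(a=2.0, b=-0.5, loc=0.0, scale=1.0,
                                    size=1200, random_state=rng)

      params = stats.norminvgauss.fit(data)   # MLE: (a, b, loc, scale)
      D, p = stats.kstest(data, "norminvgauss", args=params)
      print(f"fitted (a, b, loc, scale) = {np.round(params, 3)}")
      print(f"KS statistic = {D:.3f}, p = {p:.3f}")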

  14. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.

  15. Elastic microfibril distribution in the cornea: Differences between normal and keratoconic stroma.

    Science.gov (United States)

    White, Tomas L; Lewis, Philip N; Young, Robert D; Kitazawa, Koji; Inatomi, Tsutomu; Kinoshita, Shigeru; Meek, Keith M

    2017-06-01

    The optical and biomechanical properties of the cornea are largely governed by the collagen-rich stroma, a layer that represents approximately 90% of the total thickness. Within the stroma, the specific arrangement of superimposed lamellae provides the tissue with tensile strength, whilst the spatial arrangement of individual collagen fibrils within the lamellae confers transparency. In keratoconus, this precise stromal arrangement is lost, resulting in ectasia and visual impairment. In the normal cornea, we previously characterised the three-dimensional arrangement of an elastic fiber network spanning the posterior stroma from limbus-to-limbus. In the peripheral cornea/limbus there are elastin-containing sheets or broad fibers, most of which become microfibril bundles (MBs) with little or no elastin component when reaching the central cornea. The purpose of the current study was to compare this network with the elastic fiber distribution in post-surgical keratoconic corneal buttons, using serial block face scanning electron microscopy and transmission electron microscopy. We have demonstrated that the MB distribution is very different in keratoconus. MBs are absent from a region of stroma anterior to Descemet's membrane, an area that is densely populated in normal cornea, whilst being concentrated below the epithelium, an area in which they are absent in normal cornea. We contend that these latter microfibrils are produced as a biomechanical response to provide additional strength to the anterior stroma in order to prevent tissue rupture at the apex of the cone. A lack of MBs anterior to Descemet's membrane in keratoconus would alter the biomechanical properties of the tissue, potentially contributing to the pathogenesis of the disease. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Distributions with given marginals and statistical modelling

    CERN Document Server

    Fortiana, Josep; Rodriguez-Lallena, José

    2002-01-01

    This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.

  17. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows easy computation of risk-neutral predictive price densities which take into account parameter uncertainty... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates with the posterior moments, only small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...

  18. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    Science.gov (United States)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited by the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and are estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered, and its performance is assessed against the other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show their dependence on the shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values, and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
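
    The Monte Carlo recipe behind such critical values is generic: fit the null distribution, simulate replicates from the fit, re-estimate the parameters on each replicate, and take an upper percentile of the resulting statistics. A sketch using the Anderson-Darling statistic, with maximum-likelihood fitting and scipy's GEV distribution standing in for the paper's L-moment-fitted GNO:

      import numpy as np
      from scipy import stats

      def ad_stat(x, cdf):
          """Anderson-Darling statistic A^2 for a fully specified CDF."""
          u = np.clip(np.sort(cdf(x)), 1e-10, 1 - 1e-10)
          n, i = len(u), np.arange(1, len(u) + 1)
          return -n - np.sum((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1]))) / n

      def ad_critical_value(dist, data, n_rep=500, level=0.95, seed=0):
          """Critical value when parameters are re-estimated on each replicate."""
          rng = np.random.default_rng(seed)
          theta = dist.fit(data)
          sims = []
          for _ in range(n_rep):
              x = dist.rvs(*theta, size=len(data), random_state=rng)
              th = dist.fit(x)                       # re-fit, as in practice
              sims.append(ad_stat(x, lambda v: dist.cdf(v, *th)))
          return np.quantile(sims, level)

      rng = np.random.default_rng(3)
      flows = stats.genextreme.rvs(-0.1, loc=100, scale=30, size=60, random_state=rng)
      obs = ad_stat(flows, lambda v: stats.genextreme.cdf(v, *stats.genextreme.fit(flows)))
      crit = ad_critical_value(stats.genextreme, flows)
      print(f"A^2 = {obs:.3f}, 95% Monte Carlo critical value = {crit:.3f}")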

  19. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of the optimal portfolio is a method that can help investors to minimize risks and optimize profitability. One model for the optimal portfolio is a Black-Litterman (BL) model. BL model can incorporate an element of historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for preparing the asset weighting models. BL model has two fundamental problems, the assumption of normality and estimation parameters on the market Bayesian prior framework that does not from a normal distribution. This study provides an alternative solution where the modelling of the BL model stock returns and investor views from non-normal distribution.

  20. Sildenafil normalizes bowel transit in preclinical models of constipation.

    Directory of Open Access Journals (Sweden)

    Sarah K Sharman

    Full Text Available Guanylyl cyclase-C (GC-C) agonists increase cGMP levels in the intestinal epithelium to promote secretion. This process underlies the utility of exogenous GC-C agonists such as linaclotide for the treatment of chronic idiopathic constipation (CIC) and irritable bowel syndrome with constipation (IBS-C). Because GC-C agonists have limited use in pediatric patients, there is a need for alternative cGMP-elevating agents that are effective in the intestine. The present study aimed to determine whether the PDE-5 inhibitor sildenafil has effects similar to those of linaclotide in preclinical models of constipation. Oral administration of sildenafil increased cGMP levels in mouse intestinal epithelium, demonstrating that blocking cGMP breakdown is an alternative approach to increasing cGMP in the gut. Both linaclotide and sildenafil reduced proliferation and increased differentiation in the colon mucosa, indicating common target pathways. The homeostatic effects of cGMP required gut turnover, since maximal effects were observed after 3 days of treatment. Neither linaclotide nor sildenafil treatment affected intestinal transit or the water content of fecal pellets in healthy mice. To test the effectiveness of cGMP elevation in a functional motility disorder model, mice were treated with dextran sulfate sodium (DSS) to induce colitis and were allowed to recover for several weeks. The recovered animals exhibited slower transit but increased fecal water content. An acute dose of sildenafil was able to normalize transit and fecal water content in the DSS-recovery animal model, and also in loperamide-induced constipation. The higher fecal water content in the recovered animals was due to a compromised epithelial barrier, which was normalized by sildenafil treatment. Taken together, our results show that sildenafil can have effects similar to those of linaclotide on the intestine, and may have therapeutic benefit for patients with CIC, IBS-C, and post-infectious IBS.

  1. A Distributional Representation Model For Collaborative Filtering

    OpenAIRE

    Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue

    2015-01-01

    In this paper, we propose a very concise deep learning approach for collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance when compared against current state-of-the-art algorithms, making the distributional representation model a promising direction for further research in collaborative filtering.

  2. Rapid Prototyping of Formally Modelled Distributed Systems

    OpenAIRE

    Buchs, Didier; Buffo, Mathieu; Titsworth, Frances M.

    1999-01-01

    This paper presents various kinds of prototypes, used in the prototyping of formally modelled distributed systems. It presents the notions of prototyping techniques and prototype evolution, and shows how to relate them to the software life-cycle. It is illustrated through the use of the formal modelling language for distributed systems CO-OPN/2.

  3. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

    Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to clarify the present status of TNSA modeling and experimental results by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of this analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  4. Financing options and economic impact: distributed generation using solar photovoltaic systems in Normal, Illinois

    Directory of Open Access Journals (Sweden)

    Jin H. Jo

    2016-04-01

    Full Text Available Due to increasing price volatility in fossil-fuel-produced energy, the demand for clean, renewable, and abundant energy is more prevalent than in past years. Solar photovoltaic (PV systems have been well documented for their ability to produce electrical energy while at the same time offering support to mitigate the negative externalities associated with fossil fuel combustion. Prices for PV systems have decreased over the past few years, however residential and commercial owners may still opt out of purchasing a system due to the overall price required for a PV system installation. Therefore, determining optimal financing options for residential and small-scale purchasers is a necessity. We report on payment methods currently used for distributed community solar projects throughout the US and suggest appropriate options for purchasers in Normal, Illinois given their economic status. We also examine the jobs and total economic impact of a PV system implementation in the case study area.

  5. Topographical Distribution of Arsenic, Manganese, and Selenium in the Normal Human Brain

    DEFF Research Database (Denmark)

    Larsen, Niels Agersnap; Pakkenberg, H.; Damsgaard, Else

    1979-01-01

    The concentrations of arsenic, manganese and selenium per gram wet tissue weight were determined in samples from 24 areas of normal human brains from 5 persons aged 15 to 81 years. The concentrations of the 3 elements were determined for each sample by means of neutron activation analysis with radiochemical separation. Distinct patterns of distribution were shown for each of the 3 elements. Variations between individuals were found for some but not all brain areas, resulting in coefficients of variation between individuals of about 30% for arsenic, 10% for manganese and 20% for selenium. The results seem to indicate that arsenic is associated with the lipid phase, manganese with the dry matter and selenium with the aqueous phase of brain tissue.

  6. Statistical mechanics of normal grain growth in one dimension: A partial integro-differential equation model

    International Nuclear Information System (INIS)

    Ng, Felix S.L.

    2016-01-01

    We develop a statistical-mechanical model of one-dimensional normal grain growth that does not require any drift-velocity parameterization for grain size, such as used in the continuity equation of traditional mean-field theories. The model tracks the population by considering grain sizes in neighbour pairs; the probability of a pair having neighbours of certain sizes is determined by the size-frequency distribution of all pairs. Accordingly, the evolution obeys a partial integro-differential equation (PIDE) over ‘grain size versus neighbour grain size’ space, so that the grain-size distribution is a projection of the PIDE's solution. This model, which is applicable before as well as after statistically self-similar grain growth has been reached, shows that the traditional continuity equation is invalid outside this state. During statistically self-similar growth, the PIDE correctly predicts the coarsening rate, invariant grain-size distribution and spatial grain size correlations observed in direct simulations. The PIDE is then reducible to the standard continuity equation, and we derive an explicit expression for the drift velocity. It should be possible to formulate similar parameterization-free models of normal grain growth in two and three dimensions.

  7. TOTAL NUMBER, DISTRIBUTION, AND PHENOTYPE OF CELLS EXPRESSING CHONDROITIN SULPHATE PROTEOGLYCANS IN THE NORMAL HUMAN AMYGDALA

    Science.gov (United States)

    Pantazopoulos, Harry; Murray, Elisabeth A.; Berretta, Sabina

    2009-01-01

    Chondroitin sulphate proteoglycans (CSPGs) are a key structural component of the brain extracellular matrix. They are involved in critical neurodevelopmental functions and are one of the main components of pericellular aggregates known as perineuronal nets. As a step toward investigating their functional and pathophysiological roles in the human amygdala, we assessed the pattern of CSPG expression in the normal human amygdala using wisteria floribunda agglutinin (WFA) lectin-histochemistry. Total numbers of WFA-labeled elements were measured in the lateral (LN), basal (BN), accessory basal (ABN) and cortical (CO) nuclei of the amygdala from 15 normal adult human subjects. For interspecies qualitative comparison, we also investigated the pattern of WFA labeling in the amygdala of naïve rats (n=32) and rhesus monkeys (Macaca mulatta; n=6). In human amygdala, WFA lectin-histochemistry resulted in labeling of perineuronal nets and cells with clear glial morphology, while neurons did not show WFA-labeling. Total numbers of WFA-labeled glial cells showed high interindividual variability. These cells aggregated in clusters with a consistent between-subjects spatial distribution. In a subset of human subjects (n=5), dual color fluorescence using an antibody raised against glial fibrillary acidic protein (GFAP) and WFA showed that the majority (93.7%) of WFA-labeled glial cells correspond to astrocytes. In rat and monkey amygdala, WFA histochemistry labeled perineuronal nets, but not glial cells. These results suggest that astrocytes are the main cell type expressing CSPGs in the adult human amygdala. Their highly segregated distribution pattern suggests that these cells serve specialized functions within human amygdalar nuclei. PMID:18374308

  8. The normal zone propagation in ATLAS B00 model coil

    CERN Document Server

    Boxman, E W; ten Kate, H H J

    2002-01-01

    The B00 model coil has been successfully tested in the ATLAS Magnet Test Facility at CERN. The coil consists of two double pancakes wound with the aluminium-stabilized cables of the barrel and end-cap toroid conductors for the ATLAS detector. The magnet current is applied up to 24 kA and quenches are induced by firing point heaters. The normal zone velocity is measured over a wide range of currents by using pickup coils, voltage taps and superconducting quench detectors. The signals coming from the various sensors are presented and analyzed, and the results extracted from the various detection methods are in good agreement. It is found that the characteristic velocities vary from 5 to 20 m/s at 15 and 24 kA, respectively. In addition, the minimum quench energies at different applied magnet currents are presented. (6 refs).

  9. Mathematical model of normal tissue injury in telegammatherapy

    International Nuclear Information System (INIS)

    Belov, S.A.; Lyass, F.M.; Mamin, R.G.; Minakova, E.I.; Raevskaya, S.A.

    1983-01-01

    A model of normal tissue injury as a result of exposure to ionizing radiation is based on the assumption that the degree of tissue injury is determined by the degree of destruction of certain critical cells. The dependence of the number of lethal injuries on a single dose is expressed by a trinomial (linear and quadratic parts and a constant) obtained as a result of the processing of experimental data. Quantitative correlations have been obtained for the skin and brain, and have been tested using clinical and experimental material. The results of the testing point to the absence of a time dependence for single irradiation courses of up to 6 weeks. A correlation with the irradiation field has been obtained for the skin. It is concluded that the concept of isoefficacy of irradiation courses is conditional. Spatio-temporal fractionation is a promising direction in the development of radiation therapy.

  10. Even-odd charged multiplicity distributions and energy dependence of normalized multiplicity moments in different rapidity windows

    International Nuclear Information System (INIS)

    Wu Yuanfang; Liu Lianshou

    1990-01-01

    The even and odd multiplicity distributions for hadron-hadron collisions in different rapidity windows are calculated, starting from a simple picture of charge correlation with non-zero correlation length. The coincidence and separation of these distributions are explained. The calculated window- and energy-dependence of the normalized moments recovers the behaviour found in experiments. A new definition of normalized moments is proposed, especially suitable for narrow rapidity windows.
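
    The moments in question are C_q = <n^q>/<n>^q, computed from the multiplicity distribution in a given rapidity window. A minimal sketch on simulated multiplicities (a negative binomial is a common stand-in for hadronic multiplicity distributions):

      import numpy as np

      rng = np.random.default_rng(4)
      n = rng.negative_binomial(5, 0.3, size=100_000)  # simulated charged multiplicities

      for q in range(2, 6):
          Cq = np.mean(n.astype(float) ** q) / np.mean(n) ** q
          print(f"C_{q} = {Cq:.3f}")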

  11. Business Models and Regulation | Distributed Generation Interconnection

    Science.gov (United States)

    The growing role of distributed resources in the electricity system is leading to a shift in business models and regulation for electric utilities.

  12. Measurement of activity-weighted size distributions of radon decay products in a normally occupied home

    International Nuclear Information System (INIS)

    Hopke, P.K.; Wasiolek, P.; Montassier, N.; Cavallo, A.; Gadsby, K.; Socolow, R.

    1992-01-01

    In order to assess the exposure of individuals to the presence of indoor radioactivity arising from the decay of radon, an automated, semicontinuous graded screen array system was developed to permit the measurement of the activity-weighted size distributions of the radon progeny in homes. The system has been modified so that the electronics and sampling heads can be separated from the pump by approximately 15 m. The system was placed in the living room of a one-storey house with basement in Princeton, NJ and operated for 2 weeks while the house was occupied by the home owners in their normal manner. One of the house occupants was a cigarette smoker. Radon and potential alpha energy concentration (PAEC) measurements were also made, but condensation nuclei counts were not performed. PAEC values ranged from 23.4 to 461.6 mWL. In the measured activity size distributions, the amount of activity in the 0.5-1.5 nm size range can be considered to be the unattached fraction. The mean value for the ²¹⁸Po unattached fraction is 0.217 with a range of 0.054-0.549. The median value for the unattached fraction of PAEC is 0.077 with a range of 0.022-0.178. (author)

  13. [Calbindin and parvalbumin distribution in spinal cord of normal and rabies-infected mice].

    Science.gov (United States)

    Monroy-Gómez, Jeison; Torres-Fernández, Orlando

    2013-01-01

    Rabies is a fatal infectious disease of the nervous system; however, knowledge about the pathogenic neural mechanisms of rabies is scarce, and there are few studies of rabies pathology of the spinal cord. Our aim was to study the distribution of the calcium-binding proteins calbindin and parvalbumin and to assess the effect of rabies virus infection on their expression in the spinal cord of mice. MATERIALS AND METHODS: Mice were inoculated with rabies virus by the intracerebral or intramuscular route. The spinal cord was extracted to perform crosscuts, which were treated by immunohistochemistry with monoclonal antibodies to reveal the presence of the two proteins in normal and rabies-infected mice. We performed qualitative and quantitative analyses of the immunoreactivity of the two proteins. Calbindin and parvalbumin showed differential distribution in the Rexed laminae. Rabies infection produced a decrease in the expression of calbindin. In contrast, the infection caused an increased expression of parvalbumin. The effect of rabies infection on the expression of the two proteins was similar for both routes of inoculation. The differential effect of rabies virus infection on the expression of calbindin and parvalbumin in the spinal cord of mice was similar to that previously reported for brain areas. This result suggests uniformity in the response to rabies infection throughout the central nervous system. This is an important contribution to the understanding of the pathogenesis of rabies.

  14. Log-normal distribution of ²²²Rn in the state of Zacatecas, Mexico

    International Nuclear Information System (INIS)

    Garcia, M.L.; Mireles, F.; Quirino, L.; Davila, I.; Rios, C.; Pinedo, J.L.

    2006-01-01

    This work evaluates the concentration of ²²²Rn in air for Zacatecas. Solid state nuclear track detectors (cellulose nitrate LR-115, type 2) in open ²²²Rn chambers were used as the technique for carrying out the measurements on a large scale. The measurements were carried out over three months at different times of the year. The results present the log-normal distribution, arithmetic mean and geometric mean of the concentration indoors and outdoors in residential buildings, indoors in occupational buildings, and in the 57 municipal seats of the state of Zacatecas. The concentration statistics varied with the time of year, with higher values obtained in winter in both cases. The distribution of the ²²²Rn concentration is presented on the state map for each of the municipalities, covering the measurement places in the entire state of Zacatecas. Finally, the places where the airborne ²²²Rn concentrations are near the limit of 148 Bq/m³ established by the EPA are presented. (Author)
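
    For a log-normal concentration sample, the geometric mean/GSD and the expected fraction exceeding the 148 Bq/m³ reference level follow directly. A sketch on simulated indoor data (not the Zacatecas measurements):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      conc = rng.lognormal(mean=3.5, sigma=0.7, size=500)  # simulated Bq/m^3 values

      logs = np.log(conc)
      gm, gsd = np.exp(logs.mean()), np.exp(logs.std(ddof=1))
      frac = 1 - norm.cdf((np.log(148.0) - logs.mean()) / logs.std(ddof=1))

      print(f"AM = {conc.mean():.1f} Bq/m^3, GM = {gm:.1f} Bq/m^3, GSD = {gsd:.2f}")
      print(f"fitted fraction above 148 Bq/m^3: {100 * frac:.1f}%")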

  15. Heterogeneous distribution of a diffusional tracer in the aortic wall of normal and atherosclerotic rabbits

    International Nuclear Information System (INIS)

    Tsutsui, H.; Tomoike, H.; Nakamura, M.

    1990-01-01

    Tracer distribution as an index of nutritional support across the thoracic and abdominal aortas in rabbits in the presence or absence of atherosclerotic lesions was evaluated using [¹⁴C]antipyrine, a metabolically inert, diffusible indicator. Intimal plaques were produced by endothelial balloon denudation of the thoracic aorta and a 1% cholesterol diet. After a steady intravenous infusion of 200 μCi of [¹⁴C]antipyrine for 60 seconds, the thoracic and abdominal aortas and the heart were excised, and autoradiograms of 20-μm-thick sections were quantified using microcomputer-aided densitometry. Regional radioactivity and regional diffusional support, as an index of nutritional flow estimated from timed collections of arterial blood, were 367 and 421 nCi·g⁻¹ (82 and 106 ml·min⁻¹·100 g⁻¹) in the thoracic aortic media of the normal and atherosclerotic rabbits, respectively. Radioactivity at the thickened intima was 179 nCi·g⁻¹ (p < 0.01 versus media). The gruel was noted at a deeper site within the thickened intima, and diffusional support here was 110 nCi·g⁻¹ (p < 0.01 versus the average radioactivity at the thickened intima). After ligating the intercostal arteries, the regional tracer distribution in the media beneath the fibrofatty lesion, but not the plaque-free intima, was reduced to 46%. Thus, in the presence of advanced intimal thickening, the heterogeneous distribution of diffusional flow is prominent across the vessel wall, and abluminal routes are crucial to meet the increased demands of nutritional requirements.

  16. Distributed modeling for road authorities

    NARCIS (Netherlands)

    Luiten, G.T.; Bõhms, H.M.; Nederveen, S. van; Bektas, E.

    2013-01-01

    A great challenge for road authorities is to improve the effectiveness and efficiency of their core processes by improving data exchange and sharing using new technologies such as building information modeling (BIM). BIM has already been successfully implemented in other sectors, such as

  17. New trends in species distribution modelling

    Science.gov (United States)

    Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian

    2010-01-01

    Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.

  18. Air Distribution Effectiveness for Residential Mechanical Ventilation: Simulation and Comparison of Normalized Exposures

    Energy Technology Data Exchange (ETDEWEB)

    Petithuguenin, T.D.P.; Sherman, M.H.

    2009-05-01

    The purpose of ventilation is to dilute the indoor contaminants that an occupant is exposed to. Even when providing the same nominal rate of outdoor air, different ventilation systems may distribute air in different ways, affecting occupants' exposure to household contaminants. Exposure ultimately depends on the home being considered, on source disposition and strength, on occupants' behavior, on the ventilation strategy, and on operation of forced air heating and cooling systems. In any multi-zone environment, dilution rates and source strengths may be different in every zone and change in time, resulting in exposure being tied to occupancy patterns. This paper will report on simulations that compare ventilation systems by assessing their impact on exposure, examining common house geometries, contaminant generation profiles, and occupancy scenarios. These simulations take into account the unsteady, occupancy-tied aspects of ventilation such as bathroom and kitchen exhaust fans. As most US homes have central HVAC systems, the simulation results will be used to make appropriate recommendations and adjustments for distribution and mixing to residential ventilation standards such as ASHRAE Standard 62.2. This paper will also report on work being done to model unsteady multizone airflow systems and to elaborate the concept of a distribution matrix. It will examine several metrics for evaluating the effect of air distribution on exposure to pollutants, based on previous work by Sherman et al. (2006).

  19. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  20. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  1. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  2. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  3. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  4. Modelling refrigerant distribution in minichannel evaporators

    DEFF Research Database (Denmark)

    Brix, Wiebke

    This thesis is concerned with numerical modelling of flow distribution in a minichannel evaporator for air-conditioning. The study investigates the impact of non-uniform airflow and non-uniform distribution of the liquid and vapour phases in the inlet manifold on the refrigerant mass flow. Combining non-uniform airflow and non-uniform liquid and vapour distribution shows that a non-uniform airflow distribution can to some degree be compensated by a suitable liquid and vapour distribution. Controlling the superheat out of the individual channels to be equal results in a cooling capacity very close to the optimum. A sensitivity study considering parameter changes shows that the course of the pressure gradient in the channel is significant, considering the magnitude of the capacity reductions due to non-uniform liquid and vapour distribution and non-uniform airflow.

  5. Absolute quantification of pharmacokinetic distribution of RES colloids in individuals with normal liver function

    International Nuclear Information System (INIS)

    Herzog, H.; Spohr, G.; Notohamiprodjo, G.; Feinendegen, L.E.

    1987-01-01

    Estimates of the radiation dose resulting from liver-spleen scintigraphy with ⁹⁹ᵐTc-labelled colloids are based on pharmacokinetic data mainly determined in animals. The aim of this study was to check these pharmacokinetic data by direct, absolute in vivo quantification in man. Liver and spleen activities were directly measured using a double-energy window technique. Activities in other organs were quantified by conjugate whole-body scans. All measurement procedures were checked using the whole-body Alderson phantom. Pharmacokinetic data for sulphur colloid, tin colloid, human serum albumin (HSA) millimicrospheres, and phytate were obtained in 13 to 20 normal subjects for each type of colloid. Depending on the colloid type, liver uptake was between 54 and 75% of the total administered dose (TAD) and spleen uptake was 3.5 to 21% TAD. Activity measured in blood, urine, lung and thyroid proved to be far from negligible. The results of this work suggest a correction of the animal-based data on colloid distribution and radiation dose on the basis of direct measurement of absolute uptake in man. (author)

  6. Pharmacodynamic and pharmacokinetic studies and prostatic tissue distribution of fosfomycin tromethamine in bacterial prostatitis or normal rats.

    Science.gov (United States)

    Fan, L; Shang, X; Zhu, J; Ma, B; Zhang, Q

    2018-05-02

    In this study, we assessed the therapeutic effects of fosfomycin tromethamine (FT) in a bacterial prostatitis (BP) rat model. The BP model was induced by Escherichia coli and was demonstrated after 7 days microbiologically and histologically. Then, 25 BP rats selected were randomly divided into five treatment groups: model group, positive group, FT-3 day group, FT-7 day group and FT-14 day group. Ventral lobes of prostate from all animals were removed, and the serum samples were collected at the end of the experiments. Microbiological cultures and histological findings of the prostate samples demonstrated reduced bacterial growth and improved inflammatory responses in FT-treatment groups compared with the model group, indicating that FT against prostatic infection induced by E. coli showed good antibacterial effects. Moreover, plasma pharmacokinetics and prostatic distribution of fosfomycin were studied and compared in BP and normal rats. The concentrations of fosfomycin in samples were analysed by liquid chromatography-tandem mass spectrometry. There were no differences in plasma pharmacokinetic parameters between two groups. But significantly higher penetration of fosfomycin into prostatic tissues was found in BP rats. We therefore suggested that FT had a good therapeutic effect on BP and it might be used in curing masculine reproductive system diseases. © 2018 Blackwell Verlag GmbH.

  7. STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
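    The paper's closing claim, that log-normality of the spin follows from a multiplicative (Geometric Brownian Motion) evolution, can be illustrated with a toy simulation; the drift, volatility, step count and initial spin below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch: a multiplicative (geometric-Brownian-motion-like) random walk.
# Each "merger/accretion" step multiplies the spin by a random factor, which is
# additive in log space, so the log-spins become ~normal by the CLT.
n_halos, n_steps = 100_000, 200
mu, sigma, dt = 0.0, 0.05, 1.0          # illustrative GBM parameters

log_spin = np.log(np.full(n_halos, 0.035))   # common (assumed) initial spin
for _ in range(n_steps):
    log_spin += (mu - 0.5 * sigma**2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal(n_halos)

spins = np.exp(log_spin)                # spins are approximately log-normal
print("mean log-spin:", log_spin.mean(), "std:", log_spin.std())
```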

  8. Reference man models based on normal data from human populations

    International Nuclear Information System (INIS)

    Tanaka, Gi-ichiro; Kawamura, Hisao

    2000-01-01

    Quantitative description of the physical, and metabolic parameters of the human body is the very basic for internal dosimetry. Compilation of anatomical and other types of data Asian populations for internal (and external) dosimetry is of grate significance because of the potential spread of nuclear energy use in the Asian region and the major contribution of the region to the world population (about 58%). It has been observed that some differences exist for habitat, race, body sizes and pattern of food consumption. In the early stage of revision of ICRP Reference man by the Task Group, Characteristics of the human body of non-European populations received considerable attention as well as those of the European populations of different sexes and ages. In this context, an IAEA-RCA Co-ordinated Research Program on Compilation of Anatomical, Physiological and Metabolic Characteristics for a Reference Asian Man endorsed. In later stages of reference Man revision, anatomical data for Asians was discusses together with those of European populations, presumably due to ICRP's decision of unanimous use of the Reference Man for radiation protection. Reference man models for adults and 15, 10, 5, 1, 0 year-old males and females of Asian populations were developed for use in internal and external dosimetry. Based on the concept of ICRP Reference Man (Publication 23), the reference values were derived from the normal organ mass data for Japanese and statistical data on the physique and nutrition of Japanese and Chinese. Also incorporated were variations in physical measurements, as observed in the above mentioned IAEA-RCA Co-ordinated Research Program. The work was partly carried out within the activities of the ICRP Task Group on Reference Man. The weight of the skeleton was adjusted following the revised values in Publication 70. This paper will report basic shared and non-shared characteristics of Reference Man' for Asians and ICRP Reference Man. (author)

  9. Mathematical Models for Room Air Distribution

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models of air distribution in rooms are introduced. These include the throw model, a model for the penetration length of a cold wall jet, and a model for the maximum velocity used in dimensioning an air distribution system in highly loaded rooms; it is shown that the amount of heat removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single duct systems are given, and the paper concludes by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room.

  10. Mathematical Models for Room Air Distribution - Addendum

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    1982-01-01

    A number of different models of air distribution in rooms are introduced. These include the throw model, a model for the penetration length of a cold wall jet, and a model for the maximum velocity used in dimensioning an air distribution system in highly loaded rooms; it is shown that the amount of heat removed from the room at constant penetration length is proportional to the cube of the velocities in the occupied zone. It is also shown that a large number of diffusers increases the amount of heat which may be removed without affecting the thermal conditions. Control strategies for dual duct and single duct systems are given, and the paper concludes by mentioning a computer-based prediction method which gives the velocity and temperature distribution in the whole room.

  11. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    Science.gov (United States)

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on normally transformed benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three transformations decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had the minimum root-mean-square error of 1.17 and a mean error of 0.19, making it more accurate than the other two models. The areas with fewer sampling points and those with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
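    A minimal sketch of one of the three transformations named above, the Box-Cox transform, applied to synthetic right-skewed concentration data and checked with a Kolmogorov-Smirnov test (the normal score and Johnson transforms, and the kriging step itself, are omitted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical, strongly right-skewed contaminant concentrations (mg/kg).
bbf = rng.lognormal(mean=0.5, sigma=1.2, size=300)

print("raw skewness:", stats.skew(bbf), "kurtosis:", stats.kurtosis(bbf))

transformed, lmbda = stats.boxcox(bbf)   # Box-Cox transformation, MLE lambda
print("Box-Cox lambda:", lmbda)
print("transformed skewness:", stats.skew(transformed))

# Kolmogorov-Smirnov test of the standardized transformed data against N(0, 1)
z = (transformed - transformed.mean()) / transformed.std(ddof=1)
print(stats.kstest(z, "norm"))
```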

  12. A Hierarchy Model of Income Distribution

    OpenAIRE

    Fix, Blair

    2018-01-01

    Based on worldly experience, most people would agree that firms are hierarchically organized, and that pay tends to increase as one moves up the hierarchy. But how this hierarchical structure affects income distribution has not been widely studied. To remedy this situation, this paper presents a new model of income distribution that explores the effects of social hierarchy. This ‘hierarchy model’ takes the limited available evidence on the structure of firm hierarchies and generalizes it to c...

  13. A distributed dynamic model of a monolith hydrogen membrane reactor

    International Nuclear Information System (INIS)

    Michelsen, Finn Are; Wilhelmsen, Øivind; Zhao, Lei; Aasen, Knut Ingvar

    2013-01-01

    Highlights: ► We develop a rigorous distributed dynamic model of an HMR unit. ► The model includes enough complexity for steady-state and dynamic analysis. ► Simulations show that the model is non-linear within the normal operating range. ► The model is useful for studying and handling disturbances such as inlet changes and membrane leakage. - Abstract: This paper describes a distributed mechanistic dynamic model of a hydrogen membrane reformer (HMR) unit used for methane steam reforming. The model is based on a square-channel monolith structure concept, where air flows adjacent to a mix of natural gas and water distributed in a chess pattern of channels. Combustion of hydrogen supplies energy to the endothermic steam reforming reactions. The model is used for both steady-state and dynamic analyses. It therefore needs to be computationally attractive, yet still include enough complexity to capture the important steady-state and dynamic features of the process. Steady-state analysis of the model yields optima for the steam-to-carbon and steam-to-oxygen ratios; at the nominal optimum the conversion of methane is 92% and the hydrogen used as energy for the endothermic reactions is 28%. The dynamic analysis shows that non-linear control schemes may be necessary for satisfactory control performance

  14. Advanced Distribution Network Modelling with Distributed Energy Resources

    Science.gov (United States)

    O'Connell, Alison

    The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
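    One step of a cost-minimising charging schedule of the kind such a rolling optimisation produces can be sketched as a small linear program; the price series, charger limit and energy demand below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch of one rolling-horizon step for controlled EV charging.
prices = np.array([0.30, 0.28, 0.22, 0.15, 0.12, 0.14, 0.20, 0.25])  # EUR/kWh
hours = len(prices)
max_rate_kw = 7.0            # assumed charger limit per hour
energy_needed_kwh = 30.0     # assumed energy the vehicle must receive

# minimise prices . x  subject to  sum(x) = energy_needed, 0 <= x <= max_rate
res = linprog(
    c=prices,
    A_eq=np.ones((1, hours)), b_eq=[energy_needed_kwh],
    bounds=[(0.0, max_rate_kw)] * hours,
    method="highs",
)
print("charging schedule (kWh per hour):", np.round(res.x, 2))
print("total cost:", round(res.fun, 2), "EUR")
```

In a rolling formulation this program would be re-solved each period as prices and network constraints update, applying only the first step of each solution.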

  15. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is obtained. Namely, the distribution has an enhanced tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  16. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Speed is a fundamental measure of traffic performance for highway systems. While many results exist for the speed characteristics of motorized vehicles, the speed distribution of mixed bicycle traffic has largely been ignored. In this article, we study the speed distribution for mixed bicycle traffic. Field speed data were collected in Hangzhou, China, at different survey sites, under different traffic conditions, and with different percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, which is 3.63 km/h (27.0%) faster than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were tested against the speed data. The goodness-of-fit hypothesis tests imply that the log-normal and Weibull models fit the field data very well. Relationships between mean speed and electric bicycle proportion were then established using linear regression models, from which the mean speed for purely electric or purely regular bicycle traffic can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
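    The distribution-fitting and goodness-of-fit step described above can be reproduced in outline as follows; the synthetic speed sample stands in for the Hangzhou field data, which are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical e-bike speed sample (km/h), standing in for the field data.
speeds = rng.lognormal(mean=np.log(17.0), sigma=0.25, size=500)

candidates = {
    "normal":     stats.norm,
    "log-normal": stats.lognorm,
    "gamma":      stats.gamma,
    "Weibull":    stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(speeds)                      # maximum-likelihood fit
    ks = stats.kstest(speeds, dist.cdf, args=params)
    print(f"{name:10s} KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")
```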

  17. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  18. Analysis of a hundred-years series of magnetic activity indices. III. Is the frequency distribution logarithmo-normal

    International Nuclear Information System (INIS)

    Mayaud, P.N.

    1976-01-01

    Because of the various components of positive conservation existing in the series of aa indices, their frequency distribution is necessarily distorted with respect to any random distribution. However, when one takes these various components into account, the observed distribution can be considered to be a logarithmo-normal distribution. This implies that geomagnetic activity satisfies the conditions of the central limit theorem, according to which a phenomenon presenting such a distribution is due to independent causes whose effects are multiplicative. Furthermore, the distortion of the frequency distribution caused by the 11-year and 90-year cycles corresponds to a pure attenuation effect; an interpretation in terms of solar 'coronal holes' is proposed.

  19. Distribution of CD163-positive cell and MHC class II-positive cell in the normal equine uveal tract.

    Science.gov (United States)

    Sano, Yuto; Matsuda, Kazuya; Okamoto, Minoru; Takehana, Kazushige; Hirayama, Kazuko; Taniyama, Hiroyuki

    2016-02-01

    Antigen-presenting cells (APCs) in the uveal tract participate in ocular immunity including immune homeostasis and the pathogenesis of uveitis. In horses, although uveitis is the most common ocular disorder, little is known about ocular immunity, such as the distribution of APCs. In this study, we investigated the distribution of CD163-positive and MHC II-positive cells in the normal equine uveal tract using an immunofluorescence technique. Eleven eyes from 10 Thoroughbred horses aged 1 to 24 years old were used. Indirect immunofluorescence was performed using the primary antibodies CD163, MHC class II (MHC II) and CD20. To demonstrate the site of their greatest distribution, positive cells were manually counted in 3 different parts of the uveal tract (ciliary body, iris and choroid), and their average number was assessed by statistical analysis. The distribution of pleomorphic CD163- and MHC II-expressed cells was detected throughout the equine uveal tract, but no CD20-expressed cells were detected. The statistical analysis demonstrated the distribution of CD163- and MHC II-positive cells focusing on the ciliary body. These results demonstrated that the ciliary body is the largest site of their distribution in the normal equine uveal tract, and the ciliary body is considered to play important roles in uveal and/or ocular immune homeostasis. The data provided in this study will help further understanding of equine ocular immunity in the normal state and might be beneficial for understanding of mechanisms of ocular disorders, such as equine uveitis.

  20. A model for fission product distribution in CANDU fuel

    International Nuclear Information System (INIS)

    Muzumdar, A.P.

    1983-01-01

    This paper describes a model to estimate the distribution of active fission products among the UO₂ grains, grain boundaries, and free void spaces in CANDU fuel elements during normal operation. This distribution is required for calculating the potential release of activity from failed fuel sheaths during a loss-of-coolant accident. The activity residing in the free spaces (the "free" inventory) is available for release upon sheath rupture, whereas relatively high fuel temperatures and/or thermal shock are required to release the activity in the grain boundaries or grains. A preliminary comparison of the model with data from in-reactor sweep-gas experiments performed in Canada yields generally good agreement, with overprediction rather than underprediction of radiologically important isotopes such as ¹³¹I. The model also appears to agree generally with the "free" inventory release calculated using ANS-5.4. (author)

  1. The skin immune system (SIS): distribution and immunophenotype of lymphocyte subpopulations in normal human skin

    NARCIS (Netherlands)

    Bos, J. D.; Zonneveld, I.; Das, P. K.; Krieg, S. R.; van der Loos, C. M.; Kapsenberg, M. L.

    1987-01-01

    The complexity of immune response-associated cells present in normal human skin was recently redefined as the skin immune system (SIS). In the present study, the exact immunophenotypes of lymphocyte subpopulations with their localizations in normal human skin were determined quantitatively. B cells

  2. Modeling the distribution of Culex tritaeniorhynchus to predict Japanese encephalitis distribution in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Penny Masuoka

    2010-11-01

    Over 35,000 cases of Japanese encephalitis (JE) are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine are amplifying hosts. As part of a JE risk analysis, the ecological niche modelling programme Maxent was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI). The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito, with low probabilities predicted for forest-covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September), precipitation in July, summer minimum temperature (May-August) and maximum temperature for the fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model with the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.
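    Maxent itself is a dedicated package, but the general shape of a presence/background niche model can be sketched with a logistic-regression stand-in; the covariates mirror those named in the abstract, while all numbers, shapes and the modelling choice itself are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical presence/background design (a stand-in for Maxent):
# columns = July min. temperature, elevation, summer NDVI, July precipitation.
X_presence = rng.normal([22.0, 50.0, 0.6, 250.0], [2.0, 40.0, 0.1, 60.0], (200, 4))
X_background = rng.normal([18.0, 300.0, 0.5, 200.0], [4.0, 200.0, 0.15, 80.0], (1000, 4))

X = np.vstack([X_presence, X_background])
y = np.concatenate([np.ones(len(X_presence)), np.zeros(len(X_background))])

model = LogisticRegression(max_iter=1000).fit(X, y)
# Relative habitat-suitability score for a new site:
site = np.array([[21.0, 80.0, 0.62, 240.0]])
print("predicted suitability:", model.predict_proba(site)[0, 1])
```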

  3. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  4. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann

    2011-01-01

    -Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing insights into biogeographic dynamics >400 million years ago). We discuss important assumptions and uncertainties that affect the SDM approach to paleobiology......Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i......) quantitative and potentially high-resolution predictions of the past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview...

  5. Drug binding affinities and potencies are best described by a log-normal distribution and use of geometric means

    International Nuclear Information System (INIS)

    Stanisic, D.; Hancock, A.A.; Kyncl, J.J.; Lin, C.T.; Bush, E.N.

    1986-01-01

    (-)-Norepinephrine (NE) is used as an internal standard in their in vitro adrenergic assays, and the concentration of NE which produces half-maximal inhibition of specific radioligand binding (affinity; K_I), or a half-maximal contractile response (potency; ED₅₀), has been measured numerous times. The goodness-of-fit test for normality was performed on both normal (Gaussian) and log₁₀-normal frequency histograms of these data using the SAS Univariate procedure. Specific binding of ³H-prazosin to rat liver (α₁-), ³H-rauwolscine to rat cortex (α₂-) and ³H-dihydroalprenolol to rat ventricle (β₁-) or rat lung (β₂-receptors) was inhibited by NE; the distributions of NE K_I's at all these sites were skewed to the right, with highly significant departures from normality, as were the ED₅₀'s of NE in isolated rabbit aorta (α₁), phenoxybenzamine-treated dog saphenous vein (α₂) and guinea pig atrium (β₁). The vasorelaxant potency of atrial natriuretic hormone in histamine-contracted rabbit aorta was also better described by a log-normal distribution, indicating that log-normalcy is probably a general phenomenon of drug-receptor interactions. Because data of this type appear to be log-normally distributed, geometric means should be used in parametric statistical analyses
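    The paper's practical recommendation, geometric rather than arithmetic means for log-normally distributed affinities, is easy to demonstrate; the replicate K_I values below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical replicate K_I values (nM) for one compound; the assay data
# from the abstract are not reproduced here.
ki = rng.lognormal(mean=np.log(120.0), sigma=0.5, size=40)

print("arithmetic mean:", ki.mean())        # inflated by the right tail
print("geometric mean :", stats.gmean(ki))  # appropriate for log-normal data

# Normality tends to be rejected on the raw scale but not on the log scale:
print("raw   Shapiro p:", stats.shapiro(ki).pvalue)
print("log10 Shapiro p:", stats.shapiro(np.log10(ki)).pvalue)
```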

  6. Modeling Word Burstiness Using the Dirichlet Distribution

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg; Kauchak, David; Elkan, Charles

    2005-01-01

    Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM......) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections...
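    A minimal sketch of how the DCM captures burstiness: each document draws its own word-probability vector from a Dirichlet prior before drawing word counts, so repeated words cluster within documents. The vocabulary size and concentration parameter below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_dcm(alpha, doc_length, rng):
    """Draw one document's word counts from a Dirichlet compound multinomial.

    A document-specific theta ~ Dirichlet(alpha) is drawn first, so a word
    that fires once becomes more probable within that document: burstiness.
    """
    theta = rng.dirichlet(alpha)
    return rng.multinomial(doc_length, theta)

vocab_size = 10
alpha = np.full(vocab_size, 0.1)   # small alpha => bursty documents
docs = np.array([sample_dcm(alpha, 100, rng) for _ in range(5)])
print(docs)  # counts concentrate on few words per document, unlike a multinomial
```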

  7. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
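    For concreteness, here is a textbook EM iteration for a two-component univariate normal mixture, the kind of iterative maximum-likelihood procedure the paper analyses; this is a generic sketch on synthetic data, not the authors' exact algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 200)])

# EM for a two-component univariate normal mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = np.stack([wk * stats.norm.pdf(x, mk, sk)
                     for wk, mk, sk in zip(w, mu, sd)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate weights, means and standard deviations
    nk = resp.sum(axis=1)
    w = nk / len(x)
    mu = (resp @ x) / nk
    sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights:", w, "means:", mu, "sds:", sd)
```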

  8. Are There More Gifted People Than Would Be Expected in a Normal Distribution? An Investigation of the Overabundance Hypothesis

    Science.gov (United States)

    Warne, Russell T.; Godwin, Lindsey R.; Smith, Kyle V.

    2013-01-01

    Among some gifted education researchers, advocates, and practitioners, it is sometimes believed that there is a larger number of gifted people in the general population than would be predicted from a normal distribution (e.g., Gallagher, 2008; N. M. Robinson, Zigler, & Gallagher, 2000; Silverman, 1995, 2009), a belief that we termed the…

  9. Overhead distribution line models for harmonics studies

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, M.; Xu, W.; Dommel, H.W.

    1994-01-01

    Carson's formulae and Maxwell's potential coefficients are used for calculating the per-unit-length series impedances and shunt capacitances of overhead lines. The per-unit-length values are then used to build the line models (the nominal pi-circuit and the equivalent pi-circuit) at the harmonic frequencies. This paper studies the accuracy of these models for representing overhead distribution lines in steady-state harmonic solutions at frequencies up to 5 kHz. The models are verified with a field test on a 25 kV distribution line, and the sensitivity of the models to ground resistivity, skin effect, and multiple grounding is reported.
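    Given per-unit-length parameters evaluated at a harmonic frequency, the equivalent pi-circuit follows from the standard long-line corrections; the 25 kV feeder numbers below are assumed for illustration.

```python
import numpy as np

def equivalent_pi(z_per_km, y_per_km, length_km):
    """Equivalent pi-circuit of a line section at one frequency.

    z_per_km, y_per_km: per-unit-length series impedance (ohm/km) and shunt
    admittance (S/km), complex, already evaluated at the harmonic of interest.
    """
    gamma = np.sqrt(z_per_km * y_per_km)   # propagation constant (1/km)
    zc = np.sqrt(z_per_km / y_per_km)      # characteristic impedance (ohm)
    gl = gamma * length_km
    z_series = zc * np.sinh(gl)            # corrected series branch
    y_shunt_half = np.tanh(gl / 2.0) / zc  # shunt branch at each end of the pi
    return z_series, y_shunt_half

# Illustrative feeder parameters at the 5th harmonic (assumed values):
z5 = 0.30 + 1j * 0.35 * 5   # ohm/km, reactance scaled by harmonic order
y5 = 1j * 3.0e-6 * 5        # S/km
print(equivalent_pi(z5, y5, 20.0))
```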

  10. A DISTRIBUTED HYPERMAP MODEL FOR INTERNET GIS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The rapid development of Internet technology makes it possible to integrate GIS with the Internet, forming Internet GIS. Internet GIS is based on a distributed client/server architecture and on TCP/IP and IIOP. When constructing and designing Internet GIS, we face the problem of how to express its information units. To solve this problem, this paper presents a distributed hypermap model for Internet GIS. The model provides a way to organize and manage Internet GIS information units, and it describes the relations between two information units, and within a single information unit, on both clients and servers. On the basis of this model, the paper develops expressions for hypermap relations and hypermap operations. The usage of the model is shown in the implementation of a prototype system.

  11. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In both the simulated and the real datasets, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, performing better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate the analysis of data integrated across DRN partners in retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
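    The core idea, standardising within clinico-epidemiologic subgroups (here age band and sex) so that subgroup means and SDs align across sites, can be sketched as follows; this is a simplified reading of SAN under assumed data, not the authors' exact algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# Hypothetical creatinine results (mg/dL) from one site.
df = pd.DataFrame({
    "age_band": rng.choice(["<40", "40-65", ">65"], 1000),
    "sex":      rng.choice(["F", "M"], 1000),
    "value":    rng.lognormal(0.0, 0.3, 1000),
})

# Standardise within each subgroup; applying the same transform at every
# partner site makes subgroup means/SDs comparable across the network.
grouped = df.groupby(["age_band", "sex"])["value"]
df["normalized"] = (df["value"] - grouped.transform("mean")) / grouped.transform("std")
print(df.head())
```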

  12. Real-time modeling and simulation of distribution feeder and distributed resources

    Science.gov (United States)

    Singh, Pawan

    The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to aid improvement of the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced islanding can take place. In such conditions, the microgrid must be able to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller, modeled and simulated on the real-time platform, is developed for energy management and protection of the microgrid.

  13. Programming model for distributed intelligent systems

    Science.gov (United States)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  14. Modelling Dynamic Forgetting in Distributed Information Systems

    NARCIS (Netherlands)

    N.F. Höning (Nicolas); M.C. Schut

    2010-01-01

    We describe and model a new aspect in the design of distributed information systems. We build upon a previously described problem on the microlevel, which asks how quickly agents should discount (forget) their experience: if they cherish their memories, they can build their reports on

  15. Comparison of sparse point distribution models

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus

    2010-01-01

    This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...

  16. A Distributive Model of Treatment Acceptability

    Science.gov (United States)

    Carter, Stacy L.

    2008-01-01

    A model of treatment acceptability is proposed that distributes overall treatment acceptability into three separate categories of influence. The categories are comprised of societal influences, consultant influences, and influences associated with consumers of treatments. Each of these categories are defined and their inter-relationships within…

  17. Finessing atlas data for species distribution models

    NARCIS (Netherlands)

    Niamir, A.; Skidmore, A.K.; Toxopeus, A.G.; Munoz, A.R.; Real, R.

    2011-01-01

    Aim The spatial resolution of species atlases and therefore resulting model predictions are often too coarse for local applications. Collecting distribution data at a finer resolution for large numbers of species requires a comprehensive sampling effort, making it impractical and expensive. This

  18. Normalization and Implementation of Three Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
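    As an example of the normalization parameters discussed, the full normalization factor commonly applied to Associated Legendre Functions in geopotential work can be computed directly; conventions differ between the Pines, Lear and Gottlieb formulations, so this is one common choice rather than the paper's exact recipe.

```python
from math import factorial, sqrt

def alf_normalization(n, m):
    """Full normalization factor for an Associated Legendre Function P_{n,m}.

    Multiplying P_{n,m} by this factor yields the fully normalized ALF often
    paired with normalized geopotential coefficients; the Kronecker delta
    handles the m = 0 case.
    """
    delta = 1 if m == 0 else 0
    return sqrt((2 - delta) * (2 * n + 1) * factorial(n - m) / factorial(n + m))

print(alf_normalization(4, 2))
```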

  19. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  20. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
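    For reference, the crude Monte Carlo baseline that the hazard-rate-twisting importance sampler is designed to outperform can be written in a few lines; the parameters are illustrative, and the estimator's relative error degrades exactly as described when the threshold makes the event rare.

```python
import numpy as np

rng = np.random.default_rng(8)

def lognormal_sum_ccdf_mc(threshold, mus, sigmas, n_samples=1_000_000):
    """Crude Monte Carlo estimate of P(sum_i X_i > threshold) for independent,
    not identically distributed log-normal X_i (the naive baseline)."""
    x = rng.lognormal(mean=mus, sigma=sigmas, size=(n_samples, len(mus)))
    hits = x.sum(axis=1) > threshold
    p_hat = hits.mean()
    stderr = hits.std(ddof=1) / np.sqrt(n_samples)
    return p_hat, stderr

mus = np.array([0.0, 0.5, 1.0])      # assumed log-scale means
sigmas = np.array([1.0, 0.8, 1.2])   # assumed log-scale std deviations
print(lognormal_sum_ccdf_mc(50.0, mus, sigmas))
```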

  1. Carbon K-shell photoionization of CO: Molecular frame angular distributions of normal and conjugate shakeup satellites

    International Nuclear Information System (INIS)

    Jahnke, T.; Titze, J.; Foucar, L.; Wallauer, R.; Osipov, T.; Benis, E.P.; Jagutzki, O.; Arnold, W.; Czasch, A.; Staudte, A.; Schoeffler, M.; Alnaser, A.; Weber, T.; Prior, M.H.; Schmidt-Boecking, H.; Doerner, R.

    2011-01-01

    We have measured the molecular frame angular distributions of photoelectrons emitted from the Carbon K-shell of fixed-in-space CO molecules for the case of simultaneous excitation of the remaining molecular ion. Normal and conjugate shakeup states are observed. Photoelectrons belonging to normal Σ-satellite lines show an angular distribution resembling that observed for the main photoline at the same electron energy. Surprisingly, a similar shape is found for conjugate shakeup states with Π-symmetry. In our data we identify shake rather than electron scattering (PEVE) as the mechanism producing the conjugate lines. The angular distributions clearly show the presence of a Σ shape resonance for all of the satellite lines.

  2. Modeling water vapor and heat transfer in the normal and the intubated airways.

    Science.gov (United States)

    Tawhai, Merryn H; Hunter, Peter J

    2004-04-01

    Intubation of the artificially ventilated patient with an endotracheal tube bypasses the usual conditioning regions of the nose and mouth. In this situation any deficit in heat or moisture in the air is compensated for by evaporation and thermal transfer from the pulmonary airway walls. To study the dynamics of heat and water transport in the intubated airway, a coupled system of nonlinear equations is solved in airway models with symmetric geometry and anatomically based geometry. Radial distribution of heat, water vapor, and velocity in the airway are described by power-law equations. Solution of the time-dependent system of equations yields dynamic airstream and mucosal temperatures and air humidity. Comparison of model results with two independent experimental studies in the normal and intubated airway shows a close correlation over a wide range of minute ventilation. Using the anatomically based model a range of spatially distributed temperature paths is demonstrated, which highlights the model's ability to predict thermal behavior in airway regions currently inaccessible to measurement. Accurate representation of conducting airway geometry is shown to be necessary for simulating mouth-breathing at rates between 15 and 100 l x min(-1), but symmetric geometry is adequate for the low minute ventilation and warm inspired air conditions that are generally supplied to the intubated patient.

  3. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters; La distribuzione di probabilita` degli inquinanti atmosferici: confronto tra quattro metodi per la determinazione dei parametri della distribuzione log-normale

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility of interpolating the concentrations of CO, NO, NO₂, O₃ and SO₂ measured during one year (1995) at the 13 stations of the air quality monitoring network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to choosing the method for determining the log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, the Kolmogorov-Smirnov index and the Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally, an example of application is given: the effect of an emission reduction strategy in the Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.
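    Three of the four estimation methods compared in the paper (moments, maximum likelihood, percentiles) can be sketched for a synthetic pollutant series as follows; the 'natural' method and the full four-index ranking are omitted, and the data are not the Como/Lecco measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Hypothetical hourly NO2 concentrations (arbitrary units).
c = rng.lognormal(mean=3.0, sigma=0.6, size=2000)

# Method of moments: match the sample mean/variance of the log-normal.
m, v = c.mean(), c.var(ddof=1)
sigma2_mom = np.log(1.0 + v / m**2)
mu_mom = np.log(m) - 0.5 * sigma2_mom

# Maximum likelihood: sample mean/SD of the logs.
mu_mle, sigma_mle = np.log(c).mean(), np.log(c).std(ddof=1)

# Percentile method: use the median and the 84.13th percentile (z = 1).
mu_pct = np.log(np.median(c))
sigma_pct = np.log(np.percentile(c, 84.13)) - mu_pct

print("moments   :", mu_mom, np.sqrt(sigma2_mom))
print("MLE       :", mu_mle, sigma_mle)
print("percentile:", mu_pct, sigma_pct)
# One possible goodness-of-fit check, KS against the MLE fit:
print(stats.kstest(np.log(c), "norm", args=(mu_mle, sigma_mle)))
```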

  4. Modelling simple helically delivered dose distributions

    International Nuclear Information System (INIS)

    Fenwick, John D; Tome, Wolfgang A; Kissick, Michael W; Mackie, T Rock

    2005-01-01

    In a previous paper, we described quality assurance procedures for Hi-Art helical tomotherapy machines. Here, we develop further some ideas discussed briefly in that paper. Simple helically generated dose distributions are modelled, and relationships between these dose distributions and underlying characteristics of Hi-Art treatment systems are elucidated. In particular, we describe the dependence of dose levels along the central axis of a cylinder aligned coaxially with a Hi-Art machine on fan beam width, couch velocity and helical delivery lengths. The impact on these dose levels of angular variations in gantry speed or output per linear accelerator pulse is also explored

  5. A void distribution model-flashing flow

    International Nuclear Information System (INIS)

    Riznic, J.; Ishii, M.; Afgan, N.

    1987-01-01

    A new model for flashing flow based on wall nucleation is proposed here and the model predictions are compared with some experimental data. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites was used. Thus it was possible to avoid the usual assumption of a constant bubble number density. Comparisons of the model with the data show that the model based on the nucleation site density correlation appears to be acceptable for describing the vapor generation in flashing flow. For the limited data examined, the comparisons show rather satisfactory agreement without using a floating parameter to adjust the model. This result indicates that, at least for the experimental conditions considered here, mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model.
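
    The bubble number transport idea is easy to sketch in one dimension. The snippet below is a hedged illustration rather than the authors' model: it integrates a steady balance u·dn/dz = N_s·f·P/A for the bubble number density n along a channel, with every coefficient (site density, departure frequency, geometry, velocity) invented for the example; the paper's nucleation site density correlation is not reproduced.

```python
import numpy as np

# Steady 1D bubble-number balance with a distributed wall source:
# u * dn/dz = N_s * f * P / A, with N_s the nucleation site density, f the
# bubble departure frequency, P the wetted perimeter and A the flow area.
# All values are illustrative assumptions, not the paper's correlation.
D = 0.02                         # pipe diameter, m
A, P = np.pi * D**2 / 4, np.pi * D
u = 3.0                          # mixture velocity, m/s (taken constant)
N_s = 1.0e7                      # active nucleation sites per m^2 (assumed)
f = 50.0                         # bubble departure frequency, 1/s (assumed)

z = np.linspace(0.0, 1.0, 201)   # axial coordinate, m
dn_dz = N_s * f * P / (A * u)    # constant source: n grows linearly with z
n = dn_dz * z                    # bubble number density, 1/m^3
print(f"bubble number density at outlet: {n[-1]:.3e} 1/m^3")
```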

  6. Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers.

    Science.gov (United States)

    Dingemanse, W; Müller-Gerbl, M; Jonkers, I; Vander Sloten, J; van Bree, H; Gielen, I

    2016-03-15

    Bones continually adapt their morphology to their load-bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition, physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate the density distribution in the subchondral bone of the canine talus, as a parameter reflecting the long-term joint loading in the tarsocrural joint. Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns and no significant differences were found in the localisation of the density maxima between left and right limbs and between dogs. Based on the density distribution, the lateral trochlear ridge is most likely subjected to the highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of OC lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e. the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.

  7. Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals

    Science.gov (United States)

    Kara, Yusuf; Kamata, Akihito

    2017-01-01

    A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…

  8. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    BACKGROUND: Reference change values assume that each individual has a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value … false-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst both on normally …
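
    The contrast between normal and ln-normal assumptions can be made concrete with the textbook reference change value formulas. The sketch below computes the classical symmetric RCV and the asymmetric ln-normal variant for assumed analytical and within-subject CVs; it illustrates why the distributional choice matters, but does not reproduce the five specific methods compared in the paper.

```python
import numpy as np

# Classical (normal) RCV and a log-normal variant; standard textbook forms,
# with assumed CV values, not the paper's five compared methods.
z = 1.96                 # two-sided 5% significance
cv_a, cv_i = 0.03, 0.07  # analytical and within-subject biological CVs (assumed)

cv_t = np.sqrt(cv_a**2 + cv_i**2)

# Normal model: symmetric percentage change
rcv_normal = np.sqrt(2) * z * cv_t
print(f"normal RCV: +/-{100 * rcv_normal:.1f}%")

# ln-normal model: asymmetric up/down limits
sigma_log = np.sqrt(np.log(cv_t**2 + 1))
rcv_up = np.exp(z * np.sqrt(2) * sigma_log) - 1
rcv_down = np.exp(-z * np.sqrt(2) * sigma_log) - 1
print(f"ln-normal RCV: +{100 * rcv_up:.1f}% / {100 * rcv_down:.1f}%")
```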

  9. Software Application Profile: RVPedigree: a suite of family-based rare variant association tests for normally and non-normally distributed quantitative traits.

    Science.gov (United States)

    Oualkacha, Karim; Lakhal-Chaieb, Lajmi; Greenwood, Celia Mt

    2016-04-01

    RVPedigree (Rare Variant association tests in Pedigrees) implements a suite of programs facilitating genome-wide analysis of association between a quantitative trait and autosomal region-based genetic variation. The main features here are the ability to appropriately test for association of rare variants with non-normally distributed quantitative traits, and also to appropriately adjust for related individuals, either from families or from population structure and cryptic relatedness. RVPedigree is available as an R package. The package includes calculation of kinship matrices, various options for coping with non-normality, three different ways of estimating statistical significance incorporating triaging to enable efficient use of the most computationally-intensive calculations, and a parallelization option for genome-wide analysis. The software is available from the Comprehensive R Archive Network [CRAN.R-project.org] under the name 'RVPedigree' and at [https://github.com/GreenwoodLab]. It has been published under General Public License (GPL) version 3 or newer. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  10. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  11. Modelling refrigerant distribution in microchannel evaporators

    DEFF Research Database (Denmark)

    Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian

    2009-01-01

    The effects of refrigerant maldistribution in parallel evaporator channels on the heat exchanger performance are investigated numerically. For this purpose a 1D steady state model of refrigerant R134a evaporating in a microchannel tube is built and validated against other evaporator models. A study of the refrigerant distribution is carried out for two channels in parallel and for two different cases. In the first case maldistribution of the inlet quality into the channels is considered, and in the second case a non-uniform airflow on the secondary side is considered. In both cases the total mixed superheat out of the evaporator is kept constant. It is shown that the cooling capacity of the evaporator is reduced significantly, both in the case of unevenly distributed inlet quality and for the case of non-uniform airflow on the outside of the channels.

  12. An analysis of longitudinal data with nonignorable dropout using the truncated multivariate normal distribution

    NARCIS (Netherlands)

    Jolani, Shahab

    2014-01-01

    For a multivariate normal vector in which some elements, but not necessarily all, are truncated, we derive the moment generating function and obtain expressions for the first two moments involving the multivariate hazard gradient. To show one of many applications of these moments, we then extend the
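
    The object being generalized here can be shown in one dimension. The sketch below compares scipy's closed-form mean and variance of a truncated standard normal against a Monte Carlo estimate; the paper's contribution, moments of a multivariate normal with only some coordinates truncated, obtained via the hazard gradient, is the harder multivariate analogue and is not reproduced.

```python
import numpy as np
from scipy import stats

# One-dimensional illustration of moments under truncation: X ~ N(0,1)
# restricted to [a, b], closed form vs Monte Carlo.
a, b = -0.5, np.inf          # truncate the left tail at -0.5
tn = stats.truncnorm(a, b)   # standard normal truncated to [a, b]

mean_cf, var_cf = tn.mean(), tn.var()

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
x = x[x >= a]                # crude rejection sampling of the truncation
print(f"closed form : mean={mean_cf:.4f} var={var_cf:.4f}")
print(f"Monte Carlo : mean={x.mean():.4f} var={x.var():.4f}")
```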

  13. Site-dependent distribution of macrophages in normal human extraocular muscles

    NARCIS (Netherlands)

    Schmidt, E. D.; van der Gaag, R.; Mourits, M. P.; Koornneef, L.

    1993-01-01

    PURPOSE: Clinical data indicate that extraocular muscles have different susceptibilities for some orbital immune disorders depending on their anatomic location. The resident immunocompetent cells may be important mediators in the local pathogenesis of such disorders so the distribution of these

  14. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  15. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
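
    The survival-to-Poisson equivalence that the paper builds on is easy to demonstrate without the frailty term. The sketch below splits synthetic censored survival times into pieces with a constant hazard each ("exploding" the data set, as the abstract puts it) and fits a Poisson GLM with log(time at risk) as offset, using statsmodels; piece dummies play the role of the baseline hazard. The %PCFrailty macro and the random-effect part are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Piecewise-exponential survival model fitted as a Poisson GLM; synthetic data.
rng = np.random.default_rng(42)
n = 200
time = rng.exponential(scale=2.0, size=n)       # synthetic survival times
event = (time < 4.0).astype(int)                # administrative censoring at t=4
time = np.minimum(time, 4.0)
x = rng.standard_normal(n)                      # one covariate

cuts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # 4 pieces for the baseline hazard
rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        if time[i] <= cuts[j]:
            break
        exposure = min(time[i], cuts[j + 1]) - cuts[j]   # time at risk in piece j
        died = int(bool(event[i]) and time[i] <= cuts[j + 1])
        rows.append((j, exposure, died, x[i]))
df = pd.DataFrame(rows, columns=["piece", "exposure", "died", "x"])

# Piece dummies act as the (log) baseline hazard; log exposure is the offset.
X = pd.get_dummies(df["piece"], prefix="piece", dtype=float)
X["x"] = df["x"]
fit = sm.GLM(df["died"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
print(fit.params.round(3))
```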

  16. Preparation, distribution, stability and tumor imaging properties of [62Zn] Bleomycin complex in normal and tumor-bearing mice

    International Nuclear Information System (INIS)

    Jalilian, A.R.; Fateh, B.; Ghergherehchi, M.; Karimian, A.; Matloobi, M.; Moradkhani, S.; Kamalidehghan, M.; Tabeie, F.

    2003-01-01

    Background: Bleomycin (BLM) has been labeled with radioisotopes and widely used in therapy and diagnosis. In this study BLM was labeled with [ 62 Zn] zinc chloride for oncologic PET studies. Materials and methods: The complex was obtained at pH 2 in normal saline at 90 deg C in 60 min. Radio-TLC showed an overall radiochemical yield of 95-97% (radiochemical purity >97%). The stability of the complex was checked in vitro in mice and in human plasma/urine. Results: Preliminary in vitro studies were performed to determine complex stability, and the distribution of [ 62 Zn] BLM in normal and fibrosarcoma tumor-bearing mice was determined by bio-distribution/imaging studies. Conclusion: [ 62 Zn] BLM can be used in PET oncology studies due to its suitable physico-chemical properties and its behavior as a diagnostic complex in higher animals.

  17. A scan statistic for continuous data based on the normal probability model

    Directory of Open Access Journals (Sweden)

    Huang Lan

    2009-10-01

    Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
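
    A one-dimensional version of such a scan statistic makes the construction concrete. The sketch below scans all windows of a synthetic series for a shift in mean under a normal model with common variance, reporting the maximum log-likelihood ratio; in practice the p-value of that maximum would be obtained by Monte Carlo replication, which is omitted here.

```python
import numpy as np

# Brute-force 1D (temporal) scan under a normal model: for each window,
# likelihood of "window mean differs from outside mean" (common variance)
# against the null of one common mean. Synthetic data, illustrative only.
rng = np.random.default_rng(7)
y = rng.normal(0.0, 1.0, 200)
y[80:110] -= 0.8          # plant a low-value cluster (e.g. low birth weight)

N, var0 = y.size, y.var()
best = (0.0, None)
for i in range(N):
    for j in range(i + 5, min(i + 60, N)):      # window sizes 5..59
        inside, outside = y[i:j], np.r_[y[:i], y[j:]]
        # ML variance under the alternative: separate means, pooled spread
        var1 = (np.sum((inside - inside.mean())**2)
                + np.sum((outside - outside.mean())**2)) / N
        llr = 0.5 * N * np.log(var0 / var1)
        if llr > best[0]:
            best = (llr, (i, j))
print(f"max log-likelihood ratio {best[0]:.2f} in window {best[1]}")
```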

  18. A distributed snow-evolution modeling system (SnowModel)

    Science.gov (United States)

    Glen E. Liston; Kelly. Elder

    2006-01-01

    SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...

  19. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
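
    The two ingredients being combined here, a Box-Cox transformation against skewness and a t distribution against heavy tails, can be illustrated for a single component. The sketch below transforms a synthetic right-skewed sample and fits a univariate t on the transformed scale; the paper's EM algorithm over mixture components and the multivariate case are not reproduced.

```python
import numpy as np
from scipy import stats

# Box-Cox transform followed by a Student-t fit, shown for one component on
# synthetic skewed data; the mixture/EM machinery is omitted.
rng = np.random.default_rng(3)
y = rng.lognormal(mean=0.0, sigma=0.5, size=2000)   # right-skewed sample

y_bc, lam = stats.boxcox(y)          # ML choice of the Box-Cox parameter
df, loc, scale = stats.t.fit(y_bc)   # heavy-tailed fit on the transformed scale
print(f"Box-Cox lambda = {lam:.3f}")
print(f"t fit: df={df:.1f}, loc={loc:.3f}, scale={scale:.3f}")
print(f"skewness before/after: {stats.skew(y):.2f} / {stats.skew(y_bc):.2f}")
```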

  20. Similar distributions of repaired sites in chromatin of normal and xeroderma pigmentosum variant cells damaged by ultraviolet light

    International Nuclear Information System (INIS)

    Cleaver, J.E.

    1979-01-01

    Excision repair of damage from ultraviolet light in both normal and xeroderma pigmentosum variant fibroblasts at early times after irradiation occurred preferentially in regions of DNA accessible to micrococcal nuclease digestion. These regions are predominantly the linker regions between nucleosomes in chromatin. The alterations reported at polymerization and ligation steps of excision repair in the variant are therefore not associated with changes in the relative distributions of repair sites in linker and core particle regions of DNA. (Auth.)

  1. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps and the modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution. The Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most computationally efficient. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 at full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must
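
    The practical effect of the error-distribution choice can be illustrated with a two-route toy network. The sketch below perturbs two mean route costs with Normal, Gumbel, Rectangular, Triangular and LogNormal errors, matched (except for the deliberately skewed LogNormal) by standard deviation, and reports the resulting choice shares. The long-tailed families shift the shares relative to the Rectangular case, which is the paper's point, though the numbers here are illustrative only.

```python
import numpy as np

# Two parallel routes; perceived cost = mean cost + random error. The share
# choosing the cheaper route depends on the error family's tails.
rng = np.random.default_rng(1997)
cost = np.array([10.0, 11.0])        # mean route costs (route 0 is cheaper)
sd, n = 2.0, 100_000

def share_route0(eps):
    perceived = cost + eps           # eps has shape (n, 2)
    return np.mean(perceived[:, 0] < perceived[:, 1])

errors = {
    "Normal":      rng.normal(0, sd, (n, 2)),
    "Gumbel":      rng.gumbel(0, sd * np.sqrt(6) / np.pi, (n, 2)),
    "Rectangular": rng.uniform(-sd * np.sqrt(3), sd * np.sqrt(3), (n, 2)),
    "Triangular":  rng.triangular(-sd * np.sqrt(6), 0, sd * np.sqrt(6), (n, 2)),
    "LogNormal":   rng.lognormal(0, 0.6, (n, 2)) * sd,  # skewed, long upper tail
}
for name, eps in errors.items():
    print(f"{name:12s} share on route 0: {share_route0(eps):.3f}")
```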

  2. Modeling consonant perception in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech perception is often studied in terms of natural meaningful speech, i.e., by measuring the intelligibility of a given set of single words or full sentences. However, when trying to understand how background noise, various sorts of transmission channels (e.g., mobile phones) or hearing … perception data: (i) an audibility-based approach, which corresponds to the Articulation Index (AI), and (ii) a modulation-masking based approach, as reflected in the speech-based Envelope Power Spectrum Model (sEPSM). For both models, the internal representations of the same stimuli as used …

  3. The pharmacokinetics, distribution and degradation of human recombinant interleukin 1 beta in normal rats

    DEFF Research Database (Denmark)

    Wogensen, L D; Welinder, B; Hejnaes, K R

    1991-01-01

    The aim of the present study was to estimate half-lives of distribution (T1/2 alpha) and elimination phases (T1/2 beta) of human recombinant interleukin 1 beta (rIL-1 beta), and its tissue distribution and cellular localization by means of mono-labelled, biologically active 125I-rIL-1 beta. After intravenous (i.v.) injection, 125I-rIL-1 beta was eliminated from the circulation with a T1/2 alpha of 2.9 min and a T1/2 beta of 41.1 min. The central and peripheral volume of distribution was 20.7 and 19.1 ml/rat, respectively, and the metabolic clearance rate was 16.9 ml/min/kg. The kidney and liver showed the highest accumulation of tracer, and autoradiography demonstrated …

  4. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  5. Individual loss reserving with the Multivariate Skew Normal model

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2011-01-01

    In general insurance, the evaluation of future cash flows and solvency capital has become increasingly important. To assist in this process, the present paper proposes an individual discrete-time loss reserving model describing the occurrence, the reporting delay, the time to the first payment, and

  6. The Normal Zone Propagation in ATLAS B00 Model Coil

    NARCIS (Netherlands)

    Boxman, E.W.; Dudarev, A.V.; ten Kate, Herman H.J.

    2002-01-01

    The B00 model coil has been successfully tested in the ATLAS Magnet Test Facility at CERN. The coil consists of two double pancakes wound with aluminum-stabilized cables of the barrel and end-cap toroid conductors for the ATLAS detector. The magnet current is applied up to 24 kA and quenches are

  7. Numerical modelling of pyrolysis in normal and reduced oxygen concentration

    International Nuclear Information System (INIS)

    Kacem, Ahmed

    2016-01-01

    The predictive capability of computational fluid dynamics (CFD) fire models depends on the accuracy with which the source term due to fuel pyrolysis can be determined. The pyrolysis rate is a key parameter controlling fire behavior, which in turn drives the heat feedback from the flame to the fuel surface. In the present study an in-depth pyrolysis model of a semi-transparent solid fuel (here, clear polymethyl methacrylate or PMMA) with spectrally-resolved radiation and a moving gas/solid interface was coupled with the CFD code ISIS of the IRSN which included turbulence, combustion and radiation for the gas phase. A combined genetic algorithm/pyrolysis model was used with Cone Calorimeter data from a pure pyrolysis experiment to estimate a unique set of kinetic parameters for PMMA pyrolysis. In order to validate the coupled model, ambient air flaming experiments were conducted on square slabs of PMMA with side lengths of 10, 20 and 40 cm. From measurements at the center of the slab, it was found that i) for any sample size, the experimental regression rate becomes almost constant with time, and ii) although the radiative and total heat transfers increase significantly with the sample size, the radiative contribution to the total heat flux remains almost constant (∼80%). Coupled model results show a fairly good agreement with the literature and with current measurements of the heat fluxes, gas temperature and regressing surface rate at the center of the slabs. Discrepancies between predicted and measured total pyrolysis rate are observed, which result from the underestimation of the flame heat flux feedback at the edges of the slab, as confirmed by the comparison between predicted and observed topography of burned samples. Predicted flame heights based on a threshold temperature criterion were found to be close to those deduced from the correlation of Heskestad. Finally, in order to predict the pyrolysis of PMMA under reduced ambient oxygen concentration, a two

  8. Distributed hierarchical control architecture for integrating smart grid assets during normal and disrupted operations

    Science.gov (United States)

    Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek; Pratt, Robert G.; Chassin, David P.; Hammerstrom, Donald J.

    2017-09-12

    Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.

  9. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  10. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2013-01-01

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity
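
    The functional form of such a multivariate logistic NTCP model is simple to write down. The sketch below evaluates a 3-variable logistic model of the kind selected for the left ventricle (V30, chamber volume, lung volume); the coefficient values are invented for illustration, since the abstract reports the variable selection and performance rather than fitted coefficients.

```python
import numpy as np

# Logistic NTCP with three predictors; coefficients below are NOT the paper's
# fitted values, only plausible signs matching the trends in the abstract
# (risk rises with V30 and chamber volume, falls with lung volume).
def ntcp(v30, chamber_vol, lung_vol,
         b0=-4.0, b_v30=0.08, b_ch=0.02, b_lung=-0.001):
    z = b0 + b_v30 * v30 + b_ch * chamber_vol + b_lung * lung_vol
    return 1.0 / (1.0 + np.exp(-z))

for v30 in (10, 30, 50):          # % of chamber volume receiving >= 30 Gy
    p = ntcp(v30=v30, chamber_vol=120.0, lung_vol=3000.0)
    print(f"V30={v30:2d}%  NTCP={p:.2f}")
```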

  11. Log-normal spray drop distribution...analyzed by two new computer programs

    Science.gov (United States)

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  12. Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks

    Science.gov (United States)

    Krahe, Barbara; Krause, Christina

    2010-01-01

    Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…

  13. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  14. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  15. Reconsideration of mass-distribution models

    Directory of Open Access Journals (Sweden)

    Ninković S.

    2014-01-01

    The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case; only one of them is the subject of the present study. The study is focused on the relation between the density ratio (the central density to that corresponding to the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which takes place when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension results in the possibility of a correspondence between an infinite density ratio and a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the contingency of having two, or more, groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Project of the Ministry of Science of the Republic of Serbia, no. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]

  16. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Experimental determination of microsprinkler droplet sizes is difficult and time-consuming; it could, however, be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using the ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated ones varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the largest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
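
    The ballistic core of such a simulation is a small ODE integration. The sketch below propagates a single droplet under gravity and quadratic air drag with explicit Euler steps; the diameter, launch conditions and constant drag coefficient are illustrative assumptions, and the evaporation and drop-size-distribution machinery of SIRIAS is omitted.

```python
import numpy as np

# Single-droplet ballistic sketch: gravity plus quadratic air drag.
rho_a, rho_w, g = 1.2, 1000.0, 9.81       # air/water densities, gravity
d = 0.8e-3                                 # droplet diameter, m (assumed)
m = rho_w * np.pi * d**3 / 6               # droplet mass, kg
Cd = 0.45                                  # drag coefficient (assumed constant)
A = np.pi * d**2 / 4                       # frontal area, m^2

pos = np.array([0.0, 0.5])                 # launch point, m (assumed)
vel = 15.0 * np.array([np.cos(np.radians(25)), np.sin(np.radians(25))])
dt = 1e-4
while pos[1] > 0.0:
    drag = -0.5 * rho_a * Cd * A * np.linalg.norm(vel) * vel / m
    vel = vel + (np.array([0.0, -g]) + drag) * dt
    pos = pos + vel * dt
print(f"landing distance: {pos[0]:.2f} m")
```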

  17. Angular momentum dependence of the distribution of shell model eigenenergies

    International Nuclear Information System (INIS)

    Yen, M.K.

    1974-01-01

    In the conventional shell model calculation the many-particle energy matrices are constructed and diagonalized for definite angular momentum and parity. However, the resulting set of eigenvalues possesses nearly normal behavior and hence a simple statistical description is possible. Usually one needs only about four parameters to capture the average level densities if the size of the set is not too small. The parameters are essentially moments of the distribution. But the difficulty lies in the yet unsolved problem of calculating moments in the fixed angular momentum subspace. We have derived a formula to approximate the angular momentum projection dependence of any operator averaged in a shell model basis. This approximate formula, which is a truncated series in Hermite polynomials, has been proved very good numerically and justified analytically for large systems. Applying this formula to seven physical cases, we have found that the fixed angular momentum projection energy centroid, width and higher central moments can be obtained accurately, provided that for even-even nuclei the even and odd angular momentum projections are treated separately. Using this information one can construct the energy distribution for fixed angular momentum projection assuming normal behavior. Then the fixed angular momentum level densities are deduced and spectra are extracted. Results are in reasonably good agreement with the exact values, although not as good as those obtained using exact fixed angular momentum moments. (Diss. Abstr. Int., B)
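
    A truncated Hermite-polynomial series of the kind referred to here is the Gram-Charlier expansion. The sketch below corrects a normal density with third- and fourth-moment (He3, He4) terms estimated from a synthetic, mildly skewed "spectrum"; the paper's actual content, the moments for fixed angular momentum projection, is not reproduced.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Gram-Charlier density: normal density times a truncated HermiteE series
# whose coefficients come from the sample skewness and excess kurtosis.
rng = np.random.default_rng(1974)
E = rng.gamma(shape=9.0, scale=1.0, size=5000)   # mildly skewed "eigenvalues"

mu, sig = E.mean(), E.std()
z_all = (E - mu) / sig
k3 = np.mean(z_all**3)            # skewness
k4 = np.mean(z_all**4) - 3.0      # excess kurtosis

def gram_charlier(e):
    z = (e - mu) / sig
    phi = np.exp(-z**2 / 2) / (sig * np.sqrt(2 * np.pi))
    # (k3/6) He_3(z) and (k4/24) He_4(z) correction terms
    corr = 1 + hermeval(z, [0, 0, 0, k3 / 6]) + hermeval(z, [0, 0, 0, 0, k4 / 24])
    return phi * corr

grid = np.linspace(E.min(), E.max(), 5)
print(np.round(gram_charlier(grid), 4))
```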

  18. A flexible multipurpose model for normal and transient cell kinetics

    International Nuclear Information System (INIS)

    Toivonen, Harri.

    1979-07-01

    The internal hypothetical compartments within the different phases of the cell cycle have been adopted as the basis of models dealing with various specific problems in cell kinetics. This approach was found to be of more general validity, extending from expanding cell populations to complex maturation processes. The differential equations describing the system were solved with an effective, commercially available library subroutine. Special attention was devoted to analysis of transient and feedback kinetics of cell populations encountered in diverse environmental and exposure conditions, for instance in cases of wounding and radiation damage. (author)

  19. Evaluation of myocardial distribution of iodine-123 labeled metaiodobenzylguanidine (123I-MIBG) in normal subjects

    International Nuclear Information System (INIS)

    Tsuchimochi, Shinsaku; Tamaki, Nagara; Shirakawa, Seishi; Fujita, Toru; Yonekura, Yoshiharu; Konishi, Junji; Nohara, Ryuji; Sasayama, Shigetake; Nishioka, Kenya

    1994-01-01

    The normal pattern of myocardial sympathetic innervation was studied in 15 subjects using gamma camera scintigraphy with iodine-123 labeled metaiodobenzylguanidine ( 123 I-MIBG). Seven younger subjects (mean age 24.6±3.6) and eight older patients (mean age 60.9±8.4) with normal cardiac function were studied. Planar imaging was obtained at 15 minutes and 3 hours, and SPECT was also performed 3 hours after injection of 111 MBq (3 mCi) of MIBG. The younger subjects showed a higher heart-to-mediastinum count ratio (2.91±0.25 vs. 2.67±0.34; p<0.05) and a higher inferior-to-anterior count ratio (1.19±0.15 vs. 0.97±0.13; p<0.05) on the late scan. The bull's-eye polar map also showed differences in counts in the mid-inferior (p<0.005), basal-inferior (p<0.005) and mid-lateral sectors (p<0.01). However, there was no significant difference in MIBG washout rate from the myocardium between the two groups. These data suggest an age-related difference in cardiac sympathetic innervation, with older subjects having fewer sympathetic nerve terminals than younger subjects, especially in the inferior wall. We conclude that this age difference in sympathetic nerve function should be considered in the interpretation of MIBG scans. (author)

  20. Effects of adipose tissue distribution on maximum lipid oxidation rate during exercise in normal-weight women.

    Science.gov (United States)

    Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N

    2014-06-01

    Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipoxmax) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg·m⁻²) were separated into two groups of either a low or high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipoxmax and the maximum lipid oxidation rate (MLOR) were determined during a submaximal incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipoxmax and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO2max vs 54 ± 4% VO2max, and 4.8 ± 0.6 mg·min⁻¹·kg FFM⁻¹ vs 8.4 ± 0.9 mg·min⁻¹·kg FFM⁻¹, respectively; P < 0.05). In normal-weight women, a predominantly abdominal fat mass distribution compared with a predominantly peripheral fat mass distribution is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by their lower Lipoxmax and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    Science.gov (United States)

    Medan, R. T.; Ray, K. S.

    1974-01-01

    A description of, and user's manual for, a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow are presented. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.

  2. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism …

  3. Competition between clonal plasma cells and normal cells for potentially overlapping bone marrow niches is associated with a progressively altered cellular distribution in MGUS vs myeloma.

    Science.gov (United States)

    Paiva, B; Pérez-Andrés, M; Vídriales, M-B; Almeida, J; de las Heras, N; Mateos, M-V; López-Corral, L; Gutiérrez, N C; Blanco, J; Oriol, A; Hernández, M T; de Arriba, F; de Coca, A G; Terol, M-J; de la Rubia, J; González, Y; Martín, A; Sureda, A; Schmidt-Hieber, M; Schmitz, A; Johnsen, H E; Lahuerta, J-J; Bladé, J; San-Miguel, J F; Orfao, A

    2011-04-01

    Disappearance of normal bone marrow (BM) plasma cells (PC) predicts malignant transformation of monoclonal gammopathy of undetermined significance (MGUS) and smoldering myeloma (SMM) into symptomatic multiple myeloma (MM). The homing, behavior and survival of normal PC, but also CD34(+) hematopoietic stem cells (HSC), B-cell precursors, and clonal PC largely depends on their interaction with stromal cell-derived factor-1 (SDF-1) expressing, potentially overlapping BM stromal cell niches. Here, we investigate the distribution, phenotypic characteristics and competitive migration capacity of these cell populations in patients with MGUS, SMM and MM vs healthy adults (HA) aged >60 years. Our results show that BM and peripheral blood (PB) clonal PC progressively increase from MGUS to MM, the latter showing a slightly more immature immunophenotype. Of note, such increased number of clonal PC is associated with progressive depletion of normal PC, B-cell precursors and CD34(+) HSC in the BM, also with a parallel increase in PB. In an ex vivo model, normal PC, B-cell precursors and CD34(+) HSC from MGUS and SMM, but not MM patients, were able to abrogate the migration of clonal PC into serial concentrations of SDF-1. Overall, our results show that progressive competition and replacement of normal BM cells by clonal PC is associated with more advanced disease in patients with MGUS, SMM and MM.

  4. The pharmacokinetics, distribution and degradation of human recombinant interleukin 1 beta in normal rats

    DEFF Research Database (Denmark)

    Reimers, J; Wogensen, L D; Welinder, B

    1991-01-01

    Based upon in vivo rat experiments it was recently suggested that interleukin 1 in the circulation may be implicated in the initial events of beta-cell destruction leading to insulin-dependent diabetes mellitus (IDDM) in humans. The aim of the present study was to estimate half-lives of distribution (T1/2 alpha) and elimination phases (T1/2 beta) of human recombinant interleukin 1 beta (rIL-1 beta), and its tissue distribution and cellular localization by means of mono-labelled, biologically active 125I-rIL-1 beta. After intravenous (i.v.) injection, 125I-rIL-1 beta was eliminated from the circulation … i.v., intraperitoneal (i.p.) and subcutaneous (s.c.) injections, as demonstrated by high performance size exclusion chromatography, trichloracetic acid precipitation and SDS-PAGE until 5 h after tracer injection. Pre-treatment with 'cold' rIL-1 beta enhanced degradation of a subsequent injection of tracer. The route …

  5. Comparison of plantar pressure distribution in subjects with normal and flat feet during gait

    Directory of Open Access Journals (Sweden)

    Aluisio Otavio Vargas Avila

    2010-06-01

    The aim of this study was to determine the possible relationship between loss of the normal medial longitudinal arch, measured by the height of the navicular bone in a static situation, and variables related to plantar pressure distribution measured in a dynamic situation. Eleven men (21 ± 3 years, 74 ± 10 kg and 175 ± 4 cm) participated in the study. The Novel Emed-AT System was used for the acquisition of plantar pressure distribution data (peak pressure, mean pressure, contact area, and relative load) at a sampling rate of 50 Hz. The navicular drop test proposed by Brody (1982) was used to assess the height of the navicular bone for classification of the subjects. The results were compared by the Mann-Whitney U test, with the level of significance set at p ≤ 0.05. Differences were observed between the two groups in the mid-foot region for all variables studied, with higher mean values in subjects with flat feet. There were also significant differences in contact area, relative load, peak pressure, and mean pressure between groups. The present study demonstrates the importance of paying attention to subjects with flat feet because changes in plantar pressure distribution are associated with discomfort and injuries.

  6. Considerations in modeling fission gas release during normal operation

    International Nuclear Information System (INIS)

    Rumble, E.T.; Lim, E.Y.; Stuart, R.G.

    1977-01-01

    The EPRI LWR fuel rod modeling code evaluation program analyzed seven fuel rods with experimental fission gas release data. In these cases, rod-averaged burnups are less than 20,000 MWD/MTM, while the fission gas release fractions range roughly from 2 to 27%. Code results demonstrate the complexities in calculating fission gas release in certain operating regimes. Beyond this work, the behavior of a pre-pressurized PWR rod is simulated to average burnups of 40,000 MWD/MTM using GAPCON-THERMAL-2. Analysis of the sensitivity of fission gas release to power histories and release correlations indicates the strong impact that LMFBR-type release correlations induce at high burnup. 15 refs

  7. Investigation of stress distribution in normal and oblique partial penetration. Welded nozzles by 3-D photoelastic stress freezing method

    International Nuclear Information System (INIS)

    Miyamoto, H.; Kubo, M.; Katori, T.

    1981-01-01

    Experimental investigation by 3-D photoelasticity has been carried out to measure the stress distribution of partial penetration welded nozzles attached to the bottom head of a pressure vessel. A 3-D photoelastic stress freezing method was chosen as the most effective means of observation of the stress distribution in the vicinity of the nozzle/wall weld. The experimental model was a 1:20 scale spherical bottom head. Both an axisymmetric nozzle and an asymmetric nozzle were investigated. Epoxy resin, which is a thermosetting plastic, was used as the model material. The oblique effect was examined by comparing the stress distribution of the asymmetric nozzle with that of the axisymmetric nozzle. Furthermore, the experimental results were compared with the analytical results using 3-D finite element method (FEM). The stress distributions obtained from the frozen fringe pattern of the 3-D photoelastic model were in good agreement with those by 3-D FEM. (orig.)

  8. Uncertainty importance measure for models with correlated normal variables

    International Nuclear Information System (INIS)

    Hao, Wenrui; Lu, Zhenzhou; Wei, Pengfei

    2013-01-01

    In order to explore the contributions of correlated input variables to the variance of the model output, the contribution decomposition of correlated input variables based on Mara's definition is investigated in detail. Taking a quadratic polynomial output without cross terms as an illustration, the solution of the contribution decomposition is derived analytically using statistical inference theory. After the correctness of the analytical solutions is validated by numerical examples, they are applied to two engineering examples to show their wide applicability. The derived analytical solutions can be used directly to recognize the contributions of the correlated input variables in the case of a quadratic or linear polynomial output without cross terms, and the analytical inference method can be extended to the case of a higher-order polynomial output. Additionally, the origins of the interaction contribution of the correlated inputs are analyzed, and comparisons of the existing contribution indices are completed, from which the engineer can select suitable indices to obtain the necessary information. At last, the degeneration of the correlated inputs to uncorrelated ones and some computational issues are discussed in concept.
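
    For the quadratic-without-cross-terms case discussed here, the total output variance has a closed form that is easy to check. The sketch below verifies Var(Y) = a'Ca + 2 tr((BC)²) for Y = a'X + X'BX with B diagonal and X zero-mean correlated normal, by Monte Carlo; the decomposition of this variance into per-variable contributions under Mara's definition is the paper's subject and is not reproduced.

```python
import numpy as np

# Y = a'X + sum_i b_i X_i^2 with X ~ N(0, C). For zero-mean Gaussians the
# linear/quadratic cross-covariance vanishes, so Var(Y) = a'Ca + 2 tr((BC)^2).
a = np.array([1.0, -2.0, 0.5])
B = np.diag([0.3, -0.1, 0.2])
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])      # input covariance (correlation) matrix

var_analytic = a @ C @ a + 2 * np.trace((B @ C) @ (B @ C))

rng = np.random.default_rng(0)
L = np.linalg.cholesky(C)
X = rng.standard_normal((1_000_000, 3)) @ L.T    # correlated normal samples
Y = X @ a + (X**2) @ np.diagonal(B)              # quadratic, no cross terms
print(f"analytic {var_analytic:.4f}  vs  Monte Carlo {Y.var():.4f}")
```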

  9. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    Science.gov (United States)

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within a feasible scan time.
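
    The least-squares step described here has a one-line closed form when each LOR is treated separately. The sketch below builds synthetic measured, expected and scattered sinograms over eight rotation views and recovers per-LOR normalization coefficients; the array contents are invented, and only the estimator structure mirrors the description.

```python
import numpy as np

# Per-LOR least squares: n = argmin sum_v (measured_v - scattered_v - n*expected_v)^2
rng = np.random.default_rng(8)
n_lor, n_views = 1000, 8
true_n = rng.uniform(0.7, 1.3, n_lor)                 # per-LOR efficiencies
expected = rng.uniform(50, 150, (n_views, n_lor))     # e.g. from cylinder model
scattered = 0.1 * expected                            # e.g. single scatter sim
measured = rng.poisson(true_n * expected + scattered).astype(float)

# Closed-form least squares, one unknown per LOR (column):
num = np.sum(expected * (measured - scattered), axis=0)
den = np.sum(expected**2, axis=0)
norm_coeff = num / den
print(f"rms error of recovered coefficients: "
      f"{np.sqrt(np.mean((norm_coeff - true_n)**2)):.4f}")
```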

  10. One-dimensional time-dependent conduction states and temperature distribution along a normal zone during a quench

    International Nuclear Information System (INIS)

    Lopez, G.

    1991-01-01

    The quench simulation of a superconducting (s.c.) magnet requires some assumptions about the evolution of the normal zone and its temperature profile. The axial evolution of the normal zone is considered through the longitudinal quench velocity. However, the transversal quench propagation may be considered through the transversal quench velocity or with the turn-to-turn time-delay quench propagation. The temperature distribution has been assumed adiabatic-like or cosine-like in two different computer programs. Although the two profiles are different, they bring about more or less the same qualitative quench results, differing only by about 8%. Unfortunately, there are no experimental data for the temperature profile along the conductor in a quench event that would allow a realistic comparison. The temperature profile has received little attention, mainly because it is not so critical a parameter in the quench analysis. Nonetheless, a confident quench analysis requires that the temperature distribution along the normal zone be taken into account with good approximation. In this paper, an analytical study is made of the temperature profile.

  11. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments indicate that the denoising performance of the proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.
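
    One step of this pipeline, fitting a NIG law to heavy-tailed subband coefficients, can be sketched with scipy's norminvgauss. The snippet below fits the distribution to synthetic noisy coefficients and then applies a plain soft threshold with an ad hoc threshold choice; the paper's Bayesian MAP threshold function and the OLI-Shrink rule are not reproduced, and the synthetic arrays stand in for NSCT subbands.

```python
import numpy as np
from scipy import stats

# Fit a NIG distribution to noisy "coefficients", then soft-threshold.
rng = np.random.default_rng(5)
clean = stats.norminvgauss(a=1.0, b=0.0, loc=0.0, scale=0.5).rvs(
    5000, random_state=rng)
noisy = clean + rng.normal(0.0, 0.3, clean.size)     # additive Gaussian noise

a, b, loc, scale = stats.norminvgauss.fit(noisy)
print(f"fitted NIG: a={a:.2f} b={b:.2f} loc={loc:.3f} scale={scale:.3f}")

sigma_n = 0.3                        # noise std, assumed known here
thr = sigma_n**2 / scale             # ad hoc signal/noise balance (assumption)
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - thr, 0.0)
print(f"MSE noisy    : {np.mean((noisy - clean)**2):.4f}")
print(f"MSE denoised : {np.mean((denoised - clean)**2):.4f}")
```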

  12. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than by growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  13. Online modelling of water distribution systems: a UK case study

    Directory of Open Access Journals (Sweden)

    J. Machell

    2010-03-01

    Hydraulic simulation models of water distribution networks are routinely used for operational investigations and network design purposes. However, their full potential is often never realised because, in the majority of cases, they have been calibrated with data collected manually from the field during a single historic time period and, as such, reflect the network operational conditions that were prevalent at that time, and they are then applied as part of a reactive, desktop investigation. In order to use a hydraulic model to assist proactive distribution network management its element asset information must be up to date and it should be able to access current network information to drive simulations. Historically this advance has been restricted by the high cost of collecting and transferring the necessary field measurements. However, recent innovation and cost reductions associated with data transfer is resulting in collection of data from increasing numbers of sensors in water supply systems, and automatic transfer of the data to point of use. This means engineers potentially have access to a constant stream of current network data that enables a new era of "on-line" modelling that can be used to continually assess standards of service compliance for pressure and reduce the impact of network events, such as mains bursts, on customers. A case study is presented here that shows how an online modelling system can give timely warning of changes from normal network operation, providing capacity to minimise customer impact.

  14. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using the collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated a faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
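
    The construction described in this abstract can be reproduced numerically for a Gaussian obstacle. In the sketch below, the obstacle region is the set where the position pdf exceeds a density threshold (an ellipse for a Gaussian), and the robot-to-region distance is found with a constrained optimizer, a numerical counterpart of the paper's Lagrange multiplier formulation; all positions, covariances and the threshold are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

# Obstacle region: {x : pdf(x) >= p_thr}. Distance = min ||x - robot|| over it.
mu = np.array([2.0, 1.0])                 # obstacle mean position (assumed)
cov = np.array([[0.30, 0.05],
                [0.05, 0.10]])            # obstacle position covariance (assumed)
robot = np.array([4.0, 3.0])
p_thr = 0.05                              # density threshold defining the region

pdf = multivariate_normal(mu, cov).pdf

res = minimize(
    lambda x: np.linalg.norm(x - robot),            # objective: distance to robot
    x0=mu,                                          # start inside the region
    constraints=[{"type": "ineq", "fun": lambda x: pdf(x) - p_thr}],
)
print(f"closest point on obstacle region: {np.round(res.x, 3)}")
print(f"distance: {res.fun:.3f}")
```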

  15. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research

    International Nuclear Information System (INIS)

    Currie, L.A.

    2001-01-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Mueller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test - for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban ''soot''. The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom. (orig.)
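    The first, theoretically skewed class is easy to reproduce: for an ideal Poisson counting process the inter-arrival times are exponential, and a goodness-of-fit test should not reject them. A minimal sketch with simulated arrivals (the rate and sample size are arbitrary assumptions, not values from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rate = 0.5                                    # counts per second (assumed)
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=2000))

dt = np.diff(arrivals)                        # inter-arrival times
ks = stats.kstest(dt, 'expon', args=(0, dt.mean()))
print(f"KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")
# A real low-level counter can fail this test for physical reasons
# (e.g. dead time or afterpulsing), which is exactly the point made above.
```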

  16. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    Science.gov (United States)

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.

  17. Normal values of regional left ventricular myocardial thickness, mass and distribution-assessed by 320-detector computed tomography angiography in the Copenhagen General Population Study

    DEFF Research Database (Denmark)

    Hindsø, Louise; Fuchs, Andreas; Kühl, Jørgen Tobias

    2017-01-01

    The aim of this study was to derive regional normal reference values of the left ventricle, specifically regional LV myocardial thickness (LVMT) and mass (LVMM), from a healthy study group of the general population using cardiac computed tomography angiography (CCTA). We wanted to introduce LV myocardial distribution (LVMD) as a measure of regional variation of the LVMT. Moreover, we wanted to determine whether these parameters varied between men and women. We studied 568 (181 men; 32%) adults, free of cardiovascular disease and risk factors, who underwent 320-detector CCTA. Mean age was 55 (range 40-84) years. Regional LVMT and LVMM were measured, according to the American Heart Association's 17-segment model, using semi-automatic software. Mean LVMT was 6.6 mm for men and 5.4 mm for women. The normal LV was thickest in the basal septum (segment 3; men = 8.3 mm; women = 7.2 mm)...

  18. Interaction between a normal shock wave and a turbulent boundary layer at high transonic speeds. I - Pressure distribution

    Science.gov (United States)

    Messiter, A. F.

    1980-01-01

    Asymptotic solutions are derived for the pressure distribution in the interaction of a weak normal shock wave with a turbulent boundary layer. The undisturbed boundary layer is characterized by the law of the wall and the law of the wake for compressible flow. In the limiting case considered, for 'high' transonic speeds, the sonic line is very close to the wall. Comparisons with experiment are shown, with corrections included for the effect of longitudinal wall curvature and for the boundary-layer displacement effect in a circular pipe.

  19. In-medium pion valence distributions in a light-front model

    Energy Technology Data Exchange (ETDEWEB)

    Melo, J.P.B.C. de, E-mail: joao.mello@cruzeirodosul.edu.br [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); Tsushima, K. [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); Ahmed, I. [Laboratório de Física Teórica e Computacional – LFTC, Universidade Cruzeiro do Sul, 01506-000 São Paulo (Brazil); National Center for Physics, Quaidi-i-Azam University Campus, Islamabad 45320 (Pakistan)

    2017-03-10

    Pion valence distributions in nuclear medium and vacuum are studied in a light-front constituent quark model. The in-medium input for studying the pion properties is calculated by the quark-meson coupling model. We find that the in-medium pion valence distribution, as well as the in-medium pion valence wave function, are substantially modified at normal nuclear matter density, due to the reduction in the pion decay constant.

  20. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
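    A schematic version of such a fit, with fabricated semi-major axes standing in for the combined radial-velocity, microlensing, and imaging constraints used in the paper, can be written as a random-walk Metropolis sampler over the log-normal parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
a_obs = rng.lognormal(mean=np.log(3.0), sigma=1.2, size=100)   # AU, mock data

def loglike(mu, sigma):
    if sigma <= 0:
        return -np.inf
    x = np.log(a_obs)
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma))

theta = np.array([0.0, 1.0])                     # (mu, sigma) starting point
lp, chain = loglike(*theta), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1, size=2)    # random-walk proposal
    lp_prop = loglike(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                   # discard burn-in
mu_hat, sigma_hat = chain.mean(axis=0)
print(f"posterior mean: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
print(f"surface density peaks near a = {np.exp(mu_hat):.1f} AU")
```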

  1. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    We describe a parametric mixture model for the distribution of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, and the local false discovery rate, analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...
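    The paper fits the mixture by matching resampling-based nonparametric replication estimates; the sketch below instead runs a plain EM iteration on simulated z-scores, which is enough to show the structure of a zero-mean two-component scale mixture (all data and starting values here are invented):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1.0, 9000),     # null test statistics
                    rng.normal(0, 2.5, 1000)])    # non-null, larger variance

pi1, s0, s1 = 0.5, 0.8, 2.0                       # initial guesses
for _ in range(200):
    f0 = (1 - pi1) * norm.pdf(z, 0, s0)
    f1 = pi1 * norm.pdf(z, 0, s1)
    w = f1 / (f0 + f1)                            # E-step: P(non-null | z)
    pi1 = w.mean()                                # M-step updates
    s0 = np.sqrt(np.sum((1 - w) * z ** 2) / np.sum(1 - w))
    s1 = np.sqrt(np.sum(w * z ** 2) / np.sum(w))

print(f"pi1 = {pi1:.3f}, s0 = {s0:.3f}, s1 = {s1:.3f}")
# w is a per-marker posterior non-null probability, i.e. 1 - local fdr.
```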

  2. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  3. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...
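    At the aggregator level, one resource-distribution step can be pictured as a small quadratic program. The toy formulation below (invented capacities and costs, not the paper's controller) distributes a grid imbalance across flexible units subject to capacity bounds:

```python
import numpy as np
from scipy.optimize import minimize

r = 12.0                                        # grid imbalance to absorb (assumed)
u_max = np.array([2.0, 3.0, 4.0, 5.0, 6.0])     # unit capacities (assumed)
w = np.array([1.0, 2.0, 1.0, 0.5, 1.5])         # per-unit deviation costs (assumed)

cost = lambda u: float(np.sum(w * u ** 2))      # quadratic deviation penalty
x0 = u_max * (r / u_max.sum())                  # feasible starting point
res = minimize(cost, x0,
               bounds=[(0.0, um) for um in u_max],
               constraints={'type': 'eq', 'fun': lambda u: u.sum() - r})
print("unit setpoints:", np.round(res.x, 3))    # cheaper units absorb more load
```

    The quadratic objective is what keeps the per-step problem cheap and scalable, which is the property the abstract emphasizes.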

  4. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming

    2014-01-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo...

  5. Application of a Brittle Damage Model to Normal Plate-on-Plate Impact

    National Research Council Canada - National Science Library

    Raftenberg, Martin N

    2005-01-01

    A brittle damage model presented by Grinfeld and Wright of the U.S. Army Research Laboratory was implemented in the LS-DYNA finite element code and applied to the simulation of normal plate-on-plate impact...

  6. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator...

  7. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  8. Interlayer material transport during layer-normal shortening. Part I. The model

    NARCIS (Netherlands)

    Molen, I. van der

    1985-01-01

    To analyse mass-transfer during deformation, the case is considered of a multilayer experiencing a layer-normal shortening that is volume constant on the scale of many layers. Strain rate is homogeneously distributed on the layer-scale if diffusion is absent; when transport of matter between the...

  9. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
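    As an illustration of the two-moment idea, a two-stage hypoexponential variable (the sum of two independent exponentials with stage means u and v) can be matched to a target mean m and coefficient of variation c by solving u + v = m and u^2 + v^2 = (cm)^2; two stages cover c^2 in [1/2, 1), and smaller coefficients of variation require more stages, consistent with the general construction described above:

```python
import numpy as np

def hypoexp_stages(m, c):
    # Stage means u, v are the roots of t^2 - m t + m^2 (1 - c^2) / 2 = 0.
    disc = m * m * (2.0 * c * c - 1.0)
    if disc < 0:
        raise ValueError("two stages only reach cv^2 >= 1/2; add more stages")
    root = np.sqrt(disc)
    return (m + root) / 2.0, (m - root) / 2.0

def sample(m, c, size, rng=None):
    rng = rng or np.random.default_rng(3)
    u, v = hypoexp_stages(m, c)
    return rng.exponential(u, size) + rng.exponential(v, size)

x = sample(m=1.0, c=0.8, size=200_000)
print(f"mean = {x.mean():.3f}, cv = {x.std() / x.mean():.3f}")   # ~1.0, ~0.8
```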

  10. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  11. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  12. Differentiation in boron distribution in adult male and female rats' normal brain: A BNCT approach

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Samereh, E-mail: samere.g@gmail.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of); Pazirandeh, Ali, E-mail: paziran@yahoo.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of); Jameie, Seyed Behnamedin, E-mail: behnamjameie@tums.ac.ir [Basic Science Department, Faculty of Allied Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Anatomy, Faculty of Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Baghban Khojasteh, Nasrin, E-mail: khojasteh_n@yahoo.com [Department of Nuclear Engineering, Science and Research Branch, Islamic Azad University, PO Box 19395-1943, Tehran (Iran, Islamic Republic of)

    2012-06-15

    Boron distribution in adult male and female rats' normal brain after boron carrier injection (0.005 g boric acid + 0.005 g borax + 10 ml distilled water, pH 7.4) was studied in this research. Coronal sections of control and trial animal tissue samples were irradiated with thermal neutrons. Using alpha autoradiography, significant differences in boron concentration were seen in forebrain, midbrain and hindbrain sections of male and female animal groups, with the highest value four hours after boron compound injection. - Highlights: ▶ Boron distribution in male and female rats' normal brain was studied in this research. ▶ Coronal sections of animal tissue samples were irradiated with thermal neutrons. ▶ Alpha and lithium tracks were counted using alpha autoradiography. ▶ Different boron concentrations were seen in brain sections of male and female rats. ▶ The highest boron concentration was seen 4 h after boron compound injection.

  13. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    Full Text Available This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation, as well as the approach to confidence estimation and hypothesis testing for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and the confidence interval and hypothesis test are also developed. Furthermore, a real example and a simulation are conducted to demonstrate the usefulness and superiority of our approach.

  14. A preliminary evaluation of myoelectrical energy distribution of the front neck muscles in pharyngeal phase during normal swallowing.

    Science.gov (United States)

    Mingxing Zhu; Wanzhang Yang; Samuel, Oluwarotimi Williams; Yun Xiang; Jianping Huang; Haiqing Zou; Guanglin Li

    2016-08-01

    The pharyngeal phase is a central hub of swallowing in which the food bolus passes from the oral cavity to the esophagus. Proper understanding of the muscular activities in the pharyngeal phase is useful for assessing swallowing function and the occurrence of dysphagia in humans. In this study, high-density (HD) surface electromyography (sEMG) was used to study the muscular activities in the pharyngeal phase during swallowing tasks involving three healthy male subjects. The root mean square (RMS) of the HD sEMG data was computed over a series of segmented windows as myoelectrical energy, and the RMS of each window covering all channels (16×5) formed a matrix. During the pharyngeal phase of swallowing, three of the matrices were chosen and normalized to obtain the HD energy maps and the statistical parameter. The maps across different viscosity levels showed the energy distribution of the muscular activities of the left and right sides of the front neck muscles. In addition, the normalized average RMS (NARE) across different viscosity levels revealed a significant left-right correlation (r = 0.868 ± 0.629), with a stronger correlation when swallowing water. This pilot study suggests that HD sEMG would be a potential tool to evaluate muscular activities in the pharyngeal phase during normal swallowing. Also, it might provide useful information for dysphagia diagnosis.
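    The windowed-RMS energy map itself is straightforward to compute. The sketch below uses simulated data and assumed sizes (100-ms windows, a 16×5 electrode grid) purely to illustrate the processing chain described above:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_ch = 1000, 16 * 5                      # sampling rate (Hz), 80 channels
emg = rng.normal(0, 1, (n_ch, 3 * fs))       # 3 s of simulated HD sEMG

win = 100                                    # 100-ms segmentation windows
n_win = emg.shape[1] // win
rms = np.sqrt((emg[:, :n_win * win] ** 2)
              .reshape(n_ch, n_win, win).mean(axis=2))   # (channels, windows)

frame = rms[:, 10].reshape(16, 5)            # one window as a 16x5 matrix
energy_map = frame / frame.max()             # normalized myoelectrical energy
print(energy_map.round(2))
```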

  15. Effects of glucose, insulin, and insulin resistance on cerebral 18F-FDG distribution in cognitively normal older subjects

    Science.gov (United States)

    Onishi, Airin; Fujiwara, Yoshinori; Ishiwata, Kiichi; Ishii, Kenji

    2017-01-01

    Background Increasing plasma glucose levels and insulin resistance can alter the distribution pattern of fluorine-18-labeled fluorodeoxyglucose (18F-FDG) in the brain and relatively reduce 18F-FDG uptake in Alzheimer's disease (AD)-related hypometabolic regions, leading to the appearance of an AD-like pattern. However, its relationship with plasma insulin levels is unclear. We aimed to compare the effects of plasma glucose levels, plasma insulin levels and insulin resistance on the appearance of the AD-like pattern in 18F-FDG images. Methods Fifty-nine cognitively normal older subjects (age = 75.7 ± 6.4 years) underwent 18F-FDG positron emission tomography along with measurement of plasma glucose and insulin levels. As an index of insulin resistance, the Homeostasis model assessment of Insulin Resistance (HOMA-IR) was calculated. Results Plasma glucose levels, plasma insulin levels, and HOMA-IR were 102.2 ± 8.1 mg/dL, 4.1 ± 1.9 μU/mL, and 1.0 ± 0.5, respectively. Whole-brain voxelwise analysis showed a negative correlation of 18F-FDG uptake with plasma glucose levels in the precuneus and lateral parietotemporal regions (cluster-corrected p < 0.05), and no correlation with plasma insulin levels or HOMA-IR. In the significant cluster, 18F-FDG uptake decreased by approximately 4–5% when plasma glucose levels increased by 20 mg/dL. In the precuneus region, volume-of-interest analysis confirmed a negative correlation of 18F-FDG uptake with plasma glucose levels (r = -0.376, p = 0.002), and no correlation with plasma insulin levels (r = 0.156, p = 0.12) or HOMA-IR (r = 0.096, p = 0.24). Conclusion This study suggests that, of the three parameters, plasma glucose levels have the greatest effect on the appearance of the AD-like pattern in 18F-FDG images. PMID:28715453

  16. Effects of glucose, insulin, and insulin resistance on cerebral 18F-FDG distribution in cognitively normal older subjects.

    Directory of Open Access Journals (Sweden)

    Kenji Ishibashi

    Full Text Available Increasing plasma glucose levels and insulin resistance can alter the distribution pattern of fluorine-18-labeled fluorodeoxyglucose (18F-FDG) in the brain and relatively reduce 18F-FDG uptake in Alzheimer's disease (AD)-related hypometabolic regions, leading to the appearance of an AD-like pattern. However, its relationship with plasma insulin levels is unclear. We aimed to compare the effects of plasma glucose levels, plasma insulin levels and insulin resistance on the appearance of the AD-like pattern in 18F-FDG images. Fifty-nine cognitively normal older subjects (age = 75.7 ± 6.4 years) underwent 18F-FDG positron emission tomography along with measurement of plasma glucose and insulin levels. As an index of insulin resistance, the Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) was calculated. Plasma glucose levels, plasma insulin levels, and HOMA-IR were 102.2 ± 8.1 mg/dL, 4.1 ± 1.9 μU/mL, and 1.0 ± 0.5, respectively. Whole-brain voxelwise analysis showed a negative correlation of 18F-FDG uptake with plasma glucose levels in the precuneus and lateral parietotemporal regions (cluster-corrected p < 0.05), and no correlation with plasma insulin levels or HOMA-IR. In the significant cluster, 18F-FDG uptake decreased by approximately 4-5% when plasma glucose levels increased by 20 mg/dL. In the precuneus region, volume-of-interest analysis confirmed a negative correlation of 18F-FDG uptake with plasma glucose levels (r = -0.376, p = 0.002), and no correlation with plasma insulin levels (r = 0.156, p = 0.12) or HOMA-IR (r = 0.096, p = 0.24). This study suggests that, of the three parameters, plasma glucose levels have the greatest effect on the appearance of the AD-like pattern in 18F-FDG images.

  17. Multicompartmental model for iodide, thyroxine, and triiodothyronine metabolism in normal and spontaneously hyperthyroid cats

    Energy Technology Data Exchange (ETDEWEB)

    Hays, M.T.; Broome, M.R.; Turrel, J.M.

    1988-06-01

    A comprehensive multicompartmental kinetic model was developed to account for the distribution and metabolism of simultaneously injected radioactive iodide (iodide*), T3 (T3*), and T4 (T4*) in six normal and seven spontaneously hyperthyroid cats. Data from plasma samples (analyzed by HPLC), urine, feces, and thyroid accumulation were incorporated into the model. The submodels for iodide*, T3*, and T4* all included both a fast and a slow exchange compartment connecting with the plasma compartment. The best-fit iodide* model also included a delay compartment, presumed to be pooling of gastrosalivary secretions. This delay was 62% longer in the hyperthyroid cats than in the euthyroid cats. Unexpectedly, all of the exchange parameters for both T4 and T3 were significantly slowed in hyperthyroidism, possibly because the hyperthyroid cats were older. None of the plasma equivalent volumes of the exchange compartments of iodide*, T3*, or T4* was significantly different in the hyperthyroid cats, although the plasma equivalent volume of the fast T4 exchange compartments were reduced. Secretion of recycled T4* from the thyroid into the plasma T4* compartment was essential to model fit, but its quantity could not be uniquely identified in the absence of multiple thyroid data points. Thyroid secretion of T3* was not detectable. Comparing the fast and slow compartments, there was a shift of T4* deiodination into the fast exchange compartment in hyperthyroidism. Total body mean residence times (MRTs) of iodide* and T3* were not affected by hyperthyroidism, but mean T4* MRT was decreased 23%. Total fractional T4 to T3 conversion was unchanged in hyperthyroidism, although the amount of T3 produced by this route was increased nearly 5-fold because of higher concentrations of donor stable T4.
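    The exchange structure described above (a plasma pool exchanging with fast and slow compartments) reduces to a small linear ODE system. The sketch below uses invented rate constants and omits the delay compartment and thyroidal recycling, so it illustrates the model class rather than the fitted feline model:

```python
import numpy as np
from scipy.integrate import solve_ivp

k_pf, k_fp = 0.50, 0.30    # plasma <-> fast exchange rates (1/h, assumed)
k_ps, k_sp = 0.05, 0.02    # plasma <-> slow exchange rates (1/h, assumed)
k_el = 0.10                # irreversible loss from plasma (1/h, assumed)

def rhs(t, y):
    p, f, s = y            # tracer amounts in plasma, fast and slow pools
    dp = -(k_pf + k_ps + k_el) * p + k_fp * f + k_sp * s
    df = k_pf * p - k_fp * f
    ds = k_ps * p - k_sp * s
    return [dp, df, ds]

sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 48.0, 7)
print(np.round(sol.sol(t), 4))   # rows: plasma, fast, slow vs time (h)
```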

  18. Multicompartmental model for iodide, thyroxine, and triiodothyronine metabolism in normal and spontaneously hyperthyroid cats

    International Nuclear Information System (INIS)

    Hays, M.T.; Broome, M.R.; Turrel, J.M.

    1988-01-01

    A comprehensive multicompartmental kinetic model was developed to account for the distribution and metabolism of simultaneously injected radioactive iodide (iodide*), T3 (T3*), and T4 (T4*) in six normal and seven spontaneously hyperthyroid cats. Data from plasma samples (analyzed by HPLC), urine, feces, and thyroid accumulation were incorporated into the model. The submodels for iodide*, T3*, and T4* all included both a fast and a slow exchange compartment connecting with the plasma compartment. The best-fit iodide* model also included a delay compartment, presumed to be pooling of gastrosalivary secretions. This delay was 62% longer in the hyperthyroid cats than in the euthyroid cats. Unexpectedly, all of the exchange parameters for both T4 and T3 were significantly slowed in hyperthyroidism, possibly because the hyperthyroid cats were older. None of the plasma equivalent volumes of the exchange compartments of iodide*, T3*, or T4* was significantly different in the hyperthyroid cats, although the plasma equivalent volume of the fast T4 exchange compartments were reduced. Secretion of recycled T4* from the thyroid into the plasma T4* compartment was essential to model fit, but its quantity could not be uniquely identified in the absence of multiple thyroid data points. Thyroid secretion of T3* was not detectable. Comparing the fast and slow compartments, there was a shift of T4* deiodination into the fast exchange compartment in hyperthyroidism. Total body mean residence times (MRTs) of iodide* and T3* were not affected by hyperthyroidism, but mean T4* MRT was decreased 23%. Total fractional T4 to T3 conversion was unchanged in hyperthyroidism, although the amount of T3 produced by this route was increased nearly 5-fold because of higher concentrations of donor stable T4

  19. Molecular dynamics study of lipid bilayers modeling the plasma membranes of normal murine thymocytes and leukemic GRSL cells.

    Science.gov (United States)

    Andoh, Yoshimichi; Okazaki, Susumu; Ueoka, Ryuichi

    2013-04-01

    Molecular dynamics (MD) calculations for the plasma membranes of normal murine thymocytes and thymus-derived leukemic GRSL cells in water have been performed under physiological isothermal-isobaric conditions (310.15 K and 1 atm) to investigate changes in membrane properties induced by canceration. The model membranes used in our calculations for normal and leukemic thymocytes comprised 23 and 25 kinds of lipids, respectively, including phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, lysophospholipids, and cholesterol. The mole fractions of the lipids adopted here were based on previously published experimental values. Our calculations clearly showed that the membrane area was increased in leukemic cells, and that the isothermal area compressibility of the leukemic plasma membranes was double that of normal cells. The calculated membranes of leukemic cells were thus considerably bulkier and softer in the lateral direction compared with those of normal cells. The tilt angle of the cholesterol and the conformation of the phospholipid fatty acid tails both showed a lower level of order in leukemic cell membranes compared with normal cell membranes. The lateral radial distribution function of the lipids also showed a more disordered structure in leukemic cell membranes than in normal cell membranes. These observations all show that, for the present thymocytes, the lateral structure of the membrane is considerably disordered by canceration. Furthermore, the calculated lateral self-diffusion coefficient of the lipid molecules in leukemic cell membranes was almost double that in normal cell membranes. The calculated rotational and wobbling autocorrelation functions also indicated that the molecular motion of the lipids was enhanced in leukemic cell membranes. Thus, here we have demonstrated that the membranes of thymocyte leukemic cells are more disordered and more fluid than normal cell membranes.

  20. Global Bi-ventricular endocardial distribution of activation rate during long duration ventricular fibrillation in normal and heart failure canines.

    Science.gov (United States)

    Luo, Qingzhi; Jin, Qi; Zhang, Ning; Han, Yanxin; Wang, Yilong; Huang, Shangwei; Lin, Changjian; Ling, Tianyou; Chen, Kang; Pan, Wenqi; Wu, Liqun

    2017-04-13

    The objective of this study was to detect differences in the distribution of the left and right ventricle (LV & RV) activation rate (AR) during short-duration ventricular fibrillation (SDVF, <1 min) and long-duration ventricular fibrillation (LDVF, >1 min) in normal and heart failure (HF) canine hearts. Ventricular fibrillation (VF) was electrically induced in six healthy dogs (control group) and six dogs with right ventricular pacing-induced congestive HF (HF group). Two 64-electrode basket catheters deployed in the LV and RV were used for global endocardial electrical mapping. The AR of VF was estimated by fast Fourier transform analysis from each electrode. In the control group, the LV was activated faster than the RV in the first 20 s, after which there was no detectable difference in the AR between them. When analyzing the distribution of the AR within the two ventricles at 3 min of LDVF, the posterior LV was activated fastest, while the anterior was slowest. In the HF group, a detectable AR gradient existed between the two ventricles within 3 min of VF, with the LV activating more quickly than the RV. When analyzing the distribution of the AR within the two ventricles at 3 min of LDVF, the septum of the LV was activated fastest, while the anterior was activated slowest. A global bi-ventricular endocardial AR gradient existed within the first 20 s of VF but disappeared in LDVF in healthy hearts. However, the AR gradient was always observed in both SDVF and LDVF in HF hearts. The findings of this study suggest that LDVF in HF hearts is maintained differently from that in normal hearts, which accordingly should lead to the development of different management strategies for LDVF resuscitation.

  1. Spatial Distribution of Iron Within the Normal Human Liver Using Dual-Source Dual-Energy CT Imaging.

    Science.gov (United States)

    Abadia, Andres F; Grant, Katharine L; Carey, Kathleen E; Bolch, Wesley E; Morin, Richard L

    2017-11-01

    Explore the potential of dual-source dual-energy (DSDE) computed tomography (CT) to retrospectively analyze the uniformity of iron distribution and establish iron concentration ranges and distribution patterns found in healthy livers. Ten mixtures consisting of an iron nitrate solution and deionized water were prepared in test tubes and scanned using a DSDE 128-slice CT system. Iron images were derived from a 3-material decomposition algorithm (optimized for the quantification of iron). A conversion factor (mg Fe/mL per Hounsfield unit) was calculated from this phantom study as the quotient of known tube concentrations and their corresponding CT values. Retrospective analysis was performed of patients who had undergone DSDE imaging for renal stones. Thirty-seven patients with normal liver function were randomly selected (mean age, 52.5 years). The examinations were processed for iron concentration. Multiple regions of interest were analyzed, and iron concentration (mg Fe/mL) and distribution was reported. The mean conversion factor obtained from the phantom study was 0.15 mg Fe/mL per Hounsfield unit. Whole-liver mean iron concentrations yielded a range of 0.0 to 2.91 mg Fe/mL, with 94.6% (35/37) of the patients exhibiting mean concentrations below 1.0 mg Fe/mL. The most important finding was that iron concentration was not uniform and patients exhibited regionally high concentrations (36/37). These regions of higher concentration were observed to be dominant in the middle-to-upper part of the liver (75%), medially (72.2%), and anteriorly (83.3%). Dual-source dual-energy CT can be used to assess the uniformity of iron distribution in healthy subjects. Applying similar techniques to unhealthy livers, future research may focus on the impact of hepatic iron content and distribution for noninvasive assessment in diseased subjects.

  2. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  3. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Bayesian models, or statistical smoothing based on the log-normal model, have therefore been introduced to address this weakness of the SMR. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
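    The contrast between the two estimators is easy to demonstrate. The sketch below computes classical SMRs and then a simple empirical-Bayes shrinkage of log-SMRs toward their common mean; all counts are fabricated, and the full hierarchical model fitted in WinBUGS is richer than this approximation:

```python
import numpy as np

O = np.array([4, 0, 12, 7, 2, 9])               # observed cases (fabricated)
E = np.array([5.1, 1.2, 9.8, 6.4, 3.0, 7.7])    # expected cases (fabricated)

smr = O / E                       # unstable for rare disease / small areas,
print("SMR:", smr.round(2))       # and zero whenever no case is observed

y = np.log((O + 0.5) / E)         # continuity-corrected log relative risk
tau2 = y.var()                    # between-area (prior) variance
sigma2 = 1.0 / (O + 0.5)          # approximate sampling variance of log-SMR
w = tau2 / (tau2 + sigma2)        # shrinkage weight per area
rr = np.exp(w * y + (1 - w) * y.mean())
print("smoothed RR:", rr.round(2))
```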

  4. Distribution and ultrastructure of pigment cells in the skins of normal and albino adult turbot, Scophthalmus Maximus

    Institute of Scientific and Technical Information of China (English)

    GUO Huarong; HUANG Bing; QI Fei; ZHANG Shicui

    2007-01-01

    The distribution and ultrastructure of pigment cells in the skins of normal and albino adult turbots were examined with transmission electron microscopy (TEM). Three types of pigment cells, melanophores, iridophores and xanthophores, have been recognized in adult turbot skins. The skin color depends mainly on the amount and distribution of melanophores and iridophores, as xanthophores are quite rare. No pigment cells can be found in the epidermis of the skins. In the pigmented ocular skin of the turbot, melanophores and iridophores are usually co-localized in the dermis. This is quite different from the distribution in larval skin. In the albino and white blind-side skins of adult turbots, however, only the iridophore monolayer still exists, while the melanophore monolayer disappears. This cytological evidence explains why the albino adult turbot, unlike its larvae, could never resume its body color no matter what environmental and nutritional conditions were provided. Endocytosis is quite active in the cellular membrane of the iridophore. This might be related to the formation of the reflective platelets and the stability of the iridophore.

  5. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are needed for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and improves comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between the information gained, the accuracy of the estimates, and the cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample, and the evenness of the community.
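    The Poisson log-normal backbone of the method can be fitted by integrating out the log-normal mixing distribution numerically. The sketch below (fabricated counts; the paper's extra likelihood term for incidence-only species is omitted) uses Gauss-Hermite quadrature for the marginal count probabilities:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

nodes, weights = np.polynomial.hermite_e.hermegauss(40)   # N(0,1) quadrature
counts = np.array([1, 1, 2, 3, 5, 8, 13, 40, 90])         # fabricated sample

def nll(params):
    mu, log_sigma = params
    lam = np.exp(mu + np.exp(log_sigma) * nodes)          # abundances at nodes
    n = counts[:, None]
    logpois = n * np.log(lam) - lam - gammaln(n + 1)
    p = (np.exp(logpois) * weights).sum(axis=1) / np.sqrt(2 * np.pi)
    return -np.sum(np.log(p + 1e-300))

fit = minimize(nll, x0=[1.0, 0.0], method='Nelder-Mead')
print(f"mu = {fit.x[0]:.2f}, sigma = {np.exp(fit.x[1]):.2f}")
```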

  6. Spatial distribution of cannabinoid receptor type 1 (CB1) in normal canine central and peripheral nervous system.

    Directory of Open Access Journals (Sweden)

    Jessica Freundt-Revilla

    Full Text Available The endocannabinoid system is a regulatory pathway consisting of two main types of cannabinoid receptors (CB1 and CB2) and their endogenous ligands, the endocannabinoids. The CB1 receptor is highly expressed in the central and peripheral nervous systems (PNS) of mammals and is involved in neuromodulatory functions. Since endocannabinoids were shown to be elevated in the cerebrospinal fluid of epileptic dogs, knowledge of the species-specific CB receptor expression in the nervous system is required. Therefore, we assessed the spatial distribution of CB1 receptors in the normal canine CNS and PNS. Immunohistochemistry of several regions of the brain, spinal cord and peripheral nerves from a healthy four-week-old puppy, three six-month-old dogs, and one ten-year-old dog revealed strong dot-like immunoreactivity in the neuropil of the cerebral cortex, Cornu Ammonis (CA) and dentate gyrus of the hippocampus, midbrain, cerebellum, medulla oblongata and grey matter of the spinal cord. Dense CB1 expression was found in fibres of the globus pallidus and substantia nigra surrounding immunonegative neurons. Astrocytes were consistently positive in all examined regions. CB1 labelled neurons and satellite cells of the dorsal root ganglia, and myelinating Schwann cells in the PNS. These results demonstrate for the first time the spatial distribution of CB1 receptors in the healthy canine CNS and PNS. These results can be used as a basis for further studies aiming to elucidate the physiological consequences of this particular anatomical and cellular distribution.

  7. Determination and correlation of spatial distribution of trace elements in normal and neoplastic breast tissues evaluated by μ-XRF

    International Nuclear Information System (INIS)

    Silva, M.P.; Oliveira, M.A.; Poletti, M.E.

    2012-01-01

    Full text: Some trace elements, naturally present in breast tissues, participate in a large number of biological processes, which include, among others, activation or inhibition of enzymatic reactions and changes in cell membrane permeability, suggesting that these elements may influence carcinogenic processes. Thus, knowledge of the amounts of these elements and their spatial distribution in normal and neoplastic tissues may help in understanding the role of these elements in the carcinogenic process and tumor progression of breast cancers. Concentrations of trace elements like Ca, Fe, Cu and Zn, previously studied at LNLS using TXRF and conventional XRF, were elevated in neoplastic breast tissues compared to normal tissues. In this study we determined the spatial distribution of these elements in normal and neoplastic breast tissues using the μ-XRF technique. We analyzed 22 samples of normal and neoplastic breast tissues (malignant and benign) obtained from paraffin blocks available for study at the Department of Pathology HC-FMRP/USP. From the blocks, a small fraction of material was removed and cut into 60 μm thick histological sections with a microtome. The slices were placed in sample holders and covered with ultralene film. Tissue samples were irradiated with a white beam of synchrotron radiation. The samples were positioned at 45 degrees with respect to the incident beam on a table with 3 degrees of freedom (x, y and z), allowing independent positioning of the sample in these directions. The white beam was collimated by a 20 μm microcapillary and the samples were fully scanned. At each step, a spectrum was detected for 10 s. The fluorescence emitted by elements present in the sample was detected by a Si(Li) detector with 165 eV energy resolution at 5.9 keV, placed at 90 deg with respect to the incident beam. Results reveal that the trace elements Ca-Zn and Fe-Cu could be correlated in malignant breast tissues. Quantitative results, achieved by Spearman...

  8. An experimental randomized study of six different ventilatory modes in a piglet model with normal lungs

    DEFF Research Database (Denmark)

    Nielsen, J B; Sjöstrand, U H; Henneberg, S W

    1991-01-01

    A randomized study of 6 ventilatory modes was made in 7 piglets with normal lungs. Using a Servo HFV 970 (prototype system) and a Servo ventilator 900 C, the ventilatory modes examined were as follows: SV-20V, i.e. volume-controlled intermittent positive-pressure ventilation (IPPV); SV-20VIosc, i.e. volume-controlled ventilation (IPPV) with superimposed inspiratory oscillations; and SV-20VEf, i.e. volume-controlled ventilation (IPPV) with expiratory flush of fresh gas; HFV-60 denotes low-compressive high-frequency positive-pressure ventilation (HFPPV) and HFV-20 denotes low-compressive volume... ventilatory modes. Also, the mean airway pressures were lower with the HFV modes (8-9 cm H2O) compared to 11-14 cm H2O for the other modes. The gas distribution was evaluated by N2 wash-out and a modified lung clearance index. All modes showed N2 wash-out according to a two-compartment model. The SV-20P mode had...

  9. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    Science.gov (United States)

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.

  10. Axial flow velocity patterns in a normal human pulmonary artery model: pulsatile in vitro studies.

    Science.gov (United States)

    Sung, H W; Yoganathan, A P

    1990-01-01

    It has been clinically observed that the flow velocity patterns in the pulmonary artery are directly modified by disease. The present study addresses the hypothesis that altered velocity patterns relate to the severity of various diseases in the pulmonary artery. This paper lays a foundation for that analysis by providing a detailed description of flow velocity patterns in the normal pulmonary artery, using flow visualization and laser Doppler anemometry techniques. The studies were conducted in an in vitro rigid model in a right heart pulse duplicator system. In the main pulmonary artery, a broad central flow field was observed throughout systole. The maximum axial velocity (150 cm s-1) was measured at peak systole. In the left pulmonary artery, the axial velocities were approximately evenly distributed in the perpendicular plane. However, in the bifurcation plane, they were slightly skewed toward the inner wall at peak systole and during the deceleration phase. In the right pulmonary artery, the axial velocity in the perpendicular plane had a very marked M-shaped profile at peak systole and during the deceleration phase, due to a pair of strong secondary flows. In the bifurcation plane, higher axial velocities were observed along the inner wall, while lower axial velocities were observed along the outer wall and in the center. Overall, relatively low levels of turbulence were observed in all the branches during systole. The maximum turbulence intensity measured was at the boundary of the broad central flow field in the main pulmonary artery at peak systole.

  11. Dynamical Models For Prices With Distributed Delays

    Directory of Open Access Journals (Sweden)

    Mircea Gabriela

    2015-06-01

    Full Text Available In the present paper we study some models for the price dynamics of a single commodity market. The quantities supplied and demanded are regarded as functions of time. Nonlinearities in both the supply and demand functions are considered. The inventory and the level of inventory are taken into consideration. Because consumer behavior affects commodity demand, and this behavior is influenced not only by the instantaneous price but also by weighted past prices, a distributed time delay is introduced. The following kernels are taken into consideration: a demand price weak kernel and a demand price Dirac kernel. Only one positive equilibrium point is found and its stability analysis is presented. When the demand price kernel is weak, under some conditions on the parameters, the equilibrium point is locally asymptotically stable. When the demand price kernel is Dirac, the existence of local oscillations is investigated. A change in the local stability of the equilibrium point, from stable to unstable, implies a Hopf bifurcation. A family of periodic orbits bifurcates from the positive equilibrium point when the time delay passes through a critical value. The last part contains some numerical simulations to illustrate the effectiveness of our results and conclusions.

  12. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
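    The half-normal building block of such a frontier can be sketched directly. Below is the standard Aigner-Lovell-Schmidt log-likelihood for a composed error v - u, maximized on simulated cross-section data; the panel, time-varying, and truncated normal variants estimated in the study are not reproduced here:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 400
x = rng.normal(2.0, 0.5, n)                           # log input (simulated)
y = 1.0 + 0.6 * x + rng.normal(0, 0.2, n) - np.abs(rng.normal(0, 0.4, n))

def negll(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    e = y - b0 - b1 * x                               # composed error v - u
    s, lam = np.hypot(sv, su), su / sv
    ll = (np.log(2) - np.log(s) + norm.logpdf(e / s)
          + norm.logcdf(-e * lam / s))
    return -ll.sum()

fit = minimize(negll, x0=[0.0, 0.5, np.log(0.3), np.log(0.3)])
b0, b1 = fit.x[:2]
sv, su = np.exp(fit.x[2]), np.exp(fit.x[3])
print(f"beta = ({b0:.2f}, {b1:.2f}), sigma_v = {sv:.2f}, sigma_u = {su:.2f}")
```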

  13. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.

  14. Volatility modeling for IDR exchange rate through APARCH model with student-t distribution

    Science.gov (United States)

    Nugroho, Didit Budi; Susanto, Bambang

    2017-08-01

    The aim of this study is to empirically investigate the performance of the APARCH(1,1) volatility model with the Student-t error distribution on five foreign currency selling rates to the Indonesian rupiah (IDR), namely the Swiss franc (CHF), the euro (EUR), the British pound (GBP), the Japanese yen (JPY), and the US dollar (USD). Six years of daily closing rates over the period January 2010 to December 2016, a total of 1722 observations, were analysed. Bayesian inference using the efficient independence chain Metropolis-Hastings and adaptive random walk Metropolis methods in a Markov chain Monte Carlo (MCMC) scheme was applied to estimate the parameters of the model. According to the DIC criterion, this study found that the APARCH(1,1) model under the Student-t distribution fits better than the model under the normal distribution for every observed rate return series. The 95% highest posterior density intervals supported the APARCH specification for modeling the IDR/JPY and IDR/USD volatilities. In particular, the IDR/JPY and IDR/USD data have significant negative and positive leverage effects in the rate returns, respectively. Meanwhile, the optimal power coefficient of volatility was found to be statistically different from 2 for all rate return series except the IDR/EUR series.
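
    For reference, the APARCH(1,1) recursion is sigma_t^delta = omega + alpha*(|eps_{t-1}| - gamma*eps_{t-1})^delta + beta*sigma_{t-1}^delta, where gamma > 0 makes volatility respond more strongly to negative shocks (the leverage effect). A minimal filter under assumed, not estimated, parameter values:

      import numpy as np

      def aparch_filter(returns, omega, alpha, gamma, beta, delta):
          """Return the APARCH(1,1) conditional sigma_t path for a demeaned
          return series. Parameter values are placeholders, not estimates."""
          eps = np.asarray(returns, dtype=float)
          s = np.empty_like(eps)                    # holds sigma_t**delta
          s[0] = np.mean(np.abs(eps))**delta        # simple start-up value
          for t in range(1, len(eps)):
              shock = (abs(eps[t-1]) - gamma * eps[t-1])**delta
              s[t] = omega + alpha * shock + beta * s[t-1]
          return s**(1.0 / delta)

      rng = np.random.default_rng(1)
      r = rng.standard_t(df=6, size=1722) * 0.005   # stand-in for IDR returns
      sigma = aparch_filter(r, omega=1e-6, alpha=0.08, gamma=0.3,
                            beta=0.90, delta=1.5)
      print(sigma[-5:])                             # most recent volatilities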

  15. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
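
    Under this generative model (sigma^2 ~ InvGamma, EMG | sigma^2 ~ N(0, sigma^2)) the marginal of each EMG sample is a scaled Student-t. The sketch below recovers the inverse gamma parameters from windowed variance estimates of the squared (rectified-and-smoothed) signal; this is a simple moment-style stand-in, not the paper's marginal-likelihood procedure, and all parameter values are invented:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Generative model: sigma^2 ~ InvGamma(a, scale=b); EMG | sigma^2 ~ N(0, sigma^2)
      a_true, b_true = 4.0, 3e-3
      n_windows, win = 400, 100
      var_t = stats.invgamma.rvs(a_true, scale=b_true, size=n_windows,
                                 random_state=rng)
      emg = rng.normal(0.0, np.sqrt(np.repeat(var_t, win)))   # piecewise variance

      # Stand-in estimator: per-window variances from the squared signal,
      # then fit the inverse gamma to those window estimates.
      v_hat = (emg**2).reshape(n_windows, win).mean(axis=1)
      a_hat, loc, b_hat = stats.invgamma.fit(v_hat, floc=0.0)
      print(a_hat, b_hat)   # compare with a_true, b_true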

  16. Exciting Normal Distribution

    Science.gov (United States)

    Fuchs, Karl Josef; Simonovits, Reinhard; Thaller, Bernd

    2008-01-01

    This paper describes a high school project where the mathematics teaching and learning software M@th Desktop (MD), based on the Computer Algebra System Mathematica, was used for symbolic and numerical calculations and for visualisation. The mathematics teaching and learning software M@th Desktop 2.0 (MD) contains the modules Basics including tools…

  17. Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies

    Energy Technology Data Exchange (ETDEWEB)

    Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint

    2012-07-01

    Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty-eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue dye extravasation in tumor was significantly reduced (indicating a reduction in the aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization within a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to research, with the potential benefit of vascular normalization in head and neck cancer therapy.

  18. Vibrational Spectra and Potential Energy Distributions of Normal Modes of N,N'-Ethylenebis(p-toluenesulfonamide)

    International Nuclear Information System (INIS)

    Alyar, S.

    2008-01-01

    N-substituted sulfonamides are well known for their diuretic, antidiabetic, antibacterial, antifungal, and anticancer activities, and are widely used in the therapy of patients. These important bioactive properties are strongly affected by the special features of the -CH2-SO2-NR- linker and by intramolecular motion. Thus, studies of the energetic and spatial properties of N-substituted sulfonamides are of great importance to improve our understanding of their biological activities and to enhance our ability to predict new drugs. Density functional theory at the B3LYP/6-31G(d,p) level has been applied to obtain the vibrational force field for the most stable conformation of N,N'-ethylenebis(p-toluenesulfonamide) (ptsen), which contains the sulfonamide moiety. The results of these calculations have been compared with spectroscopic data to verify the accuracy of the calculations and the applicability of the DFT approach to ptsen. Additionally, complete normal coordinate analyses with scaled quantum mechanical (SQM) force fields were performed to derive the potential energy distributions (PED).

  19. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cases...

  20. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  1. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. It is shown that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
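
    With step-size 1 the procedure reduces to the familiar EM fixed-point updates for a normal mixture. A sketch with a relaxation parameter omega standing in for the step-size (synthetic data; the paper's convergence analysis operates on the likelihood equations directly and is more subtle than this illustration):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      x = np.r_[rng.normal(-2.0, 1.0, 300), rng.normal(1.5, 0.7, 700)]

      # Two-component normal mixture; omega = 1 is the usual EM fixed point,
      # 0 < omega < 2 is the relaxed (deflected-gradient-style) step.
      pi, mu, sd = np.array([.5, .5]), np.array([-1., 1.]), np.array([1., 1.])
      omega = 1.0
      for _ in range(200):
          resp = pi * norm.pdf(x[:, None], mu, sd)        # E-step responsibilities
          resp /= resp.sum(axis=1, keepdims=True)
          nk = resp.sum(axis=0)
          pi_new = nk / len(x)                            # M-step targets
          mu_new = (resp * x[:, None]).sum(axis=0) / nk
          var_new = (resp * (x[:, None] - mu_new)**2).sum(axis=0) / nk
          pi = pi + omega * (pi_new - pi)                 # relaxed updates
          mu = mu + omega * (mu_new - mu)
          sd = np.sqrt(sd**2 + omega * (var_new - sd**2))
      print(pi, mu, sd)   # recovers the mixing weights, means, and sds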

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  4. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian approach involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the marginal posterior distribution of each parameter; the marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
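
    A minimal Gibbs sketch under the Jeffreys prior p(B, Σ) ∝ |Σ|^(−(q+1)/2), with entirely synthetic data: the conditional for the coefficient matrix is matrix normal around the least-squares estimate, and the conditional for Σ is inverse Wishart with the residual cross-product as scale matrix. Dimensions and values are assumptions for illustration:

      import numpy as np
      from scipy.stats import invwishart

      rng = np.random.default_rng(4)
      n, p, q = 200, 3, 2                       # observations, predictors, responses
      X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
      B_true = rng.normal(size=(p, q))
      Y = X @ B_true + rng.normal(0, 0.5, size=(n, q))

      XtX_inv = np.linalg.inv(X.T @ X)
      B_hat = XtX_inv @ X.T @ Y                 # OLS estimate (posterior centre)

      draws_B = []
      Sigma = np.eye(q)
      for it in range(2000):
          # B | Sigma, Y ~ matrix normal: vec(B) ~ N(vec(B_hat), Sigma kron XtX_inv)
          chol_r = np.linalg.cholesky(XtX_inv)
          chol_s = np.linalg.cholesky(Sigma)
          B = B_hat + chol_r @ rng.normal(size=(p, q)) @ chol_s.T
          # Sigma | B, Y ~ inverse Wishart with residual scale matrix
          R = Y - X @ B
          Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
          if it >= 500:                         # discard burn-in draws
              draws_B.append(B)
      print(np.mean(draws_B, axis=0))           # posterior mean, close to B_true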

  5. Flash flood modeling with the MARINE hydrological distributed model

    Science.gov (United States)

    Estupina-Borrell, V.; Dartus, D.; Ababou, R.

    2006-11-01

    Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, their anticipation with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes and can thus contribute to better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant information about flood evolution on short time scales, and should be applicable even in locations where direct observations are sparse (e.g. absence of historical and modern rainfall and streamflow records in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE'' (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two components: the first is a "basin'' flood module which generates flood runoff in the upstream part of the watershed, and the second is the "stream network'' module, which propagates the flood in the main river and its tributaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real-time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate the slopes that generate runoff; it can be derived from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or anticipated by the French Meteorological Service (M

  6. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

    Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and of their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
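
    The serial composition property can be illustrated numerically: connecting two lag models in series convolves their (normalized) lag weight sequences, and chaining many components drives the composite kernel toward a normal shape, which is the CLT-style behaviour the abstract alludes to. A sketch with assumed geometric (Koyck-type) lag weights:

      import numpy as np

      # Two component models with geometric lag weights (assumed forms).
      def geometric_weights(lmbda, horizon):
          w = (1 - lmbda) * lmbda**np.arange(horizon)
          return w / w.sum()                    # normalize to a lag distribution

      w1 = geometric_weights(0.6, 40)
      w2 = geometric_weights(0.8, 40)

      # Serially connected models: the composite lag distribution is the
      # convolution of the component lag distributions.
      composite = np.convolve(w1, w2)

      # Chaining many components: the composite kernel's shape tends toward
      # a normal (bell-shaped) curve by a central-limit-type argument.
      many = w1
      for _ in range(9):
          many = np.convolve(many, w1)
      print(composite[:5], many.argmax())       # bell-shaped kernel with a mode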

  7. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique for estimating the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot-ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  8. Perhitungan Iuran Normal Program Pensiun dengan Asumsi Suku Bunga Mengikuti Model Vasicek

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Full Text Available Labor has a very important role in national development. One way to optimize labor productivity is to guarantee income after retirement. Therefore the government and the private sector must have programs that can ensure the sustainability of this financial support. One option is a pension plan. The purpose of this study is to calculate the normal cost with the interest rate assumed to follow the Vasicek model, and to analyze the normal contributions of the pension program participants. The Vasicek model is used to match actual conditions. The methods used in this research are the Projected Unit Credit method and the Entry Age Normal method. The data source of this research is lecturers of FMIPA Unud. In addition, secondary data are used in the form of Bank Indonesia interest rates for the period January 2006-December 2015. The results of this study indicate that the older the participant's age when entering the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. Furthermore, the normal cost under a constant interest rate is greater than the normal cost under the Vasicek interest rate. This occurs because the Vasicek model predicts interest rates between 4.8879% and 6.8384%, while the constant rate is only 4.25%. In addition, using a normal cost proportional to salary, it is found that the older the participant's entry age, the greater the proportion of salary needed for the normal cost.
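
    For reference, the Vasicek short-rate model is dr = a(b − r)dt + σ dW, whose transition density is Gaussian and can be simulated exactly. The sketch below uses illustrative parameter values (a, b, σ are assumptions, not the paper's estimates; only the 4.25% starting rate is taken from the abstract):

      import numpy as np

      # Vasicek short-rate model: dr = a*(b - r)*dt + sigma*dW.
      a, b, sigma = 0.5, 0.055, 0.01            # speed, long-run mean, volatility
      r0, dt, n_steps, n_paths = 0.0425, 1/12, 120, 10000

      rng = np.random.default_rng(5)
      r = np.full(n_paths, r0)
      for _ in range(n_steps):
          # exact Gaussian transition of the Ornstein-Uhlenbeck process
          m = b + (r - b) * np.exp(-a * dt)
          s = sigma * np.sqrt((1 - np.exp(-2 * a * dt)) / (2 * a))
          r = m + s * rng.normal(size=n_paths)

      # distribution of the rate after 10 years; feeds the discounting of
      # future benefits in a normal-cost calculation
      print(np.mean(r), np.percentile(r, [2.5, 97.5]))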

  9. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  10. Condition monitoring with wind turbine SCADA data using Neuro-Fuzzy normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2012-01-01

    This paper presents the latest research results of a project that focuses on normal behavior models for condition monitoring of wind turbines and their components, via ordinary Supervisory Control And Data Acquisition (SCADA) data. In this machine learning approach, Adaptive Neuro-Fuzzy Inference System (ANFIS) models are employed to learn the normal behavior in a training phase, where the component condition can be considered healthy. In the application phase the trained models are applied to predict the target signals, e.g. temperatures, pressures, currents, power output, etc. The behavior of the prediction error is used as an indicator for normal and abnormal behavior, with respect to the learned behavior. The advantage of this approach is that the prediction error is widely decoupled from the typical fluctuations of the SCADA data caused by the different turbine operational modes. To classify...

  11. Normal and Abnormal Scenario Modeling with GoldSim for Radioactive Waste Disposal System

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae

    2010-08-01

    A modeling study and the development of a total system performance assessment (TSPA) template program, by which the safety and performance of a radioactive waste repository under normal and/or abnormal nuclide release cases can be assessed, have been carried out utilizing the commercial development tool GoldSim. Scenario work associated with the various FEPs involved in the performance of the proposed repository, in view of nuclide transport and transfer in both the geosphere and biosphere, has also been carried out. Selected normal and abnormal scenarios that could alter the groundwater flow scheme and thus nuclide transport are modeled with the template program. To this end, in-depth system models for the normal and abnormal well and earthquake scenarios, described conceptually and rather practically and then ready for implementation in the GoldSim TSPA template program, are introduced with conceptual schemes for each repository system. Illustrative evaluations with currently available data are also shown.

  12. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-indexes observed at Kuala Lumpur, Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, gamma and Weibull, in the search for the best-fitting distribution for the Malaysian air pollutant data. To determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict among criterion results in selecting the best distribution was overcome by using the weight of ranks method. We found that the gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
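
    A sketch of this kind of distribution selection on synthetic stand-in data, fitting the four candidate families by maximum likelihood and comparing them with a single criterion (AIC) rather than the paper's five criteria and weight-of-ranks combination:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      api = stats.gamma.rvs(3.0, scale=18.0, size=1000, random_state=rng)
      # synthetic stand-in for observed API values

      candidates = {
          "lognormal":   stats.lognorm,
          "exponential": stats.expon,
          "gamma":       stats.gamma,
          "weibull":     stats.weibull_min,
      }
      for name, dist in candidates.items():
          params = dist.fit(api, floc=0.0)        # location fixed at zero
          ll = dist.logpdf(api, *params).sum()
          k = len(params) - 1                     # free parameters (loc is fixed)
          aic = 2 * k - 2 * ll
          print(f"{name:12s} AIC = {aic:9.1f}")   # smallest AIC wins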

  13. MODELING THE ANATOMICAL DISTRIBUTION OF SUNLIGHT

    Science.gov (United States)

    One of the major technical challenges in calculating solar irradiance on the human form has been the complexity of the surface geometry (i.e., the orientation of the surface normal vis-à-vis the incident radiation). Over 80 percent of skin cancers occur on the face, head, and back of the hands. The...

  14. Multiplicity distributions in the dual parton model

    International Nuclear Information System (INIS)

    Batunin, A.V.; Tolstenkov, A.N.

    1985-01-01

    Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account

  15. Electricity distribution management Smart Grid system model

    Directory of Open Access Journals (Sweden)

    Wiesław Nowak

    2012-06-01

    Full Text Available This paper presents issues concerning the implementation of Smart Grid solutions in a real distribution network. The main components suitable for rapid implementation are presented. Realization of these ideas should bring tangible benefits to both customers and distribution system operators. Moreover, the paper shows selected research results which examine the proposed solutions in the areas of improving supply reliability and reducing energy losses in the analysed network.

  16. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    A review of the existing literature reveals some models (sequences of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these

  17. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow a multivariate skew-normal distribution. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing the skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
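
    The building blocks can be sketched with the standard univariate Azzalini constructions (this is a generic illustration of how skew-normal components plus a random scale yield skew-t margins, not the paper's multi-resolution spatial model; all parameter values are assumptions):

      import numpy as np

      rng = np.random.default_rng(7)

      def skew_normal(alpha, size):
          """Azzalini construction: z = delta*|w0| + sqrt(1 - delta^2)*w1,
          with delta = alpha / sqrt(1 + alpha^2)."""
          delta = alpha / np.sqrt(1 + alpha**2)
          w0, w1 = rng.normal(size=size), rng.normal(size=size)
          return delta * np.abs(w0) + np.sqrt(1 - delta**2) * w1

      # Skew-t as a skew-normal over the square root of an independent
      # chi-square: skewed and heavy-tailed margins.
      nu, alpha, n = 5, 3.0, 100000
      z = skew_normal(alpha, n)
      w = rng.chisquare(nu, n) / nu
      skew_t = z / np.sqrt(w)
      print(skew_t.mean(), np.percentile(skew_t, [1, 99]))   # asymmetric tails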

  18. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  19. Extreme-value limit of the convolution of exponential and multivariate normal distributions: Link to the Hüsler–Reiß distribution

    KAUST Repository

    Krupskii, Pavel; Joe, Harry; Lee, David; Genton, Marc G.

    2017-01-01

    The multivariate Hüsler–Reiß copula is obtained as a direct extreme-value limit from the convolution of a multivariate normal random vector and an exponential random variable multiplied by a vector of constants. It is shown how the set of Hüsler–Reiß parameters can be mapped to the parameters of this convolution model. Assuming there are no singular components in the Hüsler–Reiß copula, the convolution model leads to exact and approximate simulation methods. An application of simulation is to check if the Hüsler–Reiß copula with different parsimonious dependence structures provides adequate fit to some data consisting of multivariate extremes.

  20. Extreme-value limit of the convolution of exponential and multivariate normal distributions: Link to the Hüsler–Reiß distribution

    KAUST Repository

    Krupskii, Pavel

    2017-11-02

    The multivariate Hüsler–Reiß copula is obtained as a direct extreme-value limit from the convolution of a multivariate normal random vector and an exponential random variable multiplied by a vector of constants. It is shown how the set of Hüsler–Reiß parameters can be mapped to the parameters of this convolution model. Assuming there are no singular components in the Hüsler–Reiß copula, the convolution model leads to exact and approximate simulation methods. An application of simulation is to check if the Hüsler–Reiß copula with different parsimonious dependence structures provides adequate fit to some data consisting of multivariate extremes.

  1. Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.

    Science.gov (United States)

    Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy

    2016-01-01

    This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures are fused early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult without any predictive model to assist surgical planning. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth respectively. On the basis of our previous model, we further refined the suture model into fibrous and cartilaginous sutures and developed an algorithm for skull extension. We evaluate the resulting simulation by comparison with case datasets and normal growth.

  2. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  3. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
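
    A sketch of the scale mixture of two zero-mean normals applied to z-statistics, with a simple EM stand-in for fitting (the paper instead minimizes discrepancies to resampling-based replication estimates) and the resulting local false discovery rate; the null/non-null proportions and variances are invented:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      # Synthetic GWAS z-scores: 95% null N(0, 1), 5% non-null N(0, 2.5^2)
      z = np.where(rng.random(200000) < 0.95,
                   rng.normal(0.0, 1.0, 200000),
                   rng.normal(0.0, 2.5, 200000))

      # EM for the zero-mean scale mixture of two normals (null sd fixed at 1)
      pi1, s1 = 0.5, 2.0                    # non-null weight and sd (initial)
      for _ in range(100):
          f0 = (1 - pi1) * norm.pdf(z, 0.0, 1.0)
          f1 = pi1 * norm.pdf(z, 0.0, s1)
          r = f1 / (f0 + f1)                # posterior non-null probability
          pi1 = r.mean()
          s1 = np.sqrt((r * z**2).sum() / r.sum())

      lfdr = 1 - r                          # local false discovery rate
      print(pi1, s1, lfdr[np.abs(z) > 4].mean())   # small lfdr in the far tails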

  4. The κ-generalized distribution: A new descriptive model for the size distribution of incomes

    Science.gov (United States)

    Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.

    2008-05-01

    This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement-such as the Lorenz curve and the Gini coefficient-are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.
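
    As a hedged sketch of the family: with the κ-exponential exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ), the κ-generalized survival function is commonly written S(x) = exp_κ(−βx^α), and the density below follows by differentiation. The parameterization should be checked against the original paper, and the values used are illustrative, not the fitted Australian/US estimates:

      import numpy as np

      def exp_kappa(x, kappa):
          """kappa-exponential: (sqrt(1 + k^2 x^2) + k x)**(1/k)."""
          return (np.sqrt(1 + kappa**2 * x**2) + kappa * x)**(1 / kappa)

      def kappa_cdf(x, alpha, beta, kappa):
          # F(x) = 1 - exp_k(-beta * x**alpha)
          return 1 - exp_kappa(-beta * x**alpha, kappa)

      def kappa_pdf(x, alpha, beta, kappa):
          # derivative of the CDF, using d/du exp_k(u) = exp_k(u)/sqrt(1+k^2 u^2)
          u = beta * x**alpha
          return (alpha * beta * x**(alpha - 1) * exp_kappa(-u, kappa)
                  / np.sqrt(1 + kappa**2 * u**2))

      # Illustrative parameters: Paretian upper tail, exponential-like bulk
      x = np.linspace(0.01, 5, 500)
      p = kappa_pdf(x, alpha=2.0, beta=1.0, kappa=0.7)
      print(np.trapz(p, x))   # close to 1: the density integrates to unity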

  5. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    This paper investigates the characteristics of typical optimisation models within distribution network design. Fourteen models known from the literature are thoroughly analysed. Through this analysis, a schematic approach to the categorisation of distribution network design models... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as a manual or recipe for constructing such a model.

  6. From explicit to implicit normal mode initialization of a limited-area model

    Energy Technology Data Exchange (ETDEWEB)

    Bijlsma, S.J.

    2013-02-15

    In this note the implicit normal mode initialization of a limited-area model is discussed from a different point of view. To that end it is shown that the equations describing the explicit normal mode initialization applied to the shallow water equations in differentiated form on the sphere can readily be derived in normal mode space if the model equations are separable, but only in the case of stationary Rossby modes can be transformed into the implicit equations in physical space. This is a consequence of the simple relations between the components of the different modes in that case. In addition a simple eigenvalue problem is given for the frequencies of the gravity waves. (orig.)

  7. Fluid Distribution Pattern in Adult-Onset Congenital, Idiopathic, and Secondary Normal-Pressure Hydrocephalus: Implications for Clinical Care.

    Science.gov (United States)

    Yamada, Shigeki; Ishikawa, Masatsune; Yamamoto, Kazuo

    2017-01-01

    In spite of growing evidence on idiopathic normal-pressure hydrocephalus (NPH), the viewpoint on clinical care for idiopathic NPH remains controversial. The continuing divergence of viewpoints might be due to confusing classifications of idiopathic and adult-onset congenital NPH. To elucidate the classification of NPH, we propose that adult-onset congenital NPH should be explicitly distinguished from idiopathic and secondary NPH. On the basis of conventional CT scan or MRI, idiopathic NPH was defined as narrow sulci at the high convexity concurrent with enlargement of the ventricles, basal cistern and Sylvian fissure, whereas adult-onset congenital NPH was defined as huge ventricles without high-convexity tightness. We compared clinical characteristics and cerebrospinal fluid distribution among 85 patients diagnosed with idiopathic NPH, 17 patients with secondary NPH, and 7 patients with adult-onset congenital NPH. All patients underwent 3-T MRI examinations and tap-tests. The volumes of ventricles and subarachnoid spaces were measured using a 3D workstation based on T2-weighted 3D sequences. The mean intracranial volume for the patients with adult-onset congenital NPH was almost 100 mL larger than the volumes for patients with idiopathic and secondary NPH. Compared with the patients with idiopathic or secondary NPH, patients with adult-onset congenital NPH exhibited larger ventricles but normal-sized subarachnoid spaces. The mean volume ratio of the high-convexity subarachnoid space was significantly less in idiopathic NPH than in adult-onset congenital NPH, whereas the mean volume ratio of the basal cistern and Sylvian fissure in idiopathic NPH was >2 times larger than that in adult-onset congenital NPH. The symptoms of gait disturbance, cognitive impairment, and urinary incontinence in patients with adult-onset congenital NPH tended to progress more slowly compared to their progress in patients with idiopathic NPH. Cerebrospinal fluid distributions and

  8. Deterioration and optimal rehabilitation modelling for urban water distribution systems

    NARCIS (Netherlands)

    Zhou, Y.

    2018-01-01

    Pipe failures in water distribution systems can have a serious impact, and hence it is important to maintain the condition and integrity of the distribution system. This book presents a whole-life cost optimisation model for the rehabilitation of water distribution systems. It combines a pipe breakage

  9. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded mixture of lognormal distributions. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling, accounting for complex mixture distributions of travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
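
    A sketch of the same idea using scikit-learn's truncated stick-breaking Dirichlet process mixture, capped at six components as in the study (note this uses variational inference rather than the study's MCMC); fitting on log travel times makes each component lognormal on the original scale, and travel time values are invented:

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(9)
      # Synthetic corridor travel times (minutes): free-flow and congested states
      tt = np.r_[rng.lognormal(2.0, 0.10, 1500),     # ~7.4 min, free flow
                 rng.lognormal(2.6, 0.15, 500)]      # ~13.5 min, congestion

      # Truncated Dirichlet process mixture (stick-breaking representation)
      dpmm = BayesianGaussianMixture(
          n_components=6,
          weight_concentration_prior_type="dirichlet_process",
          max_iter=500, random_state=0)
      dpmm.fit(np.log(tt).reshape(-1, 1))

      # Unneeded components are driven to near-zero weight automatically,
      # so the effective number of states is chosen by the algorithm.
      print(np.round(dpmm.weights_, 3))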

  10. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  11. Radar meteors range distribution model. I. Theory

    Czech Academy of Sciences Publication Activity Database

    Pecinová, Drahomíra; Pecina, Petr

    2007-01-01

    Roč. 37, č. 2 (2007), s. 83-106 ISSN 1335-1842 R&D Projects: GA ČR GA205/03/1405 Institutional research plan: CEZ:AV0Z10030501 Keywords : physics of meteors * radar meteors * range distribution Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  12. 76 FR 36864 - Special Conditions: Gulfstream Model GVI Airplane; Operation Without Normal Electric Power

    Science.gov (United States)

    2011-06-23

    ... Normal Electric Power AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special... Interface Branch, ANM-111, Transport Standards Staff, Transport Airplane Directorate, Aircraft Certification... Model GVI airplane will be an all-new, two- engine jet transport airplane. The maximum takeoff weight...

  13. Improved Discovery of Molecular Interactions in Genome-Scale Data with Adaptive Model-Based Normalization

    Science.gov (United States)

    Brown, Patrick O.

    2013-01-01

    Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766

  14. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    differences in contractile function between the subendocardial and transmural infarcts manifest themselves in myocardial SPECT images. The normal FE model produced strain distributions that were consistent with those reported in the literature and a motion consistent with that defined in the normal 4D NCAT beating heart model based on tagged MRI data. The addition of a subendocardial ischemic region changed the average transmural circumferential strain from a contractile value of 0.19 to a tensile value of 0.03. The addition of a transmural ischemic region changed average circumferential strain to a value of 0.16, which is consistent with data reported in the literature. Model results demonstrated differences in contractile function between subendocardial and transmural infarcts and how these differences in function are documented in simulated myocardial SPECT images produced using the 4D NCAT phantom. In comparison to the original NCAT beating heart model, the FE mechanical model produced a more accurate simulation for the cardiac motion abnormalities. Such a model, when incorporated into the 4D NCAT phantom, has great potential for use in cardiac imaging research. With its enhanced physiologically-based cardiac model, the 4D NCAT phantom can be used to simulate realistic, predictive imaging data of a patient population with varying whole-body anatomy and with varying healthy and diseased states of the heart that will provide a known truth from which to evaluate and improve existing and emerging 4D imaging techniques used in the diagnosis of cardiac disease

  15. Modelling the distribution of pig production and diseases in Thailand

    OpenAIRE

    Thanapongtharm, Weerapong

    2015-01-01

    This thesis, entitled “Modelling the distribution of pig production and diseases in Thailand”, presents many aspects of pig production in Thailand including the characteristics of pig farming system, distribution of pig population and pig farms, spatio-temporal distribution and risk of most important diseases in pig at present, and the suitability area for pig farming. Spatial distribution and characteristics of pig farming in Thailand were studied using time-series pig population data to des...

  16. Modeling and optimization of an electric power distribution network ...

    African Journals Online (AJOL)

    Modeling and optimization of an electric power distribution network planning system using ... of the network was modelled with non-linear mathematical expressions. ... given feasible locations, re-conductoring of existing feeders in the network, ...

  17. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low

  18. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
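
    A sketch of the recommended practice: pool C-statistics on the logit scale with a DerSimonian-Laird random-effects model, then back-transform (the study-level C-statistics and standard errors below are invented for illustration):

      import numpy as np

      # Hypothetical external-validation results from k = 5 studies
      c    = np.array([0.72, 0.68, 0.75, 0.80, 0.71])
      se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.04])

      # Transform to the logit scale; delta method for the standard errors
      theta = np.log(c / (1 - c))
      se    = se_c / (c * (1 - c))

      # DerSimonian-Laird random-effects pooling
      w = 1 / se**2
      mu_fixed = (w * theta).sum() / w.sum()
      Q = (w * (theta - mu_fixed)**2).sum()
      tau2 = max(0.0, (Q - (len(c) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
      w_re = 1 / (se**2 + tau2)
      mu = (w_re * theta).sum() / w_re.sum()

      pooled_c = 1 / (1 + np.exp(-mu))      # back-transform to the C scale
      print(round(pooled_c, 3), round(tau2, 4))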

  19. Wind turbine condition monitoring based on SCADA data using normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar; Achiche, Sofiane

    2013-01-01

    This paper proposes a system for wind turbine condition monitoring using Adaptive Neuro-Fuzzy Inference Systems (ANFIS). For this purpose: (1) ANFIS normal behavior models for common Supervisory Control And Data Acquisition (SCADA) data are developed in order to detect abnormal behavior ... the applicability of ANFIS models for monitoring wind turbine SCADA signals. The computational time needed for model training is compared to Neural Network (NN) models, showing the strength of ANFIS in training speed. (2) For automation of fault diagnosis, Fuzzy Inference Systems (FIS) are used to analyze

  20. Neutron and PIMC determination of the longitudinal momentum distribution of HCP, BCC and normal liquid 4He

    International Nuclear Information System (INIS)

    Blasdell, R.C.; Ceperley, D.M.; Simmons, R.O.

    1993-07-01

    Deep inelastic neutron scattering has been used to measure the neutron Compton profile (NCP) of a series of condensed 4He samples at densities from 28.8 atoms/nm3 (essentially the minimum possible density in the solid phase) up to 39.8 atoms/nm3 using a chopper spectrometer at the Argonne National Laboratory Intense Pulsed Neutron Source. At the lowest density, the NCP was measured along an isochore through the hcp, bcc, and normal liquid phases. Average atomic kinetic energies are extracted from each of the data sets and are compared to both published and new path integral Monte-Carlo (PIMC) calculations as well as other theoretical predictions. In this preliminary analysis of the data, account is taken of the effects of instrumental resolution, multiple scattering, and final-state interactions. Both our measurements and the PIMC theory show that there are only small differences in the kinetic energy and longitudinal momentum distribution of isochoric helium samples, regardless of their phase or crystal structure

  1. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered to be important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
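
    The key property such an approach relies on: products of independent log-normal factors are themselves log-normal, so log-medians add and log-variances add. A sketch combining a measured key-nuclide activity with a scaling factor (the two-factor decomposition and all numbers are assumptions for illustration, not values from the paper):

      import numpy as np

      # Multiplicative uncertainty factors, each log-normal:
      # activity = measured_key_nuclide * scaling_factor (assumed decomposition)
      mu  = np.array([np.log(50.0), np.log(2.0)])    # log-medians of the factors
      sig = np.array([0.6, 0.4])                     # log-sds (GSD = exp(sig))

      mu_tot  = mu.sum()                             # log-median of the product
      sig_tot = np.sqrt((sig**2).sum())              # log-sd of the product

      median = np.exp(mu_tot)
      ci95   = np.exp(mu_tot + np.array([-1.96, 1.96]) * sig_tot)
      print(median, ci95)    # a 'make sense' 95% multiplicative interval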

  2. Normal tissue complication probabilities: dependence on choice of biological model and dose-volume histogram reduction scheme

    International Nuclear Information System (INIS)

    Moiseenko, Vitali; Battista, Jerry; Van Dyk, Jake

    2000-01-01

    Purpose: To evaluate the impact of dose-volume histogram (DVH) reduction schemes and models of normal tissue complication probability (NTCP) on ranking of radiation treatment plans. Methods and Materials: Data for liver complications in humans and for spinal cord in rats were used to derive input parameters of four different NTCP models. DVH reduction was performed using two schemes: 'effective volume' and 'preferred Lyman'. DVHs for competing treatment plans were derived from a sample DVH by varying dose uniformity in a high dose region so that the obtained cumulative DVHs intersected. Treatment plans were ranked according to the calculated NTCP values. Results: Whenever the preferred Lyman scheme was used to reduce the DVH, competing plans were indistinguishable as long as the mean dose was constant. The effective volume DVH reduction scheme did allow us to distinguish between these competing treatment plans. However, plan ranking depended on the radiobiological model used and its input parameters. Conclusions: Dose escalation will be a significant part of radiation treatment planning using new technologies, such as 3-D conformal radiotherapy and tomotherapy. Such dose escalation will depend on how the dose distributions in organs at risk are interpreted in terms of expected complication probabilities. The present study indicates considerable variability in predicted NTCP values because of the methods used for DVH reduction and radiobiological models and their input parameters. Animal studies and collection of standardized clinical data are needed to ascertain the effects of non-uniform dose distributions and to test the validity of the models currently in use
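
    A sketch of one of the combinations studied: a Lyman-type NTCP model with the 'effective volume' DVH reduction, applied to two plans whose cumulative DVHs intersect. The parameter values (n, m, TD50) and the toy DVHs are illustrative, not organ-specific fits from the paper:

      import numpy as np
      from scipy.stats import norm

      def ntcp_lkb(doses, vols, n, m, td50):
          """Lyman-Kutcher-Burman NTCP with 'effective volume' DVH reduction.
          doses/vols: differential DVH bin doses and fractional volumes."""
          d_max = doses.max()
          v_eff = np.sum(vols * (doses / d_max)**(1.0 / n))  # effective volume
          td50_v = td50 * v_eff**(-n)                        # volume-scaled TD50
          t = (d_max - td50_v) / (m * td50_v)
          return norm.cdf(t)

      # Two competing plans: same mean dose, different dose uniformity
      doses = np.linspace(5, 70, 14)
      vols_uniform = np.full(14, 1 / 14)
      vols_peaked  = vols_uniform.copy()
      vols_peaked[-1] += 0.05                     # more volume at high dose
      vols_peaked[0]  -= 0.05                     # less volume at low dose

      for v in (vols_uniform, vols_peaked):
          print(ntcp_lkb(doses, v, n=0.7, m=0.15, td50=45.0))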

  3. Reflectance spectrometry of normal and bruised human skins: experiments and modeling

    International Nuclear Information System (INIS)

    Kim, Oleg; Alber, Mark; McMurdy, John; Lines, Collin; Crawford, Gregory; Duffy, Susan

    2012-01-01

    A stochastic photon transport model in multilayer skin tissue combined with reflectance spectroscopy measurements is used to study normal and bruised skins. The model is shown to provide a very good approximation to both normal and bruised real skin tissues by comparing experimental and simulated reflectance spectra. The sensitivity analysis of the skin reflectance spectrum to variations of skin layer thicknesses, blood oxygenation parameter and concentrations of main chromophores is performed to optimize model parameters. The reflectance spectrum of a developed bruise in a healthy adult is simulated, and the concentrations of bilirubin, blood volume fraction and blood oxygenation parameter are determined for different times as the bruise progresses. It is shown that bilirubin and blood volume fraction reach their peak values at 80 and 55 h after contusion, respectively, and the oxygenation parameter is lower than its normal value during 80 h after contusion occurred. The obtained time correlations of chromophore concentrations in developing contusions are shown to be consistent with previous studies. The developed model uses a detailed seven-layer skin approximation for contusion and allows one to obtain more biologically relevant results than those obtained with previous models using one- to three-layer skin approximations. A combination of modeling with spectroscopy measurements provides a new tool for detailed biomedical studies of human skin tissue and for age determination of contusions. (paper)

  4. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    between the subendocardial and transmural infarcts manifest themselves in myocardial SPECT images. The normal FE model produced strain distributions that were consistent with those reported in the literature and a motion consistent with that defined in the normal 4D NCAT beating heart model based on tagged MRI data. The addition of a subendocardial ischemic region changed the average transmural circumferential strain from a contractile value of 0.19 to a tensile value of 0.03. The addition of a transmural ischemic region changed the average circumferential strain to a value of 0.16, which is consistent with data reported in the literature. Model results demonstrated differences in contractile function between subendocardial and transmural infarcts and how these differences in function are documented in simulated myocardial SPECT images produced using the 4D NCAT phantom. In comparison to the original NCAT beating heart model, the FE mechanical model produced a more accurate simulation of the cardiac motion abnormalities. Such a model, when incorporated into the 4D NCAT phantom, has great potential for use in cardiac imaging research. With its enhanced physiologically based cardiac model, the 4D NCAT phantom can be used to simulate realistic, predictive imaging data of a patient population with varying whole-body anatomy and with varying healthy and diseased states of the heart that will provide a known truth from which to evaluate and improve existing and emerging 4D imaging techniques used in the diagnosis of cardiac disease.

  5. Designing the Distributed Model Integration Framework – DMIF

    NARCIS (Netherlands)

    Belete, Getachew F.; Voinov, Alexey; Morales, Javier

    2017-01-01

    We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF) that links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of

  6. A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    DEFF Research Database (Denmark)

    Han, Pujie; Zhai, Zhengjun; Nielsen, Brian

    2018-01-01

    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...

  7. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
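
    The rank-based inverse normal transformation used above maps trait ranks through the normal quantile function, forcing an approximately standard normal trait. A short sketch using the common Blom offset (the offset constant and the gamma-distributed example trait are our choices; the workshop data are not reproduced here):

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Rank-based inverse normal (Blom) transform of a skewed trait."""
    r = rankdata(x)                       # ranks 1..n, ties averaged
    n = len(x)
    return norm.ppf((r - c) / (n - 2 * c + 1))

rng = np.random.default_rng(1)
gamma_trait = rng.gamma(shape=1.0, scale=2.0, size=1000)  # strongly skewed trait
z = rank_inverse_normal(gamma_trait)
print(z.mean(), z.std())                  # approximately 0 and 1
```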

  8. A generalized statistical model for the size distribution of wealth

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2012-01-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)

  9. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.
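
    The closed-form expressions of the κ-generalized model are built on the κ-exponential function, which is exponential (Weibull-like) for small arguments and develops a power-law tail controlled by κ. A brief numerical sketch of the survival function (all parameter values are illustrative, not the fitted US wealth parameters):

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def survival_kappa_generalized(x, alpha, beta, kappa):
    """P(X > x) = exp_kappa(-beta * x**alpha): Weibull-like for small x,
    power law ~ x**(-alpha/kappa) in the tail."""
    return exp_kappa(-beta * np.asarray(x, dtype=float) ** alpha, kappa)

x = np.array([0.5, 1.0, 5.0, 50.0])
print(survival_kappa_generalized(x, alpha=2.0, beta=1.0, kappa=0.7))
```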

  10. Electric Power Distribution System Model Simplification Using Segment Substitution

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2018-05-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  11. Electric Power Distribution System Model Simplification Using Segment Substitution

    International Nuclear Information System (INIS)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2017-01-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  12. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
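
    As a toy version of the favored mechanism (attention re-weighting V1-to-MT connections inside a divisive normalization), consider a single model MT unit. The rates, weights, and semisaturation constant below are invented for illustration and are not the authors' fitted model:

```python
import numpy as np

def mt_response(v1_rates, weights, sigma=1.0):
    """Toy divisive normalization: a weighted V1 drive to one MT unit,
    divided by the summed (untuned) V1 activity plus a semisaturation constant."""
    drive = weights @ v1_rates
    return drive / (sigma + v1_rates.sum())

v1 = np.array([10.0, 20.0, 5.0])            # hypothetical V1 firing rates
w_unattended = np.array([0.2, 0.5, 0.3])
w_attended   = np.array([0.1, 0.8, 0.1])    # attention re-weights V1 -> MT inputs
print(mt_response(v1, w_unattended), mt_response(v1, w_attended))
```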

  13. Robustness of a Distributed Knowledge Management Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Kühn; Larsen, Michael Holm

    1999-01-01

    Knowledge management based on symmetric incentives is rarely found in literature. A knowledge exchange model relies upon a double loop knowledge conversion with symmetric incentives in a network. The model merges specific knowledge with knowledge from other actors into a decision support system...

  14. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers for the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values <1000 mL/g. (2) Set the range to an order of magnitude for Ksp values <10⁻⁶ M and to a factor of two for Ksp values >10⁻⁶ M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from on-going site-specific field/laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007
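
    A sketch of how a stochastic transport model might draw Kd inputs under this guidance. Interpreting each recommended range as the spread between the ±2σ limits, and clipping normal draws at zero, are our assumptions, not the report's:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_kd(best_estimate, size, rng):
    """Draw Kd values (mL/g) following the interim guidance above.

    Kd > 1000 mL/g -> log-normal, limits spanning an order of magnitude;
    Kd < 1000 mL/g -> normal, limits spanning a factor of two.
    Reading each range as the +/-2-sigma interval is our assumption.
    """
    if best_estimate > 1000.0:
        sigma = np.log(10.0) / 4.0   # exp(4*sigma) = 10: one decade across +/-2 sigma
        return rng.lognormal(np.log(best_estimate), sigma, size)
    sigma = best_estimate / 6.0      # (mu+2s)/(mu-2s) = 2: factor of two across +/-2 sigma
    return np.clip(rng.normal(best_estimate, sigma, size), 0.0, None)

print(np.percentile(sample_kd(5000.0, 100_000, rng), [2.5, 50.0, 97.5]))
print(np.percentile(sample_kd(200.0, 100_000, rng), [2.5, 50.0, 97.5]))
```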

  15. Experimental Modeling of VHTR Plenum Flows during Normal Operation and Pressurized Conduction Cooldown

    Energy Technology Data Exchange (ETDEWEB)

    Glenn E McCreery; Keith G Condie

    2006-09-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are to (1) provide benchmark data for assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) obtain a better understanding of related phenomena, behavior and needs. Physical models of VHTR vessel upper and lower plenums which use various working fluids to scale phenomena of interest are described. The models may be used both to simulate natural convection conditions during pressurized conduction cooldown and to study turbulent lower plenum flow during normal or reduced power operation.

  16. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

    Computer modeling of the thermo-mechanical behavior of coated particles during operation, at both normal and off-normal conditions, plays a significant role, particularly at the stage of development of new reactors. In Russia, considerable experience has been accumulated in the fabrication and reactor testing of coated particles (CP) and fuel elements with UO2 kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR, because of the very deep burn-up of the fuel based on plutonium oxide (up to 70% FIMA). Therefore, the mathematical modeling of CP thermal-mechanical behavior and failure prediction becomes particularly important. The authors clearly understand that the serviceability of fuel at high burn-ups is determined not only by thermo-mechanics, but also by structural changes in the coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. The report presents the first steps in the development of an integrated code for numerical modeling of coated-particle behavior, and some calculated results concerning the influence of various design parameters on the endurance of fuel coated particles under GT-MHR normal operating conditions. A failure model is developed to predict the failed fraction of TRISO-coated particles. In this model it is assumed that the failure of a CP depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as a uniform design. (author)

  17. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with the exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in a two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with empirically observed patterns of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of the damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of the damped oscillations is found to be slower for the model with the strong delay kernel.
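
    The Markovian embedding described above (membrane potential plus one auxiliary memory state) is straightforward to simulate. Below is a minimal Euler-Maruyama sketch of a leaky integrator with an exponential memory kernel and threshold-reset spiking; the equations and every parameter value are illustrative stand-ins, not the paper's exact SIDE:

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of a leaky integrator V with an exponential
# (weak-delay) memory of its own past, carried by the auxiliary state U so
# that the pair (V, U) is Markovian. All values are illustrative.
dt, T = 1e-4, 5.0
tau, eta = 0.02, 0.05           # membrane and memory-kernel time constants (s)
I, sigma, v_th = 3.0, 0.5, 0.1  # drive, noise intensity, firing threshold

v, u, t_last, isis = 0.0, 0.0, 0.0, []
for k in range(int(T / dt)):
    dw = rng.normal(0.0, np.sqrt(dt))
    v += (-v / tau + u + I) * dt + sigma * dw
    u += ((v - u) / eta) * dt   # exponential memory of the voltage
    if v >= v_th:               # threshold crossing: spike and reset
        t = k * dt
        isis.append(t - t_last)
        t_last, v = t, 0.0

isis = np.array(isis)
print("spikes: %d, mean ISI: %.4f s, CV: %.2f"
      % (len(isis), isis.mean(), isis.std() / isis.mean()))
```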

  18. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  19. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  20. Analysis of Jingdong Mall Logistics Distribution Model

    Science.gov (United States)

    Shao, Kang; Cheng, Feng

    In recent years, the development of electronic commerce in our country has accelerated. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performs a SWOT analysis of the current situation of its self-built logistics system, identifies the problems in Jingdong Mall's current logistics distribution, and gives appropriate recommendations.

  1. Tempered stable distributions: stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

    This brief is concerned with tempered stable distributions and their associated Levy processes. It is a good text for researchers interested in learning about tempered stable distributions.  A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.

  2. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  3. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.

  4. A multilayer electro-thermal model of pouch battery during normal discharge and internal short circuit process

    International Nuclear Information System (INIS)

    Chen, Mingbiao; Bai, Fanfei; Song, Wenji; Lv, Jie; Lin, Shili

    2017-01-01

    Highlights: • A 2D network equivalent circuit considers the interplay of cell units. • The temperature non-uniformity Φ of the multilayer model is bigger than that of the lumped model. • The temperature non-uniformity is quantified and the reason for the non-uniformity is analyzed. • Increasing the thermal conductivity of the separator can effectively relieve the hot-spot effect of ISC. - Abstract: As the electrical and thermal characteristics will affect a battery's safety, performance, calendar life and capacity fading, an electro-thermal coupled model of a LiFePO4/C pouch battery is developed for the normal discharge and internal short circuit (ISC) processes. The battery is discretized into many cell elements which are united as a 2D network equivalent circuit. The electro-thermal model is solved with a finite difference method. The non-uniformity of the current and temperature distributions is simulated and the result is validated with experimental data at various discharge rates. Comparison of the lumped model and the multilayer structure model shows that the temperature non-uniformity Φ of the multilayer model is bigger than that of the lumped model, and that the multilayer model is more precise. The temperature non-uniformity is quantified and the reason for the non-uniformity is analyzed. The electro-thermal model can also be used to guide the safety design of the battery. The temperature of the ISC element near the tabs is the highest because the equivalent resistance of the external circuit (not including the ISC element) is the smallest when the resistance of the cell units is small. It is found that increasing the thermal conductivity of the integrated layer can effectively relieve the hot-spot effect of ISC.

  5. Species Distribution modeling as a tool to unravel determinants of palm distribution in Thailand

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Balslev, Henrik

    2011-01-01

    As a consequence of the decimation of the forest cover in Thailand from 50% to ca. 20% since the 1950s, it is difficult to gain insight into the drivers behind past, present and future distribution ranges of plant species. Species distribution modeling allows visualization of potential species distribution under specific sets of assumptions. In this study we used maximum entropy to map potential distributions of 103 species of palms for which more than 5 herbarium records exist. Palms constitute a keystone plant group from ecological, economic and conservation perspectives. The models were evaluated using ... and the Area Under the Curve (AUC). All models performed well with AUC scores above 0.95. The predicted distribution ranges showed high suitability for palms in the southern region of Thailand. The models also show that spatial predictor variables are important in cases where historical processes may explain extant...

  6. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  7. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    Science.gov (United States)

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance being almost Zipf's-law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of the advancement in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity to invoke an instant increase in
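
    The curvature that finite sampling imposes on an ideal power law is easy to reproduce: draw a finite sample from a pure Zipf law and compare the local log-log slope of the empirical CCDF in the head and in the tail. The sample size, exponent, and fitting windows below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# A finite sample from an ideal Zipf law (P(X = k) ~ k**-2, so the CCDF has
# asymptotic log-log slope -1) already shows a drifting local exponent.
x = rng.zipf(a=2.0, size=5000)
vals, counts = np.unique(x, return_counts=True)
ccdf = 1.0 - np.cumsum(counts) / counts.sum()      # P(X > v)
keep = ccdf > 0
lx, ly = np.log(vals[keep]), np.log(ccdf[keep])

def local_slope(lo, hi):
    """Least-squares log-log slope of the empirical CCDF on [lo, hi]."""
    sel = (lx >= np.log(lo)) & (lx <= np.log(hi))
    return np.polyfit(lx[sel], ly[sel], 1)[0]

print("head slope:", local_slope(1, 10))           # near the asymptotic -1
print("tail slope:", local_slope(50, vals.max()))  # distorted by finite size
```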

  8. Building a generalized distributed system model

    Science.gov (United States)

    Mukkamala, R.

    1993-01-01

    The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test - concurrency control algorithms, interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems - especially we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.

  9. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  10. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
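
    A minimal sketch of the recommended approach, an L1-penalized (LASSO-type) logistic NTCP model scored by repeated cross-validation, run on synthetic data, since the head-and-neck dataset is not public; the feature counts, penalty strength, and outcome simulation are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-in for a dosimetric/clinical feature matrix and a binary
# complication outcome (only the first three features are truly predictive).
n, p = 200, 20
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L1-penalized logistic regression scored by repeated cross-validation,
# mirroring the evaluation scheme described in the abstract.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
print("nonzero coefficients:", np.flatnonzero(model.fit(X, y).coef_))
```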

  12. MRI ductography of contrast agent distribution and leakage in normal mouse mammary ducts and ducts with in situ cancer.

    Science.gov (United States)

    Markiewicz, Erica; Fan, Xiaobing; Mustafi, Devkumar; Zamora, Marta; Conzen, Suzanne D; Karczmar, Gregory S

    2017-07-01

    High-resolution 3D MRI was used to study contrast agent distribution and leakage in normal mouse mammary glands and glands containing in situ cancer after intra-ductal injection. Five female FVB/N mice (~19 weeks old) with no detectable mammary cancer and eight C3(1) SV40 Tag virgin female mice (~15 weeks old) with extensive in situ cancer were studied. A 34G, 45° tip Hamilton needle with a 25 μL Hamilton syringe was inserted into the tip of the nipple and approximately 15 μL of gadodiamide was injected slowly over 1 min into the nipple and throughout the duct on one side of the inguinal gland. Following injection, the mouse was placed in a 9.4 T MRI scanner, and a series of high-resolution 3D T1-weighted images was acquired with a temporal resolution of 9.1 min to follow contrast agent leakage from the ducts. The first image was acquired at about 12 min after injection. The region of ductal enhancement detected in images acquired between 12 and 21 min after contrast agent injection was five times smaller in SV40 mouse mammary ducts than in FVB/N ducts, consistent with rapid leakage of contrast agent from the SV40 ducts. The contrast agent washout rate measured between 12 min and 90 min after injection was ~20% faster (p<0.004) in SV40 mammary ducts than in FVB/N mammary ducts. These results may be due to higher permeability of the SV40 ducts, likely due to the presence of in situ cancers. Therefore, increased permeability of ducts may indicate early stage breast cancers. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    Full Text Available In cities' policy-making, grasping the determinants of carbon dioxide emissions in Chinese cities is a hot issue. The common method is to use the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emissions. However, less work discusses estimation accuracy, especially in the framework of non-normal distributions and heterogeneity among cities' emissions. To improve the estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of Asymmetric Laplace distributions (ALDs) to approximate the true distribution of the error term. Meanwhile, a designed two-layer EM algorithm is used to obtain estimators. We test the robustness via the comparison of five different models. We find that the ALD mixture model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.

  14. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant boundaries, with the drift rate varying across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation, as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.

  15. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits an a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  16. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  17. Normal bone and soft tissue distribution of fluorine-18-sodium fluoride and artifacts on 18F-NaF PET/CT bone scan: a pictorial review.

    Science.gov (United States)

    Sarikaya, Ismet; Elgazzar, Abdelhamid H; Sarikaya, Ali; Alfeeli, Mahmoud

    2017-10-01

    Fluorine-18-sodium fluoride (18F-NaF) PET/CT is a relatively new and high-resolution bone imaging modality. Since the use of 18F-NaF PET/CT has been increasing, it is important to accurately assess the images and be aware of normal distribution and major artifacts. In this pictorial review article, we will describe the normal uptake patterns of 18F-NaF in the bone tissues, particularly in complex structures, as well as its physiologic soft tissue distribution and certain artifacts seen on 18F-NaF PET/CT images.

  18. A numerical insight into elastomer normally closed micro valve actuation with cohesive interfacial cracking modelling

    Science.gov (United States)

    Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi

    2018-05-01

    Pneumatic NC (normally closed) valves are widely used in high-density microfluidics systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we utilize 3D FEM (finite element method) modelling to get an insight into the valve actuation process numerically. Specifically, the progressive debonding process at the elastomer interface is simulated with the CZM (cohesive zone model) method. To minimize the actuation pressure, the V-shape design has been investigated and compared with a normal straight design. The geometrical effects of valve shape have been elaborated in terms of valve actuation pressure. Based on our simulated results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.

  19. Nitroglycerin provocation in normal subjects is not a useful human migraine model?

    DEFF Research Database (Denmark)

    Tvedskov, J F; Iversen, Helle Klingenberg; Olesen, J

    2010-01-01

    Provoking delayed migraine with nitroglycerin in migraine sufferers is a cumbersome model. Patients are difficult to recruit, migraine comes on late and variably and only 50-80% of patients develop an attack. A model using normal volunteers would be much more useful, but it should be validated...... We gave aspirin 1000 mg, zolmitriptan 5 mg or placebo to normal healthy volunteers. The design was double-blind, placebo-controlled three-way crossover. Our hypothesis was that these drugs would be effective in the treatment of the mild constant headache induced by long-lasting GTN infusion. The headaches did...... The experiment suggests that headache caused by direct nitric oxide (NO) action in the continued presence of NO is very resistant to analgesics and to specific acute migraine treatments. This suggests that NO works very deep in the cascade of events associated with vascular headache, whereas tested drugs work

  20. Spreadsheet Modeling of Electron Distributions in Solids

    Science.gov (United States)

    Glassy, Wingfield V.

    2006-01-01

    A series of spreadsheet modeling exercises constructed as part of a new upper-level elective course on solid state materials and surface chemistry is described. The spreadsheet exercises are developed to provide students with the opportunity to interact with the conceptual framework where the role of the density of states and the Fermi-Dirac…

  1. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
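
    To make the price-based coordination concrete, here is a minimal dual-decomposition sketch for two subsystems sharing a single resource limit. Quadratic local costs stand in for the subsystems' MPC problems, and all numbers are illustrative:

```python
import numpy as np

# Dual decomposition for two subsystems coupled by a shared resource limit:
#   minimize f1(u1) + f2(u2)  subject to  u1 + u2 <= U_max.
# A central coordinator adjusts a price (the dual variable); each subsystem
# then solves its own problem given that price. Quadratic costs keep the
# local minimizers closed-form; a real DMPC would solve local MPC problems.
def local_min(a, b, lam):
    # argmin_u  a*(u - b)**2 + lam*u  ->  u = b - lam/(2a), clipped at 0
    return max(b - lam / (2 * a), 0.0)

U_max, lam, step = 1.0, 0.0, 0.5
for k in range(200):
    u1 = local_min(1.0, 0.8, lam)   # subsystem 1 (parameters illustrative)
    u2 = local_min(2.0, 0.7, lam)   # subsystem 2
    # Subgradient price update with a diminishing step size.
    lam = max(lam + step / np.sqrt(k + 1) * (u1 + u2 - U_max), 0.0)

print("u1=%.3f u2=%.3f total=%.3f price=%.3f" % (u1, u2, u1 + u2, lam))
```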

  2. Modeling wind speed and wind power distributions in Rwanda

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Bonfils [Department of Physics, National University of Rwanda, P.O. Box 117, Huye District, South Province (Rwanda)

    2011-02-15

    Utilization of wind energy as an alternative energy source may offer many environmental and economic advantages compared to fossil-fuel-based energy sources, which pollute the lower layers of the atmosphere. Wind energy, like other forms of alternative energy, may offer the promise of meeting energy demand in direct, grid-connected modes as well as in stand-alone and remote applications. Wind speed is the most significant parameter of wind energy. Hence, an accurate determination of the probability distribution of wind speed values is very important in estimating the wind energy potential over a region. In the present study, parameters of five probability density distribution functions, namely Weibull, Rayleigh, lognormal, normal and gamma, were calculated from long-term hourly observed data at four meteorological stations in Rwanda for the period of the year with fairly useful wind energy potential (monthly hourly mean wind speed v̄ ≥ 2 m s⁻¹). In order to select well-fitting probability density distribution functions, graphical comparisons to the empirical distributions were made. In addition, RMSE and MBE were computed for each distribution and the magnitudes of the errors were compared. Residuals of the theoretical distributions were analyzed graphically. Finally, a selection of the three distributions best fitting the empirical distribution of measured wind speed data was performed with the aid of a χ² goodness-of-fit test for each station. (author)
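
    A compact sketch of the fit-then-χ² workflow using SciPy on synthetic wind speeds; the Weibull parameters, sample size, and bin count are invented, and the same pattern applies to the other four candidate distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic hourly wind speeds standing in for station data
# (Weibull, shape k = 2, scale c = 6 m/s).
v = rng.weibull(2.0, size=5000) * 6.0

shape, loc, scale = stats.weibull_min.fit(v, floc=0.0)  # fix location at zero
print("fitted Weibull: k = %.2f, c = %.2f m/s" % (shape, scale))

# Approximate chi-square goodness-of-fit test on binned observations.
obs, edges = np.histogram(v, bins=15)
expected = np.diff(stats.weibull_min.cdf(edges, shape, loc=0.0, scale=scale)) * v.size
chi2 = np.sum((obs - expected) ** 2 / expected)
dof = obs.size - 1 - 2                                  # bins - 1 - fitted parameters
print("chi2 = %.1f on %d dof, p = %.3f" % (chi2, dof, stats.chi2.sf(chi2, dof)))
```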

  3. Normal Mode Derived Models of the Physical Properties of Earth's Outer Core

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.; Wu, W.

    2017-12-01

    Earth's outer core, the largest reservoir of metal in our planet, is comprised of an iron alloy of an uncertain composition. Its dynamical behaviour is responsible for the generation of Earth's magnetic field, with convection driven both by thermal and chemical buoyancy fluxes. Existing models of the seismic velocity and density of the outer core exhibit some variation, and there are only a small number of models which aim to represent the outer core's density. It is therefore important that we develop a better understanding of the physical properties of the outer core. Though most of the outer core is likely to be well mixed, it is possible that the uppermost outer core is stably stratified: it may be enriched in light elements released during the growth of the solid, iron enriched, inner core; by elements dissolved from the mantle into the outer core; or by exsolution of compounds previously dissolved in the liquid metal which will eventually be swept into the mantle. The stratified layer may host MAC or Rossby waves and it could impede communication between the chemically differentiated mantle and outer core, including screening out some of the geodynamo's signal. We use normal mode center frequencies to estimate the physical properties of the outer core in a Bayesian framework. We estimate the mineral physical parameters needed to best produce velocity and density models of the outer core which are consistent with the normal mode observations. We require that our models satisfy realistic physical constraints. We create models of the outer core with and without a distinct uppermost layer and assess the importance of this region. Our normal mode-derived models are compared with observations of body waves which travel through the outer core. In particular, we consider SmKS waves which are especially sensitive to the uppermost outer core and are therefore an important way to understand the robustness of our models.

  4. Pseudo SU(3) shell model: Normal parity bands in odd-mass nuclei

    International Nuclear Information System (INIS)

    Vargas, C.E.; Hirsch, J.G.; Draayer, J.P.

    2000-01-01

    A pseudo-SU(3) shell-model description of normal parity bands in ¹⁵⁹Tb is presented. The Hamiltonian includes spherical Nilsson single-particle energies, the quadrupole-quadrupole and pairing interactions, as well as three rotor terms. A systematic parametrization is introduced, accompanied by a detailed discussion of the effect each term in the Hamiltonian has on the energy spectrum. Yrast and excited band wavefunctions are analyzed together with their B(E2) values.

  5. Modeling the effect of preexisting joints on normal fault geometries using a brittle and cohesive material

    Science.gov (United States)

    Kettermann, M.; van Gent, H. W.; Urai, J. L.

    2012-04-01

    , stereo-photography at the final stage of deformation enabled the creation of 3D models to preserve basic geometric information. The models showed that at the surface the deformation always localized along preexisting joints, even when they strike at an angle to the basement fault. In most cases faults intersected precisely at the maximum depth of the joints. With increasing fault-joint angle the deformation was distributed over several joints, forming stepovers with fractures oriented normal to the strike of the joints. No fractures were observed parallel to the basement fault. At low angles, stepovers coincided with wedge-shaped structures between two joints that remain higher than the surrounding joint-fault intersection. The wide opening gap along the main fault allowed detailed observations of the fault planes at depth, which revealed (1) changing dips according to joint-fault angles, (2) slickenlines, and (3) superimposed steepening fault planes, causing sharp sawtooth-shaped structures. Comparison to a field analogue at Canyonlands National Park, Utah/USA showed similar structures and features, such as vertical fault escarpments at the surface coinciding with joint surfaces. In the field and in the models, stepovers were observed, as well as conjugate faulting and incremental fault steepening.

  6. Charge distribution in an two-chain dual model

    International Nuclear Information System (INIS)

    Fialkowski, K.; Kotanski, A.

    1983-01-01

    Charge distributions in multiple production processes are analysed using the dual chain model. A parametrisation of charge distributions for single dual chains, based on the νp and ν̄p data, is proposed. The rapidity charge distributions are then calculated for pp and p̄p collisions and compared with previous calculations based on the recursive cascade model of single chains. The results differ at SPS collider energies and in the energy dependence of the net forward charge, supplying useful tests of the dual chain model. (orig.)

  7. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  8. Publicly available models to predict normal boiling point of organic compounds

    International Nuclear Information System (INIS)

    Oprisiu, Ioana; Marcou, Gilles; Horvath, Dragos; Brunel, Damien Bernard; Rivollet, Fabien; Varnek, Alexandre

    2013-01-01

    Quantitative structure–property models to predict the normal boiling point (Tb) of organic compounds were developed using non-linear ASNNs (associative neural networks) as well as multiple linear regression – ISIDA-MLR and SQS (stochastic QSAR sampler). Models were built on a diverse set of 2098 organic compounds with Tb varying in the range of 185–491 K. In ISIDA-MLR and ASNN calculations, fragment descriptors were used, whereas fragment, FPT (fuzzy pharmacophore triplet), and ChemAxon descriptors were employed in SQS models. Prediction quality of the models has been assessed in 5-fold cross-validation. Obtained models were implemented in the on-line ISIDA predictor at (http://infochim.u-strasbg.fr/webserv/VSEngine.html)

  9. Cost allocation model for distribution networks considering high penetration of distributed energy resources

    DEFF Research Database (Denmark)

    Soares, Tiago; Pereira, Fábio; Morais, Hugo

    2015-01-01

    The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted for and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, which is known as vehicle-to-grid (V2G).

  10. Rationalisation of distribution functions for models of nanoparticle magnetism

    International Nuclear Information System (INIS)

    El-Hilo, M.; Chantrell, R.W.

    2012-01-01

    A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from a misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
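
    The flavour of such a transformation can be written down explicitly. As a minimal sketch (the textbook Hatch-Choate relation for a lognormal number distribution with median diameter D_n and log-width sigma; the paper's exact expressions may differ):

        \[
        f(D) = \frac{1}{\sqrt{2\pi}\,\sigma D}\,
               \exp\!\left[-\frac{\ln^2(D/D_n)}{2\sigma^2}\right],
        \qquad
        f_V(D) \propto D^3 f(D),
        \]

    and completing the square in \(\ln D\) shows that \(f_V\) is again lognormal with the same \(\sigma\) but a shifted median, \(D_V = D_n e^{3\sigma^2}\); inverting this shift is what reconciles number-weighted and volume-weighted fits.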

  11. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, Andrey; Dall' Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
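
    The fixed-point interpretation can be illustrated in a few lines. A minimal single-phase Python sketch (the two-bus network and all numerical values are invented; the paper's multiphase wye/delta treatment is omitted):

        # Power flow in fixed-point form, V = w + Z conj(S / V); the linear
        # model is one fixed-point step evaluated at the no-load profile w.
        import numpy as np

        w = np.array([1.00 + 0.0j, 1.00 + 0.0j])      # no-load voltages (p.u.)
        Z = np.array([[0.02 + 0.04j, 0.01 + 0.02j],   # reduced bus-impedance matrix
                      [0.01 + 0.02j, 0.03 + 0.06j]])
        s = np.array([-0.3 - 0.1j, -0.5 - 0.2j])      # net injections (loads negative)

        V = w.copy()
        for _ in range(30):                # exact solution by fixed-point iteration
            V = w + Z @ np.conj(s / V)

        V_lin = w + Z @ np.conj(s / w)     # linear model: one step from no load
        print(np.abs(V), np.abs(V_lin))    # linear magnitudes track the exact ones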

  12. Distributed modelling of shallow landslides triggered by intense rainfall

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Hazard assessment of shallow landslides represents an important aspect of land management in mountainous areas. Among all the methods proposed in the literature, physically based methods are the only ones that explicitly include the dynamic factors that control landslide triggering (rainfall pattern, land-use). For this reason, they allow forecasting both the temporal and the spatial distribution of shallow landslides. Physically based methods for shallow landslides are based on the coupling of the infinite slope stability analysis with hydrological models. Three different grid-based distributed hydrological models are presented in this paper: a steady-state model, a transient "piston-flow" wetting front model, and a transient diffusive model. A comparative test of these models was performed by simulating the landslides triggered by a rainfall event (27–28 June 1997) that caused hundreds of shallow landslides within Lecco province (central Southern Alps, Italy). In order to test the potential of a completely distributed model for rainfall-triggered landslides, radar-detected rainfall intensity was used. A new procedure for the quantitative evaluation of distributed model performance is presented and used in this paper. The diffusive model proved to be the best at simulating shallow landslide triggering for a rainfall event like the one analysed. Finally, the radar data available for the June 1997 event permitted a great improvement of the simulation; in particular, the radar data made it possible to explain the non-uniform distribution of landslides within the study area.
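
    The stability half of such coupled models is typically the classical infinite-slope factor of safety. In one standard form (not necessarily the exact variant used here), for slope angle beta, soil depth z, water table height h above the failure plane, effective cohesion c', friction angle phi', and unit weights gamma and gamma_w:

        \[
        FS = \frac{c' + (\gamma z - \gamma_w h)\cos^2\!\beta\,\tan\varphi'}
                  {\gamma z \sin\beta\,\cos\beta},
        \]

    where the hydrological model (steady-state, piston-flow, or diffusive) supplies h(t) in each grid cell, and failure is predicted wherever FS falls below 1.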

  13. Distributed MAP in the SpinJa Model Checker

    Directory of Open Access Journals (Sweden)

    Stefan Vijzelaar

    2011-10-01

    Full Text Available Spin in Java (SpinJa) is an explicit-state model checker for the Promela modelling language, which is also used by the SPIN model checker. Designed to be extensible and reusable, the implementation of SpinJa follows a layered approach in which each new layer extends the functionality of the previous one. While SpinJa has preliminary support for shared-memory model checking, it did not previously support distributed-memory model checking. This tool paper presents a distributed implementation of a maximal accepting predecessors (MAP) search algorithm on top of SpinJa.

  14. Hydrodynamic Cucker-Smale model with normalized communication weights and time delay

    KAUST Repository

    Choi, Young-Pil

    2017-07-17

    We study a hydrodynamic Cucker-Smale-type model with time delay in communication and information processing, in which agents interact with each other through normalized communication weights. The model consists of a pressureless Euler system with time delayed non-local alignment forces. We resort to its Lagrangian formulation and prove the existence of its global in time classical solutions. Moreover, we derive a sufficient condition for the asymptotic flocking behavior of the solutions. Finally, we show the presence of a critical phenomenon for the Eulerian system posed in the spatially one-dimensional setting.
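
    As a sketch of the kind of system meant here (notation assumed from the abstract: density rho, velocity u, communication kernel psi, delay tau; the paper's precise formulation may differ):

        \[
        \partial_t \rho + \nabla_x \cdot (\rho u) = 0, \qquad
        \partial_t u + (u \cdot \nabla_x) u =
        \frac{\int \psi(x-y)\,\rho(y,\,t-\tau)\,\big(u(y,\,t-\tau) - u(x,\,t)\big)\,dy}
             {\int \psi(x-y)\,\rho(y,\,t-\tau)\,dy},
        \]

    with the normalization in the denominator being what makes the communication weights "normalized".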

  15. Modeling and analysis of solar distributed generation

    Science.gov (United States)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and reserves are diminishing every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons why many countries are looking to alternative energy sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer's data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracking (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on an approximation of the proposed PVM model using fractional polynomials, in which the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are the monitoring of climate conditions, the elimination of temperature and solar irradiance sensors, cost reductions for a photovoltaic inverter system, and the development of new algorithms to be integrated with maximum power point tracking.
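
    A hedged Python sketch of a datasheet-driven PVM model of this general kind: the exponential I-V form and the shape parameter b below are assumptions for illustration, not necessarily the thesis model, and the datasheet numbers are invented:

        # Exponential I-V curve built from datasheet-style values (short-circuit
        # current ix, open-circuit voltage vx) plus a shape parameter b.
        import numpy as np

        def iv_current(v, ix=5.0, vx=21.0, b=0.08):
            """Module current (A) at voltage v (V); ix, vx, b are placeholders."""
            return ix * (1.0 - np.exp(v / (b * vx) - 1.0 / b)) / (1.0 - np.exp(-1.0 / b))

        # Brute-force maximum power point, standing in for the analytic MPPT schemes.
        v = np.linspace(0.0, 21.0, 2001)
        p = v * iv_current(v)
        k = np.argmax(p)
        print(f"MPP ~ {p[k]:.1f} W at {v[k]:.1f} V")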

  16. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
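
    The component-based construction behind both models is easy to illustrate by simulation. A Python sketch of the Rice-type case (correlated, isotropic Gaussian wind components at two sites; all parameters invented), to which a power transform of the zero-mean case would add Weibull-type margins:

        # Wind speeds at two sites as moduli of correlated Gaussian components.
        import numpy as np

        rng = np.random.default_rng(1)
        n, r = 100_000, 0.7                    # sample size, component correlation
        mean = [3.0, 0.0, 2.0, 0.0]            # means of (u1, v1, u2, v2)
        cov = np.array([[1, 0, r, 0],
                        [0, 1, 0, r],          # isotropic within-site variability,
                        [r, 0, 1, 0],          # cross-site correlation r per component
                        [0, r, 0, 1]], float)
        u1, v1, u2, v2 = rng.multivariate_normal(mean, cov, size=n).T
        w1, w2 = np.hypot(u1, v1), np.hypot(u2, v2)   # speeds at the two sites
        print(np.corrcoef(w1, w2)[0, 1])       # induced speed-speed dependence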

  17. Model Checking Geographically Distributed Interlocking Systems Using UMC

    DEFF Research Database (Denmark)

    Fantechi, Alessandro; Haxthausen, Anne Elisabeth; Nielsen, Michel Bøje Randahl

    2017-01-01

    the relevant distributed protocols. By doing that we obey the safety guidelines of the railway signalling domain, that require formal methods to support the certification of such products. We also show how formal modelling can help designing alternative distributed solutions, while maintaining adherence...

  18. Modelling aspects of distributed processing in telecommunication networks

    NARCIS (Netherlands)

    Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW

    1998-01-01

    The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in

  19. Income Distribution Over Educational Levels: A Simple Model.

    Science.gov (United States)

    Tinbergen, Jan

    An econometric model is formulated that explains income per person in various compartments of the labor market defined by three main levels of education and by education required. The model enables an estimation of the effect of increased access to education on that distribution. The model is based on a production function for the economy as a whole; a…

  20. Distributed Generation Market Demand Model (dGen): Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Sigrin, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Preus, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-02-01

    The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds on, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform -- it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen also improve on SolarDS by refining the representation of customer decision making and the spatial resolution of analyses (Figure ES-1).

  1. Development of vortex model with realistic axial velocity distribution

    International Nuclear Information System (INIS)

    Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki

    2014-01-01

    A vortex is considered one of the significant phenomena which may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was assumed to be approximated by the well-known Burgers vortex model. However, the Burgers vortex model rests on the simple but unrealistic assumption that the axial velocity component is horizontally constant, whereas a real free-surface vortex has an axial velocity distribution with a large radial gradient near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as the Burgers vortex model is, but a realistic radial distribution of the axial velocity is considered, defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free-surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the calculation accuracy of the Burgers vortex model can be brought close to that of the new vortex model by considering an effective axial velocity, calculated as the average value only in the vicinity of the vortex center. (author)
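
    For context, the classical Burgers vortex invoked here is the exact steady solution (in cylindrical coordinates, with strain rate a, circulation Gamma, and kinematic viscosity nu)

        \[
        u_r = -a r, \qquad u_z = 2 a z, \qquad
        u_\theta(r) = \frac{\Gamma}{2\pi r}
        \left[1 - \exp\!\left(-\frac{a r^2}{2\nu}\right)\right],
        \]

    in which the axial component u_z is independent of r; that horizontal uniformity is precisely the assumption the new model relaxes.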

  2. A probit/log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by the occurrence of "excess" zeros, more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
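
    The two-part idea is easy to sketch by simulation. A Python illustration (all parameters invented, not DAVIS estimates): a binary component decides whether a weekly score is zero, and a log-skew-normal component models the positive scores:

        # Simulate zero-inflated, right-skewed weekly scores, then fit in two parts.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n, p_zero = 5000, 0.7                  # ~70% zeros, as reported for DAVIS
        zero = rng.random(n) < p_zero
        pos = np.exp(stats.skewnorm.rvs(a=3.0, loc=-0.5, scale=0.8,
                                        size=n, random_state=rng))
        score = np.where(zero, 0.0, pos)

        p_hat = np.mean(score == 0)            # part 1: probability of a zero
        a_hat, loc_hat, scale_hat = stats.skewnorm.fit(np.log(score[score > 0]))
        print(p_hat, a_hat, loc_hat, scale_hat)  # part 2: skew-normal on log scale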

  3. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process...

  4. Pharmacokinetic-pharmacodynamic modeling of diclofenac in normal and Freund's complete adjuvant-induced arthritic rats

    Science.gov (United States)

    Zhang, Jing; Li, Pei; Guo, Hai-fang; Liu, Li; Liu, Xiao-dong

    2012-01-01

    Aim: To characterize pharmacokinetic-pharmacodynamic modeling of diclofenac in Freund's complete adjuvant (FCA)-induced arthritic rats using prostaglandin E2 (PGE2) as a biomarker. Methods: The pharmacokinetics of diclofenac was investigated using 20-day-old arthritic rats. The PGE2 level in the rats was measured using an enzyme immunoassay. A pharmacokinetic-pharmacodynamic (PK-PD) model was developed to illustrate the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production. The inhibition of diclofenac on lipopolysaccharide (LPS)-induced PGE2 production in blood cells was investigated in vitro. Results: Similar pharmacokinetic behavior of diclofenac was found in both normal and FCA-induced arthritic rats. Diclofenac significantly decreased the plasma levels of PGE2 in both normal and arthritic rats. The inhibitory effect on PGE2 levels in the plasma was proportional to the plasma concentration of diclofenac. No delay in the onset of inhibition was observed, suggesting that the effect compartment was located in the central compartment. A sigmoid Imax inhibitory effect model was selected to characterize the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production in vivo. The Imax model was also used to illustrate the inhibition of diclofenac on LPS-induced PGE2 production in blood cells in vitro. Conclusion: Arthritis induced by FCA does not alter the pharmacokinetic behavior of diclofenac in rats, but the pharmacodynamics of diclofenac is slightly affected. A PK-PD model based on a sigmoid Imax inhibitory effect can be used to describe the relationship between plasma PGE2 and diclofenac levels in both normal rats and FCA-induced arthritic rats. PMID:22842736
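
    The sigmoid Imax model referred to has the standard PK-PD form (symbols assumed: baseline effect E0, maximal inhibition Imax, plasma concentration C, half-maximal concentration IC50, Hill coefficient gamma):

        \[
        E(C) = E_0 \left( 1 - \frac{I_{\max}\, C^{\gamma}}{IC_{50}^{\gamma} + C^{\gamma}} \right),
        \]

    so that the inhibition of PGE2 production rises smoothly from 0 toward Imax as the diclofenac concentration increases.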

  5. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  6. Modelling distributed energy resources in energy service networks

    CERN Document Server

    Acha, Salvador

    2013-01-01

    Focuses on modelling two key infrastructures (natural gas and electrical) in urban energy systems with embedded technologies (cogeneration and electric vehicles) to optimise the operation of natural gas and electrical infrastructures in the presence of distributed energy resources

  7. A phenomenological retention tank model using settling velocity distributions.

    Science.gov (United States)

    Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P

    2012-12-15

    Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the pollutants' behaviour in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study of the way the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991), which is first implemented in wastewater management modelling software and tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension caused by the pumping used to empty the tank. Finally, the potential of the improved model is demonstrated by comparing the results for one more rain event. Copyright © 2011 Elsevier Ltd. All rights reserved.
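
    Why the settling velocity distribution matters can be sketched with the classical ideal-settling (Hazen) argument: a particle class with settling velocity vs is removed with efficiency min(1, vs/vc), where vc = Q/A is the surface overflow rate. A toy Python illustration (velocity classes and fractions invented, not the paper's calibration):

        # Overall removal as a settling-velocity-weighted sum of class efficiencies.
        q_over_a = 2.0                                  # surface overflow rate (m/h)
        classes = [(0.5, 0.3), (2.0, 0.4), (8.0, 0.3)]  # (vs in m/h, mass fraction)
        removed = sum(f * min(1.0, vs / q_over_a) for vs, f in classes)
        print(f"overall removal: {removed:.0%}")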

  8. Modeling of Drift Effects on Solar Tower Concentrated Flux Distributions

    Directory of Open Access Journals (Sweden)

    Luis O. Lara-Cerecedo

    2016-01-01

    Full Text Available A novel modeling tool for the calculation of central receiver concentrated flux distributions is presented, which takes drift effects into account. This tool is based on a drift model that includes different geometrical error sources in a rigorous manner and on a simple analytic approximation for the individual flux distribution of a heliostat. The model is applied to a group of heliostats of a real field to obtain the resulting flux distribution and its variation over the day. The distributions differ strongly from those obtained assuming the ideal case without drift or a case with a Gaussian tracking error function. The time evolution of the peak flux is also calculated to demonstrate the capabilities of the model; the evolution of this parameter likewise shows strong differences in comparison to the case without drift.
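
    The "simple analytic approximation for the individual flux distribution of a heliostat" invites a toy illustration: treat each heliostat's image on the receiver as a Gaussian spot whose centre is displaced by the drift of its aim point, and superpose the spots. A Python sketch (all numbers invented):

        # Superposed Gaussian spots with randomly drifted aim points.
        import numpy as np

        x = y = np.linspace(-2.0, 2.0, 201)          # receiver coordinates (m)
        X, Y = np.meshgrid(x, y)
        rng = np.random.default_rng(4)
        aims = rng.normal(0.0, 0.3, size=(30, 2))    # drifted aim points, 30 heliostats
        sigma, power = 0.25, 1.0                     # spot width (m), power per spot
        flux = sum(power / (2 * np.pi * sigma**2)
                   * np.exp(-((X - ax)**2 + (Y - ay)**2) / (2 * sigma**2))
                   for ax, ay in aims)
        print(f"peak flux: {flux.max():.1f} (arbitrary units)")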

  9. A Framework for Modeling and Analyzing Complex Distributed Systems

    National Research Council Canada - National Science Library

    Lynch, Nancy A; Shvartsman, Alex Allister

    2005-01-01

    Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...

  10. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Science.gov (United States)

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

    Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well, but the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential

  11. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Directory of Open Access Journals (Sweden)

    Frieda Beauregard

    Full Text Available Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well, but the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study

  12. Mixed normal inference on multicointegration

    NARCIS (Netherlands)

    Boswijk, H.P.

    2009-01-01

    Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ2 null distribution. The asymptotic distribution of the

  13. Spin fluctuations in liquid 3He: a strong-coupling calculation of Tc and the normal-state distribution function

    International Nuclear Information System (INIS)

    Fay, D.; Layzer, A.

    1975-01-01

    The Berk–Schrieffer method of strong-coupling superconductivity for nearly ferromagnetic systems is generalized to arbitrary L-state pairing and realistic (hard-core) potentials. Application to 3He yields a P-state transition but very low values for Tc and an unsatisfactory normal-state momentum distribution.

  14. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    Science.gov (United States)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  15. Simplified analytical modeling of the normal hole erosion test; Modelado analitico simplificado del ensayo normal de erosion de tubo

    Energy Technology Data Exchange (ETDEWEB)

    Khamlichi, A.; Bezzazi, M.; El Bakkali, L.; Jabbouri, A.; Kissi, B.; Yakhlef, F.; Parron Vera, M. A.; Rubio Cintas, M. D.; Castillo Lopez, O.

    2009-07-01

    The hole erosion test was developed in order to study the erosion phenomenon which occurs in cracks appearing in hydraulic infrastructures such as dams. This test makes it possible to describe experimentally the erosive characteristics of soils by means of an index called the erosion rate and a critical stress which indicates the threshold for the initiation of surface erosion. The objective of this work is to give a model of this experiment by means of a simplified analytical approach. The erosion law is derived by taking the flow regime into account. This law shows that the erosion occurring in the tube is controlled by first-order dynamics in which only two parameters are involved: the characteristic time linked to the erosion rate and the shear stress threshold at which erosion begins to develop. (Author) 5 refs.
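
    A first-order law consistent with this description is the standard hole-erosion formulation (symbols assumed here, following Wan and Fell's index test): the rate of eroded mass per unit area grows linearly above the threshold,

        \[
        \dot{\varepsilon} = C_e\,(\tau - \tau_c) \quad (\tau > \tau_c),
        \qquad
        \tau(r) = \frac{\Delta P}{L}\,\frac{r}{2},
        \]

    so that under a constant pressure gradient the wall shear stress grows linearly with the hole radius r, and the radius obeys a first-order linear differential equation governed by exactly two parameters, the erosion-rate coefficient C_e and the critical stress tau_c.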

  16. A Complex Network Approach to Distributional Semantic Models.

    Directory of Open Access Journals (Sweden)

    Akira Utsumi

    Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows a truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
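
    A minimal Python sketch of how such a network can be built from a distributional model (random vectors stand in for a real semantic space; the paper's construction details may differ): link each word to its top-k most similar words and inspect the resulting degrees:

        # Build a k-nearest-neighbour word graph from (stand-in) word vectors.
        import numpy as np

        rng = np.random.default_rng(3)
        vecs = rng.normal(size=(500, 50))
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
        sim = vecs @ vecs.T                    # cosine similarities
        np.fill_diagonal(sim, -np.inf)         # no self-links
        k = 10
        adj = np.zeros(sim.shape, dtype=bool)
        rows = np.arange(len(vecs))[:, None]
        adj[rows, np.argsort(sim, axis=1)[:, -k:]] = True
        adj = adj | adj.T                      # undirected network
        degrees = adj.sum(axis=1)              # inspect for heavy-tailed behaviour
        print(degrees.min(), degrees.mean(), degrees.max())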

  17. A review of shear strength models for rock joints subjected to constant normal stiffness

    Directory of Open Access Journals (Sweden)

    Sivanathan Thirukumaran

    2016-06-01

    Full Text Available The typical shear behaviour of rough joints has been studied under constant normal load/stress (CNL) boundary conditions, but recent studies have shown that this boundary condition may not replicate true practical situations. Constant normal stiffness (CNS) is more appropriate for describing the stress–strain response of field joints, since the CNS boundary condition is more realistic than CNL. Practical situations involving CNS include the movement of unstable blocks in the roof or walls of an underground excavation, reinforced rock wedges sliding in a rock slope or foundation, and the vertical movement of rock-socketed concrete piles. In this paper, the highlights and limitations of the existing models used to predict the shear strength/behaviour of joints under CNS conditions are discussed in depth.

  18. Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.

    Science.gov (United States)

    Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B

    2010-03-01

    Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Dynamic modeling method of the bolted joint with uneven distribution of joint surface pressure

    Science.gov (United States)

    Li, Shichao; Gao, Hongli; Liu, Qi; Liu, Bokai

    2018-03-01

    The dynamic characteristics of bolted joints have a significant influence on the dynamic characteristics of a machine tool. Therefore, establishing a reasonable bolted-joint dynamics model helps improve the accuracy of the machine tool dynamics model. Because the pressure distribution on the joint surface is uneven under the concentrated forces of the bolts, a dynamic modeling method based on the uneven pressure distribution of the joint surface is presented in this paper to improve the dynamic modeling accuracy of the machine tool. Analytic formulas relating the normal and tangential stiffness per unit area to the surface pressure on the joint can be deduced from Hertz contact theory, and the pressure distribution on the joint surface can be obtained with finite element software. Furthermore, the normal and tangential stiffness distributions on the joint surface can be obtained from the analytic formulas and the pressure distribution, and assigned to the finite element model of the joint. The theoretical and experimental mode shapes were compared qualitatively, and the theoretical and experimental modal frequencies were compared quantitatively. The comparison shows that the relative error between the first four theoretical and the first four experimental modal frequencies is 0.2% to 4.2%. Besides, the first four theoretical mode shapes and the first four experimental mode shapes are similar and in one-to-one correspondence. Therefore, the validity of the theoretical model is verified. The dynamic modeling method proposed in this paper can provide a theoretical basis for the accurate dynamic modeling of bolted joints in machine tools.

  20. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of the receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...