WorldWideScience

Sample records for hurdle negative binomial

  1. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    Science.gov (United States)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

Hurdle negative binomial regression is a method for discrete dependent variables that exhibit excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable is zero, and the second part, a truncated negative binomial model, models the non-zero (positive integer) counts. The discrete dependent variable in such cases may be censored for some values; the type of censoring studied in this research is right censoring. This study aims to derive the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), and to obtain the corresponding test statistics. The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zeros for some observations and varying positive values for others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of infant health care coverage and neonatal visits.
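
The two-part structure described above can be sketched as a probability mass function: a point mass at zero plus a zero-truncated negative binomial for the positive counts. This is a minimal sketch with illustrative parameters, not the censored model of the paper:

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial pmf: k failures before the r-th success."""
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

def hurdle_nb_pmf(k, p0, r, p):
    """Hurdle NB: point mass p0 at zero; truncated NB, renormalized over k >= 1."""
    if k == 0:
        return p0
    return (1 - p0) * nb_pmf(k, r, p) / (1 - nb_pmf(0, r, p))

# Sanity check: the two parts together form a proper distribution.
total = sum(hurdle_nb_pmf(k, 0.4, 2, 0.3) for k in range(0, 200))
```

In a full regression model, p0 would come from a logit link and the NB mean from a log link on covariates; here both are fixed constants for illustration.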

  2. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

Accident count data have often been shown to exhibit overdispersion, and may also contain excess zeros. A simulation study was conducted to create scenarios in which accidents occur at a T-junction, with the dependent variable generated from a given distribution, namely the Poisson or negative binomial distribution, for sample sizes from n = 30 to n = 500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model validation shows that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially at larger sample sizes. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
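
A minimal version of such a simulation, using only the Python standard library, generates negative binomial counts as a gamma-mixed Poisson and checks for overdispersion; the parameter values here are illustrative, not those of the study:

```python
import math
import random
from statistics import fmean, variance

def poisson(lam, rng):
    # Knuth's multiplication method for Poisson sampling
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > limit:
        prod *= rng.random()
        k += 1
    return k - 1

def neg_binomial(r, p, rng):
    # NB(r, p) arises as a Poisson count with a gamma-distributed rate
    lam = rng.gammavariate(r, (1 - p) / p)
    return poisson(lam, rng)

rng = random.Random(42)
sample = [neg_binomial(2, 0.3, rng) for _ in range(5000)]
m, v = fmean(sample), variance(sample)  # theory: mean 4.67, variance 15.56
```

Fitting Poisson regression to such data would understate the variance, which is the motivation for the negative binomial and hurdle models compared in the study.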

  3. Negative binomial multiplicity distribution from binomial cluster production

    International Nuclear Information System (INIS)

    Iso, C.; Mori, K.

    1990-01-01

A two-step interpretation of the negative binomial multiplicity distribution is proposed, as a compound of binomial cluster production and a negative-binomial-like cluster decay distribution. In this model the average multiplicity of cluster production is expected to increase with increasing energy, unlike in a compound Poisson-logarithmic distribution. (orig.)

  4. Zero-truncated negative binomial - Erlang distribution

    Science.gov (United States)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to a real data set on methamphetamine from Bangkok, Thailand. The results show that the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
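
Zero truncation works the same way for any count kernel: the pmf is renormalized over k >= 1. A sketch with a plain negative binomial kernel (not the NB-Erlang mixture of the paper) verifies the standard truncated-mean identity E[Y | Y > 0] = mu / (1 - P(Y = 0)):

```python
from math import comb

def nb_pmf(k, r, p):
    # k failures before the r-th success
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

r, p = 2, 0.3
p0 = nb_pmf(0, r, p)

def zt_nb_pmf(k, r, p):
    # zero-truncated pmf: renormalize the NB pmf over k >= 1
    return nb_pmf(k, r, p) / (1 - p0)

mu = r * (1 - p) / p  # mean of the untruncated NB
zt_mean = sum(k * zt_nb_pmf(k, r, p) for k in range(1, 400))
```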

  5. Statistical inference involving binomial and negative binomial parameters.

    Science.gov (United States)

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  6. Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions

    Science.gov (United States)

    Wroughton, Jacqueline; Cole, Tarah

    2013-01-01

    Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…

  7. Stochastic background of negative binomial distribution

    International Nuclear Information System (INIS)

    Suzuki, N.; Biyajima, M.; Wilk, G.

    1991-01-01

A branching equation of the birth process with immigration is taken as a model for the particle production process. Using it, we investigate cases in which its solution becomes the negative binomial distribution. Furthermore, we compare our approach with the modified negative binomial distribution proposed recently by Chliapnikov and Tchikilev, and use it to analyse observed multiplicity distributions. (orig.)

  8. Multifractal structure of multiplicity distributions and negative binomials

    International Nuclear Information System (INIS)

    Malik, S.; Delhi, Univ.

    1997-01-01

The paper presents experimental results of a multifractal structure analysis of proton-emulsion interactions at 800 GeV. The multiplicity moments have a power-law dependence on the mean multiplicity in varying bin sizes of pseudorapidity. The values of the generalised dimensions are calculated from the slope values. The multifractal characteristics are also examined in the light of negative binomials. The observed multiplicity moments and those derived from the negative-binomial fits agree well with each other. Also, the values of D_q, both observed and derived from the negative-binomial fits, not only decrease with q, typifying multifractality, but also agree well with each other, showing consistency with the negative-binomial form

  9. Analysis of hypoglycemic events using negative binomial models.

    Science.gov (United States)

    Luo, Junxiang; Qu, Yongming

    2013-01-01

Negative binomial regression is a standard model for analyzing hypoglycemic events in diabetes clinical trials. Adjusting for baseline covariates could potentially increase the estimation efficiency of negative binomial regression. However, adjusting for covariates raises concerns about model misspecification, to which negative binomial regression is not robust because of its strong model assumptions. Some literature suggests correcting the standard error of the maximum likelihood estimator by introducing an overdispersion factor, which can be estimated by the deviance or Pearson chi-square. We propose conducting negative binomial regression using sandwich estimation to calculate the covariance matrix of the parameter estimates, together with the Pearson overdispersion correction (denoted NBSP). In this research, we compared several commonly used negative binomial model options with the proposed NBSP. Simulations and real data analyses showed that NBSP is the most robust to model misspecification, and that estimation efficiency is improved by adjusting for baseline hypoglycemia. Copyright © 2013 John Wiley & Sons, Ltd.
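
The Pearson overdispersion correction mentioned above can be illustrated on a toy intercept-only Poisson fit. The data are hypothetical, and this shows only the dispersion-inflation idea, not the authors' NBSP sandwich estimator:

```python
from statistics import mean

y = [0, 2, 1, 5, 0, 3, 8, 1, 0, 4]  # hypothetical event counts
n = len(y)
mu = mean(y)  # intercept-only Poisson fit: fitted mean for every subject

# Pearson chi-square and the dispersion estimate (chi-square / residual df)
pearson = sum((yi - mu) ** 2 / mu for yi in y)
dispersion = pearson / (n - 1)

# Naive Poisson SE of the mean, then inflated by sqrt(dispersion)
naive_se = (mu / n) ** 0.5
corrected_se = dispersion ** 0.5 * naive_se
```

When dispersion exceeds 1 the data are overdispersed relative to Poisson, and the corrected standard error is correspondingly wider.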

  10. Number-Phase Wigner Representation and Entropic Uncertainty Relations for Binomial and Negative Binomial States

    International Nuclear Information System (INIS)

    Amitabh, J.; Vaccaro, J.A.; Hill, K.E.

    1998-01-01

We study the recently defined number-phase Wigner function S_NP(n,θ) for a single-mode field considered to be in binomial and negative binomial states. These states interpolate between Fock and coherent states and between coherent and quasi-thermal states, respectively, and thus provide a set of states with properties ranging from uncertain phase and sharp photon number to sharp phase and uncertain photon number. The distribution function S_NP(n,θ) gives a graphical representation of the complementary nature of the number and phase properties of these states. We highlight important differences between Wigner's quasiprobability function, which is associated with the position and momentum observables, and S_NP(n,θ), which is associated directly with the photon number and phase observables. We also discuss the number-phase entropic uncertainty relation for the binomial and negative binomial states, and we show that negative binomial states give a lower phase entropy than states which minimize the phase variance

  11. Zero inflated negative binomial-Sushila distribution and its application

    Science.gov (United States)

    Yamrubboon, Darika; Thongteeraparp, Ampai; Bodhisuwan, Winai; Jampachaisri, Katechan

    2017-11-01

A new zero inflated distribution is proposed in this work, namely the zero inflated negative binomial-Sushila distribution. The new distribution, a mixture of the Bernoulli and negative binomial-Sushila distributions, is an alternative for excess zero counts and overdispersion. Some characteristics of the proposed distribution are derived, including the probability mass function, mean and variance. Parameter estimation for the zero inflated negative binomial-Sushila distribution is implemented by the maximum likelihood method. In application, the proposed distribution provides a better fit than the traditional zero inflated Poisson and zero inflated negative binomial distributions.

  12. Validity of the negative binomial distribution in particle production

    International Nuclear Information System (INIS)

    Cugnon, J.; Harouna, O.

    1987-01-01

    Some aspects of the clan picture for particle production in nuclear and in high-energy processes are examined. In particular, it is shown that the requirement of having logarithmic distribution for the number of particles within a clan in order to generate a negative binomial should not be taken strictly. Large departures are allowed without distorting too much the negative binomial. The question of the undetected particles is also studied. It is shown that, under reasonable circumstances, the latter do not affect the negative binomial character of the multiplicity distribution

  13. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    Science.gov (United States)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia up to 2015. The data exhibit overdispersion and a sizeable proportion of zeros. Negative binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression; however, data containing both overdispersion and zero inflation are more appropriately analyzed with zero-inflated negative binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, using NB and ZINB regression, and (2) to obtain the best model. The results indicate that ZINB regression is better than NB regression, with a smaller AIC.

  14. The Binomial Coefficient for Negative Arguments

    OpenAIRE

    Kronenburg, M. J.

    2011-01-01

    The definition of the binomial coefficient in terms of gamma functions also allows non-integer arguments. For nonnegative integer arguments the gamma functions reduce to factorials, leading to the well-known Pascal triangle. Using a symmetry formula for the gamma function, this definition is extended to negative integer arguments, making the symmetry identity for binomial coefficients valid for all integer arguments. The agreement of this definition with some other identities and with the bin...

  15. Linking the Negative Binomial and Logarithmic Series Distributions via their Associated Series

    OpenAIRE

    SADINLE, MAURICIO

    2008-01-01

    The negative binomial distribution is associated to the series obtained by taking derivatives of the logarithmic series. Conversely, the logarithmic series distribution is associated to the series found by integrating the series associated to the negative binomial distribution. The parameter of the number of failures of the negative binomial distribution is the number of derivatives needed to obtain the negative binomial series from the logarithmic series. The reasoning in this article could ...

  16. Wigner Function of Density Operator for Negative Binomial Distribution

    International Nuclear Information System (INIS)

    Xu Xinglei; Li Hongqi

    2008-01-01

By using the technique of integration within an ordered product (IWOP) of operators, we derive the Wigner function of the density operator for the negative binomial distribution of the radiation field in the mixed-state case. We then derive the Wigner function of the squeezed number state, which yields the negative binomial distribution, by virtue of the entangled state representation and the entangled Wigner operator

  17. NBLDA: negative binomial linear discriminant analysis for RNA-Seq data.

    Science.gov (United States)

    Dong, Kai; Zhao, Hongyu; Tong, Tiejun; Wan, Xiang

    2016-09-13

RNA-sequencing (RNA-Seq) has become a powerful technology for characterizing gene expression profiles because it is more accurate and comprehensive than microarrays. Although statistical methods developed for microarray data can be applied to RNA-Seq data, they are not ideal due to the discrete nature of RNA-Seq data. The Poisson and negative binomial distributions are commonly used to model count data. Recently, Witten (Annals Appl Stat 5:2493-2518, 2011) proposed a Poisson linear discriminant analysis for RNA-Seq data. The Poisson assumption may not be as appropriate as the negative binomial distribution when biological replicates are available and in the presence of overdispersion (i.e., when the variance is larger than the mean). However, it is more complicated to model negative binomial variables because they involve a dispersion parameter that needs to be estimated. In this paper, we propose a negative binomial linear discriminant analysis for RNA-Seq data. By Bayes' rule, we construct the classifier by fitting a negative binomial model, and propose some plug-in rules to estimate the unknown parameters in the classifier. The relationship between the negative binomial classifier and the Poisson classifier is explored, with a numerical investigation of the impact of dispersion on the discriminant score. Simulation results show the superiority of our proposed method. We also analyze two real RNA-Seq data sets to demonstrate the advantages of our method in real-world applications. We have developed a new classifier using the negative binomial model for RNA-Seq data classification. Our simulation results show that the proposed classifier performs better than existing methods, and it can serve as an effective tool for classifying RNA-Seq data. Based on the comparison results, we also provide some guidelines for scientists to decide which method should be used in the discriminant analysis of RNA-Seq data
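
The Bayes-rule construction described above can be sketched with a toy classifier: each class is scored by its negative binomial log-likelihood, using a shared, known dispersion. The per-gene class means and the dispersion below are hypothetical, and real NBLDA also estimates these and adjusts for library size:

```python
from math import lgamma, log

def nb_logpmf(x, mu, phi):
    # NB parameterized by mean mu and dispersion phi (variance = mu + phi * mu**2)
    r = 1.0 / phi
    return (lgamma(x + r) - lgamma(r) - lgamma(x + 1)
            + r * log(r / (r + mu)) + x * log(mu / (r + mu)))

def classify(x, class_means, phi):
    # Bayes rule with equal priors: pick the class with the largest log-likelihood
    scores = [sum(nb_logpmf(xg, mg, phi) for xg, mg in zip(x, means))
              for means in class_means]
    return scores.index(max(scores))

# Hypothetical per-gene mean expression for two classes (3 genes)
means = [[2.0, 10.0, 1.0], [8.0, 2.0, 1.0]]
label = classify([1, 12, 0], means, phi=0.5)
```

As phi -> 0 the NB log-pmf approaches the Poisson log-pmf, which is the relationship between the two classifiers that the paper investigates numerically.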

  18. Self-similarity of the negative binomial multiplicity distributions

    International Nuclear Information System (INIS)

    Calucci, G.; Treleani, D.

    1998-01-01

The negative binomial distribution is self-similar: if the spectrum over the whole rapidity range follows a negative binomial then, in the absence of correlations and if the source is unique, a partial rapidity range also follows the same distribution. This property is not seen in experimental data, which are rather consistent with the presence of a number of independent sources. When multiplicities are very large, self-similarity might be used to isolate individual sources in a complex production process. copyright 1997 The American Physical Society

  19. On the revival of the negative binomial distribution in multiparticle production

    International Nuclear Information System (INIS)

    Ekspong, G.

    1990-01-01

    This paper is based on published and some unpublished material pertaining to the revival of interest in and success of applying the negative binomial distribution to multiparticle production since 1983. After a historically oriented introduction going farther back in time, the main part of the paper is devoted to an unpublished derivation of the negative binomial distribution based on empirical observations of forward-backward multiplicity correlations. Some physical processes leading to the negative binomial distribution are mentioned and some comments made on published criticisms

  20. Hadronic multiplicity distributions: the negative binomial and its alternatives

    International Nuclear Information System (INIS)

    Carruthers, P.

    1986-01-01

We review properties of the negative binomial distribution, along with its many possible statistical or dynamical origins. Considering the relation of the multiplicity distribution to the density matrix for boson systems, we re-introduce the partially coherent laser distribution, which allows for coherent as well as incoherent hadronic emission from the k fundamental cells, and provides equally good phenomenological fits to existing data. The broadening of non-single-diffractive hadron-hadron distributions can equally well be due to the decrease of coherence with increasing energy as to the large (and rapidly decreasing) values of k deduced from negative binomial fits. Similarly, the narrowness of the e+e− multiplicity distribution is due to nearly coherent (therefore nearly Poissonian) emission from a small number of jets, in contrast to the negative binomial with enormous values of k. 31 refs

  2. Negative binomial properties and clan structure in multiplicity distributions

    International Nuclear Information System (INIS)

    Giovannini, A.; Van Hove, L.

    1988-01-01

We review the negative binomial properties measured recently for many multiplicity distributions of high energy hadronic and semi-leptonic reactions in selected rapidity intervals. We analyse them in terms of the "clan" structure which can be defined for any negative binomial distribution. By comparing reactions, we exhibit a number of regularities for the average number N̄ of clans and the average charged multiplicity n̄_c per clan. 22 refs., 6 figs. (author)

  3. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation; in practice, simulation methods have frequently been used for this purpose. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
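
A common closed-form variant of such a calculation (a sketch, not necessarily the authors' exact formula) sizes each arm from the variance of the log rate ratio under a negative binomial model with variance mu + kappa * mu^2; the rates, dispersion kappa and follow-up time below are illustrative:

```python
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(rate0, rate1, dispersion, t=1.0, alpha=0.05, power=0.9):
    """Per-arm n to detect rate ratio rate1/rate0 with a two-sided alpha test.

    Assumes Var(log rate estimate) ~= (1/(t*rate) + dispersion) / n per arm,
    i.e. NB variance mu + dispersion * mu**2 with exposure time t.
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    v0 = 1 / (t * rate0) + dispersion
    v1 = 1 / (t * rate1) + dispersion
    effect = log(rate1 / rate0)
    return ceil((za + zb) ** 2 * (v0 + v1) / effect ** 2)

n = nb_sample_size(1.0, 0.7, dispersion=0.8, t=1.0)
```

Longer exposure time t or smaller dispersion shrinks the required n, which is the explicit dependence on exposure and dispersion that the paper highlights.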

  4. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Science.gov (United States)

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

The intraclass correlation coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has previously been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports the negative binomial ICC estimate, including fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  5. Negative Binomial Distribution and the multiplicity moments at the LHC

    International Nuclear Information System (INIS)

    Praszalowicz, Michal

    2011-01-01

In this work we show that the latest LHC data on the multiplicity moments C_2-C_5 are well described by a two-step model in the form of a convolution of the Poisson distribution with an energy-dependent source function. For the source function we take the Γ distribution, which results in the Negative Binomial Distribution. No unexpected behavior of the Negative Binomial Distribution parameter k is found. We also give predictions for the higher energies of 10 and 14 TeV.

  6. Estimating negative binomial parameters from occurrence data with detection times.

    Science.gov (United States)

    Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub

    2016-11-01

The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only whether a species occurred in the quadrat. If only occurrence data are available, the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or by modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to its area, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general it is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for, provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Marginalized zero-inflated negative binomial regression with application to dental caries.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon

    2016-05-10

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. Copyright © 2015 John Wiley & Sons, Ltd.
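
The distinction drawn above can be seen directly in the zero-inflated negative binomial pmf: the latent-class NB mean mu differs from the population marginal mean (1 - pi) * mu that a marginalized model targets. A sketch with illustrative parameters:

```python
from math import comb

def nb_pmf(k, r, p):
    # k failures before the r-th success
    return comb(k + r - 1, k) * p ** r * (1 - p) ** k

def zinb_pmf(k, pi, r, p):
    # mixture: structural zero with probability pi, otherwise an NB count
    return (pi if k == 0 else 0.0) + (1 - pi) * nb_pmf(k, r, p)

pi, r, p = 0.25, 2, 0.3
mu = r * (1 - p) / p  # mean of the NB (susceptible) class
marginal_mean = sum(k * zinb_pmf(k, pi, r, p) for k in range(0, 400))
```

ZINB regression coefficients describe mu within the susceptible class, whereas MZINB parameterizes the overall mean (1 - pi) * mu directly, which is why the latter yields straightforward overall exposure effects.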

  8. Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels

    International Nuclear Information System (INIS)

    Chhaiba, Hassan; Demni, Nizar; Mouayn, Zouhair

    2016-01-01

    To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.

  9. Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels

    Energy Technology Data Exchange (ETDEWEB)

    Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com [Department of Mathematics, Faculty of Sciences, Ibn Tofail University, P.O. Box 133, Kénitra (Morocco); Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr [IRMAR, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex (France); Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma [Department of Mathematics, Faculty of Sciences and Technics (M’Ghila), Sultan Moulay Slimane, P.O. Box 523, Béni Mellal (Morocco)

    2016-07-15

    To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.

  10. Interpretations and implications of negative binomial distributions of multiparticle productions

    International Nuclear Information System (INIS)

    Arisawa, Tetsuo

    2006-01-01

The number of particles produced in high energy experiments is approximated by a negative binomial distribution. Deriving a representation of the distribution from a stochastic equation, conditions for the process to satisfy the distribution are clarified. Based on them, it is proposed that multiparticle production consists of spontaneous and induced production: the rate of induced production is proportional to the number of existing particles, and the ratio of the two production rates remains constant during the process. The "NBD space" is also defined, in which the number of particles produced in its subspaces follows negative binomial distributions with different parameters

  11. Parameter estimation of the zero inflated negative binomial beta exponential distribution

    Science.gov (United States)

    Sirichantra, Chutima; Bodhisuwan, Winai

    2017-11-01

The zero inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excess zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions, the Bernoulli and negative binomial-beta exponential distributions. In this work, some characteristics of the proposed distribution are presented, such as the mean and variance. Maximum likelihood estimation is applied to parameter estimation of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimators have high efficiency when the sample size is large.

  12. Application of Negative Binomial Regression for Assessing Public ...

    African Journals Online (AJOL)

    Because the variance was nearly two times greater than the mean, the negative binomial regression model provided an improved fit to the data and accounted better for overdispersion than the Poisson regression model, which assumed that the mean and variance are the same. The level of education and race were found

  13. Zero inflated negative binomial-generalized exponential distribution and its applications

    Directory of Open Access Journals (Sweden)

    Sirinapa Aryuyuen

    2014-08-01

In this paper, we propose a new zero inflated distribution, namely, the zero inflated negative binomial-generalized exponential (ZINB-GE) distribution. The new distribution is used for count data with extra zeros and is an alternative for data analysis with over-dispersed count data. Some characteristics of the distribution are given, such as mean, variance, skewness, and kurtosis. Parameter estimation of the ZINB-GE distribution uses the maximum likelihood estimation (MLE) method. Simulated and observed data are employed to examine this distribution. The results show that the MLE method seems to have high efficiency for large sample sizes. Moreover, the mean square error of parameter estimation increases when the zero proportion is higher. For the real data sets, this new zero inflated distribution provides a better fit than the zero inflated Poisson and zero inflated negative binomial distributions.

  14. Variability in results from negative binomial models for Lyme disease measured at different spatial scales.

    Science.gov (United States)

    Tran, Phoebe; Waller, Lance

    2015-01-01

    Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study seeks to explore how sensitive/consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (at the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models, indeed, were sensitive/inconsistent when used at different spatial scales. We discuss various plausible explanations for such behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important for not only future Lyme disease studies and Lyme disease risk assessment/management but any study that requires use of this model type in a spatial context. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Statistical inference for a class of multivariate negative binomial distributions

    DEFF Research Database (Denmark)

    Rubak, Ege Holger; Møller, Jesper; McCullagh, Peter

This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been...

  16. Zero inflated Poisson and negative binomial regression models: application in education.

    Science.gov (United States)

    Salehi, Masoud; Roudbari, Masoud

    2015-01-01

The numbers of failed courses and semesters are indicators of students' performance, and these counts have zero inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions we can model these count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS), with a population of almost 6000 students; 670 students were selected using stratified random sampling. The educational and demographic data were collected from the University records. The study design was approved at IUMS and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data, using STATA. For the number of failed semesters, in both the ZI Poisson and ZI negative binomial models, students' total average and the quota system played the largest roles. For the number of failed courses, total average and being at the undergraduate or master level had the largest effects in both models. In all models the total average has the largest effect on the number of failed courses or semesters; the next most important factor is the quota system for failed semesters and the undergraduate or master level for failed courses. The total average therefore has an important inverse effect on the numbers of failed courses and semesters.
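As a sketch of how such zero-inflated count models are fitted by maximum likelihood, the following fits a zero-inflated Poisson to simulated counts (the data and starting values are assumptions, not the IUMS records):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(2)
# Zero-inflated Poisson data: structural zero with prob pi, else Poisson(lam).
pi_true, lam_true = 0.3, 2.5
n = 3000
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def zip_nll(params):
    """Negative log-likelihood of the zero-inflated Poisson."""
    pi, lam = params
    p_zero = pi + (1 - pi) * np.exp(-lam)        # P(Y = 0)
    ll = np.log(p_zero) * np.sum(y == 0)
    ll += np.sum(np.log(1 - pi) + poisson.logpmf(y[y > 0], lam))
    return -ll

res = minimize(zip_nll, x0=[0.5, 1.0],
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
print(res.x)   # close to the true (0.3, 2.5)
```

The zero-inflated negative binomial variant replaces the Poisson log-pmf with the negative binomial one and adds a dispersion parameter.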

  17. e+e- hadronic multiplicity distributions: negative binomial or Poisson

    International Nuclear Information System (INIS)

    Carruthers, P.; Shih, C.C.

    1986-01-01

On the basis of fits to the multiplicity distributions for variable rapidity windows and the forward-backward correlation for the 2-jet subset of e+e- data, it is impossible to distinguish between a global negative binomial and its generalization, the partially coherent distribution. It is suggested that intensity interferometry, especially the Bose-Einstein correlation, gives information which will discriminate among dynamical models. 16 refs

  18. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criteria (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.

  19. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
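For intuition, exact negative binomial quantiles can be compared against the Poisson limit with standard library routines; the mean-and-shape parameterization below is an assumed convention for illustration, not the paper's approximation:

```python
from scipy.stats import nbinom, poisson

# Parameterize by mean mu and shape k: scipy's nbinom(n, p) has mean
# n * (1 - p) / p, so p = k / (k + mu) gives mean mu and var mu + mu**2 / k.
mu, k = 10.0, 5.0
p = k / (k + mu)
lo, hi = nbinom.ppf([0.025, 0.975], k, p)
print(lo, hi)                           # exact 95% count quantiles
print(poisson.ppf([0.025, 0.975], mu))  # the Poisson limit: narrower interval
```

As k grows, the negative binomial interval shrinks toward the Poisson one, which is the limiting case the abstract uses as a check.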

  20. Estimation of adjusted rate differences using additive negative binomial regression.

    Science.gov (United States)

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Time evolution of negative binomial optical field in a diffusion channel

    International Nuclear Information System (INIS)

    Liu Tang-Kun; Wu Pan-Pan; Shan Chuan-Jia; Liu Ji-Bing; Fan Hong-Yi

    2015-01-01

    We find the time evolution law of a negative binomial optical field in a diffusion channel. We reveal that by adjusting the diffusion parameter, the photon number can be controlled. Therefore, the diffusion process can be considered a quantum controlling scheme through photon addition. (paper)

  2. The negative binomial distribution as a model for external corrosion defect counts in buried pipelines

    International Nuclear Information System (INIS)

    Valor, Alma; Alfonso, Lester; Caleyo, Francisco; Vidal, Julio; Perez-Baruch, Eloy; Hallen, José M.

    2015-01-01

Highlights: • Observed external-corrosion defects in underground pipelines revealed a tendency to cluster. • The Poisson distribution is unable to fit extensive count data for this type of defect. • In contrast, the negative binomial distribution provides a suitable count model for them. • Two spatial stochastic processes lead to the negative binomial distribution for defect counts. • They are the Gamma-Poisson mixed process and the compound Poisson process. • Roger's process also arises as a plausible temporal stochastic process leading to corrosion defect clustering and to negative binomially distributed defect counts. - Abstract: The spatial distribution of external corrosion defects in buried pipelines is usually described as a Poisson process, which leads to corrosion defects being randomly distributed along the pipeline. However, in real operating conditions, the spatial distribution of defects considerably departs from Poisson statistics due to the aggregation of defects in groups or clusters. In this work, the statistical analysis of real corrosion data from underground pipelines operating in southern Mexico leads to the conclusion that the negative binomial distribution provides a better description for defect counts. The origin of this distribution from several processes is discussed. The analysed processes are the mixed Gamma-Poisson, compound Poisson and Roger's processes. The physical reasons behind them are discussed for the specific case of soil corrosion.
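The Gamma-Poisson route to the negative binomial mentioned in the highlights can be verified numerically; the parameter values below are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(3)
# Gamma-Poisson mixture: Poisson rates drawn from Gamma(shape=k, scale=theta).
k, theta = 2.0, 3.0
lam = rng.gamma(k, theta, size=100_000)
y = rng.poisson(lam)

# Marginally, y ~ NegBin(k, p) with p = 1 / (1 + theta).
p = 1.0 / (1.0 + theta)
emp = np.bincount(y, minlength=10)[:10] / len(y)
theo = nbinom.pmf(np.arange(10), k, p)
print(np.abs(emp - theo).max())   # small: the mixture is negative binomial
```

Heterogeneous defect-formation rates along the pipeline play the role of the Gamma-distributed Poisson rate, which is exactly how clustering produces overdispersed counts.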

  3. Negative binomial distribution for multiplicity distributions in e+e- annihilation

    International Nuclear Information System (INIS)

    Chew, C.K.; Lim, Y.K.

    1986-01-01

The authors show that the negative binomial distribution gives an excellent fit to the available charged-particle multiplicity distributions of e+e- annihilation into hadrons at three different energies, √s = 14, 22 and 34 GeV

  4. The effect of the negative binomial distribution on the line-width of the micromaser cavity field

    International Nuclear Information System (INIS)

    Kremid, A. M.

    2009-01-01

The influence of the negative binomial distribution (NBD) on the line-width of the micromaser is considered. The threshold of the micromaser is shifted towards higher values of the pumping parameter q. Moreover, the line-width exhibits sharp dips ('resonances') when the cavity temperature is reduced to a very low value. These dips are very clear evidence for the occurrence of the so-called trapping states regime in the micromaser. The NBD statistics prevents the appearance of these trapping states: by increasing the negative binomial parameter the dips wash out and the line-width broadens. For small values of the negative binomial parameter, the line-width at large values of q randomly oscillates around its transition line; as q becomes large, this oscillatory behavior occurs only at rare values of q. (author)

  5. Poisson and negative binomial item count techniques for surveys with sensitive question.

    Science.gov (United States)

    Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin

    2017-04-01

Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
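A hypothetical sketch of the item-count logic follows; it is not the authors' exact design or closed-form estimator, only a difference-in-means illustration under assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
# Sketch: each respondent reports a Poisson "noise" count; treatment-group
# respondents add 1 if they hold the sensitive trait. The difference in
# group means then estimates the sensitive proportion without revealing
# any individual's answer.
n, lam, pi_true = 10_000, 3.0, 0.15
control = rng.poisson(lam, n)
treatment = rng.poisson(lam, n) + (rng.random(n) < pi_true)
pi_hat = treatment.mean() - control.mean()
print(round(pi_hat, 2))   # near the true 0.15
```

Because any reported total is consistent with either trait status, no single response identifies a sensitive respondent, which is the privacy property the designs aim for.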

  6. Squeezing and other non-classical features in k-photon anharmonic oscillator in binomial and negative binomial states of the field

    International Nuclear Information System (INIS)

    Joshi, A.; Lawande, S.V.

    1990-01-01

A systematic study of squeezing obtained from a k-photon anharmonic oscillator (with interaction Hamiltonian of the form (a†)^k, k ≥ 2) interacting with light whose statistics can be varied from sub-Poissonian to Poissonian via the binomial state of the field, and from super-Poissonian to Poissonian via the negative binomial state of the field, is presented. The authors predict that for all values of k there is a tendency towards increased squeezing with increased sub-Poissonian character of the field, while the reverse is true with a super-Poissonian field. They also present the non-classical behavior of the first-order coherence function explicitly for the k = 2 case (i.e., for the two-photon anharmonic oscillator model used for a Kerr-like medium) with variation in the statistics of the input light

  7. Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction

    Science.gov (United States)

    Cohen, A. C.

    1971-01-01

    A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.

  8. On bounds in Poisson approximation for distributions of independent negative-binomial distributed random variables.

    Science.gov (United States)

    Hung, Tran Loc; Giang, Le Truong

    2016-01-01

    Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note.

  9. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson distributed response variable is called a Poisson regression model. Since Poisson regression requires equality of mean and variance, it is not appropriate to apply this model under overdispersion (variance higher than mean). The Poisson regression model is commonly used to analyze count data. With count data, one often encounters observations where the response variable takes the value zero in a large proportion of cases (zero inflation). Poisson regression can be used to analyze count data but cannot solve the problem of excess zero values in the response variable. An alternative model which is more suitable for overdispersed data and can solve the problem of excess zeros is the Zero Inflated Negative Binomial (ZINB). The aims of this research are to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The Maximum Likelihood Estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partially significant effect in the negative binomial part are the percentage of pregnant women's visits and the percentage of maternal health personnel assistance, while the predictor variable with a partially significant effect in the zero inflation part is the percentage of neonatus visits.

  10. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    Science.gov (United States)

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to an increase in injury frequency, 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors increasing the frequency of unintentional injury among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion; the modified Poisson regression and negative binomial regression models fitted the data better. Both showed that male gender, younger age, the father working outside of the hometown, the guardian's education being above junior high school, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.

  11. Bayesian analysis of overdispersed chromosome aberration data with the negative binomial model

    International Nuclear Information System (INIS)

    Brame, R.S.; Groer, P.G.

    2002-01-01

    The usual assumption of a Poisson model for the number of chromosome aberrations in controlled calibration experiments implies variance equal to the mean. However, it is known that chromosome aberration data from experiments involving high linear energy transfer radiations can be overdispersed, i.e. the variance is greater than the mean. Present methods for dealing with overdispersed chromosome data rely on frequentist statistical techniques. In this paper, the problem of overdispersion is considered from a Bayesian standpoint. The Bayes Factor is used to compare Poisson and negative binomial models for two previously published calibration data sets describing the induction of dicentric chromosome aberrations by high doses of neutrons. Posterior densities for the model parameters, which characterise dose response and overdispersion are calculated and graphed. Calibrative densities are derived for unknown neutron doses from hypothetical radiation accident data to determine the impact of different model assumptions on dose estimates. The main conclusion is that an initial assumption of a negative binomial model is the conservative approach to chromosome dosimetry for high LET radiations. (author)

  12. Statistical Inference for a Class of Multivariate Negative Binomial Distributions

    DEFF Research Database (Denmark)

    Rubak, Ege H.; Møller, Jesper; McCullagh, Peter

This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been studied in the literature, while this is the first statistical paper on α-permanental random fields. The focus is on maximum likelihood estimation, maximum quasi-likelihood estimation and on maximum composite likelihood estimation based on uni- and bivariate distributions. Furthermore, new results...

  13. Marginal likelihood estimation of negative binomial parameters with applications to RNA-seq data.

    Science.gov (United States)

    León-Novelo, Luis; Fuentes, Claudio; Emerson, Sarah

    2017-10-01

    RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in any proposed model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the dispersion parameter of the negative binomial distribution, and propose instead to use an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation and produce a more stable and reliable estimator. We then formulate a conjugate Bayesian hierarchical model, and use this new estimator to propose a Bayesian hypothesis test to detect differentially expressed genes in RNA-Seq data. We use numerical studies to show that our much simpler approach is competitive with other negative binomial based procedures, and we use a real data set to illustrate the implementation and flexibility of the procedure. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
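A minimal sketch of maximum likelihood estimation of the negative binomial dispersion (the plain MLE the paper takes as its baseline, not the proposed marginal estimator), under assumed simulation settings:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

rng = np.random.default_rng(4)
# Simulated overdispersed counts (illustrative, gene-expression-like scale):
mu_true, k_true = 20.0, 0.5
y = rng.poisson(rng.gamma(k_true, mu_true / k_true, size=5000))

def nll(k):
    # For fixed dispersion k, the MLE of the mean is the sample mean,
    # so profile the mean out and minimize over k alone.
    p = k / (k + y.mean())
    return -nbinom.logpmf(y, k, p).sum()

k_hat = minimize_scalar(nll, bounds=(1e-3, 100.0), method="bounded").x
print(round(k_hat, 2))    # dispersion estimate, near the true 0.5
```

With the small per-gene sample sizes typical of RNA-Seq, this plain MLE becomes unstable, which is the variability the marginal-likelihood estimator is designed to control.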

  14. Negative binomial mixed models for analyzing microbiome count data.

    Science.gov (United States)

    Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun

    2017-01-03

Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting the association between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.

  15. Negative-binomial multiplicity distributions in the interaction of light ions with 12C at 4.2 GeV/c

    International Nuclear Information System (INIS)

    Tucholski, A.; Bogdanowicz, J.; Moroz, Z.; Wojtkowska, J.

    1989-01-01

Multiplicity distributions of singly charged particles in the interactions of p, d, α and 12C projectiles with C target nuclei at 4.2 GeV/c were analysed in terms of the negative binomial distribution. The experimental data obtained by the Dubna Propane Bubble Chamber Group were used. It is shown that the experimental distributions are satisfactorily described by the negative binomial distribution. Values of the parameters of these distributions are discussed. (orig.)

  16. Validity of the negative binomial multiplicity distribution in case of ultra-relativistic nucleus-nucleus interaction in different azimuthal bins

    International Nuclear Information System (INIS)

    Ghosh, D.; Deb, A.; Haldar, P.K.; Sahoo, S.R.; Maity, D.

    2004-01-01

This work studies the validity of the negative binomial distribution for the multiplicity distribution of charged secondaries in 16O and 32S interactions with AgBr at 60 GeV/c per nucleon and 200 GeV/c per nucleon, respectively. The validity of the negative binomial distribution (NBD) is studied in different azimuthal phase spaces. It is observed that the data can be well parameterized in terms of the NBD law for different azimuthal phase spaces. (authors)

  17. Bayesian inference for disease prevalence using negative binomial group testing

    Science.gov (United States)

    Pritchard, Nicholas A.; Tebbs, Joshua M.

    2011-01-01

    Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308

  18. Negative binomial multiplicity distributions, a new empirical law for high energy collisions

    International Nuclear Information System (INIS)

    Van Hove, L.; Giovannini, A.

    1987-01-01

For a variety of high energy hadron production reactions, recent experiments have confirmed the findings of the UA5 Collaboration that charged particle multiplicities in central (pseudo)rapidity intervals and in full phase space obey negative binomial (NB) distributions. The authors discuss the meaning of this new empirical law on the basis of new data and show that the data support the interpretation of the NB distributions in terms of a cascading mechanism of hadron production

  19. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    Science.gov (United States)

    Moghimbeigi, Abbas

    2015-05-07

Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Genetic variables added to the negative binomial part of the model may also affect the extra zeros. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm, with the Newton-Raphson method in the M-step, is used to estimate the parameters. An application of the two-part ZINB model for QTL mapping is considered, to detect associations between the formation of gallstones and the genotypes of markers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Technique of Hurdle Clearing in 400 Meters Hurdles

    OpenAIRE

    Jakoubek, Jiří

    2018-01-01

    Title: Technique of Hurdle Clearing in 400 Meters Hurdles Authors: Jiří Jakoubek Supervisor: PhDr. Aleš Kaplan, Ph.D. Aims: The aim of this thesis is to describe technique of hurdle clearing in 400 meters hurdle race using study review and to examine this technique at particular athlete during training and racing sessions in 400 meters hurdles race. Methods: Technique was compared and examined at young athlete. Two kinograms were used for analysis, one from training and one from racing sessio...

  1. A Mixed-Effects Heterogeneous Negative Binomial Model for Postfire Conifer Regeneration in Northeastern California, USA

    Science.gov (United States)

    Justin S. Crotteau; Martin W. Ritchie; J. Morgan. Varner

    2014-01-01

    Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...

  2. Technique of Hurdle Clearing in 400 Meters Hurdles (Study Review)

    OpenAIRE

    Jakoubek, Jiří

    2017-01-01

    Title: Technique of Hurdle Clearing in 400 Meters Hurdles (Study Review) Authors: Jiří Jakoubek Supervisor: PhDr. Aleš Kaplan, Ph.D. Aims: The aim of this thesis is to describe technique of hurdle clearing in 400 meters hurdle race using study review and to examine this technique at particular athlete during training and racing sessions in 400 meters hurdles race. Methods: Technique was compared and examined at young athlete. Two kinograms were used for analysis, one from training and one fro...

  3. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Science.gov (United States)

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....

  4. Geographically weighted negative binomial regression applied to zonal level safety performance models.

    Science.gov (United States)

    Gomes, Marcos José Timbó Lima; Cunto, Flávio; da Silva, Alan Ricardo

    2017-09-01

    Generalized Linear Models (GLM) with a negative binomial distribution for errors have been widely used to estimate safety at the transportation planning level. The limited ability of this technique to take spatial effects into account can be overcome through the use of local models from spatial regression techniques, such as Geographically Weighted Poisson Regression (GWPR). Although GWPR deals with spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the possible overdispersion that can be found in observations of road-traffic crashes. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form and to account for the overdispersion of the data: the first assumes constant overdispersion for all the traffic zones and the second estimates it for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and spatial local GWPR and GWNBR at the level of traffic zones in Fortaleza/Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors and land use. The models were calibrated by using the frequency of injury crashes as a dependent variable and the results showed that GWPR and GWNBR achieved a better performance than GLM for the average residuals and likelihood as well as reducing the spatial autocorrelation of the residuals, and the GWNBR model was more able to capture the spatial heterogeneity of the crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Risk indicators of oral health status among young adults aged 18 years analyzed by negative binomial regression.

    Science.gov (United States)

    Lu, Hai-Xia; Wong, May Chun Mei; Lo, Edward Chin Man; McGrath, Colman

    2013-08-19

    Limited information is available on the oral health status of young adults aged 18 years, and no data exist for Hong Kong. The aims of this study were to investigate the oral health status and its risk indicators among young adults in Hong Kong using negative binomial regression. A survey was conducted in a representative sample of Hong Kong young adults aged 18 years. Clinical examinations were conducted to assess oral health status using the DMFT index and the Community Periodontal Index (CPI) according to WHO criteria. Negative binomial regressions for the DMFT score and the number of sextants with healthy gums were performed to identify the risk indicators of oral health status. A total of 324 young adults were examined. The prevalence of dental caries experience among the subjects was 59% and the overall mean DMFT score was 1.4. Most subjects (95%) had a score of 2 as their highest CPI score. Negative binomial regression analyses revealed that subjects who had had a dental visit within the previous 3 years had significantly higher DMFT scores (IRR = 1.68, p < 0.001). Subjects who brushed their teeth more frequently (IRR = 1.93, p < 0.001) and those with better dental knowledge (IRR = 1.09, p = 0.002) had significantly more sextants with healthy gums. Dental caries experience of young adults aged 18 years in Hong Kong was not high, but their periodontal condition was unsatisfactory. Their oral health status was related to their dental visit behavior, oral hygiene habits, and oral health knowledge.

  6. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily result in a loss of power of up to 20%, depending on the value of the dispersion parameter.

  7. Predicting expressway crash frequency using a random effect negative binomial model: A case study in China.

    Science.gov (United States)

    Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao

    2017-01-01

    To investigate the relationship between crash frequency and potential influence factors, the accident data for events occurring on a 50 km long expressway in China, including 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and the mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade and curve radius (RGR), were found to significantly affect crash frequency. The marginal effects of the significant explanatory variables on crash frequency were analyzed. The model performance was assessed by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that the model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal grade segment method. Copyright © 2016. Published by Elsevier Ltd.

  8. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

  9. Use of a negative binomial distribution to describe the presence of Sphyrion laevigatum in Genypterus blacodes

    Directory of Open Access Journals (Sweden)

    Patricio Peña-Rehbein

    Full Text Available This paper describes the frequency and number of Sphyrion laevigatum in the skin of Genypterus blacodes, an important economic resource in Chile. The analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host could be described by a negative binomial distribution. The maximum number of parasites observed per host was two.
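The clustering check and negative binomial description used above can be sketched with a method-of-moments fit: a variance/mean ratio above 1 indicates clustering, and matching the first two moments gives the NB parameters. The counts below are hypothetical, not the Genypterus blacodes data:

```python
# Sketch: method-of-moments negative binomial fit to parasite count data.
# Invented counts for illustration only.
import numpy as np
from scipy import stats

counts = np.array([0] * 70 + [1] * 15 + [2] * 10 + [3] * 5)  # parasites per host
m, v = counts.mean(), counts.var(ddof=1)
print(f"variance/mean = {v / m:.2f}")          # > 1 indicates clustering

r = m**2 / (v - m)                             # MoM size parameter
p = r / (r + m)                                # MoM probability parameter
expected = stats.nbinom.pmf(np.arange(4), r, p) * len(counts)
print(np.round(expected, 1))                   # expected frequencies for 0..3
```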

  10. Deteksi Dini Kasus Demam Berdarah Dengue Berdasarkan Faktor Cuaca di DKI Jakarta Menggunakan Metode Zero Truncated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Robert Kurniawan

    2017-11-01

    Full Text Available The incidence rates of dengue hemorrhagic fever (DHF) in Jakarta from 2010 to 2014 were consistently higher than the national rates. This study therefore aims to assess the effect of weather factors on DHF cases. Weather was chosen because it can be observed daily and can be predicted, so it can serve as an early-detection signal when estimating the number of DHF cases. The data comprise daily counts of DHF cases in DKI Jakarta and weather data, including the lowest and highest temperatures and rainfall. The analysis uses zero-truncated negative binomial regression at the 10% significance level. Based on data from 1 January 2015 to 31 May 2015, the study found that the weather factors, consisting of highest temperature, lowest temperature, and rainfall, were significant predictors of the number of DHF patients in DKI Jakarta, and all three had positive effects in the same period. Since weather factors cannot be controlled by humans, appropriate preventive measures are required whenever weather predictions indicate an increasing number of DHF cases in DKI Jakarta. Keywords: Dengue Hemorrhagic Fever, zero-truncated negative binomial, early warning.
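A zero-truncated negative binomial, the distribution used in the study above, can be fitted by maximum likelihood by renormalizing the NB pmf by 1 - P(X = 0). A sketch on simulated counts (not the Jakarta data):

```python
# Sketch: MLE for a zero-truncated negative binomial. The truncated pmf is
# pmf(k) / (1 - pmf(0)) for k >= 1, so the log-likelihood subtracts
# log(1 - P(X = 0)) from each NB log-pmf term.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
sample = rng.negative_binomial(3.0, 0.4, size=3000)
sample = sample[sample > 0]                    # keep strictly positive counts

def neg_loglik(theta):
    r = np.exp(theta[0])                       # keep r > 0
    p = 1 / (1 + np.exp(-theta[1]))            # keep 0 < p < 1
    ll = stats.nbinom.logpmf(sample, r, p) - np.log1p(-stats.nbinom.pmf(0, r, p))
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
r_hat = np.exp(res.x[0])
p_hat = 1 / (1 + np.exp(-res.x[1]))
print(r_hat, p_hat)                            # near the true r = 3, p = 0.4
```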

  11. Comparison of multiplicity distributions to the negative binomial distribution in muon-proton scattering

    International Nuclear Information System (INIS)

    Arneodo, M.; Ferrero, M.I.; Peroni, C.; Bee, C.P.; Bird, I.; Coughlan, J.; Sloan, T.; Braun, H.; Brueck, H.; Drees, J.; Edwards, A.; Krueger, J.; Montgomery, H.E.; Peschel, H.; Pietrzyk, U.; Poetsch, M.; Schneider, A.; Dreyer, T.; Ernst, T.; Haas, J.; Kabuss, E.M.; Landgraf, U.; Mohr, W.; Rith, K.; Schlagboehmer, A.; Schroeder, T.; Stier, H.E.; Wallucks, W.

    1987-01-01

    The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4-20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by negative binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and the rapidity dependence of the parameters is presented and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed. (orig.)

  12. The importance of distribution-choice in modeling substance use data: a comparison of negative binomial, beta binomial, and zero-inflated distributions.

    Science.gov (United States)

    Wagner, Brandie; Riggs, Paula; Mikulich-Gilbertson, Susan

    2015-01-01

    It is important to correctly understand the associations among addiction to multiple drugs and between co-occurring substance use and psychiatric disorders. Substance-specific outcomes (e.g. number of days of cannabis use) have distributional characteristics which range widely depending on the substance and the sample being evaluated. We recommend a four-part strategy for determining the appropriate distribution for modeling substance use data. We demonstrate this strategy by comparing the model fit and resulting inferences from applying four different distributions to model use of substances that range greatly in the prevalence and frequency of their use. Using Timeline Followback (TLFB) data from a previously published study, we used negative binomial, beta-binomial and their zero-inflated counterparts to model the proportion of days of cannabis, cigarette, alcohol, and opioid use during treatment. The fit of each distribution was evaluated with statistical model selection criteria, visual plots and a comparison of the resulting inferences. We demonstrate the feasibility and utility of modeling each substance individually and show that no single distribution provides the best fit for all substances. Inferences regarding use of each substance and associations with important clinical variables were not consistent across models and differed by substance. Thus, the distribution chosen for modeling substance use must be carefully selected and evaluated because it may impact the resulting conclusions. Furthermore, the common procedure of aggregating use across different substances may not be ideal.

  13. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors in STR second generation sequencing, SGS, analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated, negative binomial model. The model is a two component model...

  14. Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun.

    Science.gov (United States)

    Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio

    2012-01-01

    Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine parasites per host. The environmental and zoonotic aspects of the study are also discussed.

  15. A methodology to design heuristics for model selection based on the characteristics of data: Application to investigate when the Negative Binomial Lindley (NB-L) is preferred over the Negative Binomial (NB).

    Science.gov (United States)

    Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy

    2017-10-01

    Safety analysts usually use post-modeling methods, such as the Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competitive distributions or models. Such metrics require all competitive distributions to be fitted to the data before any comparisons can be accomplished. Given the continuous growth in introducing new statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuitions into why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper ponders into these issues by proposing a methodology to design heuristics for Model Selection based on the characteristics of data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools: (1) Monte-Carlo Simulations and (2) Machine Learning Classifiers, to design easy heuristics to predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only they are easy to use and do not need any post-modeling inputs, but also, using these heuristics, the analyst can attain useful information about why the NB-L is preferred over the NB - or vice versa- when modeling data. Copyright © 2017 Elsevier Ltd. All rights reserved.
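The two-step methodology described above can be sketched end-to-end: (1) Monte Carlo simulation of labelled samples, (2) a classifier over pre-modeling summary statistics turned into a heuristic. The NB-Lindley distribution is not available in scipy, so a zero-inflated NB stands in for it, and a one-split decision stump stands in for the paper's machine-learning classifiers; the pipeline, not the specific distributions, is the point:

```python
# Sketch of the Monte-Carlo-plus-classifier pipeline: simulate labelled
# samples, compute a pre-modeling summary statistic (zero fraction), and
# learn a simple threshold heuristic from it.
import numpy as np

rng = np.random.default_rng(5)

def zero_frac(y):
    return (y == 0).mean()

feats, labels = [], []
for _ in range(400):
    mu, r = rng.uniform(1, 5), rng.uniform(0.5, 3)
    y = rng.negative_binomial(r, r / (r + mu), size=200)   # plain NB sample
    feats.append(zero_frac(y)); labels.append(0)
    z = y.copy()
    z[rng.random(200) < 0.5] = 0                           # zero-inflated stand-in
    feats.append(zero_frac(z)); labels.append(1)

feats, labels = np.array(feats), np.array(labels)
# Decision stump: predict "zero-inflated" when the zero fraction exceeds t.
grid = np.linspace(0, 1, 201)
acc = [((feats > t) == labels).mean() for t in grid]
t_best = grid[int(np.argmax(acc))]
print(f"heuristic: call the model zero-inflated if zero fraction > {t_best:.2f}")
```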

  16. Negative binomial distribution to describe the presence of Trifur tortuosus (Crustacea: Copepoda in Merluccius gayi (Osteichthyes: Gadiformes

    Directory of Open Access Journals (Sweden)

    Giselle Garcia-Sepulveda

    2017-06-01

    Full Text Available This paper describes the frequency and number of Trifur tortuosus in the skin of Merluccius gayi, an important economic resource in Chile. Analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host can be described by a negative binomial distribution. The maximum number of parasites observed per host was one; a similar pattern has been described for other parasites of Chilean marine fishes.

  17. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.

  18. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation. © 2011, The International Biometric Society.
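The extinction-colonization chain-binomial flavour mentioned above corresponds to a binomial AR(1) process built from binomial thinning. The sketch below uses one common parameterization (the paper's exact setup may differ): out of N sites, occupied sites survive with probability alpha and empty sites are colonized with probability beta, giving a stationary Binomial(N, pi) marginal with pi = beta / (1 - alpha + beta):

```python
# Sketch: simulating a binomial AR(1) process via binomial thinning and
# checking its stationary mean against theory.
import numpy as np

rng = np.random.default_rng(1)
N, alpha, beta, T = 50, 0.7, 0.2, 20000
x = np.empty(T, dtype=int)
x[0] = N // 2
for t in range(1, T):
    survive = rng.binomial(x[t - 1], alpha)      # alpha-thinning of occupied sites
    colonize = rng.binomial(N - x[t - 1], beta)  # beta-thinning of empty sites
    x[t] = survive + colonize

pi = beta / (1 - alpha + beta)                   # stationary occupancy probability
print(x[1000:].mean(), N * pi)                   # empirical vs theoretical mean
```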

  19. Bipartite binomial heaps

    DEFF Research Database (Denmark)

    Elmasry, Amr; Jensen, Claus; Katajainen, Jyrki

    2017-01-01

    the (total) number of elements stored in the data structure(s) prior to the operation. As the resulting data structure consists of two components that are different variants of binomial heaps, we call it a bipartite binomial heap. Compared to its counterpart, a multipartite binomial heap, the new structure...

  20. The validity of the negative binomial multiplicity distribution in the case of the relativistic nucleus-nucleus interaction

    International Nuclear Information System (INIS)

    Ghosh, D.; Mukhopadhyay, A.; Ghosh, A.; Roy, J.

    1989-01-01

    This letter presents new data on the multiplicity distribution of charged secondaries in 24Mg interactions with AgBr at 4.5 GeV/c per nucleon. The validity of the negative binomial distribution (NBD) is studied. It is observed that the data can be well parametrized in terms of the NBD law for the whole phase space and also for different pseudo-rapidity bins. A comparison of different parameters with those in the case of h-h interactions reveals some interesting results, the implications of which are discussed. (orig.)

  1. First study of the negative binomial distribution applied to higher moments of net-charge and net-proton multiplicity distributions

    International Nuclear Information System (INIS)

    Tarnowsky, Terence J.; Westfall, Gary D.

    2013-01-01

    A study of the first four moments (mean, variance, skewness, and kurtosis) and their products (κσ² and Sσ) of the net-charge and net-proton distributions in Au + Au collisions at √(s_NN) = 7.7-200 GeV from HIJING simulations has been carried out. The skewness and kurtosis and the collision-volume-independent products κσ² and Sσ have been proposed as sensitive probes for identifying the presence of a QCD critical point. A discrete probability distribution that effectively describes the separate positively and negatively charged particle (or proton and anti-proton) multiplicity distributions is the negative binomial (or binomial) distribution (NBD/BD). The NBD/BD has been used to characterize particle production in high-energy particle and nuclear physics. Their application to the higher moments of the net-charge and net-proton distributions is examined. Differences between κσ² and a statistical Poisson assumption of a factor of four (for net-charge) and 40% (for net-protons) can be accounted for by the NBD/BD. This is the first application of the properties of the NBD/BD to describe the behavior of the higher moments of net-charge and net-proton distributions in nucleus-nucleus collisions.

  2. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated binomial random variable X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape by the maximum likelihood estimator (MLE) based on a beta-binomial model approach and the method of moments estimator (MME) in order to gauge when and how much the MLE is biased.
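The MLE-versus-MME comparison above can be sketched for a beta-binomial sample; scipy's betabinom supplies the pmf, and the data are simulated rather than drawn from the paper:

```python
# Sketch: MLE and method-of-moments (MME) estimators for beta-binomial
# parameters (a, b) with a known number of trials n.
import numpy as np
from scipy import stats, optimize

n, a, b = 20, 2.0, 5.0
x = stats.betabinom.rvs(n, a, b, size=1000, random_state=12345)

# MLE: maximize the beta-binomial log-likelihood over (a, b) > 0.
def nll(theta):
    return -stats.betabinom.logpmf(x, n, np.exp(theta[0]), np.exp(theta[1])).sum()
mle = np.exp(optimize.minimize(nll, [0.0, 0.0], method="Nelder-Mead").x)

# MME: match the first two sample moments (standard closed-form solution).
m1, m2 = x.mean(), (x**2).mean()
denom = n * (m2 / m1 - m1 - 1) + m1
a_mme = (n * m1 - m2) / denom
b_mme = (n - m1) * (n - m2 / m1) / denom
print(mle, [a_mme, b_mme])        # both should be near the true (2, 5)
```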

  3. A comparison of different ways of including baseline counts in negative binomial models for data from falls prevention trials.

    Science.gov (United States)

    Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M

    2018-01-01

    A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression.

    Science.gov (United States)

    Martina, R; Kay, R; van Maanen, R; Ridder, A

    2015-01-01

    Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Forecasting asthma-related hospital admissions in London using negative binomial models.

    Science.gov (United States)

    Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe

    2013-05-01

    Health forecasting can improve health service provision and individual patient outcomes. Environmental factors are known to impact chronic respiratory conditions such as asthma, but little is known about the extent to which these factors can be used for forecasting. Using weather, air quality and hospital asthma admissions data for London (2005-2006), two related negative binomial models were developed and compared with a naive seasonal model. In the first approach, predictive forecasting models were fitted with 7-day averages of each potential predictor, and then a subsequent multivariable model was constructed. In the second strategy, an exhaustive search was conducted for the best-fitting models among possible combinations of lags (0-14 days) of all the environmental effects on asthma admission. Three models were considered: a base model (seasonal effects), contrasted with a 7-day average model and a selected-lags model (weather and air quality effects). Season is the best predictor of asthma admissions. The 7-day average and seasonal models were trivial to implement. The selected-lags model was computationally intensive, but of no real value over the much more easily implemented models. Seasonal factors can predict daily hospital asthma admissions in London, and there is little evidence that additional weather and air quality information would improve forecast accuracy.

  6. On a Fractional Binomial Process

    Science.gov (United States)

    Cahoy, Dexter O.; Polito, Federico

    2012-02-01

    The classical binomial process has been studied by Jakeman (J. Phys. A 23:2815-2825, 1990) (and the references therein) and has been used to characterize a series of radiation states in quantum optics. In particular, he studied a classical birth-death process where the chance of birth is proportional to the difference between a larger fixed number and the number of individuals present. It is shown that at large times, an equilibrium is reached which follows a binomial process. In this paper, the classical binomial process is generalized using the techniques of fractional calculus and is called the fractional binomial process. The fractional binomial process is shown to preserve the binomial limit at large times while expanding the class of models that include non-binomial fluctuations (non-Markovian) at regular and small times. As a direct consequence, the generality of the fractional binomial model makes the proposed model more desirable than its classical counterpart in describing real physical processes. More statistical properties are also derived.

  7. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    Science.gov (United States)

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or, in the presence of structural zeros, a zero-inflated binomial (ZIB) distribution, rather than a Poisson or zero-inflated Poisson (ZIP) distribution. In this paper, we develop a new semiparametric approach for modeling zero-inflated binomial (ZIB)-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
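The zero-inflated binomial mixture the paper builds on can be sketched directly. This is a minimal illustration with hypothetical parameters, not the authors' semiparametric estimator: structural zeros occur with probability `pi`, and the remaining mass follows a Binomial(n, p).

```python
from scipy.stats import binom

def zib_pmf(k, n, p, pi):
    """Zero-inflated binomial pmf: a structural zero with probability pi,
    otherwise an ordinary Binomial(n, p) count."""
    return pi * (k == 0) + (1 - pi) * binom.pmf(k, n, p)

# e.g. days of drinking out of n=30, with 40% structural abstainers
print(zib_pmf(0, 30, 0.2, 0.4))   # zero mass inflated above plain Binomial
```

The pmf still sums to one over k = 0..n; only the mass at zero is inflated relative to the plain binomial.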

  8. Application of Negative Binomial Regression to Overcome Overdispersion in Poisson Regression

    Directory of Open Access Journals (Sweden)

    PUTU SUSAN PRADAWATI

    2013-09-01

    Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, in which the mean value of the response variable is equal to its variance. However, deviations occur in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion is present and Poisson regression is used anyway, the standard errors will be underestimated. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. From simulated data exhibiting overdispersion under the Poisson regression model, it was found that negative binomial regression performed better than the Poisson regression model.
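The overdispersion contrasted here is easy to demonstrate by simulation. A hedged sketch with hypothetical parameters: negative binomial counts with mean mu and dispersion theta have variance mu + mu²/theta, while Poisson counts are equidispersed.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, theta = 4.0, 2.0                 # mean and NB dispersion (size) parameter

# numpy's negative_binomial takes (n, p) with mean n*(1-p)/p
p = theta / (theta + mu)
nb = rng.negative_binomial(theta, p, size=100_000)
pois = rng.poisson(mu, size=100_000)

print(nb.mean(), nb.var())           # variance ≈ mu + mu**2/theta = 12
print(pois.mean(), pois.var())       # variance ≈ mean (equidispersion)
```

Fitting a Poisson model to the first sample would understate the true variability by a factor of about three, which is exactly the standard-error underestimation the abstract warns about.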

  9. Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.

    Science.gov (United States)

    Wu, Hongqian; Zhang, Ying; Long, Jeffrey D

    2017-03-15

    Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, and these binomial data are often overdispersed. The standard generalized linear mixed effects model may result in severe underestimation of the standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm of Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
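The beta-binomial distribution underlying the proposed model is available in SciPy. A minimal sketch with hypothetical parameters (not the paper's GEE estimator) showing how it inflates the binomial variance while keeping the same mean:

```python
from scipy.stats import betabinom, binom

n, a, b = 20, 2.0, 6.0                 # trials and beta shape parameters
mean = n * a / (a + b)                 # 5.0
var_bb = betabinom.var(n, a, b)        # n*p*q*(a+b+n)/(a+b+1)
var_bin = binom.var(n, a / (a + b))    # binomial with the same mean
print(mean, var_bb, var_bin)           # beta-binomial variance is larger
```

The extra variance comes from letting the success probability itself vary (Beta(a, b)) across subjects, which is the mechanism the model uses to absorb overdispersion.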

  10. Improved binomial charts for high-quality processes

    NARCIS (Netherlands)

    Albers, Willem/Wim

    For processes concerning attribute data with (very) small failure rate p, often negative binomial control charts are used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires

  11. Binomial Rings: Axiomatisation, Transfer and Classification

    OpenAIRE

    Xantcha, Qimh Richey

    2011-01-01

    Hall's binomial rings, rings with binomial coefficients, are given an axiomatisation and proved identical to the numerical rings studied by Ekedahl. The Binomial Transfer Principle is established, enabling combinatorial proofs of algebraical identities. The finitely generated binomial rings are completely classified. An application to modules over binomial rings is given.

  12. Using a Negative Binomial Regression Model for Early Warning at the Start of a Hand Foot Mouth Disease Epidemic in Dalian, Liaoning Province, China.

    Science.gov (United States)

    An, Qingyu; Wu, Jun; Fan, Xuesong; Pan, Liyang; Sun, Wei

    2016-01-01

    Hand, foot and mouth disease (HFMD) is a human syndrome caused by intestinal viruses such as coxsackievirus A16 and enterovirus 71, and it readily develops into outbreaks in kindergartens and schools. Accurate early detection of the start of an HFMD epidemic is a key principle in planning control measures and minimizing the impact of HFMD. The objective of this study was to establish a reliable early detection model for the start of an HFMD epidemic in Dalian and to evaluate the performance of the model by analyzing its detection sensitivity. A negative binomial regression model was used to estimate the weekly baseline case number of HFMD, and the optimal alerting threshold was identified among tested candidate threshold values for epidemic and non-epidemic years. The circular distribution method was used to calculate the gold standard for the start of an HFMD epidemic. From 2009 to 2014, a total of 62022 HFMD cases were reported (36879 males and 25143 females) in Dalian, Liaoning Province, China, including 15 fatal cases. The median age of the patients was 3 years. The incidence rate in epidemic years ranged from 137.54 to 231.44 per 100,000 population; the incidence rate in non-epidemic years was lower than 112 per 100,000 population. The negative binomial regression model with an AIC value of 147.28 was finally selected to construct the baseline level. Threshold values of 100 for epidemic years and 50 for non-epidemic years had the highest sensitivity (100%) in both retrospective and prospective early warning, and the detection time was 2 weeks before the actual start of the HFMD epidemic. The negative binomial regression model could provide early warning of the start of an HFMD epidemic with good sensitivity and appropriate detection time in Dalian.
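One simple way to derive an alerting threshold from a fitted negative binomial baseline, in the spirit of this approach, is to take an upper percentile of the baseline distribution. The published thresholds (100 and 50) were chosen by sensitivity analysis; the mean and dispersion below are purely hypothetical.

```python
from scipy.stats import nbinom

# hypothetical weekly baseline: mean mu and NB dispersion alpha,
# so var = mu + alpha * mu**2; scipy's nbinom uses (n, p) with n = 1/alpha
mu, alpha = 60.0, 0.25
n = 1.0 / alpha
p = n / (n + mu)

threshold = nbinom.ppf(0.95, n, p)   # flag weeks exceeding the 95th percentile
print(threshold)
```

Because the NB variance exceeds the mean, this threshold sits well above a Poisson-based one at the same mean, reducing false alarms when the baseline counts are overdispersed.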

  13. Binomial collisions and near collisions

    OpenAIRE

    Blokhuis, Aart; Brouwer, Andries; de Weger, Benne

    2017-01-01

    We describe efficient algorithms to search for cases in which binomial coefficients are equal or almost equal, give a conjecturally complete list of all cases where two binomial coefficients differ by 1, and give some identities for binomial coefficients that seem to be new.
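The authors' algorithms are designed to be efficient; a brute-force version of the same search is nonetheless straightforward with exact integer arithmetic. This sketch finds nontrivial coincidences C(n, k) = C(m, j) among small binomial coefficients, restricting to 2 ≤ k ≤ n/2 to exclude the trivial symmetries C(n, k) = C(n, n-k) and C(n, 1) = n.

```python
from math import comb
from collections import defaultdict

# collect nontrivial coefficients C(n, k) with 2 <= k <= n/2, n up to 100
seen = defaultdict(list)
for n in range(4, 101):
    for k in range(2, n // 2 + 1):
        seen[comb(n, k)].append((n, k))

collisions = {v: locs for v, locs in seen.items() if len(locs) > 1}
print(sorted(collisions.items())[:3])   # includes 120 = C(10,3) = C(16,2)
```

Exact `math.comb` avoids the floating-point hazards that would otherwise produce spurious "collisions" between large coefficients.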

  14. Improved binomial charts for monitoring high-quality processes

    NARCIS (Netherlands)

    Albers, Willem/Wim

    2009-01-01

    For processes concerning attribute data with (very) small failure rate p, often negative binomial control charts are used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires

  15. Measured PET Data Characterization with the Negative Binomial Distribution Model.

    Science.gov (United States)

    Santarelli, Maria Filomena; Positano, Vincenzo; Landini, Luigi

    2017-01-01

    An accurate statistical model of PET measurements is a prerequisite for correct image reconstruction when statistical image reconstruction algorithms are used, or when pre-filtering operations must be performed. Although radioactive decay follows a Poisson distribution, deviations from Poisson statistics occur in projection data prior to reconstruction due to physical effects, measurement errors, and correction of scatter and random coincidences. Modelling projection data can aid in understanding the statistical nature of the data in order to develop efficient processing methods and to reduce noise. This paper outlines the statistical behaviour of measured emission data by evaluating the goodness of fit of the negative binomial (NB) distribution model to PET data for a wide range of emission activity values. An NB distribution model is characterized by the mean of the data and the dispersion parameter α that describes the deviation from Poisson statistics. Monte Carlo simulations were performed to evaluate: (a) the performance of the dispersion parameter α estimator, and (b) the goodness of fit of the NB model for a wide range of activity values. We focused on the effect produced by correction for random and scatter events in the projection (sinogram) domain, due to their importance in the quantitative analysis of PET data. The analysis developed herein allowed us to assess the accuracy of the NB distribution model in fitting corrected sinogram data, and to evaluate the sensitivity of the dispersion parameter α for quantifying deviation from Poisson statistics. Through the sinogram ROI-based analysis, it was demonstrated that deviation of the measured data from Poisson statistics can be quantitatively characterized by the dispersion parameter α, under any noise conditions and corrections.
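In the NB parameterization used here, Var = μ + αμ², so α has a simple method-of-moments estimator, (s² − m)/m², which is zero for exactly Poisson data. A hedged sketch on simulated counts (hypothetical parameters; the paper's own estimator may differ):

```python
import numpy as np

def alpha_moment(x):
    """Method-of-moments dispersion estimate for var = mu + alpha * mu**2."""
    m, v = np.mean(x), np.var(x, ddof=1)
    return (v - m) / m**2

rng = np.random.default_rng(0)
# simulate NB counts with mu = 50 and alpha = 0.1 (size n = 1/alpha)
n, mu = 10.0, 50.0
x = rng.negative_binomial(n, n / (n + mu), size=50_000)
print(alpha_moment(x))   # close to the true alpha = 0.1
```

On Poisson-distributed data the same estimator fluctuates around zero, which is how α quantifies departure from Poisson statistics.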

  16. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics of the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with 2 negative binomial distributions, QNB is based on 4 independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, unlike the DRME approach, which relies on the input control sample alone for estimating the background, QNB uses a more robust estimator of gene expression that combines information from both input and IP samples, which can largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.

  17. Comparison of Negative Binomial Regression and Conway-Maxwell-Poisson Regression in Overcoming Overdispersion in Poisson Regression

    Directory of Open Access Journals (Sweden)

    Lusi Eka Afri

    2017-03-01

    Full Text Available Negative binomial regression and Conway-Maxwell-Poisson regression are solutions for overcoming overdispersion in Poisson regression. Both models are extensions of the Poisson regression model. According to Hinde and Demetrio (2007), there are several possible sources of overdispersion in Poisson regression: variability of the observations, individual variability not explained by the model, correlation among individual responses, clustering within the population, and omitted observed variables. As a consequence, the standard errors are underestimated and the parameter estimates are biased downward (underestimated). This study aims to compare the negative binomial regression model and the Conway-Maxwell-Poisson (COM-Poisson) regression model in overcoming overdispersion in Poisson-distributed data, based on the deviance test statistic. The data used in this study come from two sources: simulated data and an applied case study. The simulated data were generated as overdispersed Poisson-distributed data using the R programming language, based on data characteristics such as the probability of a zero value (p) and the sample size (n). The generated data were used to obtain estimates of the parameter coefficients for negative binomial regression and COM-Poisson regression. Keywords: overdispersion, negative binomial regression, Conway-Maxwell-Poisson regression

  18. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating, given that the probability that any one component operates is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to ensure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
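CUMBIN's core quantity, the reliability of a k-out-of-n system of independent components with common reliability p, is a cumulative binomial tail. A minimal re-implementation sketch (not the original C program):

```python
from math import comb

def cumbin(k, n, p):
    """Probability that at least k of n independent components operate,
    each with reliability p (the upper tail of a Binomial(n, p))."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# reliability of a 2-out-of-3 system with component reliability 0.9
print(cumbin(2, 3, 0.9))   # 0.972
```

For k = 0 the sum covers the whole distribution and returns 1, matching CUMBIN's stated valid range 0 < k <= n with the degenerate case handled naturally.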

  19. Beta-binomial regression and bimodal utilization.

    Science.gov (United States)

    Liu, Chuan-Fen; Burgess, James F; Manning, Willard G; Maciejewski, Matthew L

    2013-10-01

    To illustrate how the analysis of bimodal U-shaped distributed utilization can be modeled with beta-binomial regression, which is rarely used in health services research. Veterans Affairs (VA) administrative data and Medicare claims in 2001-2004 for 11,123 Medicare-eligible VA primary care users in 2000. We compared means and distributions of VA reliance (the proportion of all VA/Medicare primary care visits occurring in VA) predicted from beta-binomial, binomial, and ordinary least-squares (OLS) models. Beta-binomial model fits the bimodal distribution of VA reliance better than binomial and OLS models due to the nondependence on normality and the greater flexibility in shape parameters. Increased awareness of beta-binomial regression may help analysts apply appropriate methods to outcomes with bimodal or U-shaped distributions. © Health Research and Educational Trust.

  20. Regular exercise and related factors in patients with Parkinson's disease: Applying zero-inflated negative binomial modeling of exercise count data.

    Science.gov (United States)

    Lee, JuHee; Park, Chang Gi; Choi, Moonki

    2016-05-01

    This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Prediction of province-level outbreaks of foot-and-mouth disease in Iran using a zero-inflated negative binomial model.

    Science.gov (United States)

    Jafarzadeh, S Reza; Norris, Michelle; Thurmond, Mark C

    2014-08-01

    To identify events that could predict province-level frequency of foot-and-mouth disease (FMD) outbreaks in Iran, 5707 outbreaks reported from April 1995 to March 2002 were studied. A zero-inflated negative binomial model was used to estimate the probability of a 'no-outbreak' status and the number of outbreaks in a province, using the number of previous occurrences of FMD for the same or adjacent provinces and season as covariates. For each province, the probability of observing no outbreak was negatively associated with the number of outbreaks in the same province in the previous month (odds ratio [OR]=0.06, 95% confidence interval [CI]: 0.01, 0.30) and in the second previous month (OR=0.10, 95% CI: 0.02, 0.51), the total number of outbreaks in the second previous month in adjacent provinces (OR=0.57, 95% CI: 0.36, 0.91) and the season (winter [OR=0.18, 95% CI: 0.06, 0.55] and spring [OR=0.27, 95% CI: 0.09, 0.81], compared with summer). The expected number of outbreaks in a province was positively associated with the number of outbreaks in the same province in the previous month (coefficient [coef]=0.74, 95% CI: 0.66, 0.82) and in the second previous month (coef=0.23, 95% CI: 0.16, 0.31), the total number of outbreaks in adjacent provinces in the previous month (coef=0.32, 95% CI: 0.22, 0.41) and season (fall [coef=0.20, 95% CI: 0.07, 0.33] and spring [coef=0.18, 95% CI: 0.05, 0.31], compared to summer); however, the number of outbreaks was negatively associated with the total number of outbreaks in adjacent provinces in the second previous month (coef=-0.19, 95% CI: -0.28, -0.09). The findings indicate that the probability of an outbreak (and the expected number of outbreaks, if any) may be predicted based on previous province information, which could help decision-makers allocate resources more efficiently for province-level disease control measures. Further, the study illustrates the use of a zero-inflated negative binomial model to study disease occurrence where disease is

  2. Expansion around half-integer values, binomial sums, and inverse binomial sums

    International Nuclear Information System (INIS)

    Weinzierl, Stefan

    2004-01-01

    I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms allow in addition the evaluation of binomial sums, inverse binomial sums and generalizations thereof

  3. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.

  4. Comparison of linear and zero-inflated negative binomial regression models for appraisal of risk factors associated with dental caries.

    Science.gov (United States)

    Batra, Manu; Shah, Aasim Farooq; Rajput, Prashant; Shah, Ishrat Aasim

    2016-01-01

    Dental caries among children has been described as a pandemic disease with a multifactorial nature. Various sociodemographic factors and oral hygiene practices are commonly tested for their influence on dental caries. In recent years, a statistical model that allows for covariate adjustment has been developed, commonly referred to as the zero-inflated negative binomial (ZINB) model. The aim was to compare the fit of two models, the conventional linear regression (LR) model and the ZINB model, in assessing the risk factors associated with dental caries. A cross-sectional survey was conducted on 1138 12-year-old school children in Moradabad Town, Uttar Pradesh, between February and August 2014. Selected participants were interviewed using a questionnaire. Dental caries was assessed by recording the decayed, missing, or filled teeth (DMFT) index. To assess the risk factors associated with dental caries in children, two approaches were applied: the LR model and the ZINB model. The prevalence of caries-free subjects was 24.1%, and the mean DMFT was 3.4 ± 1.8. In the LR model, all the variables were statistically significant. In the ZINB model, the negative binomial part showed place of residence, father's education level, tooth brushing frequency, and dental visits to be statistically significant, implying that the likelihood of being caries-free (DMFT = 0) increases for children who live in urban areas, whose fathers are university graduates, who brush twice a day, and who have ever visited a dentist. The current study reports that the LR model is poorly fitted and may lead to spurious conclusions, whereas the ZINB model shows better goodness of fit (Akaike information criterion values - LR: 3.94; ZINB: 2.39) and can be preferred when high variance and an excess of zeros are present.

  5. Modelling the Frequency of Operational Risk Losses under the Basel II Capital Accord: A Comparative study of Poisson and Negative Binomial Distributions

    OpenAIRE

    Silver, Toni O.

    2013-01-01

    2013 dissertation for MSc in Finance and Risk Management. Selected by academic staff as a good example of a masters level dissertation. This study investigated the two major methods of modelling the frequency of operational losses under the BCBS Accord of 1998, known as the Basel II Capital Accord. It compared the Poisson method of modelling the frequency of losses to that of the Negative Binomial. The frequency of operational losses was investigated using a cross section of se...

  6. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.

  7. Pooling overdispersed binomial data to estimate event rate.

    Science.gov (United States)

    Young-Xu, Yinong; Chan, K Arnold

    2008-08-19

    The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.
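As a quick consistency check on the reported numbers, the summary event rate implied by the fitted beta-binomial parameters is their mean, α/(α+β):

```python
# implied summary event rate from the reported beta-binomial parameters
alpha, beta = 1.24, 34.73        # values stated in the abstract
rate = alpha / (alpha + beta)
print(round(100 * rate, 2))      # ≈ 3.45%, consistent with the reported 3.44%
```

The small discrepancy simply reflects rounding in the published parameter values.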

  8. Pooling overdispersed binomial data to estimate event rate

    Directory of Open Access Journals (Sweden)

    Chan K Arnold

    2008-08-01

    Full Text Available Abstract Background The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. Methods We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. Results In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. Conclusion The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.

  9. Effect of Hurdle Technology in Food Preservation: A Review.

    Science.gov (United States)

    Singh, Shiv; Shalini, Rachana

    2016-01-01

    Hurdle technology is used in industrialized as well as in developing countries for the gentle but effective preservation of foods. Hurdle technology was developed several years ago as a new concept for the production of safe, stable, nutritious, tasty, and economical foods. Previously, hurdle technology, i.e., a combination of preservation methods, was used empirically without much knowledge of the governing principles. The intelligent application of hurdle technology has become more prevalent now that the principles of the major preservative factors for foods (e.g., temperature, pH, water activity (aw), redox potential (Eh), competitive flora), and their interactions, are better known. Recently, the influence of food preservation methods on the physiology and behavior of microorganisms in foods, i.e., their homeostasis, metabolic exhaustion, and stress reactions, has been taken into account, and the novel concept of multi-target food preservation has emerged. The present contribution reviews the concept of potential hurdles for foods, the hurdle effect, and hurdle technology, with a view toward the future goal of multi-target preservation of foods.

  10. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. The results are often assessed through either a binomial or a permutation test. Here, we simulated classification results on randomly generated data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution; the binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, mental imagery of gait discriminated significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing can lead to biased estimation of significance and to false positive or negative results. In our view, permutation testing is thus recommended for clinical applications of classification with cross-validation.
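A minimal permutation test for classification accuracy, of the kind the authors recommend, can be sketched as follows (illustrative data, not the study's classifiers): shuffle the true labels many times to build the null distribution of accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_pvalue(y_true, y_pred, n_perm=10_000):
    """Permutation p-value for accuracy: shuffle the true labels to build
    the null distribution, then count permutations at least as accurate."""
    observed = np.mean(y_true == y_pred)
    null = np.array([np.mean(rng.permutation(y_true) == y_pred)
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

y_true = np.repeat([0, 1], 20)
y_pred = y_true.copy()
y_pred[:6] = 1 - y_pred[:6]        # 34/40 correct
print(perm_pvalue(y_true, y_pred))
```

Unlike a binomial test, the null distribution here is built from the data themselves, so it remains valid even when cross-validated predictions are not independent trials.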

  11. Una prueba de razón de verosimilitudes para discriminar entre la distribución Poisson, Binomial y Binomial Negativa.

    OpenAIRE

    López Martínez, Laura Elizabeth

    2010-01-01

    This work carries out statistical inference for the Generalized Negative Binomial (GNB) distribution and the models it nests, namely the Binomial, Negative Binomial, and Poisson. The problem of parameter estimation in the GNB distribution is addressed, and a generalized likelihood ratio test is proposed for discerning whether a data set fits the Binomial, Negative Binomial, or Poisson model in particular. In addition, the power and size of the test...

  12. Log-binomial models: exploring failed convergence.

    Science.gov (United States)

    Williamson, Tyler; Eliasziw, Misha; Fick, Gordon Hilton

    2013-12-13

    Relative risk is a summary metric that is commonly used in epidemiological investigations. Increasingly, epidemiologists are using log-binomial models to study the impact of a set of predictor variables on a single binary outcome, as they naturally offer relative risks. However, standard statistical software may report failed convergence when attempting to fit log-binomial models in certain settings. The methods that have been proposed in the literature for dealing with failed convergence use approximate solutions to avoid the issue. This research looks directly at the log-likelihood function for the simplest log-binomial model where failed convergence has been observed, a model with a single linear predictor with three levels. The possible causes of failed convergence are explored and potential solutions are presented for some cases. Among the principal causes is a failure of the fitting algorithm to converge despite the log-likelihood function having a single finite maximum. Despite these limitations, log-binomial models are a viable option for epidemiologists wishing to describe the relationship between a set of predictors and a binary outcome where relative risk is the desired summary measure. Epidemiologists are encouraged to continue to use log-binomial models and advocate for improvements to the fitting algorithms to promote the widespread use of log-binomial models.
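    The convergence problem has a simple geometric source: the log link does not constrain fitted probabilities to stay below 1, so the maximizer can sit on or near the boundary exp(Xβ) = 1. A minimal hand-rolled illustration (NumPy/SciPy; the data and coefficients are hypothetical, and direct optimization here stands in for the IRLS fitting algorithms discussed above) makes the constraint explicit:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Single predictor with three levels -- the simplest setting studied above.
x = np.repeat([0.0, 1.0, 2.0], 200)
X = np.column_stack([np.ones_like(x), x])
true_beta = np.array([-1.5, 0.3])            # risks exp(-1.5)..exp(-0.9), all safely < 1
y = rng.binomial(1, np.exp(X @ true_beta))

def negloglik(beta):
    p = np.exp(X @ beta)                     # log link: p = exp(X beta)
    if np.any(p >= 1.0):                     # nothing keeps p below 1 -- this
        return np.inf                        # boundary is what breaks convergence
    return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

res = minimize(negloglik, x0=np.array([-1.0, 0.0]), method="Nelder-Mead")
print("beta_hat:", res.x, " relative risk per level:", np.exp(res.x[1]))
```

    The exponentiated slope is the relative risk per level of the predictor, which is the summary measure motivating log-binomial models in the first place.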

  13. Jumps in binomial AR(1) processes

    OpenAIRE

    Weiß, Christian H.

    2009-01-01

    We consider the binomial AR(1) model for serially dependent processes of binomial counts. After a review of its definition and known properties, we investigate marginal and serial properties of jumps in such processes. Based on these results, we propose the jumps control chart for monitoring a binomial AR(1) process. We show how to evaluate the performance of this control chart and give design recommendations.
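    A simulation of the model's definition can be sketched as follows (using the standard parametrization, in which the process has marginal distribution Bin(n, π) and lag-one autocorrelation ρ; the parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def binomial_ar1(n, pi, rho, T):
    """Simulate the binomial AR(1) process X_t = alpha∘X_{t-1} + beta∘(n - X_{t-1}),
    where '∘' denotes binomial thinning.  With beta = pi*(1 - rho) and
    alpha = beta + rho, the marginal law is Bin(n, pi) and Corr(X_t, X_{t-1}) = rho."""
    beta = pi * (1 - rho)
    alpha = beta + rho
    x = np.empty(T, dtype=int)
    x[0] = rng.binomial(n, pi)
    for t in range(1, T):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.binomial(n - x[t - 1], beta)
    return x

x = binomial_ar1(n=10, pi=0.4, rho=0.5, T=20000)
print("sample mean:", x.mean(), "(theory: 4.0)")
print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1], "(theory: 0.5)")
```

    Jumps in such a process are the differences X_t - X_{t-1}, whose marginal and serial properties the paper derives for control-chart design.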

  14. Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun Uso de distribuição binomial negativa para descrever a presença de Anisakis em Thyrsites atun

    Directory of Open Access Journals (Sweden)

    Patricio Peña-Rehbein

    2012-03-01

    Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine per host. The environmental and zoonotic aspects of the study are also discussed.
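    The distributional argument can be sketched numerically: when per-host counts are clustered, the sample variance exceeds the mean, and a negative binomial can be matched by the method of moments. The counts below are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-host parasite counts (0..9), clustered: variance exceeds the mean.
counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 5, 1, 0, 9, 0, 3, 0, 0, 1, 7, 0, 2])

m, v = counts.mean(), counts.var(ddof=1)
assert v > m, "a negative binomial fit requires overdispersion"

# Method-of-moments parameters: size r and success probability p.
r = m**2 / (v - m)
p = m / v

print(f"mean={m:.2f} var={v:.2f}  ->  r={r:.2f}, p={p:.2f}")
print("P(X=0) under the fitted negative binomial:", stats.nbinom.pmf(0, r, p))
```

    By construction the fitted distribution reproduces the sample mean; a Poisson, whose variance equals its mean, could not accommodate clustering of this kind.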

  15. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  16. Firm-level innovation activity, employee turnover and HRM practices

    DEFF Research Database (Denmark)

    Eriksson, Tor; Qin, Zhihua; Wang, Wenjing

    2014-01-01

    This paper examines the relationship between employee turnover, HRM practices and innovation in Chinese firms in five high technology sectors. We estimate hurdle negative binomial models for count data on survey data allowing for analyses of the extensive as well as intensive margins of firms' in...

  17. A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.

    Science.gov (United States)

    Smith, Gregory R; Birtwistle, Marc R

    2016-01-01

    A main application for mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most software packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value to the dispersion parameter at moderate mean read counts. We show how this leads to current approaches overestimating variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes.
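    The quadratic mean-variance relationship follows directly from the beta-binomial variance formula, Var = nμ(1−μ)[1 + (n−1)ρ] with ρ = 1/(a+b+1), which is a quadratic polynomial in the mean nμ. A quick numerical check (SciPy; the read depth n and dispersion φ = a+b are illustrative values, not fitted to any dataset):

```python
import numpy as np
from scipy.stats import betabinom

n = 10_000           # total read count per sample (hypothetical)
phi = 50.0           # phi = a + b controls dispersion; smaller phi -> more dispersion
rho = 1 / (phi + 1)  # pairwise read correlation ("dispersion parameter")

for mean in [10, 100, 1000]:
    mu = mean / n
    a, b = phi * mu, phi * (1 - mu)
    var = betabinom.var(n, a, b)
    # Closed form: a quadratic polynomial in the mean n*mu.
    quad = n * mu * (1 - mu) * (1 + (n - 1) * rho)
    assert np.isclose(var, quad)
    print(f"mean={mean:5d}  var={var:12.1f}  variance/mean={var / mean:8.1f}")
```

    The variance/mean ratio grows with expression level, which is the overdispersion pattern the abstract describes; how ρ itself varies with the mean is the point on which the paper departs from current approaches.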

  18. INVESTIGATION OF E-MAIL TRAFFIC BY USING ZERO-INFLATED REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Yılmaz KAYA

    2012-06-01

    In count data, the number of zero values observed may be greater than anticipated. Such data sets should be analyzed with regression methods that take the excess zeros into account. Zero-Inflated Poisson (ZIP), Zero-Inflated Negative Binomial (ZINB), Poisson Hurdle (PH), and Negative Binomial Hurdle (NBH) regression are the most common approaches for modeling dependent variables with more zero values than expected. In the present study, the e-mail traffic of Yüzüncü Yıl University in the 2009 spring semester was investigated. ZIP, ZINB, PH, and NBH regression methods were applied to the data set because more zero counts (78.9%) were found than expected. ZINB and NBH regression, which account for both excess zeros and overdispersion, gave more accurate results because the e-mail counts exhibited overdispersion as well as zero inflation. ZINB was determined to be the best model according to the Vuong statistic and information criteria.
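    A zero-inflated Poisson fit can be sketched directly from its likelihood. The example below (SciPy; the simulated "e-mail counts" are hypothetical, and the study itself also compared ZINB, PH, and NBH fits) contrasts ZIP with a plain Poisson via AIC:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(3)

# Hypothetical daily e-mail counts: 40% structural zeros, Poisson(3) otherwise.
n = 1000
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(3.0, size=n))

def zip_negloglik(theta):
    """ZIP likelihood: P(0) = pi + (1-pi)e^{-lam}, P(k) = (1-pi) Pois(k; lam)."""
    pi, lam = expit(theta[0]), np.exp(theta[1])   # keep pi in (0,1) and lam > 0
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_negloglik, x0=np.zeros(2), method="Nelder-Mead")

# Plain Poisson benchmark: the MLE of lambda is the sample mean.
lam_hat = y.mean()
pois_nll = -np.sum(-lam_hat + y * np.log(lam_hat) - gammaln(y + 1))

aic_zip, aic_pois = 2 * 2 + 2 * res.fun, 2 * 1 + 2 * pois_nll
print(f"pi_hat={expit(res.x[0]):.2f}  lam_hat={np.exp(res.x[1]):.2f}")
print(f"AIC: ZIP {aic_zip:.1f}  vs  Poisson {aic_pois:.1f}")
```

    A hurdle variant would instead model P(0) freely and use a zero-truncated Poisson for the positives; the zero-inflated form used here mixes structural zeros with sampling zeros.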

  19. System-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, NEWTONP, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
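    NEWTONP's task, finding the component probability that yields a given k-out-of-n system reliability, can be sketched as a one-dimensional root-finding problem. This is an illustrative reimplementation of the idea, not the NASA code, and it uses bisection rather than Newton's method for simplicity:

```python
from math import comb

def system_reliability(p, k, n):
    """P(at least k of n independent components with reliability p work)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def component_reliability(target, k, n, tol=1e-12):
    """Invert system_reliability in p by bisection (the program uses Newton's method)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if system_reliability(mid, k, n) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = component_reliability(0.999, k=2, n=3)
print(f"a 2-of-3 system needs p = {p:.6f} for 99.9% system reliability")
```

    Bisection is applicable because the cumulative binomial is monotone increasing in p.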

  20. Some considerations for excess zeroes in substance abuse research.

    Science.gov (United States)

    Bandyopadhyay, Dipankar; DeSantis, Stacia M; Korte, Jeffrey E; Brady, Kathleen T

    2011-09-01

    Count data collected in substance abuse research often come with an excess of "zeroes," which are typically handled using zero-inflated regression models. However, there is a need to consider the design aspects of those studies before using such a statistical model to ascertain the sources of zeroes. We sought to illustrate hurdle models as alternatives to zero-inflated models to validate a two-stage decision-making process in situations of "excess zeroes." We use data from a study of 45 cocaine-dependent subjects where the primary scientific question was to evaluate whether study participation influences drug-seeking behavior. The outcome, "the frequency (count) of cocaine use days per week," is bounded (ranging from 0 to 7). We fit and compare binomial, Poisson, negative binomial, and the hurdle version of these models to study the effect of gender, age, time, and study participation on cocaine use. The hurdle binomial model provides the best fit. Gender and time are not predictive of use. Higher odds of use versus no use are associated with age; however once use is experienced, odds of further use decrease with increase in age. Participation was associated with higher odds of no cocaine use; once there is use, participation reduced the odds of further use. Age and study participation are significantly predictive of cocaine-use behavior. The two-stage decision process as modeled by a hurdle binomial model (appropriate for bounded count data with excess zeroes) provides interesting insights into the study of covariate effects on count responses of substance use, when all enrolled subjects are believed to be "at-risk" of use.
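    The two-stage structure is easy to sketch: a hurdle part for zero versus any use, and a zero-truncated binomial for the positive counts. The simulated data and the covariate-free fit below are illustrative only; the paper's models include gender, age, time, and participation as predictors:

```python
import numpy as np
from math import comb
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n_days = 7

# Hypothetical weekly use counts in 0..7 with excess zeros (hurdle-type data).
y = rng.binomial(n_days, 0.5, size=45)
y[rng.random(45) < 0.4] = 0          # extra "structural" zero weeks

pi0 = np.mean(y == 0)                # stage 1: the hurdle -- probability of a zero week
pos = y[y > 0]                       # stage 2: zero-truncated binomial for use weeks

def trunc_nll(p):
    """Negative log-likelihood of Binomial(7, p) truncated at zero."""
    log_pmf = sum(np.log(comb(n_days, k) * p**k * (1 - p)**(n_days - k))
                  for k in pos)
    return -(log_pmf - len(pos) * np.log(1 - (1 - p)**n_days))

p_hat = minimize_scalar(trunc_nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x
print(f"P(zero week) = {pi0:.2f},  truncated-binomial p = {p_hat:.2f}")
```

    The two stages factor the likelihood, so each part can be estimated separately; in a regression setting each stage would carry its own coefficients, which is what lets covariates act differently on initiation and on intensity of use.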

  1. Correlated binomial models and correlation structures

    International Nuclear Information System (INIS)

    Hisakado, Masato; Kitsukawa, Kenji; Mori, Shintaro

    2006-01-01

    We discuss a general method to construct correlated binomial distributions by imposing several consistent relations on the joint probability function. We obtain self-consistency relations for the conditional correlations and conditional probabilities. The beta-binomial distribution is derived by a strong symmetric assumption on the conditional correlations. Our derivation clarifies the 'correlation' structure of the beta-binomial distribution. It is also possible to study the correlation structures of other probability distributions of exchangeable (homogeneous) correlated Bernoulli random variables. We study some distribution functions and discuss their behaviours in terms of their correlation structures

  2. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest, M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CROSSER, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), used independently of one another. Point of equality between reliability of system and common reliability of components found. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
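    CROSSER's "point of equality" is the nontrivial solution of R_sys(p) = p for a k-out-of-n system. A bisection sketch (the program itself is written in C; this is an illustrative reimplementation of the computation it describes, not the NASA code):

```python
from math import comb

def system_reliability(p, k, n):
    """Reliability of a k-out-of-n system of independent components."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def crossover(k, n, tol=1e-12):
    """Nontrivial point where system reliability equals component reliability."""
    lo, hi = 1e-6, 1.0 - 1e-6          # exclude the trivial fixed points 0 and 1
    g = lambda p: system_reliability(p, k, n) - p
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# For a 2-out-of-3 system, 3p^2 - 2p^3 = p has the nontrivial solution p = 0.5.
print("2-of-3 crossover:", crossover(2, 3))
```

    Above the crossover, redundancy makes the system more reliable than its components; below it, less.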

  3. Single-vehicle crashes along rural mountainous highways in Malaysia: An application of random parameters negative binomial model.

    Science.gov (United States)

    Rusli, Rusdi; Haque, Md Mazharul; King, Mark; Voon, Wong Shaw

    2017-05-01

    Mountainous highways are generally associated with complex driving environments because of constrained road geometries, limited cross-section elements, inappropriate roadside features, and adverse weather conditions. As a result, single-vehicle (SV) crashes are overrepresented along mountainous roads, particularly in developing countries, but little is known about the roadway geometric, traffic and weather factors contributing to these SV crashes. As such, the main objective of the present study is to investigate SV crashes using detailed data obtained from a rigorous site survey and existing databases. The final dataset included a total of 56 variables representing road geometries including horizontal and vertical alignment, traffic characteristics, real-time weather condition, cross-sectional elements, roadside features, and spatial characteristics. To account for structured heterogeneities resulting from multiple observations within a site and other unobserved heterogeneities, the study applied a random parameters negative binomial model. Results suggest that rainfall during the crash is positively associated with SV crashes, but real-time visibility is negatively associated. The presence of a road shoulder, particularly a bitumen shoulder or wider shoulders, along mountainous highways is associated with fewer SV crashes. While speeding along downgrade slopes increases the likelihood of SV crashes, proper delineation decreases the likelihood. Findings of this study have significant implications for designing safer highways in mountainous areas, particularly in the context of a developing country. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Defining the critical hurdles in cancer immunotherapy

    DEFF Research Database (Denmark)

    Fox, Bernard A; Schendel, Dolores J; Butterfield, Lisa H

    2011-01-01

    ...immunotherapy organizations representing Europe, Japan, China and North America to discuss collaborations to improve development and delivery of cancer immunotherapy. One of the concepts raised by SITC and defined as critical by all parties was the need to identify hurdles that impede effective translation of cancer immunotherapy. With consensus on these hurdles, international working groups could be developed to make recommendations vetted by the participating organizations. These recommendations could then be considered by regulatory bodies, governmental and private funding agencies, pharmaceutical companies and academic institutions to facilitate changes necessary to accelerate clinical translation of novel immune-based cancer therapies. The critical hurdles identified by representatives of the collaborating organizations, now organized as the World Immunotherapy Council, are presented and discussed...

  5. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    Science.gov (United States)

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed

  6. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.

  7. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  8. Defining the critical hurdles in cancer immunotherapy

    Science.gov (United States)

    2011-01-01

    Scientific discoveries that provide strong evidence of antitumor effects in preclinical models often encounter significant delays before being tested in patients with cancer. While some of these delays have a scientific basis, others do not. We need to do better. Innovative strategies need to move into early stage clinical trials as quickly as it is safe, and if successful, these therapies should efficiently obtain regulatory approval and widespread clinical application. In late 2009 and 2010 the Society for Immunotherapy of Cancer (SITC), convened an "Immunotherapy Summit" with representatives from immunotherapy organizations representing Europe, Japan, China and North America to discuss collaborations to improve development and delivery of cancer immunotherapy. One of the concepts raised by SITC and defined as critical by all parties was the need to identify hurdles that impede effective translation of cancer immunotherapy. With consensus on these hurdles, international working groups could be developed to make recommendations vetted by the participating organizations. These recommendations could then be considered by regulatory bodies, governmental and private funding agencies, pharmaceutical companies and academic institutions to facilitate changes necessary to accelerate clinical translation of novel immune-based cancer therapies. The critical hurdles identified by representatives of the collaborating organizations, now organized as the World Immunotherapy Council, are presented and discussed in this report. Some of the identified hurdles impede all investigators; others hinder investigators only in certain regions or institutions or are more relevant to specific types of immunotherapy or first-in-humans studies. Each of these hurdles can significantly delay clinical translation of promising advances in immunotherapy yet if overcome, have the potential to improve outcomes of patients with cancer. PMID:22168571

  9. Defining the critical hurdles in cancer immunotherapy

    Directory of Open Access Journals (Sweden)

    Fox Bernard A

    2011-12-01

    Scientific discoveries that provide strong evidence of antitumor effects in preclinical models often encounter significant delays before being tested in patients with cancer. While some of these delays have a scientific basis, others do not. We need to do better. Innovative strategies need to move into early stage clinical trials as quickly as it is safe, and if successful, these therapies should efficiently obtain regulatory approval and widespread clinical application. In late 2009 and 2010 the Society for Immunotherapy of Cancer (SITC) convened an "Immunotherapy Summit" with representatives from immunotherapy organizations representing Europe, Japan, China and North America to discuss collaborations to improve development and delivery of cancer immunotherapy. One of the concepts raised by SITC and defined as critical by all parties was the need to identify hurdles that impede effective translation of cancer immunotherapy. With consensus on these hurdles, international working groups could be developed to make recommendations vetted by the participating organizations. These recommendations could then be considered by regulatory bodies, governmental and private funding agencies, pharmaceutical companies and academic institutions to facilitate changes necessary to accelerate clinical translation of novel immune-based cancer therapies. The critical hurdles identified by representatives of the collaborating organizations, now organized as the World Immunotherapy Council, are presented and discussed in this report. Some of the identified hurdles impede all investigators; others hinder investigators only in certain regions or institutions or are more relevant to specific types of immunotherapy or first-in-humans studies. Each of these hurdles can significantly delay clinical translation of promising advances in immunotherapy yet if overcome, have the potential to improve outcomes of patients with cancer.

  10. Problems on Divisibility of Binomial Coefficients

    Science.gov (United States)

    Osler, Thomas J.; Smoak, James

    2004-01-01

    Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
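    Many divisibility questions of this kind reduce to Kummer's theorem: the exponent of a prime p in C(m+n, m) equals the number of carries when adding m and n in base p. A short numerical check (not one of the article's twelve problems, just an illustration of the tool):

```python
from math import comb

def carries(m, n, p):
    """Number of carries when adding m and n in base p."""
    count = carry = 0
    while m or n or carry:
        s = m % p + n % p + carry
        carry = 1 if s >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def p_adic_valuation(x, p):
    """Exponent of the prime p in the factorization of x."""
    v = 0
    while x % p == 0:
        v += 1
        x //= p
    return v

# Kummer's theorem: v_p(C(m+n, m)) equals carries(m, n, p).
for (m, n, p) in [(5, 7, 2), (10, 15, 3), (100, 28, 5)]:
    assert p_adic_valuation(comb(m + n, m), p) == carries(m, n, p)
    print(f"v_{p}(C({m + n},{m})) = {carries(m, n, p)}")
```

    For instance, adding 5 = 101₂ and 7 = 111₂ produces three carries, so 2³ exactly divides C(12, 5) = 792.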

  11. Predicting Cumulative Incidence Probability by Direct Binomial Regression

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard

  12. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2015-07-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. 
Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial

  13. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    Science.gov (United States)

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…

  14. Distinguishing between Rural and Urban Road Segment Traffic Safety Based on Zero-Inflated Negative Binomial Regression Models

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2012-01-01

    In this study, the traffic crash rate, total crash frequency, and injury and fatal crash frequency were taken into consideration for distinguishing between rural and urban road segment safety. The GIS-based crash data during four and half years in Pikes Peak Area, US were applied for the analyses. The comparative statistical results show that the crash rates in rural segments are consistently lower than in urban segments. Further, the regression results based on Zero-Inflated Negative Binomial (ZINB) regression models indicate that the urban areas have a higher crash risk in terms of both total crash frequency and injury and fatal crash frequency, compared to rural areas. Additionally, it is found that crash frequencies increase as traffic volume and segment length increase, though higher traffic volume lowers the likelihood of severe crash occurrence; compared to 2-lane roads, the 4-lane roads have lower crash frequencies but a higher probability of severe crash occurrence; and better road facilities with higher free-flow speed benefit from high-standard design features, resulting in a lower total crash frequency, but they cannot mitigate the severe crash risk.

  15. Detecting non-binomial sex allocation when developmental mortality operates.

    Science.gov (United States)

    Wilkinson, Richard D; Kapranas, Apostolos; Hardy, Ian C W

    2016-11-07

    Optimal sex allocation theory is one of the most intricately developed areas of evolutionary ecology. Under a range of conditions, particularly under population sub-division, selection favours sex being allocated to offspring non-randomly, generating non-binomial variances of offspring group sex ratios. Detecting non-binomial sex allocation is complicated by stochastic developmental mortality, as offspring sex can often only be identified on maturity with the sex of non-maturing offspring remaining unknown. We show that current approaches for detecting non-binomiality have limited ability to detect non-binomial sex allocation when developmental mortality has occurred. We present a new procedure using an explicit model of sex allocation and mortality and develop a Bayesian model selection approach (available as an R package). We use the double and multiplicative binomial distributions to model over- and under-dispersed sex allocation and show how to calculate Bayes factors for comparing these alternative models to the null hypothesis of binomial sex allocation. The ability to detect non-binomial sex allocation is greatly increased, particularly in cases where mortality is common. The use of Bayesian methods allows for the quantification of the evidence in favour of each hypothesis, and our modelling approach provides an improved descriptive capability over existing approaches. We use a simulation study to demonstrate substantial improvements in power for detecting non-binomial sex allocation in situations where current methods fail, and we illustrate the approach in real scenarios using empirically obtained datasets on the sexual composition of groups of gregarious parasitoid wasps. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Discovering Binomial Identities with PascGaloisJE

    Science.gov (United States)

    Evans, Tyler J.

    2008-01-01

    We describe exercises in which students use PascGaloisJE to formulate conjectures about certain binomial identities which hold when the binomial coefficients are interpreted as elements in the cyclic group Z[subscript p] of integers modulo a prime integer "p". In addition to having an appealing visual component, these exercises are open-ended and…

  17. Binomial vs poisson statistics in radiation studies

    International Nuclear Information System (INIS)

    Foster, J.; Kouris, K.; Spyrou, N.M.; Matthews, I.P.; Welsh National School of Medicine, Cardiff

    1983-01-01

    The processes of radioactive decay, decay and growth of radioactive species in a radioactive chain, prompt emission(s) from nuclear reactions, conventional activation and cyclic activation are discussed with respect to their underlying statistical density function. By considering the transformation(s) that each nucleus may undergo it is shown that all these processes are fundamentally binomial. Formally, when the number of experiments N is large and the probability of success p is close to zero, the binomial is closely approximated by the Poisson density function. In radiation and nuclear physics, N is always large: each experiment can be conceived of as the observation of the fate of each of the N nuclei initially present. Whether p, the probability that a given nucleus undergoes a prescribed transformation, is close to zero depends on the process and nuclide(s) concerned. Hence, although a binomial description is always valid, the Poisson approximation is not always adequate. Therefore further clarification is provided as to when the binomial distribution must be used in the statistical treatment of detected events. (orig.)
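
    The adequacy of the Poisson approximation described above can be checked numerically. A minimal sketch (parameter values are illustrative only, not from the paper): when N is large and p is close to zero the two densities nearly coincide, while for small N and moderate p the binomial description must be used.

```python
import math

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability for a count k with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Large N, small p: Poisson with lam = N*p is an excellent approximation.
n, p = 10_000, 0.0002
for k in range(5):
    print(f"k={k}: binomial={binom_pmf(k, n, p):.6f} "
          f"poisson={poisson_pmf(k, n * p):.6f}")

# Small N, p = 0.5: the approximation is clearly inadequate.
print(abs(binom_pmf(5, 10, 0.5) - poisson_pmf(5, 5.0)))
```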

  18. Tomography of binomial states of the radiation field

    NARCIS (Netherlands)

    Bazrafkan, MR; Man'ko

    2004-01-01

    The symplectic, optical, and photon-number tomographic symbols of binomial states of the radiation field are studied. Explicit relations for all tomograms of the binomial states are obtained. Two measures for nonclassical properties of these states are discussed.

  19. Application of binomial-edited CPMG to shale characterization.

    Science.gov (United States)

    Washburn, Kathryn E; Birdwell, Justin E

    2014-09-01

    Unconventional shale resources may contain a significant amount of hydrogen in organic solids such as kerogen, but it is not possible to directly detect these solids with many NMR systems. Binomial-edited pulse sequences capitalize on magnetization transfer between solids, semi-solids, and liquids to provide an indirect method of detecting solid organic materials in shales. When the organic solids can be directly measured, binomial-editing helps distinguish between different phases. We applied a binomial-edited CPMG pulse sequence to a range of natural and experimentally-altered shale samples. The most substantial signal loss is seen in shales rich in organic solids while fluids associated with inorganic pores seem essentially unaffected. This suggests that binomial-editing is a potential method for determining fluid locations, solid organic content, and kerogen-bitumen discrimination. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. A class of orthogonal nonrecursive binomial filters.

    Science.gov (United States)

    Haddad, R. A.

    1971-01-01

    The time- and frequency-domain properties of the orthogonal binomial sequences are presented. It is shown that these sequences, or digital filters based on them, can be generated using adders and delay elements only. The frequency-domain behavior of these nonrecursive binomial filters suggests a number of applications as low-pass Gaussian filters or as inexpensive bandpass filters.

  1. Robust inference in the negative binomial regression model with an application to falls data.

    Science.gov (United States)

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well known to be sensitive to model misspecifications, which in intervention studies where the NB regression model is used can take the form of patients falling much more than expected. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.

  2. Quality of hurdle treated pork sausages during refrigerated (4 ± 1°C) storage.

    Science.gov (United States)

    Thomas, R; Anjaneyulu, A S R; Kondaiah, N

    2010-06-01

    Pork sausages developed using hurdle technology were evaluated during refrigerated storage (4 ± 1°C). The hurdles incorporated were low pH, low water activity, vacuum packaging and post-package reheating. Dipping in potassium sorbate solution prior to vacuum packaging was also tried. Hurdle treatment significantly (p < 0.05) affected the sausages during storage, as indicated by TBARS and tyrosine values. Incorporation of hurdles decreased the growth of different spoilage and pathogenic microorganisms. The combination of pH, water activity, vacuum packaging and reheating inhibited the growth of yeasts and moulds up to 12 days, while additionally dipping the sausages in 1% potassium sorbate solution prior to packaging inhibited their growth even on the 30th day of storage. Incorporation of hurdles resulted in an initial reduction in all the sensory attributes, but they helped to maintain these attributes for a significantly longer period compared to the control. Hurdle-treated sausages exhibited no spoilage signs even on day 30, while the control sausages were found acceptable only up to 18 days.

  3. Generalization of Binomial Coefficients to Numbers on the Nodes of Graphs

    NARCIS (Netherlands)

    Khmelnitskaya, A.; van der Laan, G.; Talman, Dolf

    2016-01-01

    The triangular array of binomial coefficients, or Pascal's triangle, is formed by starting with an apex of 1. Every row of Pascal's triangle can be seen as a line-graph, to each node of which the corresponding binomial coefficient is assigned. We show that the binomial coefficient of a node is equal
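
    The rows of Pascal's triangle described above are generated by the additive recurrence C(n, k) = C(n-1, k-1) + C(n-1, k). A short sketch (the function name is ours, not the paper's):

```python
def pascal_row(n):
    """Row n of Pascal's triangle, built from the additive recurrence
    C(n, k) = C(n-1, k-1) + C(n-1, k), starting from the apex [1]."""
    row = [1]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]
```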

  4. Generalization of binomial coefficients to numbers on the nodes of graphs

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; van der Laan, Gerard; Talman, Dolf

    The triangular array of binomial coefficients, or Pascal's triangle, is formed by starting with an apex of 1. Every row of Pascal's triangle can be seen as a line-graph, to each node of which the corresponding binomial coefficient is assigned. We show that the binomial coefficient of a node is equal

  5. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios using a log-binomial model, with or without continuous covariates. Prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modelling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitudes towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when prevalence is high. SAS programs were provided to calculate prevalence ratios with or without continuous covariates in log-binomial regression analysis.
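
    The divergence between the odds ratio and the prevalence ratio when the outcome is common can be seen with a toy 2×2 table (all numbers below are illustrative, not from the study):

```python
# Hypothetical 2x2 table: exposure (rows) vs outcome yes/no (columns).
a, b = 60, 40   # exposed:   outcome yes / outcome no
c, d = 30, 70   # unexposed: outcome yes / outcome no

p_exposed   = a / (a + b)   # outcome prevalence among the exposed
p_unexposed = c / (c + d)   # outcome prevalence among the unexposed

prevalence_ratio = p_exposed / p_unexposed
odds_ratio = (a / b) / (c / d)

# With a common outcome the OR markedly overstates the PR.
print(round(prevalence_ratio, 3), round(odds_ratio, 3))
```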

  6. Smoothness in Binomial Edge Ideals

    Directory of Open Access Journals (Sweden)

    Hamid Damadi

    2016-06-01

    In this paper we study some geometric properties of the algebraic set associated to the binomial edge ideal of a graph. We study the singularity and smoothness of the algebraic set associated to the binomial edge ideal of a graph. Some of these algebraic sets are irreducible and some of them are reducible. If every irreducible component of the algebraic set is smooth we call the graph an edge smooth graph, otherwise it is called an edge singular graph. We show that complete graphs are edge smooth and introduce two conditions such that the graph G is edge singular if and only if it satisfies these conditions. Then, it is shown that cycles and most of trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.

  7. Comparison of the Binomial Method and the Black-Scholes Method in Option Pricing (Perbandingan Metode Binomial dan Metode Black-Scholes dalam Penentuan Harga Opsi)

    Directory of Open Access Journals (Sweden)

    Surya Amami Pramuditya

    2016-04-01

    An option is a contract between a holder (buyer) and a writer (seller) in which the writer gives the holder the right, but not the obligation, to buy or sell an asset at a specified price (the strike or exercise price) and within a specified time (the expiry or maturity date). There are several ways to determine the price of an option, including the Black-Scholes method and the binomial method. The binomial method is based on a model of stock-price movement that divides the time interval [0, T] into n equally long steps, while the Black-Scholes method models the stock-price movement as a stochastic process. As the number of time steps n in the binomial method grows, the binomial option value converges to the Black-Scholes option value. Key words: options, binomial, Black-Scholes.
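
    The convergence claimed above can be demonstrated directly. The sketch below uses the Cox-Ross-Rubinstein parameterization of the binomial tree (one standard choice of u, d and the risk-neutral probability; the abstract does not say which parameterization was used), with illustrative parameter values:

```python
import math

def black_scholes_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_binomial_call(S, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein n-step binomial price of the same call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    price = 0.0
    for j in range(n + 1):                 # j up-moves out of n steps
        prob = math.comb(n, j) * p**j * (1 - p)**(n - j)
        price += prob * max(S * u**j * d**(n - j) - K, 0.0)
    return math.exp(-r * T) * price

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
bs = black_scholes_call(S, K, r, sigma, T)
for n in (10, 100, 1000):
    print(n, crr_binomial_call(S, K, r, sigma, T, n) - bs)
```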

  8. The Binomial Distribution in Shooting

    Science.gov (United States)

    Chalikias, Miltiadis S.

    2009-01-01

    The binomial distribution is used to predict the winner of the 49th International Shooting Sport Federation World Championship in double trap shooting held in 2006 in Zagreb, Croatia. The outcome of the competition was definitely unexpected.

  9. Speech-discrimination scores modeled as a binomial variable.

    Science.gov (United States)

    Thornton, A R; Raffin, M J

    1978-09-01

    Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
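
    The binomial model has a simple consequence for the 25- vs 50-word debate mentioned above: the standard error of a percentage score shrinks with the square root of the list length. A minimal sketch (the 80% ability level is an arbitrary illustration, not a value from the study):

```python
import math

def score_se(true_p, n_words):
    """Standard error of a proportion score on an n-word list,
    treating each word as an independent Bernoulli trial."""
    return math.sqrt(true_p * (1 - true_p) / n_words)

# A listener whose true discrimination ability is 80%:
for n in (25, 50):
    print(n, round(100 * score_se(0.8, n), 1))  # SE in percentage points
```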

  10. Factors related to the use of antenatal care services in Ethiopia: Application of the zero-inflated negative binomial model.

    Science.gov (United States)

    Assefa, Enyew; Tadesse, Mekonnen

    2017-08-01

    The major causes for poor health in developing countries are inadequate access and under-use of modern health care services. The objective of this study was to identify and examine factors related to the use of antenatal care services using the 2011 Ethiopia Demographic and Health Survey data. The number of antenatal care visits during the last pregnancy by mothers aged 15 to 49 years (n = 7,737) was analyzed. More than 55% of the mothers did not use antenatal care (ANC) services, while more than 22% of the women used antenatal care services less than four times. More than half of the women (52%) who had access to health services had at least four antenatal care visits. The zero-inflated negative binomial model was found to be more appropriate for analyzing the data. Place of residence, age of mothers, woman's educational level, employment status, mass media exposure, religion, and access to health services were significantly associated with the use of antenatal care services. Accordingly, there should be progress toward a health-education program that enables more women to utilize ANC services, with the program targeting women in rural areas, uneducated women, and mothers with higher birth orders through appropriate media.
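
    The zero-inflated negative binomial model used here mixes a point mass at zero (structural zeros, e.g. mothers who never attend) with an ordinary negative binomial count distribution. A minimal sketch of the pmf, with illustrative parameters unrelated to the fitted model:

```python
import math

def nb_pmf(k, r, p):
    """Negative binomial pmf (r successes parameterization):
    P(K = k) = C(k + r - 1, k) * (1 - p)^k * p^r."""
    return math.comb(k + r - 1, k) * (1 - p)**k * p**r

def zinb_pmf(k, pi, r, p):
    """Zero-inflated NB: with probability pi the count is a structural zero,
    otherwise it is drawn from the NB distribution."""
    base = nb_pmf(k, r, p)
    return pi + (1 - pi) * base if k == 0 else (1 - pi) * base

# A population where 55% are structural zeros, the rest NB-distributed:
pi, r, p = 0.55, 3, 0.5
print(round(sum(zinb_pmf(k, pi, r, p) for k in range(200)), 6))  # sums to 1
print(zinb_pmf(0, pi, r, p))  # inflated mass at zero
```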

  11. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression models, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less non-convergence and has advantages in application over the conventional log-binomial regression model.

  12. The Binomial Model for Valuing Employee Stock Options (Penggunaan Model Binomial pada Penentuan Harga Opsi Saham Karyawan)

    Directory of Open Access Journals (Sweden)

    Dara Puspita Anggraeni

    2015-11-01

    Binomial Model for Valuing Employee Stock Options. Employee stock options (ESO) differ from standard exchange-traded options. There are three main differences in a valuation model for employee stock options: the vesting period, the exit rate, and non-transferability. In this thesis, a model for valuing employee stock options is discussed and implemented with a generalized binomial model.

  13. Newton Binomial Formulas in Schubert Calculus

    OpenAIRE

    Cordovez, Jorge; Gatto, Letterio; Santiago, Taise

    2008-01-01

    We prove Newton's binomial formulas for Schubert Calculus to determine numbers of base point free linear series on the projective line with prescribed ramification divisor supported at given distinct points.

  14. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits the interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogenous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina was used to develop and validate models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable to interpret the role of different attributes on crash frequency compared to other developed models.

  15. On some binomial [Formula: see text]-difference sequence spaces.

    Science.gov (United States)

    Meng, Jian; Song, Meimei

    2017-01-01

    In this paper, we introduce the binomial sequence spaces [Formula: see text], [Formula: see text] and [Formula: see text] by combining the binomial transformation and the difference operator. We prove the BK-property and some inclusion relations. Furthermore, we obtain Schauder bases and compute the α-, β- and γ-duals of these sequence spaces. Finally, we characterize matrix transformations on the sequence space [Formula: see text].

  16. An efficient binomial model-based measure for sequence comparison and its application.

    Science.gov (United States)

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics, and can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a random Bernoulli variable, and the distribution of the sum of word occurrences is well known to be binomial. Using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on the relative entropy. The proposed measure was tested in extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed measure based on the binomial model is more efficient.
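
    The abstract does not give its recursive formula explicitly; a standard recurrence for computing all binomial word-count probabilities, which avoids large factorials, looks like this (a sketch, not the authors' code):

```python
import math

def binomial_pmf_recursive(n, p):
    """All binomial probabilities P(X = k), k = 0..n, via the recurrence
    P(k) = P(k-1) * (n - k + 1)/k * p/(1 - p), starting from P(0) = (1-p)^n."""
    probs = [(1 - p)**n]
    for k in range(1, n + 1):
        probs.append(probs[-1] * (n - k + 1) / k * p / (1 - p))
    return probs

probs = binomial_pmf_recursive(12, 0.3)
# Cross-check against the direct factorial-based formula:
direct = [math.comb(12, k) * 0.3**k * 0.7**(12 - k) for k in range(13)]
print(max(abs(a - b) for a, b in zip(probs, direct)))
```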

  17. A fast algorithm for computing binomial coefficients modulo powers of two.

    Science.gov (United States)

    Andreica, Mugurel Ionut

    2013-01-01

    I present a new algorithm for computing binomial coefficients modulo 2^N. The proposed method has an O(N^3 · Multiplication(N) + N^4) preprocessing time, after which a binomial coefficient C(P, Q) with 0 ≤ Q ≤ P ≤ 2^N - 1 can be computed modulo 2^N in O(N^2 · log(N) · Multiplication(N)) time. Multiplication(N) denotes the time complexity of multiplying two N-bit numbers, which can range from O(N^2) to O(N · log(N) · log(log(N))) or better. Thus, the overall time complexity for evaluating M binomial coefficients C(P, Q) modulo 2^N with 0 ≤ Q ≤ P ≤ 2^N - 1 is O((N^3 + M · N^2 · log(N)) · Multiplication(N) + N^4). After preprocessing, we can actually compute binomial coefficients modulo any 2^R with R ≤ N. For larger values of P and Q, variations of Lucas' theorem must be used first in order to reduce the computation to the evaluation of multiple (O(log(P))) binomial coefficients C(P', Q') (or restricted types of factorials P'!) modulo 2^N with 0 ≤ Q' ≤ P' ≤ 2^N - 1.
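
    For the simplest modulus (N = 1), the Lucas'-theorem reduction mentioned above has a well-known closed form: C(P, Q) is odd exactly when every set bit of Q is also set in P. A quick sketch of that special case:

```python
import math

def comb_is_odd(p, q):
    """Lucas' theorem modulo 2: C(p, q) is odd iff every binary digit of q
    is dominated by the corresponding digit of p, i.e. (q AND p) == q."""
    return (q & p) == q

# Cross-check against direct computation for small values.
for n in range(16):
    for k in range(n + 1):
        assert comb_is_odd(n, k) == (math.comb(n, k) % 2 == 1)
print("parity check passed")
```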

  18. Application of a random effects negative binomial model to examine tram-involved crash frequency on route sections in Melbourne, Australia.

    Science.gov (United States)

    Naznin, Farhana; Currie, Graham; Logan, David; Sarvi, Majid

    2016-07-01

    Safety is a key concern in the design, operation and development of light rail systems including trams or streetcars as they impose crash risks on road users in terms of crash frequency and severity. The aim of this study is to identify key traffic, transit and route factors that influence tram-involved crash frequencies along tram route sections in Melbourne. A random effects negative binomial (RENB) regression model was developed to analyze crash frequency data obtained from Yarra Trams, the tram operator in Melbourne. The RENB modelling approach can account for spatial and temporal variations within observation groups in panel count data structures by assuming that group specific effects are randomly distributed across locations. The results identify many significant factors affecting tram-involved crash frequency including tram service frequency (2.71), tram stop spacing (-0.42), tram route section length (0.31), tram signal priority (-0.25), general traffic volume (0.18), tram lane priority (-0.15) and ratio of platform tram stops (-0.09). Findings provide useful insights on route section level tram-involved crashes in an urban tram or streetcar operating environment. The method described represents a useful planning tool for transit agencies hoping to improve safety performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Using the beta-binomial distribution to characterize forest health

    Energy Technology Data Exchange (ETDEWEB)

    Zarnoch, S. J. [USDA Forest Service, Southern Research Station, Athens, GA (United States); Anderson, R.L.; Sheffield, R. M. [USDA Forest Service, Southern Research Station, Asheville, NC (United States)

    1995-03-01

    Forest health monitoring programs often use base variables which are dichotomous (i.e. alive/dead, damaged/undamaged) to describe the health of trees. Typical sampling designs usually consist of randomly or systematically chosen clusters of trees for observation. It was claimed that the contagiousness of diseases, for example, may result in non-uniformity of affected trees, so that the distribution of the proportions, rather than simply the mean proportion, becomes important. The use of the beta-binomial model was suggested for such cases, and its application in forest health analyses was described. Data on dogwood anthracnose (caused by Discula destructiva), a disease of flowering dogwood (Cornus florida L.), were used to illustrate the utility of the model. The beta-binomial model allowed the detection of different distributional patterns of dogwood anthracnose over time and space. Results led to further speculation regarding the cause of the patterns. Traditional proportion analyses such as ANOVA would not have detected the trends found using the beta-binomial model until more distinct patterns had evolved at a later date. The model was said to be flexible and to require no special weighting or transformations of data. Another advantage claimed was its ability to handle unequal sample sizes.
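
    The extra-binomial (overdispersed) behaviour that motivates the beta-binomial model can be illustrated numerically. A sketch with arbitrary parameter values, not the dogwood data: the beta-binomial has the same mean as a binomial with p = a/(a+b) but a strictly larger variance.

```python
import math

def beta_binom_pmf(k, n, a, b):
    """Beta-binomial pmf C(n,k) * B(k+a, n-k+b) / B(a,b),
    computed via log-gamma for numerical stability."""
    logp = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return math.exp(logp)

n, a, b = 20, 2.0, 6.0
p = a / (a + b)                      # matching binomial success probability
mean = n * p
var = sum((k - mean)**2 * beta_binom_pmf(k, n, a, b) for k in range(n + 1))
binom_var = n * p * (1 - p)
print(var, binom_var)  # beta-binomial variance exceeds binomial variance
```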

  20. Entanglement of Generalized Two-Mode Binomial States and Teleportation

    International Nuclear Information System (INIS)

    Wang Dongmei; Yu Youhong

    2009-01-01

    The entanglement of the generalized two-mode binomial states in the phase damping channel is studied by making use of the relative entropy of entanglement. It is shown that the factors q and p play crucial roles in controlling the relative entropy of entanglement. Furthermore, we propose a scheme for teleporting an unknown state via the generalized two-mode binomial states, and calculate the mean fidelity of the scheme. (general)

  1. Optimized Binomial Quantum States of Complex Oscillators with Real Spectrum

    International Nuclear Information System (INIS)

    Zelaya, K D; Rosas-Ortiz, O

    2016-01-01

    Classical and nonclassical states of quantum complex oscillators with real spectrum are presented. Such states are bi-orthonormal superpositions of n + 1 energy eigenvectors of the system with binomial-like coefficients. For large values of n these optimized binomial states behave as photon-added coherent states when the imaginary part of the potential is cancelled. (paper)

  2. Comparison: Binomial model and Black Scholes model

    Directory of Open Access Journals (Sweden)

    Amir Ahmad Dar

    2018-03-01

    The binomial model and the Black-Scholes model are popular methods used to solve option pricing problems. The binomial model is a simple statistical method, while the Black-Scholes model requires the solution of a stochastic differential equation. Pricing European call and put options is a difficult task for actuaries. The main goal of this study is to compare the binomial model and the Black-Scholes model using two statistical tests, the t-test and Tukey's method, at one period. Finally, the results showed that there is no significant difference between the means of the European option prices obtained with the two models.

  3. Integer Solutions of Binomial Coefficients

    Science.gov (United States)

    Gilbertson, Nicholas J.

    2016-01-01

    A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…

  4. Chromosome aberration analysis based on a beta-binomial distribution

    International Nuclear Information System (INIS)

    Otake, Masanori; Prentice, R.L.

    1983-10-01

    The analyses carried out here generalized earlier studies of chromosomal aberrations in the populations of Hiroshima and Nagasaki by allowing extra-binomial variation in aberrant cell counts corresponding to within-subject correlations in cell aberrations. Strong within-subject correlations were detected, with corresponding standard errors for the average number of aberrant cells that were often substantially larger than was previously assumed. The extra-binomial variation is accommodated in the analysis in the present report, as described in the section on dose-response models, by using a beta-binomial (B-B) variance structure. It is emphasized that we have generally satisfactory agreement between the observed and the B-B fitted frequencies by city-dose category. The chromosomal aberration data considered here are not extensive enough to allow a precise discrimination between competing dose-response models. A quadratic gamma-ray and linear neutron model, however, most closely fits the chromosome data. (author)

  5. A mixed-binomial model for Likert-type personality measures.

    Science.gov (United States)

    Allik, Jüri

    2014-01-01

    Personality measurement is based on the idea that values on an unobservable latent variable determine the distribution of answers on a manifest response scale. Typically, it is assumed in Item Response Theory (IRT) that latent variables are related to the observed responses through continuous normal or logistic functions, determining the probability with which one of the ordered response alternatives on a Likert-scale item is chosen. Based on an analysis of 1731 self- and other-rated responses on the 240 NEO PI-3 questionnaire items, it was proposed that a viable alternative is a finite number of latent events which are related to manifest responses through a binomial function with only one parameter: the probability with which a given statement is approved. For the majority of items, the best fit was obtained with a mixed-binomial distribution, which assumes two different subpopulations who endorse items with two different probabilities. It was shown that the fit of the binomial IRT model can be improved by assuming that about 10% of random noise is contained in the answers and by taking into account response biases toward one of the response categories. It was concluded that the binomial response model for the measurement of personality traits may be a workable alternative to the more habitual normal and logistic IRT models.

  6. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Linking parasite populations in hosts to parasite populations in space through Taylor's law and the negative binomial distribution.

    Science.gov (United States)

    Cohen, Joel E; Poulin, Robert; Lagrue, Clément

    2017-01-03

    The spatial distribution of individuals of any species is a basic concern of ecology. The spatial distribution of parasites matters to control and conservation of parasites that affect human and nonhuman populations. This paper develops a quantitative theory to predict the spatial distribution of parasites based on the distribution of parasites in hosts and the spatial distribution of hosts. Four models are tested against observations of metazoan hosts and their parasites in littoral zones of four lakes in Otago, New Zealand. These models differ in two dichotomous assumptions, constituting a 2 × 2 theoretical design. One assumption specifies whether the variance function of the number of parasites per host individual is described by Taylor's law (TL) or the negative binomial distribution (NBD). The other assumption specifies whether the numbers of parasite individuals within each host in a square meter of habitat are independent or perfectly correlated among host individuals. We find empirically that the variance-mean relationship of the numbers of parasites per square meter is very well described by TL but is not well described by NBD. Two models that posit perfect correlation of the parasite loads of hosts in a square meter of habitat approximate observations much better than two models that posit independence of parasite loads of hosts in a square meter, regardless of whether the variance-mean relationship of parasites per host individual obeys TL or NBD. We infer that high local interhost correlations in parasite load strongly influence the spatial distribution of parasites. Local hotspots could influence control and conservation of parasites.
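
    The two variance functions contrasted above are easy to compare directly: Taylor's law posits V = a*M^b, which is linear in log-log coordinates, while a negative binomial with common k posits V = M + M^2/k. A sketch with made-up mean/variance pairs generated to follow TL exactly (real survey data would replace M and V):

```python
import numpy as np

# Hypothetical mean/variance pairs of parasites per square meter across sites
M = np.array([0.5, 1.2, 3.0, 7.5, 20.0, 55.0])
V = 2.0 * M ** 1.8          # data constructed to follow Taylor's law exactly

# Taylor's law: log V = log a + b log M (ordinary least squares on logs)
b, log_a = np.polyfit(np.log(M), np.log(V), 1)

# Negative binomial with common k: V = M + M^2/k -> least-squares fit of 1/k
inv_k = np.sum((V - M) * M**2) / np.sum(M**4)

V_tl = np.exp(log_a) * M**b
V_nb = M + inv_k * M**2
sse_tl = np.sum((np.log(V) - np.log(V_tl))**2)
sse_nb = np.sum((np.log(V) - np.log(V_nb))**2)
print(b, sse_tl, sse_nb)    # TL recovers b; NBD cannot match a pure power law
```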

8. Quantum Theory for the Binomial Model in Finance Theory

    OpenAIRE

    Chen, Zeqian

    2001-01-01

    In this paper, a quantum model for the binomial market in finance is proposed. We show that its risk-neutral world exhibits an intriguing structure as a disk in the unit ball of ${\\bf R}^3,$ whose radius is a function of the risk-free interest rate with two thresholds which prevent arbitrage opportunities from this quantum market. Furthermore, from the quantum mechanical point of view we re-deduce the Cox-Ross-Rubinstein binomial option pricing formula by considering Maxwell-Boltzmann statist...
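
    For reference, the Cox-Ross-Rubinstein binomial option pricing formula that the paper re-derives can be evaluated directly from the classical (non-quantum) one-environment tree; a compact sketch for a European call, with all parameter values chosen only for illustration:

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """European call price via the Cox-Ross-Rubinstein binomial model."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r * T)
    payoff = 0.0
    for j in range(n + 1):
        payoff += (math.comb(n, j) * q**j * (1 - q)**(n - j)
                   * max(S0 * u**j * d**(n - j) - K, 0.0))
    return disc * payoff

price = crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
print(round(price, 2))   # converges toward the Black-Scholes value (~10.45)
```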

  9. Water-selective excitation of short T2 species with binomial pulses.

    Science.gov (United States)

    Deligianni, Xeni; Bär, Peter; Scheffler, Klaus; Trattnig, Siegfried; Bieri, Oliver

    2014-09-01

For imaging of fibrous musculoskeletal components, ultra-short echo time methods are often combined with fat suppression. Due to the increased chemical shift, spectral excitation of water might become a favorable option at ultra-high fields. Thus, this study aims to compare and explore short binomial excitation schemes for spectrally selective imaging of fibrous tissue components with short transverse relaxation time (T2). Water-selective 1-1 binomial excitation is compared with nonselective imaging using a sub-millisecond spoiled gradient echo technique for in vivo imaging of fibrous tissue at 3T and 7T. Simulations indicate a maximum signal loss from binomial excitation of approximately 30% in the limit of very short T2 (0.1 ms), as compared to nonselective imaging; the loss decreases rapidly with increasing field strength and increasing T2, e.g., to 19% at 3T and 10% at 7T for a T2 of 1 ms. In agreement with simulations, a binomial phase close to 90° yielded minimum signal loss: approximately 6% at 3T and close to 0% at 7T for menisci, and 9% and 13%, respectively, for ligaments. Overall, for imaging of short-lived T2 components, short 1-1 binomial excitation schemes prove to offer marginal signal loss, especially at ultra-high fields, with overall improved scanning efficiency. Copyright © 2013 Wiley Periodicals, Inc.

  10. Binomial test models and item difficulty

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1979-01-01

    In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally

  11. Binomial distribution for the charge asymmetry parameter

    International Nuclear Information System (INIS)

    Chou, T.T.; Yang, C.N.

    1984-01-01

It is suggested that for high energy collisions the distribution with respect to the charge asymmetry z = n_F - n_B is binomial, where n_F and n_B are the forward and backward charge multiplicities. (orig.)
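
    The binomial hypothesis has an immediate consequence: with total charged multiplicity n fixed and n_F ~ Binomial(n, 1/2), the asymmetry z = n_F - n_B = 2*n_F - n has mean 0 and variance n. A short numerical check (n = 20 is an arbitrary illustration):

```python
from scipy.stats import binom

n = 20                                    # total charged multiplicity (illustrative)
# distribution of z = 2*n_F - n induced by n_F ~ Binomial(n, 1/2)
p_z = {2 * nF - n: binom.pmf(nF, n, 0.5) for nF in range(n + 1)}

mean_z = sum(z * p for z, p in p_z.items())
var_z = sum(z**2 * p for z, p in p_z.items()) - mean_z**2
print(mean_z, var_z)   # mean 0; variance 4 * Var(n_F) = n
```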

  12. CROSSER - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
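
    The quantity CROSSER computes, the component reliability p at which a k-out-of-n system's reliability equals p itself, can be re-derived in a few lines. This sketch is an independent reimplementation, using a bracketing root-finder where the original C program uses Newton's method:

```python
from math import comb
from scipy.optimize import brentq

def system_reliability(p, k, n):
    """Reliability of a k-out-of-n system of components with reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def crossing_point(k, n):
    # Interior root of R(p) - p = 0, excluding the trivial roots at 0 and 1
    return brentq(lambda p: system_reliability(p, k, n) - p, 1e-6, 1 - 1e-6)

print(crossing_point(2, 3), crossing_point(3, 5))
```

    For "median" systems such as 2-out-of-3 or 3-out-of-5 the crossing point is p = 0.5 by symmetry, which makes a convenient correctness check.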

  13. Stochastic analysis of complex reaction networks using binomial moment equations.

    Science.gov (United States)

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.
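
    The binomial moments on which the method rests are B_k = <C(N, k)>, linear combinations of the ordinary moments. As a sanity check of the definition only (not of the network equations themselves), for a Poisson-distributed population with mean lambda the binomial moments are known to equal lambda^k / k!:

```python
import math
from scipy.stats import poisson

lam, nmax = 2.5, 200    # Poisson mean and truncation point (illustrative)

def binomial_moment(pmf, k, nmax):
    """B_k = E[C(N, k)] for a distribution over population size N."""
    return sum(math.comb(n, k) * pmf(n) for n in range(nmax))

for k in range(5):
    bk = binomial_moment(lambda n: poisson.pmf(n, lam), k, nmax)
    print(k, bk, lam**k / math.factorial(k))   # the two columns agree
```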

  14. Prediction of vehicle crashes by drivers' characteristics and past traffic violations in Korea using a zero-inflated negative binomial model.

    Science.gov (United States)

    Kim, Dae-Hwan; Ramjan, Lucie M; Mak, Kwok-Kei

    2016-01-01

    Traffic safety is a significant public health challenge, and vehicle crashes account for the majority of injuries. This study aims to identify whether drivers' characteristics and past traffic violations may predict vehicle crashes in Korea. A total of 500,000 drivers were randomly selected from the 11.6 million driver records of the Ministry of Land, Transport and Maritime Affairs in Korea. Records of traffic crashes were obtained from the archives of the Korea Insurance Development Institute. After matching the past violation history for the period 2004-2005 with the number of crashes in year 2006, a total of 488,139 observations were used for the analysis. Zero-inflated negative binomial model was used to determine the incident risk ratio (IRR) of vehicle crashes by past violations of individual drivers. The included covariates were driver's age, gender, district of residence, vehicle choice, and driving experience. Drivers violating (1) a hit-and-run or drunk driving regulation at least once and (2) a signal, central line, or speed regulation more than once had a higher risk of a vehicle crash with respective IRRs of 1.06 and 1.15. Furthermore, female gender, a younger age, fewer years of driving experience, and middle-sized vehicles were all significantly associated with a higher likelihood of vehicle crashes. Drivers' demographic characteristics and past traffic violations could predict vehicle crashes in Korea. Greater resources should be assigned to the provision of traffic safety education programs for the high-risk driver groups.
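
    The model used in the study mixes a point mass at zero (structural non-crashers, probability pi) with a negative binomial count component, and the reported IRRs are exponentiated regression coefficients: an IRR of 1.15 means a 15% higher expected crash count. A minimal sketch of the zero-inflated NB2 pmf with illustrative parameters (pi, mu and alpha are assumptions, not the study's fitted values):

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(y, pi, mu, alpha):
    """Zero-inflated NB2 pmf: point mass pi at zero plus NB(mean mu, dispersion alpha)."""
    r = 1.0 / alpha                  # NB 'size'; variance = mu + alpha * mu^2
    p = r / (r + mu)
    base = nbinom.pmf(y, r, p)
    return np.where(y == 0, pi + (1 - pi) * base, (1 - pi) * base)

y = np.arange(0, 200)
pm = zinb_pmf(y, pi=0.4, mu=1.5, alpha=0.8)
print(pm.sum(), (y * pm).sum())      # total probability ~1; mean = (1 - pi)*mu = 0.9
```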

  15. Relacionando las distribuciones binomial negativa y logarítmica vía sus series asociadas

    OpenAIRE

    Sadinle, Mauricio

    2011-01-01

The negative binomial distribution is associated with the series obtained by differentiating the logarithmic series. Conversely, the logarithmic distribution is associated with the series obtained by integrating the series associated with the negative binomial distribution. The number-of-failures parameter of the negative binomial distribution is the number of derivatives needed to obtain the negative binomial series from the logarithmic series. The reasoning presented can be used as an alternative method to...
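
    The series identity behind this correspondence, namely that the r-th derivative of -ln(1-q) = sum_{k>=1} q^k/k equals (r-1)!/(1-q)^r, the generating series of the negative binomial coefficients, can be checked numerically (r, q and the truncation point below are arbitrary choices):

```python
import math

q, r, kmax = 0.3, 3, 400

# r-th derivative of -ln(1 - q) in closed form
lhs = math.factorial(r - 1) / (1 - q) ** r

# term-by-term: (r-1)! * sum_k C(k + r - 1, k) q^k = (r-1)! * (1 - q)^(-r)
rhs = math.factorial(r - 1) * sum(math.comb(k + r - 1, k) * q**k for k in range(kmax))

print(lhs, rhs)   # the truncated series matches the closed form
```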

  16. Abstract knowledge versus direct experience in processing of binomial expressions.

    Science.gov (United States)

    Morgan, Emily; Levy, Roger

    2016-12-01

We ask whether word order preferences for binomial expressions of the form A and B (e.g. bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in question. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

17. Typification of the binomial Sedum boissierianum Haussknecht (Crassulaceae)

    Directory of Open Access Journals (Sweden)

    Valentin Bârcă

    2016-06-01

Full Text Available Genus Sedum, described by Linné in 1753, a polyphyletic genus of flowering plants in the family Crassulaceae, has been extensively studied both by scholars and succulent enthusiasts. Despite the great attention it has received over time, many binomials lack types and consistent taxonomic treatments. My ongoing comprehensive revision of the genus Sedum in România and the Balkans called for the clarification of the taxonomic status and typification of several basionyms described from plants collected in the Balkans and the region around the Black Sea. Almost a century and a half ago, Haussknecht intensely studied the flora of the Near East and assigned several binomials to plants of the genus Sedum, some of which were neglected and forgotten although they represent interesting taxa. The binomial Sedum boissierianum Haussknecht first appeared in schedae as nomen nudum around 1892, for specimens originating from the Near East, but was later validated by Froederstroem in 1932. Following extensive revision of the relevant literature and herbarium specimens, I located original material for this taxon in several herbaria in Europe (BUCF, LD, P, Z). I hereby designate the lectotype and isolectotypes for this basionym, and furthermore provide pictures of the herbarium specimens and pinpoint on a map the original collection sites for the designated types.

  18. Adaptive estimation of binomial probabilities under misclassification

    NARCIS (Netherlands)

    Albers, Willem/Wim; Veldman, H.J.

    1984-01-01

    If misclassification occurs the standard binomial estimator is usually seriously biased. It is known that an improvement can be achieved by using more than one observer in classifying the sample elements. Here it will be investigated which number of observers is optimal given the total number of

  19. Preservation of South African steamed bread using hurdle technology

    CSIR Research Space (South Africa)

    Lombard, GE

    2000-01-01

Full Text Available was the most effective hurdle. Glycerol levels of 150 and 180 g/kg flour produced a(w) levels of 0.908 and 0.880, respectively, which were sufficient to inhibit C. botulinum; yeast and moulds were effectively inhibited during the shelf life study. Staling...

  20. Entropy, energy and negativity in Fermi-resonance coupled states of substituted methanes

    International Nuclear Information System (INIS)

    Hou Xiwen; Wan Mingfang; Ma Zhongqi

    2010-01-01

Several measures of entanglement have been proposed, and the relationship between a measure of entanglement and other quantities has attracted considerable interest. The dynamics of entropy, energy and negativity is studied for Fermi-resonance coupled vibrations in substituted methanes with three kinds of initial mixed states, which are the mixed density matrices of binomial states, thermal states and squeezed states on two vibrational modes, respectively. It is demonstrated that for mixed binomial states and mixed thermal states with small magnitudes the entropies of the stretch and the bend are anti-correlated at the same oscillatory frequency, as are the energies for each kind of state with small magnitudes, whereas the entropies exhibit positive correlations with the corresponding energies. Furthermore, for small magnitudes the quantum mutual entropy is positively correlated with the interaction energy. Analytic forms of the entropies and energies are provided for initial conditions under which they are stationary, and the agreement between analytic and numerical simulations is satisfactory. The dynamical entanglement measured by negativity is examined for those states and conditions. It is shown that negativity displays a sudden death for mixed binomial states and mixed thermal states with small magnitudes, and that the time-averaged negativity has its minimal value under the conditions of stationary entropies and energies. Moreover, negativity is positively correlated with the mutual entropy and the interaction energy only for mixed squeezed states with small magnitudes. These results are useful for molecular quantum information processing and for the study of dynamical entanglement.
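
    Negativity, the entanglement measure used above, is N(rho) = (||rho^(T_B)||_1 - 1)/2, built from the partial transpose. The vibrational states in the paper live in larger spaces, but the measure itself is easy to illustrate on two qubits:

```python
import numpy as np

def negativity(rho, dA=2, dB=2):
    """N = (||rho^{T_B}||_1 - 1) / 2 via the partial transpose on subsystem B."""
    r = rho.reshape(dA, dB, dA, dB)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # swap the B indices
    eigs = np.linalg.eigvalsh(rho_pt)
    return (np.abs(eigs).sum() - 1) / 2

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell)
print(negativity(rho_bell))          # 0.5 for a maximally entangled pair

rho_mixed = np.eye(4) / 4            # maximally mixed state, separable
print(negativity(rho_mixed))         # 0.0
```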

  1. An empirical tool to evaluate the safety of cyclists: Community based, macro-level collision prediction models using negative binomial regression.

    Science.gov (United States)

    Wei, Feng; Lovegrove, Gordon

    2013-12-01

Today, North American governments are more willing to consider compact neighborhoods with increased use of sustainable transportation modes. Bicycling, one of the most effective modes for short trips with distances less than 5 km, is being encouraged. However, as vulnerable road users (VRUs), cyclists are more likely to be injured when involved in collisions. In order to create a safe road environment for them, evaluating cyclists' road safety at a macro level in a proactive way is necessary. In this paper, different generalized linear regression methods for collision prediction model (CPM) development are reviewed and previous studies on micro-level and macro-level bicycle-related CPMs are summarized. On the basis of insights gained in the exploration stage, this paper also reports on efforts to develop negative binomial models for bicycle-auto collisions at a community-based, macro-level. Data came from the Central Okanagan Regional District (CORD), of British Columbia, Canada. The model results revealed two types of statistical associations between collisions and each explanatory variable: (1) An increase in bicycle-auto collisions is associated with an increase in total lane kilometers (TLKM), bicycle lane kilometers (BLKM), bus stops (BS), traffic signals (SIG), intersection density (INTD), and arterial-local intersection percentage (IALP). (2) A decrease in bicycle collisions was found to be associated with an increase in the number of drive commuters (DRIVE), and in the percentage of drive commuters (DRP). These results support our hypothesis that in North America, with its current low levels of bicycle use (macro-level CPMs. Copyright © 2012. Published by Elsevier Ltd.
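
    A negative binomial CPM of the kind developed here links expected collision counts to exposure variables through a log link, E[y] = exp(X*beta), with an extra dispersion parameter alpha. The sketch below fits such a model by direct maximum likelihood on simulated data; the covariate, the parameter values and the use of scipy instead of a dedicated GLM package are all assumptions, and the CORD data are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                      # one standardized exposure variable
b0_true, b1_true, alpha_true = 0.5, 0.3, 0.5
mu = np.exp(b0_true + b1_true * x)
# NB2 counts via a Poisson-gamma mixture (gamma mean 1, variance alpha)
y = rng.poisson(mu * rng.gamma(1 / alpha_true, alpha_true, size=n))

def nb2_negll(theta):
    b0, b1, log_alpha = theta
    a = np.exp(log_alpha)
    m = np.exp(b0 + b1 * x)
    r = 1 / a
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + m)) + y * np.log(m / (r + m)))
    return -ll.sum()

fit = minimize(nb2_negll, x0=[0.0, 0.0, 0.0], method="BFGS")
b0, b1, log_alpha = fit.x
print(b0, b1, np.exp(log_alpha))   # estimates near 0.5, 0.3, 0.5
```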

  2. On pricing futures options on random binomial tree

    International Nuclear Information System (INIS)

    Bayram, Kamola; Ganikhodjaev, Nasir

    2013-01-01

The discrete-time approach to real option valuation has typically been implemented in the finance literature using a binomial tree framework. Instead we develop a new model by randomizing the environment and call such a model a random binomial tree. Whereas the usual model has only one environment (u, d), where the price of the underlying asset can move up by a factor u or down by a factor d, and the pair (u, d) is constant over the life of the underlying asset, in our new model the underlying security moves in two environments, namely (u1, d1) and (u2, d2). Thus we obtain two volatilities σ1 and σ2. This new approach enables calculations reflecting the real market, since it considers the two states of the market, normal and extraordinary. In this paper we define and study futures options for such models.

  3. On extinction time of a generalized endemic chain-binomial model.

    Science.gov (United States)

    Aydogmus, Ozgur

    2016-09-01

    We considered a chain-binomial epidemic model not conferring immunity after infection. Mean field dynamics of the model has been analyzed and conditions for the existence of a stable endemic equilibrium are determined. The behavior of the chain-binomial process is probabilistically linked to the mean field equation. As a result of this link, we were able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
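
    A chain-binomial step without immunity can be simulated directly: each susceptible escapes all I_t infectives independently, so the next number of infectives is Binomial(S_t, 1 - (1-p)^{I_t}), and recovered individuals return to the susceptible pool. A sketch of such an SIS-type chain binomial with illustrative parameters (this is a generic construction, not the paper's exact generalization):

```python
import numpy as np

def chain_binomial_sis(N, I0, p, steps, rng):
    """I_{t+1} ~ Binomial(S_t, 1 - (1-p)^{I_t}); no immunity, so S_t = N - I_t."""
    I = I0
    path = [I]
    for _ in range(steps):
        S = N - I
        infect_prob = 1 - (1 - p) ** I
        I = rng.binomial(S, infect_prob)
        path.append(I)
    return path

rng = np.random.default_rng(1)
path = chain_binomial_sis(N=100, I0=5, p=0.02, steps=50, rng=rng)
print(path[:10])   # once I hits 0 the epidemic is extinct and stays there
```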

  4. Hurdles in clinical implementation of academic advanced therapy medicinal products: A national evaluation.

    Science.gov (United States)

    de Wilde, Sofieke; Veltrop-Duits, Louise; Hoozemans-Strik, Merel; Ras, Thirza; Blom-Veenman, Janine; Guchelaar, Henk-Jan; Zandvliet, Maarten; Meij, Pauline

    2016-06-01

Since the implementation of the European Union (EU) regulation for advanced therapy medicinal products (ATMPs) in 2009, only six ATMPs achieved marketing authorization approval in the EU. Recognizing the major developments in the ATMP field, starting mostly in academic institutions, we investigated which hurdles were experienced in the whole pathway of ATMP development towards clinical care. Qualitative interviews were conducted with different stakeholders in The Netherlands involved in the ATMP development field, e.g. academic research groups, national authorities and patient organizations. Based on the hurdles mentioned in the interviews, questionnaires were subsequently sent to the academic principal investigators (PIs) and ATMP good manufacturing practice (GMP) facility managers to quantify these hurdles. Besides the familiar regulatory routes of marketing authorization (MA) and hospital exemption (HE), some of the academic PIs expected that ATMPs would become available via the Tissues and Cells Directive, or did not anticipate the next development steps towards implementation of their ATMP in regular clinical care. The main hurdles identified were: inadequate financial support, a rapidly evolving field, study-related problems, lacking regulatory knowledge, lack of collaborations, and responsibility issues. Creating an academic environment that stimulates and plans ATMP development and licensing, as well as investing in expanding the relevant regulatory knowledge in academic institutions, seems a prerequisite for developing ATMPs from bench to patient. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  5. Pythagoras, Binomial, and de Moivre Revisited Through Differential Equations

    OpenAIRE

    Singh, Jitender; Bajaj, Renu

    2018-01-01

    The classical Pythagoras theorem, binomial theorem, de Moivre's formula, and numerous other deductions are made using the uniqueness theorem for the initial value problems in linear ordinary differential equations.

  6. Development of shelf stable pork sausages using hurdle technology and their quality at ambient temperature (37±1°C) storage.

    Science.gov (United States)

    Thomas, R; Anjaneyulu, A S R; Kondaiah, N

    2008-05-01

    Shelf stable pork sausages were developed using hurdle technology and their quality was evaluated during ambient temperature (37±1°C) storage. Hurdles incorporated were low pH, low water activity, vacuum packaging and post package reheating. Dipping in potassium sorbate solution prior to vacuum packaging was also studied. Reheating increased the pH of the sausages by 0.17units as against 0.11units in controls. Incorporation of hurdles significantly decreased emulsion stability, cooking yield, moisture and fat percent, yellowness and hardness, while increasing the protein percent and redness. Hurdle treatment reduced quality deterioration during storage as indicated by pH, TBARS and tyrosine values. About 1 log reduction in total plate count was observed with the different hurdles as were reductions in the coliform, anaerobic, lactobacilli and Staphylococcus aureus counts. pH, a(w) and reheating hurdles inhibited yeast and mold growth up to day 3, while additional dipping in 1% potassium sorbate solution inhibited their growth throughout the 9 days storage. Despite low initial sensory appeal, the hurdle treated sausages had an overall acceptability in the range 'very good' to 'good' up to day 6.

  7. Some normed binomial difference sequence spaces related to the [Formula: see text] spaces.

    Science.gov (United States)

    Song, Meimei; Meng, Jian

    2017-01-01

The aim of this paper is to introduce the normed binomial sequence spaces [Formula: see text] by combining the binomial transformation and the difference operator, where [Formula: see text]. We prove that these spaces are linearly isomorphic to the spaces [Formula: see text] and [Formula: see text], respectively. Furthermore, we compute the Schauder bases and the α-, β- and γ-duals of these sequence spaces.

  8. Possibility and Challenges of Conversion of Current Virus Species Names to Linnaean Binomials

    Energy Technology Data Exchange (ETDEWEB)

    Postler, Thomas S.; Clawson, Anna N.; Amarasinghe, Gaya K.; Basler, Christopher F.; Bavari, Sbina; Benkő, Mária; Blasdell, Kim R.; Briese, Thomas; Buchmeier, Michael J.; Bukreyev, Alexander; Calisher, Charles H.; Chandran, Kartik; Charrel, Rémi; Clegg, Christopher S.; Collins, Peter L.; Juan Carlos, De La Torre; Derisi, Joseph L.; Dietzgen, Ralf G.; Dolnik, Olga; Dürrwald, Ralf; Dye, John M.; Easton, Andrew J.; Emonet, Sébastian; Formenty, Pierre; Fouchier, Ron A. M.; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balázs; Hewson, Roger; Horie, Masayuki; Jiāng, Dàohóng; Kobinger, Gary; Kondo, Hideki; Kropinski, Andrew M.; Krupovic, Mart; Kurath, Gael; Lamb, Robert A.; Leroy, Eric M.; Lukashevich, Igor S.; Maisner, Andrea; Mushegian, Arcady R.; Netesov, Sergey V.; Nowotny, Norbert; Patterson, Jean L.; Payne, Susan L.; PaWeska, Janusz T.; Peters, Clarence J.; Radoshitzky, Sheli R.; Rima, Bertus K.; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfaçon, Hélène; Salvato, Maria S.; Schwemmle, Martin; Smither, Sophie J.; Stenglein, Mark D.; Stone, David M.; Takada, Ayato; Tesh, Robert B.; Tomonaga, Keizo; Tordo, Noël; Towner, Jonathan S.; Vasilakis, Nikos; Volchkov, Viktor E.; Wahl-Jensen, Victoria; Walker, Peter J.; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E.; Zerbini, F. Murilo; Kuhn, Jens H.

    2016-10-22

Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment. [Arenaviridae; binomials; ICTV; International Committee on Taxonomy of Viruses; Mononegavirales; virus nomenclature; virus taxonomy.]

  9. Revealing Word Order: Using Serial Position in Binomials to Predict Properties of the Speaker

    Science.gov (United States)

    Iliev, Rumen; Smirnova, Anastasia

    2016-01-01

    Three studies test the link between word order in binomials and psychological and demographic characteristics of a speaker. While linguists have already suggested that psychological, cultural and societal factors are important in choosing word order in binomials, the vast majority of relevant research was focused on general factors and on broadly…

  10. Selecting Tools to Model Integer and Binomial Multiplication

    Science.gov (United States)

    Pratt, Sarah Smitherman; Eddy, Colleen M.

    2017-01-01

    Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…

  11. Discrimination of numerical proportions: A comparison of binomial and Gaussian models.

    Science.gov (United States)

    Raidvee, Aire; Lember, Jüri; Allik, Jüri

    2017-01-01

Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles, internal noise vs. the use of only a fraction of the available information, which are both plausible descriptions of visual perception.
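
    The Thurstonian-binomial prediction is computable in closed form: if each of the n1 (majority) and n2 (minority) elements registers independently with probability β, and the observer compares the two registered counts (ties resolved at chance), the proportion correct follows from two binomial pmfs. A sketch in which the set sizes and β values are purely illustrative:

```python
from scipy.stats import binom

def prop_correct(n1, n2, beta):
    """P(correct) when registered counts X1 ~ Bin(n1, beta), X2 ~ Bin(n2, beta)
    are compared and ties are guessed at chance."""
    p = 0.0
    for x1 in range(n1 + 1):
        p1 = binom.pmf(x1, n1, beta)
        p += p1 * (binom.cdf(x1 - 1, n2, beta) + 0.5 * binom.pmf(x1, n2, beta))
    return p

for beta in (0.3, 0.6, 0.9):
    print(beta, round(prop_correct(7, 6, beta), 3))   # accuracy grows with beta
```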

  12. Extra-binomial variation approach for analysis of pooled DNA sequencing data

    Science.gov (United States)

    Wallace, Chris

    2012-01-01

    Motivation: The invention of next-generation sequencing technology has made it possible to study the rare variants that are more likely to pinpoint causal disease genes. To make such experiments financially viable, DNA samples from several subjects are often pooled before sequencing. This induces large between-pool variation which, together with other sources of experimental error, creates over-dispersed data. Statistical analysis of pooled sequencing data needs to appropriately model this additional variance to avoid inflating the false-positive rate. Results: We propose a new statistical method based on an extra-binomial model to address the over-dispersion and apply it to pooled case-control data. We demonstrate that our model provides a better fit to the data than either a standard binomial model or a traditional extra-binomial model proposed by Williams and can analyse both rare and common variants with lower or more variable pool depths compared to the other methods. Availability: Package ‘extraBinomial’ is on http://cran.r-project.org/ Contact: chris.wallace@cimr.cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22976083
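
    The between-pool component of variance can be made concrete with a beta-binomial comparison: a plain binomial read count has variance np(1-p), while a pool-to-pool intraclass correlation ρ inflates it by the factor 1 + (n-1)ρ. A small numerical illustration (pool depth, allele frequency and ρ are hypothetical, and this is not the paper's Williams-type estimator):

```python
from scipy.stats import binom, betabinom

n, p, rho = 100, 0.05, 0.02         # pool depth, allele frequency, pool-to-pool ICC
# beta-binomial with the same mean and ICC rho: a + b = (1 - rho) / rho
s = (1 - rho) / rho
a, b = p * s, (1 - p) * s

var_bin = binom.var(n, p)
var_bb = betabinom.var(n, a, b)
inflation = var_bb / var_bin        # equals 1 + (n - 1) * rho
print(var_bin, var_bb, inflation)
```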

  13. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables and the problems of approximating the binomial distribution to normality (limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature, and expressing them in contingency table units based on their mathematical expressions, reduces the discussion of the confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method (confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, so the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All of the implementations described above were used in computing confidence intervals and estimating their performance for various binomial distribution sample sizes and variables.
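
    One of the intervals such a study tabulates, the exact (Clopper-Pearson) binomial confidence interval, can be written directly in terms of beta quantiles; this sketch is independent of the paper's PHP implementation:

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact binomial confidence interval via the beta-quantile form."""
    alpha = 1 - conf
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

lo, hi = clopper_pearson(17, 50)
print(round(lo, 4), round(hi, 4))   # interval around the observed proportion 0.34
```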

  14. A flexible mixed-effect negative binomial regression model for detecting unusual increases in MRI lesion counts in individual multiple sclerosis patients.

    Science.gov (United States)

    Kondo, Yumi; Zhao, Yinshan; Petkau, John

    2015-06-15

    We develop a new modeling approach to enhance a recently proposed method to detect increases of contrast-enhancing lesions (CELs) on repeated magnetic resonance imaging, which have been used as an indicator for potential adverse events in multiple sclerosis clinical trials. The method signals patients with unusual increases in CEL activity by estimating the probability of observing CEL counts as large as those observed on a patient's recent scans conditional on the patient's CEL counts on previous scans. This conditional probability index (CPI), computed based on a mixed-effect negative binomial regression model, can vary substantially depending on the choice of distribution for the patient-specific random effects. Therefore, we relax this parametric assumption to model the random effects with an infinite mixture of beta distributions, using the Dirichlet process, which effectively allows any form of distribution. To our knowledge, no previous literature considers a mixed-effect regression for longitudinal count variables where the random effect is modeled with a Dirichlet process mixture. As our inference is in the Bayesian framework, we adopt a meta-analytic approach to develop an informative prior based on previous clinical trials. This is particularly helpful at the early stages of trials when less data are available. Our enhanced method is illustrated with CEL data from 10 previous multiple sclerosis clinical trials. Our simulation study shows that our procedure estimates the CPI more accurately than parametric alternatives when the patient-specific random effect distribution is misspecified and that an informative prior improves the accuracy of the CPI estimates. Copyright © 2015 John Wiley & Sons, Ltd.
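
The conditional probability index described above is, at its core, a tail probability of a count distribution. A minimal sketch of a CPI-style computation under a plain negative binomial (the parameters below are hypothetical and stand in for the paper's mixed-effect estimates):

```python
from scipy.stats import nbinom

# Hypothetical patient-level negative binomial parameters (illustrative
# only; the paper estimates these from a mixed-effect regression model).
size, prob = 1.5, 0.3
y_new = 12              # CEL count observed on the new scan

# CPI-style index: probability of a count at least as large as observed
cpi = nbinom.sf(y_new - 1, size, prob)   # P(Y >= y_new)
```

A small value of this tail probability would flag the scan as an unusual increase in lesion activity relative to the fitted model.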

  15. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
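
The one-sample version of the adjusted Wald idea (add roughly z²/2 pseudo-successes and z²/2 pseudo-failures before applying the ordinary Wald formula, as in the Agresti-Coull interval) can be sketched as follows; the paired-proportion adjustment proposed in the record differs in detail:

```python
import math
from scipy.stats import norm

def adjusted_wald(k, n, conf=0.95):
    """One-sample adjusted Wald (Agresti-Coull) interval: inflate the
    sample by z^2 pseudo-observations, half successes and half failures,
    then apply the standard Wald formula to the adjusted proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_adj = n + z * z
    p_adj = (k + z * z / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

lo, hi = adjusted_wald(8, 20)
```

For 95% confidence this amounts to the familiar "add 2 successes and 2 failures" rule, which pulls the interval away from the boundary and repairs the Wald interval's poor small-sample coverage.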

  16. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance

    Science.gov (United States)

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.

    2018-03-01

    A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.

  17. Fat suppression in MR imaging with binomial pulse sequences

    International Nuclear Information System (INIS)

    Baudovin, C.J.; Bryant, D.J.; Bydder, G.M.; Young, I.R.

    1989-01-01

    This paper reports on a study to develop pulse sequences allowing suppression of the fat signal on MR images without eliminating signal from other tissues with short T1. The authors have developed such a technique, involving selective excitation of protons in water, based on a binomial pulse sequence. Imaging is performed at 0.15 T. Careful shimming is performed to maximize the separation of the fat and water peaks. A spin-echo 1,500/80 sequence is used, employing a 90 degrees pulse with the transmit frequency optimized for water and null excitation at 20 Hz offset, followed by a section-selective 180 degrees pulse. With use of the binomial sequence for imaging, a reduction in fat signal is seen on images of the pelvis and legs of volunteers. Patient studies show dramatic improvement in the visualization of prostatic carcinoma compared with standard sequences

  18. A Neutrosophic Binomial Factorial Theorem with their Refrains

    Directory of Open Access Journals (Sweden)

    Huda E. Khalid

    2016-12-01

    Full Text Available The Neutrosophic Precalculus and the Neutrosophic Calculus can be developed in many ways, depending on the types of indeterminacy one has and on the method used to deal with such indeterminacy. This article is innovative in that a form of the neutrosophic binomial factorial theorem is constructed, together with its refrains.

  19. NEWTONP - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
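
NEWTONP's core task (find the component probability p giving a k-out-of-n system reliability V) is a one-dimensional root-finding problem on the binomial survival function. A sketch follows, with `brentq` standing in for the program's Newton iteration:

```python
from scipy.optimize import brentq
from scipy.stats import binom

def required_p(k, n, V):
    """Component reliability p such that a k-out-of-n system (at least
    k of n components working) attains system reliability V. System
    reliability is P(X >= k) for X ~ Binomial(n, p)."""
    reliability = lambda p: binom.sf(k - 1, n, p) - V
    return brentq(reliability, 1e-12, 1.0 - 1e-12)

p = required_p(2, 3, 0.9)   # 2-out-of-3 system: solve 3p^2 - 2p^3 = 0.9
```

For the 2-out-of-3 case the system reliability has the closed form 3p² − 2p³, which provides an easy check on the root-finder's answer.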

  20. Comparison of multinomial and binomial proportion methods for analysis of multinomial count data.

    Science.gov (United States)

    Galyean, M L; Wester, D B

    2010-10-01

    Simulation methods were used to generate 1,000 experiments, each with 3 treatments and 10 experimental units/treatment, in completely randomized (CRD) and randomized complete block designs. Data were counts in 3 ordered or 4 nominal categories from multinomial distributions. For the 3-category analyses, category probabilities were 0.6, 0.3, and 0.1, respectively, for 2 of the treatments, and 0.5, 0.35, and 0.15 for the third treatment. In the 4-category analysis (CRD only), probabilities were 0.3, 0.3, 0.2, and 0.2 for treatments 1 and 2 vs. 0.4, 0.4, 0.1, and 0.1 for treatment 3. The 3-category data were analyzed with generalized linear mixed models as an ordered multinomial distribution with a cumulative logit link or by regrouping the data (e.g., counts in 1 category/sum of counts in all categories), followed by analysis of single categories as binomial proportions. Similarly, the 4-category data were analyzed as a nominal multinomial distribution with a glogit link or by grouping data as binomial proportions. For the 3-category CRD analyses, empirically determined type I error rates based on pair-wise comparisons (F- and Wald chi(2) tests) did not differ between multinomial and individual binomial category analyses with 10 (P = 0.38 to 0.60) or 50 (P = 0.19 to 0.67) sampling units/experimental unit. When analyzed as binomial proportions, power estimates varied among categories, with analysis of the category with the greatest counts yielding power similar to the multinomial analysis. Agreement between methods (percentage of experiments with the same results for the overall test for treatment effects) varied considerably among categories analyzed and sampling unit scenarios for the 3-category CRD analyses. Power (F-test) was 24.3, 49.1, 66.9, 83.5, 86.8, and 99.7% for 10, 20, 30, 40, 50, and 100 sampling units/experimental unit for the 3-category multinomial CRD analyses. Results with randomized complete block design simulations were similar to those with the CRD.
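
The regrouping step in these simulations (counts in one category versus the sum of the remaining categories, analyzed as a binomial proportion) can be illustrated with a toy version of the 3-category scenario. The category probabilities below come from the record; everything else (the seed, and a chi-square contingency analysis in place of the generalized linear mixed models) is a simplification:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n_units, n_samples = 10, 50   # experimental units and sampling units per unit

# Counts in 3 ordered categories, pooled over units, for two treatments
t1 = rng.multinomial(n_samples, [0.6, 0.30, 0.10], size=n_units).sum(axis=0)
t3 = rng.multinomial(n_samples, [0.5, 0.35, 0.15], size=n_units).sum(axis=0)

# Full multinomial comparison of the two treatments
chi2_full, p_full, *_ = chi2_contingency(np.array([t1, t3]))

# "Binomial proportion" regrouping: category 1 versus the rest
b1 = np.array([t1[0], t1[1:].sum()])
b3 = np.array([t3[0], t3[1:].sum()])
chi2_bin, p_bin, *_ = chi2_contingency(np.array([b1, b3]))
```

Collapsing to a binomial proportion discards the ordering information in the remaining categories, which is one source of the power differences the simulations report.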

  1. Determination of finite-difference weights using scaled binomial windows

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    The finite-difference method evaluates a derivative through a weighted summation of function values from neighboring grid nodes. Conventional finite-difference weights can be calculated either from Taylor series expansions or by Lagrange interpolation polynomials. The finite-difference method can be interpreted as a truncated convolutional counterpart of the pseudospectral method in the space domain. For this reason, we also can derive finite-difference operators by truncating the convolution series of the pseudospectral method. Various truncation windows can be employed for this purpose and they result in finite-difference operators with different dispersion properties. We found that there exists two families of scaled binomial windows that can be used to derive conventional finite-difference operators analytically. With a minor change, these scaled binomial windows can also be used to derive optimized finite-difference operators with enhanced dispersion properties. © 2012 Society of Exploration Geophysicists.
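
The conventional (Taylor series or Lagrange interpolation) weights mentioned above can be generated by solving the Taylor-matching conditions directly. This is a minimal sketch of that standard construction, independent of the paper's binomial-window derivation:

```python
import numpy as np
from math import factorial

def fd_weights(stencil, deriv_order):
    """Finite-difference weights on a stencil of grid offsets, from the
    Taylor-matching conditions sum_j w_j * x_j**m = m! * [m == deriv_order],
    for m = 0 .. len(stencil) - 1 (a Vandermonde system)."""
    x = np.asarray(stencil, dtype=float)
    m = np.arange(len(x))
    A = x[np.newaxis, :] ** m[:, np.newaxis]   # A[m, j] = x_j ** m
    b = np.zeros(len(x))
    b[deriv_order] = factorial(deriv_order)
    return np.linalg.solve(A, b)

w3 = fd_weights([-1, 0, 1], 2)           # -> [1, -2, 1]
w5 = fd_weights([-2, -1, 0, 1, 2], 2)    # -> [-1/12, 4/3, -5/2, 4/3, -1/12]
```

The 3-point and 5-point second-derivative stencils recovered here are the classical ones; the paper's observation is that the same weights (and optimized variants) emerge from truncating the pseudospectral convolution with suitable binomial windows.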

  3. Probability as a conceptual hurdle to understanding one-dimensional quantum scattering and tunnelling

    International Nuclear Information System (INIS)

    Domert, Daniel; Linder, Cedric; Ingerman, Ake

    2005-01-01

    This paper draws on part of a larger project looking at university students' learning difficulties associated with quantum mechanics. Here an unexpected and interesting aspect was brought to the fore while students were discussing a computer simulation of one-dimensional quantum scattering and tunnelling. The most dominant conceptual hurdle that emerged in the students' explanations centred on the notion of probability. To explore this further, categories of description of the variation in the understanding of probability were constituted. The analysis reported is done in terms of the various facets of probability encountered in the simulation and characterizes the dynamics of this conceptual hurdle to an appropriate understanding of the scattering and tunnelling process. Pedagogical implications are discussed

  4. A Bayesian Approach to Functional Mixed Effect Modeling for Longitudinal Data with Binomial Outcomes

    Science.gov (United States)

    Kliethermes, Stephanie; Oleson, Jacob

    2014-01-01

    Longitudinal growth patterns are routinely seen in medical studies where individual and population growth is followed over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear, quadratic); however, these relationships may not accurately capture growth over time. Functional mixed effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well-developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible and thus estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. PMID:24723495

  5. Standardized binomial models for risk or prevalence ratios and differences.

    Science.gov (United States)

    Richardson, David B; Kinlaw, Alan C; MacLehose, Richard F; Cole, Stephen R

    2015-10-01

    Epidemiologists often analyse binary outcomes in cohort and cross-sectional studies using multivariable logistic regression models, yielding estimates of adjusted odds ratios. It is widely known that the odds ratio closely approximates the risk or prevalence ratio when the outcome is rare, and it does not do so when the outcome is common. Consequently, investigators may decide to directly estimate the risk or prevalence ratio using a log binomial regression model. We describe the use of a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences. We illustrate the proposed approach using data from a cohort study of coronary heart disease status in Evans County, Georgia, USA. The approach reduces problems with model convergence typical of log binomial regression by shifting all explanatory variables except the exposures of primary interest from the linear predictor of the outcome regression model to a model for the standardization weights. The approach also facilitates evaluation of departures from additivity in the joint effects of two exposures. Epidemiologists should consider reporting standardized risk or prevalence ratios and differences in cohort and cross-sectional studies. These are readily obtained using the SAS, Stata and R statistical software packages. The proposed approach estimates the exposure effect in the total population. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  6. Genome-enabled predictions for binomial traits in sugar beet populations.

    Science.gov (United States)

    Biscarini, Filippo; Stevanato, Piergiorgio; Broccanello, Chiara; Stella, Alessandra; Saccomani, Massimo

    2014-07-22

    Genomic information can be used to predict not only continuous but also categorical (e.g. binomial) traits. Several traits of interest in human medicine and agriculture present a discrete distribution of phenotypes (e.g. disease status). Root vigor in sugar beet (B. vulgaris) is an example of a binomial trait of agronomic importance. In this paper, a panel of 192 SNPs (single nucleotide polymorphisms) was used to genotype 124 individual sugar beet plants from 18 lines, and to classify them as showing "high" or "low" root vigor. A threshold model was used to fit the relationship between binomial root vigor and SNP genotypes, through the matrix of genomic relationships between individuals in a genomic BLUP (G-BLUP) approach. From a 5-fold cross-validation scheme, 500 testing subsets were generated. The estimated average cross-validation error rate was 0.000731 (0.073%). Only 9 out of 12326 test observations (500 replicates for an average test set size of 24.65) were misclassified. The estimated prediction accuracy was quite high. Such accurate predictions may be related to the high estimated heritability for root vigor (0.783) and to the few genes with large effect underlying the trait. Despite the sparse SNP panel, there was sufficient within-scaffold LD where SNPs with large effect on root vigor were located to allow genome-enabled predictions to work.

  7. Avoiding negative populations in explicit Poisson tau-leaping.

    Science.gov (United States)

    Cao, Yang; Gillespie, Daniel T; Petzold, Linda R

    2005-08-01

    The explicit tau-leaping procedure attempts to speed up the stochastic simulation of a chemically reacting system by approximating the number of firings of each reaction channel during a chosen time increment tau as a Poisson random variable. Since the Poisson random variable can have arbitrarily large sample values, there is always the possibility that this procedure will cause one or more reaction channels to fire so many times during tau that the population of some reactant species will be driven negative. Two recent papers have shown how that unacceptable occurrence can be avoided by replacing the Poisson random variables with binomial random variables, whose values are naturally bounded. This paper describes a modified Poisson tau-leaping procedure that also avoids negative populations, but is easier to implement than the binomial procedure. The new Poisson procedure also introduces a second control parameter, whose value essentially dials the procedure from the original Poisson tau-leaping at one extreme to the exact stochastic simulation algorithm at the other; therefore, the modified Poisson procedure will generally be more accurate than the original Poisson procedure.
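
The key point of the binomial alternative referenced above (reaction firings naturally bounded by the available population) can be seen in a one-reaction toy model. This illustrates the bounding idea only, not the full algorithm of either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_leap_decay(x0, c, tau, steps):
    """Tau-leaping for the decay reaction X -> 0 with propensity c*X.
    Firings in each leap are drawn as Binomial(x, p) with
    p = 1 - exp(-c*tau), so they can never exceed the current
    population x (a Poisson draw could, driving x negative)."""
    x = x0
    path = [x]
    p = 1.0 - np.exp(-c * tau)
    for _ in range(steps):
        x -= rng.binomial(x, p)   # bounded above by x by construction
        path.append(x)
    return path

path = binomial_leap_decay(x0=1000, c=0.5, tau=0.1, steps=100)
```

Because every draw is bounded by the current count, the trajectory is non-increasing and non-negative regardless of how large c*tau is chosen.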

  8. Calculation of generalized secant integral using binomial coefficients

    International Nuclear Information System (INIS)

    Guseinov, I.I.; Mamedov, B.A.

    2004-01-01

    A single series expansion relation is derived for the generalized secant (GS) integral in terms of binomial coefficients, exponential integrals and incomplete gamma functions. The convergence of the series is tested for concrete cases of the parameters. The formulas given in this study for the evaluation of the GS integral show a good rate of convergence and numerical stability

  9. Currency lookback options and observation frequency: A binomial approach

    NARCIS (Netherlands)

    T.H.F. Cheuk; A.C.F. Vorst (Ton)

    1997-01-01

    In the last decade, interest in exotic options has been growing, especially in the over-the-counter currency market. In this paper we consider lookback currency options, which are path-dependent. We show that a one-state-variable binomial model for currency lookback options can ...

  10. CONFLICTS AND OPERATIONAL HURDLES IN THE MANAGEMENT OF INNOVATIVE PROJECTS: AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    José Varela Donato

    2013-12-01

    Full Text Available This article aims at describing how conflicts and operational hurdles manifest themselves in the management of innovative projects in a development bank (DB). The research takes a qualitative and exploratory approach; data collection consisted of interviews with consultants and project managers and of documents on project management in the development bank; the interviews underwent content analysis using Atlas.ti. The research revealed that project management under the newly implemented model faces difficulties similar to those observed in traditional hierarchies. It was observed that conflicts regarding interests, values and psychological aspects, as well as operational hurdles, are intrinsic to the life cycle of innovative projects, whose implementation requires considerable political capacity from their leaders to be effective.

  11. A Bayesian approach to functional mixed-effects modeling for longitudinal data with binomial outcomes.

    Science.gov (United States)

    Kliethermes, Stephanie; Oleson, Jacob

    2014-08-15

    Longitudinal growth patterns are routinely seen in medical studies where individual growth and population growth are followed up over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear and quadratic); however, these relationships may not accurately capture growth over time. Functional mixed-effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible, and thus, estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Sustainable energy policy for Asia: Mitigating systemic hurdles in a highly dense city

    International Nuclear Information System (INIS)

    Ng, Artie W.; Nathwani, Jatin

    2010-01-01

    Greenhouse gas (GHG) emission has increasingly become a sensitive cross-border issue affecting global public interests. While the use of renewable energy technology is perceived as a means of delivering emission-free solutions, its penetration into the energy market has not been as timely or as significant as projected in prior studies. This article aims to illustrate some of the critical hurdles policy makers face as they start formulating environmentally friendly energy consumption policies for the public in Asian economies. In particular, by analyzing the characteristics of Hong Kong, the authors unveil the challenges this highly dense city faces in reaching a landscape of alternative energy resources for its transition into a sustainable economy. Education and engagement with the public about a sustainable future, alignment of stakeholders' economic interests, and absorption capacity for emerging technologies are argued to be the three main challenges and initiatives in mitigating the underlying systemic hurdles that remain to be overcome. Observing the current responses of Hong Kong's policy makers to these externalities, this study articulates the critical challenges in mitigating the specific systemic hurdles embedded in the existing infrastructure of a highly dense city. Possible mitigating measures to enable the deployment of integrative sustainable energy solutions for dealing with climate change are discussed. (author)

  13. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

    Science.gov (United States)

    Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

    2016-08-01

    In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
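
The coverage criterion named above can be checked directly by simulation. The sketch below compares two of the commonly used intervals (the ordinary Wald interval and the Wilson score interval) at n = 30, p = 0.1, a setting where their disagreement is well documented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
z = norm.ppf(0.975)
n, p_true, reps = 30, 0.1, 5000

k = rng.binomial(n, p_true, size=reps)
p_hat = k / n

# Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
half = z * np.sqrt(p_hat * (1 - p_hat) / n)
wald_cover = np.mean((p_hat - half <= p_true) & (p_true <= p_hat + half))

# Wilson (score) interval
denom = 1 + z**2 / n
centre = (p_hat + z**2 / (2 * n)) / denom
half_w = (z / denom) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
wilson_cover = np.mean((centre - half_w <= p_true) & (p_true <= centre + half_w))
```

In this regime the Wald interval falls well short of its nominal 95% coverage while the Wilson interval stays close to it, which is exactly the kind of conflicting behaviour the record's practitioner faces.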

  14. Kinetic and kinematic analysis of hurdle clearance of an African and ...

    African Journals Online (AJOL)

    The results showed a difference in the centre of mass displacement at hurdle clearance and velocity-parameters in both the take-off and the landing phases. When comparing R.G to C.J, the latter had a smaller vertical displacement and a longer horizontal displacement, in addition to, a greater horizontal velocity along with ...

  15. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    Science.gov (United States)

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model once data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large amount of zeros. In addition to a greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-05-20

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
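
The link between the ICC parameter and beta-binomial overdispersion can be checked numerically. A small sketch using scipy's betabinom; the parameterization of (a, b) in terms of the mean p and the ICC rho is the standard one, stated in the comments rather than taken from the paper:

```python
from scipy.stats import betabinom, binom

# Beta-binomial with event probability drawn from Beta(a, b); choosing
# a = p*(1-rho)/rho and b = (1-p)*(1-rho)/rho gives mean probability p
# and intra-class correlation rho = 1 / (a + b + 1).
n, p, rho = 20, 0.3, 0.1
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho

bb_var = betabinom.var(n, a, b)
bin_var = binom.var(n, p)            # n * p * (1 - p)
inflation = bb_var / bin_var         # equals 1 + (n - 1) * rho
```

The variance inflation factor 1 + (n − 1)ρ is the multiplicative overdispersion that the record's ODM attaches to the ordinary binomial variance.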

  17. Difference of Sums Containing Products of Binomial Coefficients and Their Logarithms

    National Research Council Canada - National Science Library

    Miller, Allen R; Moskowitz, Ira S

    2005-01-01

    Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced...

  18. Difference of Sums Containing Products of Binomial Coefficients and their Logarithms

    National Research Council Canada - National Science Library

    Miller, Allen R; Moskowitz, Ira S

    2004-01-01

    Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced...

  19. Using the β-binomial distribution to characterize forest health

    Science.gov (United States)

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...

  20. Negative binomial distribution fits to multiplicity distributions in restricted δη intervals from central O+Cu collisions at 14.6A GeV/c and their implication for "Intermittency"

    International Nuclear Information System (INIS)

    Tannenbaum, M.J.

    1993-01-01

    Experience in analyzing the data from Light and Heavy Ion Collisions in terms of distributions rather than moments suggests that conventional fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD). Multiplicity distributions were obtained for central 16O+Cu collisions in bins of δη = 0.1, 0.2, 0.3, ..., 0.5, 1.0, where the bin of 1.0 covers 1.2 < η < 2.2 in the laboratory. NBD fits were performed to these distributions with excellent results in all δη bins. The κ parameter of the NBD fit increases linearly with the δη interval, which is a totally unexpected and particularly striking result. Due to the well known property of the NBD under convolution, this result indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ∼ 0.1 are largely statistically independent. The relationship to 2-particle correlations and "intermittency" will be discussed.
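
The convolution property invoked above (the sum of independent NBD variables with a common p is NBD with the k parameters added) is easy to verify numerically; a sketch with scipy's nbinom, which uses the (size, prob) parameterization:

```python
import numpy as np
from scipy.stats import nbinom

p, k1, k2 = 0.4, 2.0, 3.0
x = np.arange(200)

# pmf of the sum of two independent NBD counts, by discrete convolution;
# the first 200 entries of the convolution are exact
pmf_sum = np.convolve(nbinom.pmf(x, k1, p), nbinom.pmf(x, k2, p))[:200]

# closure under convolution: the sum is NBD(k1 + k2, p)
max_err = np.max(np.abs(pmf_sum - nbinom.pmf(x, k1 + k2, p)))
```

This additivity of k under convolution is what makes a κ that grows linearly with the δη interval equivalent to near statistical independence of adjacent δη ∼ 0.1 bins.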

  1. Generation of the reciprocal-binomial state for optical fields

    International Nuclear Information System (INIS)

    Valverde, C.; Avelar, A.T.; Baseia, B.; Malbouisson, J.M.C.

    2003-01-01

    We compare the efficiencies of two interesting schemes to generate truncated states of the light field in running modes, namely the 'quantum scissors' and the 'beam-splitter array' schemes. The latter is applied to create the reciprocal-binomial state as a travelling wave, required to implement recent experimental proposals of phase-distribution determination and of quantum lithography

  2. A Bayesian non-inferiority test for two independent binomial proportions.

    Science.gov (United States)

    Kawasaki, Yohei; Miyaoka, Etsuo

    2013-01-01

    In drug development, non-inferiority tests are often employed to determine the difference between two independent binomial proportions. Many test statistics for non-inferiority are based on the frequentist framework. However, research on non-inferiority in the Bayesian framework is limited. In this paper, we suggest a new Bayesian index τ = P(π₁ > π₂ − Δ₀ | X₁, X₂), where X₁ and X₂ denote binomial random variables for trials n₁ and n₂ with parameters π₁ and π₂, respectively, and the non-inferiority margin is Δ₀ > 0. We show two calculation methods for τ: an approximate method that uses a normal approximation and an exact method that uses the exact posterior PDF. We compare the approximate probability with the exact probability for τ. Finally, we present the results of actual clinical trials to show the utility of index τ. Copyright © 2013 John Wiley & Sons, Ltd.
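
    A sketch of how such an index can be evaluated by simple Monte Carlo, assuming independent Jeffreys Beta(1/2, 1/2) priors so that each posterior is a Beta distribution; the priors, data, and margin below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_mc(x1, n1, x2, n2, delta0, a=0.5, b=0.5, draws=200_000):
    # posterior of each proportion under a Beta(a, b) prior is again Beta
    pi1 = rng.beta(x1 + a, n1 - x1 + b, size=draws)
    pi2 = rng.beta(x2 + a, n2 - x2 + b, size=draws)
    # Monte Carlo estimate of P(pi1 > pi2 - delta0 | X1, X2)
    return np.mean(pi1 > pi2 - delta0)

# illustrative data: 44/50 responders vs 40/50, margin delta0 = 0.10
tau = tau_mc(44, 50, 40, 50, 0.10)
```

    A large τ (close to 1) supports the non-inferiority claim under this index.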

  3. Detection of progression of glaucomatous visual field damage using the point-wise method with the binomial test.

    Science.gov (United States)

    Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo

    2013-01-01

    To compare the performance of newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR) in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open-angle glaucoma patients over 6.0 ± 1.5 (mean ± standard deviation) years. Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The number of VF test points with a significant trend at various probability levels was counted and compared against the number expected by chance using the binomial test (one-sided). A VF series was defined as "significant" if the median p-value from the binomial test fell below the significance threshold. The proportion of series flagged as significant by the binomial PLR method (0.14 to 0.86) was significantly higher than with MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than with the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, and hence will be helpful as a tool to 'flag' possible VF deterioration.

  4. Hits per trial: Basic analysis of binomial data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report presents simple statistical methods for analyzing binomial data, such as the number of failures in some number of demands. It gives point estimates, confidence intervals, and Bayesian intervals for the failure probability. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the failure probability varies randomly. Examples and SAS programs are given
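
    Two of the intervals the report covers, the exact (Clopper-Pearson) confidence interval and a Bayesian (Jeffreys) interval for a failure probability, can be sketched directly from Beta quantiles; the example data (3 failures in 100 demands) are illustrative:

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    # exact (Clopper-Pearson) two-sided interval from Beta quantiles
    alpha = 1.0 - conf
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

def jeffreys(x, n, conf=0.95):
    # Bayesian credible interval from the Jeffreys Beta(1/2, 1/2) prior
    alpha = 1.0 - conf
    return (beta.ppf(alpha / 2, x + 0.5, n - x + 0.5),
            beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5))

cp = clopper_pearson(3, 100)   # 3 failures in 100 demands
jf = jeffreys(3, 100)
```

    The exact interval is deliberately conservative, so it is typically wider than the Jeffreys interval.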

  6. Analysis of railroad tank car releases using a generalized binomial model.

    Science.gov (United States)

    Liu, Xiang; Hong, Yili

    2015-11-01

    The United States is experiencing an unprecedented boom in shale oil production, leading to a dramatic growth in petroleum crude oil traffic by rail. In 2014, U.S. railroads carried over 500,000 tank carloads of petroleum crude oil, up from 9500 in 2008 (a 5300% increase). In light of continual growth in crude oil by rail, there is an urgent national need to manage this emerging risk. This need has been underscored in the wake of several recent crude oil release incidents. In contrast to highway transport, which usually involves a tank trailer, a crude oil train can carry a large number of tank cars, having the potential for a large, multiple-tank-car release incident. Previous studies exclusively assumed that railroad tank car releases in the same train accident are mutually independent, thereby estimating the number of tank cars releasing given the total number of tank cars derailed based on a binomial model. This paper specifically accounts for dependent tank car releases within a train accident. We estimate the number of tank cars releasing given the number of tank cars derailed based on a generalized binomial model. The generalized binomial model provides a significantly better description for the empirical tank car accident data through our numerical case study. This research aims to provide a new methodology and new insights regarding the further development of risk management strategies for improving railroad crude oil transportation safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
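
    One simple way to see what dependence between releases does, using a beta-binomial as an illustrative correlated model (not the paper's specific generalized binomial model), is to compare its variance with the independent binomial's; all parameter values below are assumptions:

```python
from scipy.stats import betabinom, binom

n = 20      # tank cars derailed in one accident (assumed)
p = 0.3     # marginal per-car release probability (assumed)
rho = 0.2   # assumed intra-accident correlation between cars

# beta-binomial parameters giving mean n*p and intra-class correlation rho
a = p * (1.0 - rho) / rho
b = (1.0 - p) * (1.0 - rho) / rho

var_dependent = betabinom.var(n, a, b)
var_independent = binom.var(n, p)   # the classical independence assumption
```

    The same mean number of releasing cars, but a fatter-tailed count: positive within-accident correlation inflates the chance of a large multiple-car release, which is what the independence assumption understates.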

  7. Total Quality Management in Higher Education: Clearing the Hurdles. A Survey on Strategies for Implementing Quality Management Practices in Higher Education. A GOAL/QPC Application Report.

    Science.gov (United States)

    Seymour, Daniel

    Based on a survey of Quality Management (QM) practitioners at 21 colleges, this study presents the 10 most difficult implementation hurdles to QM in higher education and a set of hurdle-clearing strategies. The hurdles are: (1) lack of time to implement QM; (2) perception that QM is something for janitorial and housing staffs but not applicable to…

  8. Driving force of organic fertilizer use in Central Rift Valley of Ethiopia: Independent double hurdle approach

    Directory of Open Access Journals (Sweden)

    Terefe Aemro T.

    2016-01-01

    Full Text Available The objective of this study was to identify the important factors that influence both the adoption and the level of use of organic fertilizer among smallholder farmers in the Central Rift Valley of Ethiopia, using primary data collected from 161 sample respondents. An independent double hurdle model was used to address the objectives of the study, on the assumption that adoption and level of organic fertilizer use are two independent decisions influenced by different factors. Empirical estimates of the first hurdle reveal that literacy status of the household head, livestock holding, frequency of extension contact, distance to market and slope of the plot are statistically significant decision variables that affect the probability of adopting organic fertilizer. Meanwhile, estimates of the second hurdle reveal that the extent of organic fertilizer use was determined by livestock holding, access to credit, distance to the market and slope of the plot. This indicates that the factors that affect adoption are not necessarily the same as those that influence intensity. Therefore, it is important to consider both stages in evaluating strategies aimed at promoting the adoption and use of organic fertilizer.
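
    A heavily simplified two-part sketch of the independent double hurdle idea: a logit participation equation fitted by maximum likelihood, and a lognormal intensity equation fitted on adopters only (the full model typically uses a probit first hurdle and a truncated normal second hurdle). All data and covariate names are simulated stand-ins, not the survey's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 2000

# simulated stand-ins for two survey covariates
livestock = rng.normal(0.0, 1.0, n)   # e.g. standardized livestock holding
distance = rng.normal(0.0, 1.0, n)    # e.g. standardized distance to market
X = np.column_stack([np.ones(n), livestock, distance])

# hurdle 1 (adoption): logit with coefficients (0.5, 1.0, -0.8)
adopt = (X @ [0.5, 1.0, -0.8] + rng.logistic(0.0, 1.0, n) > 0)
# hurdle 2 (intensity among adopters): deliberately different coefficients
qty = np.where(adopt,
               np.exp(X @ [1.0, 0.6, -0.3] + rng.normal(0.0, 0.5, n)), 0.0)

# first hurdle fitted by logit maximum likelihood
def logit_nll(beta):
    xb = X @ beta
    return np.sum(np.logaddexp(0.0, xb) - adopt * xb)

b1 = minimize(logit_nll, np.zeros(3), method="BFGS").x

# second hurdle fitted by OLS on log quantity, adopters only (valid here
# because the two hurdles' errors are simulated as independent)
b2, *_ = np.linalg.lstsq(X[qty > 0], np.log(qty[qty > 0]), rcond=None)
```

    Fitting the two hurdles separately is exactly what independence buys: each equation can carry its own covariate effects, mirroring the study's finding that adoption and intensity drivers differ.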

  9. Mechanisms of adaptation to intensive loads of 400 meters’ hurdles runners at stage of initial basic training

    Directory of Open Access Journals (Sweden)

    A.S. Rovniy

    2015-08-01

    Full Text Available Purpose: to study the adaptation mechanisms of 400 m hurdles runners to intensive physical loads. Material: 13 400 m hurdles runners and 13 400 m runners participated in the research. Results: it was found that the physiological cost of the athletes' special work capacity has a fragmentary character. Results on the physiological and biochemical mechanisms of adaptation to dosed work are presented; they showed no significant differences and cannot by themselves objectively characterize the mechanisms of the athletes' special work capacity. No definite differences were detected in the indicators of the mechanisms ensuring special work capacity under dosed loads. The level of anaerobic glycolysis was found to be an objective criterion of the special work capacity of 400 m hurdles runners, which shows that special loads are necessary for determining functional potential for this kind of activity. Conclusions: the results deepen knowledge of the mechanisms of adaptation to specific competitive activity. Correct approaches to processing and analysing the results permit a more specific determination of athletes' functional potential in different kinds of competitive activity.

  10. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth, we abbreviate it as CI) for the difference between two proportions in paired binomial setting using method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use weighted likelihood that is essentially making adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with that of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Metaprop: a Stata command to perform meta-analysis of binomial data.

    Science.gov (United States)

    Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc

    2014-01-01

    Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins where the normal approximation procedures often break down, by use of the binomial distribution to model the within-study variability or by allowing Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study specific and pooled confidence intervals always were within admissible values, contrary to the original publication, where metan was used.

  12. Jumping the PBL Implementation Hurdle: Supporting the Efforts of K-12 Teachers

    Science.gov (United States)

    Ertmer, Peggy A.; Simons, Krista D.

    2006-01-01

    While problem-based learning (PBL) has a relatively long history of successful use in medical and pre-professional schools, it has yet to be widely adopted by K--12 teachers. This may be due, in part, to the numerous challenges teachers experience when implementing PBL. In this paper, we describe specific hurdles that teachers are likely to…

  13. Hurdles in Basic Science Translation

    Directory of Open Access Journals (Sweden)

    Christina J. Perry

    2017-07-01

    Full Text Available In the past century there have been incredible advances in the field of medical research, but what hinders translation of this knowledge into effective treatment for human disease? There is an increasing focus on the failure of many research breakthroughs to be translated through the clinical trial process and into medical practice. In this mini review, we will consider some of the reasons that findings in basic medical research fail to become translated through clinical trials and into basic medical practices. We focus in particular on the way that human disease is modeled, the understanding we have of how our targets behave in vivo, and also some of the issues surrounding reproducibility of basic research findings. We will also look at some of the ways that have been proposed for overcoming these issues. It appears that there needs to be a cultural shift in the way we fund, publish and recognize quality control in scientific research. Although this is a daunting proposition, we hope that with increasing awareness and focus on research translation and the hurdles that impede it, the field of medical research will continue to inform and improve medical practice across the world.

  14. Possibility and challenges of conversion of current virus species names to Linnaean binomials

    Science.gov (United States)

    Postler, Thomas; Clawson, Anna N.; Amarasinghe, Gaya K.; Basler, Christopher F.; Bavari, Sina; Benko, Maria; Blasdell, Kim R.; Briese, Thomas; Buchmeier, Michael J.; Bukreyev, Alexander; Calisher, Charles H.; Chandran, Kartik; Charrel, Remi; Clegg, Christopher S.; Collins, Peter L.; De la Torre, Juan Carlos; DeRisi, Joseph L.; Dietzgen, Ralf G.; Dolnik, Olga; Durrwald, Ralf; Dye, John M.; Easton, Andrew J.; Emonet, Sebastian; Formenty, Pierre; Fouchier, Ron A. M.; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balazs; Hewson, Roger; Horie, Masayuki; Jiang, Daohong; Kobinger, Gary P.; Kondo, Hideki; Kropinski, Andrew; Krupovic, Mart; Kurath, Gael; Lamb, Robert A.; Leroy, Eric M.; Lukashevich, Igor S.; Maisner, Andrea; Mushegian, Arcady; Netesov, Sergey V.; Nowotny, Norbert; Patterson, Jean L.; Payne, Susan L.; Paweska, Janusz T.; Peters, C.J.; Radoshitzky, Sheli; Rima, Bertus K.; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfacon, Helene; Salvato, Maria; Schwemmle, Martin; Smither, Sophie J.; Stenglein, Mark; Stone, D.M.; Takada, Ayato; Tesh, Robert B.; Tomonaga, Keizo; Tordo, N.; Towner, Jonathan S.; Vasilakis, Nikos; Volchkov, Victor E.; Jensen, Victoria; Walker, Peter J.; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E.; Zerbini, Francisco Murilo; Kuhn, Jens H.

    2017-01-01

    Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment.

  15. Modeling and Predistortion of Envelope Tracking Power Amplifiers using a Memory Binomial Model

    DEFF Research Database (Denmark)

    Tafuri, Felice Francesco; Sira, Daniel; Larsen, Torben

    2013-01-01

    The model definition is based on binomial series, hence the name memory binomial model (MBM). The MBM is here applied to measured data sets acquired from an ET measurement set-up. When used as a PA model, the MBM showed an NMSE (Normalized Mean Squared Error) as low as −40 dB and an ACEPR (Adjacent Channel Error Power Ratio) below −51 dB. The simulated predistortion results showed that the MBM can improve the compensation of distortion in the adjacent channel by 5.8 dB and 5.7 dB compared to a memory polynomial predistorter (MPPD). The predistortion performance in the time domain showed an NMSE…

  16. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis

    NARCIS (Netherlands)

    Gu, Y.H.; Katsaggelos, A.K.

    1994-01-01

    A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering…

  17. Fermat’s Little Theorem via Divisibility of Newton’s Binomial

    Directory of Open Access Journals (Sweden)

    Ziobro Rafał

    2015-09-01

    Full Text Available Solving equations in integers is an important part of number theory [29]. In many cases it can be conducted by the factorization of an equation's elements, such as Newton's binomial. The article introduces several simple formulas which may facilitate this process. Some of them are taken from relevant books [28], [14].
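
    The divisibility route can be sketched directly: for prime p, every middle binomial coefficient C(p, k) is divisible by p, so the binomial theorem gives (a+1)^p ≡ a^p + 1 (mod p), and induction on a yields Fermat's little theorem. A short check of both the divisibility step and the conclusion:

```python
from math import comb

def middle_binomials_divisible(p):
    # the binomial-divisibility step: p | C(p, k) for 0 < k < p
    return all(comb(p, k) % p == 0 for k in range(1, p))

def fermat_holds(p):
    # the conclusion reached by induction on a: a^p ≡ a (mod p)
    return all(pow(a, p, p) == a % p for a in range(p))

primes = [2, 3, 5, 7, 11, 13, 17]
all_ok = all(middle_binomials_divisible(p) and fermat_holds(p)
             for p in primes)
# the divisibility step genuinely needs primality: C(4, 2) = 6 is not
# divisible by 4
composite_fails = comb(4, 2) % 4 != 0
```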

  18. Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet

    Science.gov (United States)

    Rochowicz, John A., Jr.

    2005-01-01

    This paper introduces the reader to the concepts of binomial probability and simulation. A spreadsheet is used to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
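
    The same simulation idea transfers directly from a spreadsheet to a few lines of code: columns of uniform random numbers thresholded at p give Bernoulli trials, and row sums give simulated binomial counts to compare against the exact probabilities. A minimal sketch:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
n, p, rows = 10, 0.3, 100_000

# each row is one simulated experiment: n uniform random numbers
# thresholded at p give Bernoulli trials, and the row sum is the count
trials = rng.random((rows, n)) < p
counts = trials.sum(axis=1)

sim_pmf = np.bincount(counts, minlength=n + 1) / rows
exact_pmf = binom.pmf(np.arange(n + 1), n, p)
max_err = np.abs(sim_pmf - exact_pmf).max()
```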

  19. PREDICTIVE CONTRIBUTION OF MORPHOLOGICAL CHARACTERISTICS AND MOTOR ABILITIES ON THE RESULT OF RUNNING THE 60m HURDLES IN BOYS AGED 12 - 13 YEARS

    Directory of Open Access Journals (Sweden)

    Zana Bujak

    2014-06-01

    Full Text Available The subject of this study is to determine the predictive contribution of morphological characteristics and motor abilities to the 60 m hurdles result, with the aim of forming a group of easily applicable field tests for identifying boys who are talented in hurdle racing. The sample comprised 60 boys aged 12-13. The variable sample consisted of a 60 m hurdles criterion variable and a set of 13 predictor variables covering morphological characteristics, speed-strength abilities and the subjects' coordination qualities. Applying regression analysis, the predictive contribution of the complete variable set of morphological characteristics and motor abilities was determined to be statistically significant in influencing the 60 m hurdles outcome. The greatest individual statistically significant predictive contributions were achieved by the variables assessing speed-strength qualities, the 20 m flying-start race result and the standing long jump, and by only one variable from the field of morphological characteristics, the shin length. The results support the following conclusion: the two specific speed-strength variables, the 20 m flying-start race result and the standing long jump, can be relevant predictors of successful outcome in hurdle races.

  20. Joint Analysis of Binomial and Continuous Traits with a Recursive Model

    DEFF Research Database (Denmark)

    Varona, Louis; Sorensen, Daniel

    2014-01-01

    This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshire…

  1. Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research

    Science.gov (United States)

    Bardzell, Michael; Poimenidou, Eirini

    2011-01-01

    In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…
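
    One standard tool in this area (not necessarily the article's own avenue of exploration) is Lucas' theorem, which reduces C(n, k) mod p to the base-p digits of n and k; a short sketch:

```python
from math import comb

def binom_mod_p(n, k, p):
    # Lucas' theorem: C(n, k) mod p is the product over base-p digits of
    # C(n_i, k_i) mod p (with C(n_i, k_i) = 0 whenever k_i > n_i)
    result = 1
    while n or k:
        n, ni = divmod(n, p)
        k, ki = divmod(k, p)
        result = result * comb(ni, ki) % p
    return result

agrees = all(binom_mod_p(n, k, 7) == comb(n, k) % 7
             for n in range(60) for k in range(n + 1))
```

    Plotting which entries of Pascal's triangle are nonzero mod p is also how the well-known Sierpinski-like patterns arise, which makes the topic well suited to visualization projects.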

  2. A binomial random sum of present value models in investment analysis

    OpenAIRE

    Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος

    1997-01-01

    Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.
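
    One plausible reading of a binomial random sum of present values (an illustration only, not the paper's model): a binomial number N of cash flows, each an i.i.d. amount discounted at a random time, so that Wald's identity gives E[S] = E[N]·E[X·v^T]. All distributions below are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
m, q, v = 20, 0.6, 0.95          # trials, success probability, discount
draws = 50_000

# S = sum over N ~ Binomial(m, q) payments of X * v**T, with amount X
# lognormal and payment time T uniform on 1..10, all independent
N = rng.binomial(m, q, size=draws)
S = np.empty(draws)
for i, ni in enumerate(N):
    x = rng.lognormal(0.0, 0.25, size=ni)
    t = rng.integers(1, 11, size=ni)
    S[i] = np.sum(x * v ** t)

mc_mean = S.mean()
# Wald's identity: E[S] = E[N] * E[X * v**T]
analytic = m * q * np.exp(0.25**2 / 2) * np.mean(v ** np.arange(1, 11))
```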

  3. The three hurdles of tax planning: How business context, aims of tax planning, and tax manager power affect tax

    OpenAIRE

    Feller, Anna; Schanz, Deborah

    2014-01-01

    The question of why some companies pay more taxes than others is a widely investigated topic of interest. One of the famous suspect explanations is a phenomenon called tax avoidance. We develop a holistic theoretical concept of influences on corporate tax planning through a series of 19 in-depth German tax expert interviews. Our findings show that three distinct hurdles in the tax planning process can explain different levels of tax expense across companies. Those three hurdles are which tax ...

  4. Four hurdles for conservation on private land: the case of the golden lion tamarin, Atlantic Forest, Brazil.

    Directory of Open Access Journals (Sweden)

    Ralf Christopher Buckley

    2015-08-01

    Full Text Available Many threatened species worldwide rely on patches of remnant vegetation in private landholdings. To establish private reserves that contribute effectively to conservation involves a wide range of complex and interacting ecological, legal, social and financial factors. These can be seen as a series of successive hurdles, each with multiple bars, all of which must be surmounted. The golden lion tamarin, Leontopithecus rosalia, is restricted to the Atlantic Forest biome in the state of Rio de Janeiro, Brazil. This forest is largely cleared, but there are many small remnant patches on private lands able to support tamarins. Local NGOs have used limited funds to contribute to tamarin conservation in a highly cost-effective way. We examined the mechanisms by analysing documents and interviewing landholders and other stakeholders. We found that the local NGOs successfully identified landholdings where the ecological, legal, social and some financial hurdles had already been crossed, and helped landholders over the final financial hurdle by funding critical cost components. This cost <5% of the price of outright land purchase. The approach is scalable for the golden lion tamarin elsewhere within the Atlantic Forest biome, and applicable to other species and ecosystems worldwide.

  5. Using the Binomial Series to Prove the Arithmetic Mean-Geometric Mean Inequality

    Science.gov (United States)

    Persky, Ronald L.

    2003-01-01

    In 1968, Leon Gerber compared (1 + x)^a to its kth partial sum as a binomial series. His result is stated and, as an application of this result, a proof of the arithmetic mean-geometric mean inequality is presented.
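
    One classical route of this kind, sketched under the assumption that only the first-order case of the partial-sum comparison, Bernoulli's inequality (1+x)^n ≥ 1 + nx for x ≥ −1, is needed:

```latex
% With A_k = (a_1 + ... + a_k)/k and x = A_n/A_{n-1} - 1 >= -1,
% Bernoulli's inequality gives
\[
\left(\frac{A_n}{A_{n-1}}\right)^{n}
  \;\ge\; 1 + n\left(\frac{A_n}{A_{n-1}} - 1\right)
  \;=\; \frac{nA_n - (n-1)A_{n-1}}{A_{n-1}}
  \;=\; \frac{a_n}{A_{n-1}},
\]
% hence A_n^n >= a_n A_{n-1}^{n-1}, and induction on n yields
% A_n^n >= a_1 a_2 ... a_n, i.e. the AM-GM inequality.
```

    This is a standard argument of the type the article describes, not necessarily Gerber's exact presentation.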

  6. Extension of Space Food Shelf Life Through Hurdle Approach

    Science.gov (United States)

    Cooper, M. R.; Sirmons, T. A.; Froio-Blumsack, D.; Mohr, L.; Young, M.; Douglas, G. L.

    2018-01-01

    The processed and prepackaged space food system is the main source of crew nutrition, and hence central to astronaut health and performance. Unfortunately, space food quality and nutrition degrade to unacceptable levels in two to three years with current food stabilization technologies. Future exploration missions will require a food system that remains safe, acceptable and nutritious through five years of storage within vehicle resource constraints. The potential of stabilization technologies (alternative storage temperatures, processing, formulation, ingredient source, packaging, and preparation procedures), when combined in a hurdle approach, to mitigate quality and nutritional degradation is being assessed. Sixteen representative foods from the International Space Station food system were chosen for production and analysis and will be evaluated initially and at one, three, and five years, with potential for analysis at seven years if necessary. Analysis includes changes in color, texture, nutrition, sensory quality, and rehydration ratio when applicable. The food samples will be stored at −20 °C, 4 °C, and 21 °C. Select food samples will also be evaluated at −80 °C to determine the impacts of ultra-cold storage after one and five years. Packaging film barrier properties and mechanical integrity will be assessed before and after processing and storage. At the study conclusion, if the tested hurdles are adequate, formulation, processing, and storage combinations will be uniquely identified for processed food matrices to achieve a five-year shelf life. This study will provide one of the most comprehensive investigations of long-duration food stability ever completed, and the achievement of extended food system stability will have profound impacts on health and performance for spaceflight crews and for relief efforts and military applications on Earth.

  7. Comparison of beta-binomial regression model approaches to analyze health-related quality of life data.

    Science.gov (United States)

    Najera-Zuloaga, Josu; Lee, Dae-Jin; Arostegui, Inmaculada

    2017-01-01

    Health-related quality of life has become an increasingly important indicator of health status in clinical trials and epidemiological research. Moreover, the study of the relationship of health-related quality of life with patient and disease characteristics has become one of the primary aims of many health-related quality of life studies. Health-related quality of life scores are usually assumed to be distributed as binomial random variables and are often highly skewed. The use of the beta-binomial distribution in the regression context has been proposed to model such data; however, beta-binomial regression has been performed by means of two different approaches in the literature: (i) the beta-binomial distribution with a logistic link; and (ii) hierarchical generalized linear models. No existing work on the analysis of health-related quality of life survey data has compared the two approaches in terms of adequacy and the interpretation of the regression parameters. This paper is motivated by the analysis of a real data application of health-related quality of life outcomes in patients with Chronic Obstructive Pulmonary Disease, where the use of the two approaches yields contradictory results in terms of the significance of covariate effects and, consequently, the interpretation of the most relevant factors in health-related quality of life. We explain the results of both methodologies through a simulation study and address the need to apply the proper approach in the analysis of health-related quality of life survey data for practitioners, providing an R package.

  8. Binomial and enumerative sampling of Tetranychus urticae (Acari: Tetranychidae) on peppermint in California.

    Science.gov (United States)

    Tollerup, Kris E; Marcum, Daniel; Wilson, Rob; Godfrey, Larry

    2013-08-01

    The two-spotted spider mite, Tetranychus urticae Koch, is an economic pest on peppermint [Mentha × piperita (L.), 'Black Mitcham'] grown in California. A sampling plan for T. urticae was developed under Pacific Northwest conditions in the early 1980s and has been used by California growers since approximately 1998. This sampling plan, however, is cumbersome and a poor predictor of T. urticae densities in California. Between June and August, the numbers of immature and adult T. urticae were counted on leaves at three commercial peppermint fields (sites) in 2010 and a single field in 2011. In each of seven locations per site, 45 leaves were sampled, that is, 9 leaves per five stems. Leaf samples were stratified by collecting three leaves from the top, middle, and bottom strata per stem. The on-plant distribution of T. urticae did not significantly differ among the stem strata through the growing season. Binomial and enumerative sampling plans were developed using generic Taylor's power law coefficient values. The best fit of our data for binomial sampling occurred using a tally threshold of T = 0. The optimum number of leaves required for T. urticae at the critical density of five mites per leaf was 20 for the binomial and 23 for the enumerative sampling plans, respectively. Sampling models were validated using Resampling for Validation of Sampling Plan Software.
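
    The tally-threshold T = 0 (presence-absence) idea can be sketched by positing a count model per leaf and inverting the occupancy-density relation; the negative binomial with dispersion k = 1.2 below is an assumed stand-in, not a value fitted in the study:

```python
from scipy.optimize import brentq

k = 1.2   # assumed negative binomial dispersion of mites per leaf

def p_occupied(m):
    # P(count > 0) for a negative binomial with mean m and dispersion k:
    # 1 - P(0) = 1 - (1 + m/k)**(-k)
    return 1.0 - (1.0 + m / k) ** (-k)

def density_from_incidence(p0):
    # invert the incidence-density relation numerically
    return brentq(lambda m: p_occupied(m) - p0, 1e-9, 1e6)

# round trip at the critical density of five mites per leaf
m_est = density_from_incidence(p_occupied(5.0))
```

    In the field, only the proportion of occupied leaves needs to be recorded, which is what makes binomial (presence-absence) sampling so much faster than full counts.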

  9. Data analysis using the Binomial Failure Rate common cause model

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1983-09-01

    This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers

  10. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the score CI (as implemented in StatXact, SAS, and the R PropCI package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS score method (0, 0.057), R PropCI (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This…
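
    The bootstrapping idea described above can be sketched as follows (an illustration of the approach, not the bootLR package itself); note that 0.5^(1/100) ≈ 0.9931 reproduces the 99.31% quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)
n_dis, n_nodis, spec_hat = 100, 100, 0.60   # study layout from the abstract

# lowest population sensitivity whose MEDIAN sample sensitivity is 100%:
# the median of Binomial(n, s) is n exactly when s**n >= 0.5
sens_low = 0.5 ** (1.0 / n_dis)

# bootstrap the negative likelihood ratio LR- = (1 - sens) / spec
B = 20_000
sens_star = rng.binomial(n_dis, sens_low, B) / n_dis
spec_star = rng.binomial(n_nodis, spec_hat, B) / n_nodis
lr_neg = (1.0 - sens_star) / np.clip(spec_star, 1e-12, None)
ci = (np.quantile(lr_neg, 0.025), np.quantile(lr_neg, 0.975))
```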

  11. Topology of unitary groups and the prime orders of binomial coefficients

    Science.gov (United States)

    Duan, HaiBao; Lin, XianZu

    2017-09-01

Let $c:SU(n)\rightarrow PSU(n)=SU(n)/\mathbb{Z}_{n}$ be the quotient map of the special unitary group $SU(n)$ by its center subgroup $\mathbb{Z}_{n}$. We determine the induced homomorphism $c^{\ast}: H^{\ast}(PSU(n))\rightarrow H^{\ast}(SU(n))$ on cohomologies by computing with the prime orders of binomial coefficients.

  12. Poissonian and binomial models in radionuclide metrology by liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Malonda, A.

    1990-01-01

Binomial and Poissonian models developed for calculating the counting efficiency from a free parameter are analysed in this paper. These models have been applied to liquid scintillation counting systems with two or three photomultipliers. It is mathematically demonstrated that both models are equivalent and that the counting efficiencies calculated from either model are identical. (Author)

  13. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    Science.gov (United States)

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…

  14. Entanglement properties between two atoms in the binomial optical field interacting with two entangled atoms

    International Nuclear Information System (INIS)

    Liu Tang-Kun; Zhang Kang-Long; Tao Yu; Shan Chuan-Jia; Liu Ji-Bing

    2016-01-01

    The temporal evolution of the degree of entanglement between two atoms in a system of the binomial optical field interacting with two arbitrary entangled atoms is investigated. The influence of the strength of the dipole–dipole interaction between two atoms, probabilities of the Bernoulli trial, and particle number of the binomial optical field on the temporal evolution of the atomic entanglement are discussed. The result shows that the two atoms are always in the entanglement state. Moreover, if and only if the two atoms are initially in the maximally entangled state, the entanglement evolution is not affected by the parameters, and the degree of entanglement is always kept as 1. (paper)

  15. Forward selection two sample binomial test

    Science.gov (United States)

    Wong, Kam-Fai; Wong, Weng-Kee; Lin, Miao-Shan

    2016-01-01

    Fisher’s exact test (FET) is a conditional method that is frequently used to analyze data in a 2 × 2 table for small samples. This test is conservative and attempts have been made to modify the test to make it less conservative. For example, Crans and Shuster (2008) proposed adding more points in the rejection region to make the test more powerful. We provide another way to modify the test to make it less conservative by using two independent binomial distributions as the reference distribution for the test statistic. We compare our new test with several methods and show that our test has advantages over existing methods in terms of control of the type 1 and type 2 errors. We reanalyze results from an oncology trial using our proposed method and our software which is freely available to the reader. PMID:27335577
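The proposed modification is not available in standard libraries, but the conservatism of the conditional test that motivates it can be illustrated by comparing Fisher's exact test with an unconditional exact test in SciPy. The 2 × 2 table below is hypothetical.

```python
from scipy import stats

# Hypothetical 2x2 table: successes/failures in two groups of 10 each.
table = [[7, 3],
         [2, 8]]

# Conditional test (conditions on both margins); known to be conservative.
odds_ratio, p_fisher = stats.fisher_exact(table, alternative="two-sided")

# Barnard's unconditional exact test, typically less conservative for 2x2 tables.
p_barnard = stats.barnard_exact(table).pvalue
```

For this table the conditional two-sided p-value is about 0.070, while the unconditional test usually returns a smaller value, mirroring the power gains the paper pursues by a different route.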

  16. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  17. Computational results on the compound binomial risk model with nonhomogeneous claim occurrences

    NARCIS (Netherlands)

    Tuncel, A.; Tank, F.

    2013-01-01

    The aim of this paper is to give a recursive formula for non-ruin (survival) probability when the claim occurrences are nonhomogeneous in the compound binomial risk model. We give recursive formulas for non-ruin (survival) probability and for distribution of the total number of claims under the

  18. Lactic Acid Bacteria Selection for Biopreservation as a Part of Hurdle Technology Approach Applied on Seafood

    Directory of Open Access Journals (Sweden)

    Norman Wiernasz

    2017-05-01

    Full Text Available As fragile food commodities, microbial, and organoleptic qualities of fishery and seafood can quickly deteriorate. In this context, microbial quality and security improvement during the whole food processing chain (from catch to plate, using hurdle technology, a combination of mild preserving technologies such as biopreservation, modified atmosphere packaging, and superchilling, are of great interest. As natural flora and antimicrobial metabolites producers, lactic acid bacteria (LAB are commonly studied for food biopreservation. Thirty-five LAB known to possess interesting antimicrobial activity were selected for their potential application as bioprotective agents as a part of hurdle technology applied to fishery products. The selection approach was based on seven criteria including antimicrobial activity, alteration potential, tolerance to chitosan coating, and superchilling process, cross inhibition, biogenic amines production (histamine, tyramine, and antibiotics resistance. Antimicrobial activity was assessed against six common spoiling bacteria in fishery products (Shewanella baltica, Photobacterium phosphoreum, Brochothrix thermosphacta, Lactobacillus sakei, Hafnia alvei, Serratia proteamaculans and one pathogenic bacterium (Listeria monocytogenes in co-culture inhibitory assays miniaturized in 96-well microtiter plates. Antimicrobial activity and spoilage evaluation, both performed in cod and salmon juice, highlighted the existence of sensory signatures and inhibition profiles, which seem to be species related. Finally, six LAB with no unusual antibiotics resistance profile nor histamine production ability were selected as bioprotective agents for further in situ inhibitory assays in cod and salmon based products, alone or in combination with other hurdles (chitosan, modified atmosphere packing, and superchilling.

  19. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
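As a hedged, much simpler cousin of the compound binomial machinery discussed here, an interval for a single raw score under the plain binomial error model can be computed directly with SciPy; the item count and score below are hypothetical.

```python
from scipy import stats

# Hypothetical examinee: 32 correct out of 40 items. Under the simple
# binomial error model, the raw score is Binomial(n_items, true proportion).
n_items, raw = 40, 32
result = stats.binomtest(raw, n_items)

# Wilson score interval for the examinee's true proportion-correct
ci = result.proportion_ci(confidence_level=0.95, method="wilson")
lo, hi = ci.low, ci.high
```

The compound binomial model of the paper generalizes this by letting the success probability vary across item strata; the weighted-composite case adds a further layer the sketch does not attempt.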

  20. Computation of Clebsch-Gordan and Gaunt coefficients using binomial coefficients

    International Nuclear Information System (INIS)

    Guseinov, I.I.; Oezmen, A.; Atav, Ue

    1995-01-01

Using binomial coefficients, the Clebsch-Gordan and Gaunt coefficients were calculated for extremely large quantum numbers. The main advantage of this approach is directly calculating these coefficients, instead of using recursion relations. Accuracy of the results is quite high for quantum numbers l1 and l2 up to 100. Despite direct calculation, the CPU times are found comparable with those given in the related literature. 11 refs., 1 fig., 2 tabs

  1. Low reheating temperatures in monomial and binomial inflationary models

    International Nuclear Information System (INIS)

    Rehagen, Thomas; Gelmini, Graciela B.

    2015-01-01

We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied ϕ^2 inflationary potential is no longer favored by current CMB data, as well as ϕ^p with p>2, a ϕ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a ϕ^1 potential. We find that as a subdominant ϕ^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit

  2. A distribución binomial como ferramenta na resolución de problemas de xenética

    OpenAIRE

    Ron Pedreira, Antonio Miguel de; Martínez, Ana María

    1999-01-01

The binomial distribution has a wide range of fields of application because everyday situations very frequently involve some scenario based on two different, alternative, mutually exclusive outcomes whose probabilities sum to 1 (one hundred percent), that is, a certain event. As for genetics, one can find cases that allow the application of the binomial distribution in molecular, Mendelian, and quantitative genetics, and in the genetics of po...

  3. Application of a binomial cusum control chart to monitor one drinking water indicator

    Directory of Open Access Journals (Sweden)

    Elisa Henning

    2014-02-01

Full Text Available The aim of this study is to analyze the use of a binomial cumulative sum (CUSUM) chart to monitor the presence of total coliforms, biological indicators of the quality of water supplies, in water treatment processes. The samples were taken monthly from a water treatment plant and were analyzed from 2007 to 2009. The statistical treatment of the data was performed using GNU R, and routines were created for the approximation of the upper limit of the binomial CUSUM chart. Furthermore, a comparative study was conducted to investigate whether there is a significant difference in sensitivity between the CUSUM chart and the traditional Shewhart chart, the chart most commonly used in process monitoring. The results obtained demonstrate that this study was essential for making the right choice of chart for the statistical analysis of this process.
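An upper one-sided binomial CUSUM of the kind monitored in this study can be sketched as follows. The in-control and out-of-control rates, subgroup size, and decision interval below are all assumed for illustration; in practice the decision interval is calibrated to a target average run length, as the study's routines do.

```python
import numpy as np

rng = np.random.default_rng(1)
p0, p1 = 0.05, 0.15   # assumed in-control / out-of-control nonconforming rates
n = 50                # units inspected per sample

# Likelihood-ratio reference value per unit for detecting a shift p0 -> p1
k = np.log((1 - p0) / (1 - p1)) / np.log(p1 * (1 - p0) / (p0 * (1 - p1)))
h = 5.0               # decision interval; in practice set from a target ARL

# 30 in-control samples followed by 20 out-of-control samples
counts = np.concatenate([rng.binomial(n, p0, 30), rng.binomial(n, p1, 20)])

s, signal_at = 0.0, None
for i, c in enumerate(counts):
    s = max(0.0, s + c - n * k)   # upper CUSUM of nonconforming counts
    if signal_at is None and s > h:
        signal_at = i             # first sample at which the chart signals
```

Because the out-of-control mean count (7.5) well exceeds the reference value n·k (about 4.6), the statistic drifts upward after the shift and eventually crosses h.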

  4. Identifying economic hurdles to early adoption of preventative practices: The case of trunk diseases in California winegrape vineyards

    Directory of Open Access Journals (Sweden)

    Jonathan Kaplan

    2016-12-01

Full Text Available Despite the high likelihood of infection and substantial yield losses from trunk diseases, many California practitioners wait to adopt field-tested, preventative practices (delayed pruning, double pruning, and application of pruning-wound protectants) until after disease symptoms appear in the vineyard, at around 10 years old. We evaluate net benefits from adoption of these practices before symptoms appear in young Cabernet Sauvignon vineyards and after they become apparent in mature vineyards to identify economic hurdles to early adoption. We simulate winegrape production in select counties of California and find widespread benefits from early adoption, increasing vineyards' profitable lifespans, in some cases, by close to 50%. However, hurdles may result from uncertainty about the costs and returns from adoption, labor constraints, long time lags in benefits from early adoption, growers' perceived probabilities of infection, and their discount rate. Development of extension resources communicating benefits and potential hurdles to growers would likely reduce uncertainty, increasing early adoption. Improvements in the efficacy of preventative practices, perhaps by detecting when pathogen spores are released into the vineyard, will increase early adoption. Lastly, reductions in practice costs will also increase early adoption, especially when the time it takes for adoption to pay off and infection uncertainty are influential in adoption decisions.

  5. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    Science.gov (United States)

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations are inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
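The constraint the abstract alludes to — regression coefficients must keep all fitted probabilities below one — can be made concrete with a small frequentist sketch of constrained maximum likelihood for a log-binomial model on simulated data. This is not the authors' Bayesian/WinBUGS algorithm; the data, sample size, and true risk ratio are all assumed.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)
n = 4000
x = rng.integers(0, 2, n)
p_true = 0.4 * 1.5 ** x          # log-binomial model: log p = b0 + b1 * x
y = rng.binomial(1, p_true)
X = np.column_stack([np.ones(n), x])

def nll(beta):
    # clip the linear predictor so fitted probabilities stay strictly below 1
    eta = np.clip(X @ beta, None, -1e-9)
    p = np.exp(eta)
    return -(y * eta + (1 - y) * np.log1p(-p)).sum()

# the constraint on the coefficients: X @ beta <= 0 at every design point
rows = np.unique(X, axis=0)
cons = {"type": "ineq", "fun": lambda b: -(rows @ b) - 1e-8}
res = optimize.minimize(nll, x0=[np.log(0.3), 0.0],
                        method="SLSQP", constraints=[cons])
risk_ratio = np.exp(res.x[1])    # should land near the true value of 1.5
```

Here the unconstrained optimum happens to be feasible, so the constraint is inactive; near the boundary it is exactly this constraint that makes frequentist fitting awkward and motivates the paper's reparameterization.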

  7. Determining order-up-to levels under periodic review for compound binomial (intermittent) demand

    NARCIS (Netherlands)

    Teunter, R. H.; Syntetos, A. A.; Babai, M. Z.

    2010-01-01

We propose a new method for determining order-up-to levels for intermittent demand items in a periodic review system. Contrary to existing methods, we exploit the intermittent character of demand by modelling lead time demand as a compound binomial process. In an extensive numerical study using
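A compound binomial lead-time demand of the kind described can be simulated, and an order-up-to level read off from its distribution. This simulation sketch is not the authors' analytical method; the period count, occurrence probability, demand-size distribution, and service target are all assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
L, p_occ = 4, 0.3      # review-plus-lead-time periods; demand-occurrence prob.
n_sims = 100_000

# Compound binomial demand: in each period demand occurs with prob p_occ,
# and when it occurs its size is geometric with mean 2 (assumed).
occurs = rng.random((n_sims, L)) < p_occ
sizes = rng.geometric(0.5, size=(n_sims, L))
demand = (occurs * sizes).sum(axis=1)

# Smallest order-up-to level S with >= 95% probability of covering demand
S = int(np.ceil(np.quantile(demand, 0.95)))
service = (demand <= S).mean()
```

With intermittent demand the distribution is strongly right-skewed, so S sits well above the mean lead-time demand of 2.4 units.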

  8. FDA Guidance on Biosimilar Interchangeability Elicits Diverse Views: Current and Potential Marketers Complain About Too-High Hurdles.

    Science.gov (United States)

    Barlas, Stephen

    2017-08-01

    Pharmaceutical industry sectors are at odds as the Food and Drug Administration seeks to define "interchangeability" for biosimilars. The battle lines vary by topic, but biosimilar marketers, health plans, and drugstores are generally urging lower hurdles.

  9. Environmental, Spatial, and Sociodemographic Factors Associated with Nonfatal Injuries in Indonesia

    Directory of Open Access Journals (Sweden)

    Sri Irianti

    2017-01-01

Full Text Available Background. The determinants of injuries and their reoccurrence in Indonesia are not well understood, despite their importance in the prevention of injuries. Therefore, this study seeks to investigate the environmental, spatial, and sociodemographic factors associated with the reoccurrence of injuries among Indonesian people. Methods. Data from the 2013 round of the Indonesia Baseline Health Research (IBHR 2013) were analysed using a two-part hurdle regression model. A logit regression model was chosen for the zero-hurdle part, while a zero-truncated negative binomial regression model was selected for the counts part. Odds ratio (OR) and incidence rate ratio (IRR) were the measures of association, respectively. Results. The results suggest that living in a household with a distant drinking water source, residing in slum areas, residing in Eastern Indonesia, having low educational attainment, being male, and being poorer are positively related to the likelihood of experiencing injury. Moreover, being a farmer or fisherman, having low educational attainment, and being male are positively associated with the frequency of injuries. Conclusion. This study would be useful for prioritising injury prevention programs in Indonesia based on environmental, spatial, and sociodemographic characteristics.
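The two-part hurdle structure used in the Methods (a zero-hurdle part plus a zero-truncated negative binomial counts part) can be sketched without covariates. All parameter values below are assumed for illustration; with no covariates, the maximum likelihood estimate for the hurdle part reduces to the sample zero proportion, and only the truncated count part needs numerical optimization.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)
n = 5000
pi_zero, r, p = 0.4, 3.0, 0.5   # assumed hurdle and negative binomial parameters

# Simulate hurdle data: zeros with prob pi_zero, otherwise a draw from a
# zero-truncated negative binomial (obtained by discarding zero draws).
is_zero = rng.random(n) < pi_zero
n_pos = int((~is_zero).sum())
draws = stats.nbinom.rvs(r, p, size=10 * n, random_state=rng)
y = np.zeros(n, dtype=int)
y[~is_zero] = draws[draws > 0][:n_pos]

# Part 1 (zero hurdle): with no covariates, the MLE is the zero proportion.
pi_hat = (y == 0).mean()

# Part 2 (counts): ML fit of a zero-truncated negative binomial to y > 0.
yp = y[y > 0]
def nll(theta):
    r_, p_ = theta
    ll = stats.nbinom.logpmf(yp, r_, p_) - np.log1p(-stats.nbinom.pmf(0, r_, p_))
    return -ll.sum()
res = optimize.minimize(nll, x0=[1.0, 0.5], method="L-BFGS-B",
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
r_hat, p_hat = res.x
```

In the study itself both parts carry regression structure (a logit link for the hurdle, a log link for the truncated counts), which this sketch omits.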

  10. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, J. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2013-10-15

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  11. Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers

    International Nuclear Information System (INIS)

    Ablinger, J.; Schneider, C.; Bluemlein, J.

    2013-10-01

    A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.

  12. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

Detectability of individual animals is highly variable and nearly always less than one; we used hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall, 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time since rainfall strongly decreased salamander surface activity (i.e., availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e., the probability of capture, given that an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase the reliability of population parameter estimates.
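The "standard binomial mixture model" used here as a baseline (Royle-style N-mixture: site abundance Poisson(λ), repeated counts Binomial(N, p), with N marginalized up to a cap) can be sketched on simulated data. All parameter values are assumed, and the authors' extensions for temporary emigration are not included.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
lam_true, p_true = 5.0, 0.4       # assumed mean abundance and detection prob.
n_sites, n_visits = 100, 4
N = rng.poisson(lam_true, n_sites)                          # latent abundance
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))   # repeated counts

N_MAX = 60                        # truncation point for marginalizing N
def nll(theta):
    lam = np.exp(theta[0])                       # keep lambda positive
    p = 1.0 / (1.0 + np.exp(-theta[1]))          # keep p in (0, 1)
    k = np.arange(N_MAX + 1)
    prior = stats.poisson.pmf(k, lam)            # P(N = k)
    like = np.ones((n_sites, N_MAX + 1))
    for t in range(n_visits):                    # P(y | N = k) across visits
        like *= stats.binom.pmf(y[:, [t]], k, p)
    return -np.log((like * prior).sum(axis=1)).sum()

res = optimize.minimize(nll, x0=[np.log(3.0), 0.0], method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
```

The paper's hierarchical extension splits p into availability (surface activity) and conditional detection, which this baseline confounds into a single detection probability.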

  13. Extension of shelf-life of Chickoo slices using hurdle technology

    International Nuclear Information System (INIS)

    Shirodkar, S.; Behere, A.G.; Padwal-Desai, S.R.; Lele, S.S.; Pai, J.S.

    2001-01-01

An attempt has been made to evolve a protocol to prepare shelf-stable high-moisture chickoo slices using Hurdle Technology. The process is based on a slight reduction of water activity (0.98 to 0.94) by osmosis with a 70 deg Brix sucrose syrup, lowering of pH (5.75 to 4.46) by addition of citric acid, addition of KMS, and then subjecting the resulting slices to different doses of gamma radiation ranging from 0.25-1 kGy. Radiation-processed chickoo slices remained acceptable for 8 weeks at sub-room temperature (10 ± 2 degC) and for 3 weeks at ambient temperature (28 ± 2 degC) when evaluated by a taste-test panel. (author)

  14. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    Science.gov (United States)

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
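A per-family binomial test of the kind the paper develops can be sketched with an exact binomial test: compare one family's medicinal proportion against the flora-wide rate. The counts below are hypothetical, not the Shuar data.

```python
from scipy import stats

# Hypothetical flora-wide and single-family counts (illustration only).
flora_species, flora_medicinal = 3000, 600     # 20% medicinal overall
family_species, family_medicinal = 80, 30      # one family, 37.5% medicinal

p_null = flora_medicinal / flora_species
# Exact binomial test of this family's proportion against the flora-wide rate
res = stats.binomtest(family_medicinal, family_species, p_null)
over_represented = (res.pvalue < 0.05
                    and family_medicinal / family_species > p_null)
```

Repeating this test across all families (with a multiple-testing correction) identifies the handful of families driving the higher-level patterns, as in the paper's finding that only 10 of 115 families departed from the null model.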

  15. Estimation of component failure probability from masked binomial system testing data

    International Nuclear Information System (INIS)

    Tan Zhibin

    2005-01-01

    The component failure probability estimates from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field which is similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods in considering this type of uncertainty are usually computationally intensive and not practical to solve the problem for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user defined structures. A case study illustrates an application in test case prioritization

  16. Constrained Dynamic Optimality and Binomial Terminal Wealth

    DEFF Research Database (Denmark)

    Pedersen, J. L.; Peskir, G.

    2018-01-01

with interest rate $r \in \mathbb{R}$). Letting $P_{t,x}$ denote a probability measure under which $X^u$ takes value $x$ at time $t$, we study the dynamic version of the nonlinear optimal control problem $\inf_u\, \mathrm{Var}_{t,X_t^u}(X_T^u)$, where the infimum is taken over admissible controls $u$ subject to $X_t^u \ge e$... a martingale method combined with Lagrange multipliers, we derive the dynamically optimal control $u_*^d$ in closed form and prove that the dynamically optimal terminal wealth $X_T^d$ can only take two values $g$ and $\beta$. This binomial nature of the dynamically optimal strategy stands in sharp contrast with other known portfolio selection strategies encountered in the literature. A direct comparison shows that the dynamically optimal (time-consistent) strategy outperforms the statically optimal (time-inconsistent) strategy in the problem.

  17. Modeling factors influencing the demand for emergency department services in ontario: a comparison of methods

    Directory of Open Access Journals (Sweden)

    Meaney Christopher

    2011-08-01

investigating predictors of increased emergency department utilization. Six different multiple regression models for count data were fitted to assess the influence of predictors on demand for emergency department services, including: Poisson, Negative Binomial, Zero-Inflated Poisson, Zero-Inflated Negative Binomial, Hurdle Poisson, and Hurdle Negative Binomial. Comparison of competing models was assessed by the Vuong test statistic. Results The CCHS cycle 2.1 respondents were a roughly equal mix of males (50.4%) and females (49.6%). The majority (86.2%) were young-middle aged adults between the ages of 20-64, living in predominantly urban environments (85.9%), with mid-high household incomes (92.2%), and well-educated, receiving at least a high-school diploma (84.1%). Many participants reported no chronic disease (51.9%), fell into a small number (0-5) of ambulatory diagnostic groups (62.3%), and perceived their health status as good/excellent (88.1%); however, they were projected to have high Resource Utilization Band levels of health resource utilization (68.2%). These factors were largely stable for CCHS cycle 3.1 respondents. Factors influencing demand for emergency department services varied according to the severity of triage scores at initial presentation. For example, although a non-significant predictor of the odds of emergency department utilization in high severity cases, access to a primary care physician was a statistically significant predictor of the likelihood of emergency department utilization (OR: 0.69; 95% CI OR: 0.63-0.75) and the rate of emergency department utilization (RR: 0.57; 95% CI RR: 0.50-0.66) in low severity cases. Conclusion Using a theoretically appropriate hurdle negative binomial regression model, this unique study illustrates that access to a primary care physician is an important predictor of both the odds and rate of emergency department utilization in Ontario.
Restructuring primary care services, with aims of increasing access to undersupplied populations

  18. Modeling factors influencing the demand for emergency department services in Ontario: a comparison of methods.

    Science.gov (United States)

    Moineddin, Rahim; Meaney, Christopher; Agha, Mohammad; Zagorski, Brandon; Glazier, Richard Henry

    2011-08-19

    department utilization. Six different multiple regression models for count data were fitted to assess the influence of predictors on demand for emergency department services, including: Poisson, Negative Binomial, Zero-Inflated Poisson, Zero-Inflated Negative Binomial, Hurdle Poisson, and Hurdle Negative Binomial. Comparison of competing models was assessed by the Vuong test statistic. The CCHS cycle 2.1 respondents were a roughly equal mix of males (50.4%) and females (49.6%). The majority (86.2%) were young-middle aged adults between the ages of 20-64, living in predominantly urban environments (85.9%), with mid-high household incomes (92.2%) and well-educated, receiving at least a high-school diploma (84.1%). Many participants reported no chronic disease (51.9%), fell into a small number (0-5) of ambulatory diagnostic groups (62.3%), and perceived their health status as good/excellent (88.1%); however, were projected to have high Resource Utilization Band levels of health resource utilization (68.2%). These factors were largely stable for CCHS cycle 3.1 respondents. Factors influencing demand for emergency department services varied according to the severity of triage scores at initial presentation. For example, although a non-significant predictor of the odds of emergency department utilization in high severity cases, access to a primary care physician was a statistically significant predictor of the likelihood of emergency department utilization (OR: 0.69; 95% CI OR: 0.63-0.75) and the rate of emergency department utilization (RR: 0.57; 95% CI RR: 0.50-0.66) in low severity cases. Using a theoretically appropriate hurdle negative binomial regression model this unique study illustrates that access to a primary care physician is an important predictor of both the odds and rate of emergency department utilization in Ontario. 
Restructuring primary care services, with aims of increasing access to undersupplied populations may result in decreased emergency department
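The Vuong statistic used in the two records above to compare competing count models can be sketched on simulated overdispersed data. For brevity this compares intercept-only Poisson and negative binomial fits (moment fits, no covariates) rather than full regressions; all numbers are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Overdispersed counts (variance > mean), as is typical of utilization data
y = stats.nbinom.rvs(2, 0.3, size=2000, random_state=rng)

# Poisson fit (MLE is the sample mean) and NB fit by method of moments
lam = y.mean()
m, v = y.mean(), y.var()
p = m / v                  # NB success probability
r = m * p / (1 - p)        # NB dispersion parameter

# Vuong statistic: scaled mean of pointwise log-likelihood differences
m_i = stats.nbinom.logpmf(y, r, p) - stats.poisson.logpmf(y, lam)
vuong = np.sqrt(len(y)) * m_i.mean() / m_i.std()
# vuong > 1.96 favors the negative binomial at the 5% level
```

On data this overdispersed the statistic comes out far above 1.96, mirroring the studies' preference for negative binomial over Poisson specifications.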

  19. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    Science.gov (United States)

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

    In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of several methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating the power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions for behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
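    As a sketch of the beta-binomial approach the comparison above favors, the log-likelihood of clustered proportions can be written down directly with scipy's log-beta and log-gamma functions. This is a minimal illustration, not the authors' implementation:

```python
import numpy as np
from scipy.special import betaln, gammaln


def beta_binomial_loglik(k, n, alpha, beta):
    """Log-likelihood of k successes out of n trials per subject under a
    beta-binomial(alpha, beta) model, which accommodates the overdispersion
    that plain binomial/Poisson models miss."""
    k, n = np.asarray(k), np.asarray(n)
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return np.sum(log_binom + betaln(k + alpha, n - k + beta) - betaln(alpha, beta))


# Sanity check: alpha = beta = 1 makes the beta-binomial uniform on 0..n,
# so a single observation has probability 1/(n+1).
print(beta_binomial_loglik([3], [10], 1.0, 1.0))   # log(1/11), about -2.398
```

    Maximizing this function over (alpha, beta), e.g. with `scipy.optimize.minimize` on the negative log-likelihood, gives the beta-binomial regression fit in the intercept-only case.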

  20. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.

  1. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
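    The "robust Poisson" point estimate of a relative risk for a common binary outcome can be sketched in a few lines. The simulation below is illustrative (variable names and parameter values are invented), and the sandwich-type robust standard errors used for inference are omitted; only the point estimate is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary outcome with a common event rate: risk = 0.2 * RR**x, true RR = 2.
n = 50_000
x = rng.integers(0, 2, n)
risk = 0.2 * 2.0 ** x
y = (rng.random(n) < risk).astype(float)


def poisson_regression(X, y, iters=25):
    """Poisson regression via Newton-Raphson; with a log link and a binary
    outcome, exp(beta) estimates the relative risk (the 'robust Poisson'
    point estimate; robust variance would be added for inference)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = (X * mu[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta


X = np.column_stack([np.ones(n), x])
beta = poisson_regression(X, y)
print(np.exp(beta[1]))   # close to the true relative risk of 2
```

    The log-binomial model would instead maximize the binomial likelihood with a log link, which can fail to converge when fitted probabilities approach 1 — one practical reason the robust Poisson alternative is popular.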

  2. Examining Teachers' Hurdles to `Science for All'

    Science.gov (United States)

    Southerland, Sherry; Gallard, Alejandro; Callihan, Laurie

    2011-11-01

    The goal of this research is to identify science teachers' beliefs and conceptions that play an important role in shaping their understandings of and attempts to enact inclusive science teaching practices. We examined the work products, both informal (online discussions, email exchanges) and formal (papers, unit plans, peer reviews), of 14 teachers enrolled in a master's degree course focused on diversity in science teaching and learning. These emerging understandings were member-checked via a series of interviews with a subset of these teachers. Our analysis was conducted in two stages: (1) describing the difficulties the teachers identified for themselves in their attempts to teach science to a wide range of students in their classes and (2) analyzing these self-identified barriers for underlying beliefs and conceptions that serve to prohibit or allow for the teachers' understanding and enactment of equitable science instruction. The teachers' self-identified barriers were grouped into three categories: students, broader social infrastructure, and self. The more fundamental barriers identified included teacher beliefs about the ethnocentrism of the mainstream, essentialism/individualism, and beliefs about the meritocracy of schooling. The implications of these hurdles for science teacher education are discussed.

  3. Tohoku Women's Hurdling Project: Science Angels (abstract)

    Science.gov (United States)

    Mizuki, Kotoe; Watanabe, Mayuko

    2009-04-01

    Tohoku University was the first National University to admit three women students in Japan in 1913. To support the university's traditional ``open-door'' policy, various projects have been promoted throughout the university since its foundation. A government plan, the Third-Stage Basic Plan for Science and Technology, aims to increase the ratio of women scientists up to 25% nationwide. In order to achieve this goal, the Tohoku Women's Hurdling Project, funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), was adopted in 2006. This project is threefold: support for child/family, improvement of facilities, and support for the next generation, which includes our Science Angels program. ``Science Angels'' are women PhD students appointed by the university president, with the mission to form a strong support system among each other and to become role models who inspire younger students who want to become researchers. Currently, 50 women graduate students of the natural sciences are Science Angels and are encouraged to design and deliver lectures in their areas of specialty at their alma maters. Up to now, 12 lectures have been delivered and science events for children in our community have been held, all with great success.

  4. The option to expand a project: its assessment with the binomial options pricing model

    Directory of Open Access Journals (Sweden)

    Salvador Cruz Rambaud

    Full Text Available Traditional methods of investment appraisal, like the Net Present Value, are not able to include the value of the operational flexibility of the project. In this paper, real options, and more specifically the option to expand, are assumed to be included in the project information in addition to the expected cash flows. Thus, to calculate the total value of the project, we are going to apply the methodology of the Net Present Value to the different scenarios derived from the existence of the real option to expand. Taking into account the analogy between real and financial options, the value of including an option to expand is explored by using the binomial options pricing model. In this way, estimating the value of the option to expand is a tool which facilitates the control of the uncertainty element implicit in the project. Keywords: Real options, Option to expand, Binomial options pricing model, Investment project appraisal
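    The valuation approach described above can be sketched with a Cox-Ross-Rubinstein binomial tree in which the firm may pay a fixed cost at any node to scale the project. All function and parameter names below are illustrative, not taken from the paper:

```python
import numpy as np


def expansion_option_value(V0, expand_factor, expand_cost, r, sigma, T, steps):
    """Value a project with an option to expand, via a CRR binomial tree.

    V0 is the present value of the project's expected cash flows; at any
    node the firm may pay expand_cost to scale the project by expand_factor
    (an American-style real option)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1 / u
    q = (np.exp(r * dt) - d) / (u - d)          # risk-neutral probability
    disc = np.exp(-r * dt)

    # Project values and payoffs at maturity.
    j = np.arange(steps + 1)
    V = V0 * u ** j * d ** (steps - j)
    payoff = np.maximum(V, expand_factor * V - expand_cost)

    # Backward induction, allowing expansion at every node.
    for step in range(steps - 1, -1, -1):
        j = np.arange(step + 1)
        V = V0 * u ** j * d ** (step - j)
        cont = disc * (q * payoff[1:step + 2] + (1 - q) * payoff[:step + 1])
        payoff = np.maximum(cont, expand_factor * V - expand_cost)
    return payoff[0]


total = expansion_option_value(100, 1.5, 60, r=0.05, sigma=0.3, T=2, steps=200)
print(total)          # total project value including the expansion option
print(total - 100)    # the flexibility value a plain NPV analysis would miss
```

    Setting the expansion cost prohibitively high recovers the static NPV of 100, which isolates the value added by the real option.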

  5. Nested (inverse) binomial sums and new iterated integrals for massive Feynman diagrams

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Schneider, Carsten; Bluemlein, Johannes; Raab, Clemens G.

    2014-07-01

    Nested sums containing binomial coefficients occur in the computation of massive operator matrix elements. Their associated iterated integrals lead to alphabets including radicals, for which we determined a suitable basis. We discuss algorithms for converting between sum and integral representations, mainly relying on the Mellin transform. To aid the conversion we worked out dedicated rewrite rules, based on which also some general patterns emerging in the process can be obtained.

  6. Nested (inverse) binomial sums and new iterated integrals for massive Feynman diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, Johannes; Raab, Clemens G. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2014-07-15

    Nested sums containing binomial coefficients occur in the computation of massive operator matrix elements. Their associated iterated integrals lead to alphabets including radicals, for which we determined a suitable basis. We discuss algorithms for converting between sum and integral representations, mainly relying on the Mellin transform. To aid the conversion we worked out dedicated rewrite rules, based on which also some general patterns emerging in the process can be obtained.

  7. Some findings on zero-inflated and hurdle poisson models for disease mapping.

    Science.gov (United States)

    Corpas-Burgos, Francisca; García-Donato, Gonzalo; Martinez-Beneito, Miguel A

    2018-05-27

    Zero excess in the study of geographically referenced mortality data sets has been the focus of considerable attention in the literature, with zero-inflation being the most common procedure to handle this lack of fit. Although hurdle models have also been used in disease mapping studies, their use is rarer. We show in this paper that models using particular treatments of zero excesses are often required for achieving appropriate fits in regular mortality studies since, otherwise, geographical units with low expected counts are oversmoothed. However, as also shown, an indiscriminate treatment of zero excesses may be unnecessary and problematic to implement. In this regard, we find that naive zero-inflation and hurdle models, without an explicit modeling of the probabilities of zeroes, do not fix zero-excess problems well enough and are clearly unsatisfactory. Results sharply suggest the need for an explicit modeling of the probabilities that should vary across areal units. Unfortunately, these more flexible modeling strategies can easily lead to improper posterior distributions, as we prove in several theoretical results. Those procedures have been repeatedly used in the disease mapping literature, and one should bear these issues in mind in order to propose valid models. We finally propose several valid modeling alternatives according to the results mentioned that are suitable for fitting zero excesses. We show that those proposals fix zero-excess problems and correct the mentioned oversmoothing of risks in low-populated units, depicting geographic patterns more suited to the data. Copyright © 2018 John Wiley & Sons, Ltd.
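    The hurdle construction discussed above models the probability of a zero explicitly, with positive counts following a zero-truncated distribution. A minimal sketch of the resulting hurdle Poisson pmf (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import poisson


def hurdle_poisson_pmf(k, pi0, lam):
    """pmf of a hurdle Poisson model: P(0) = pi0, and positive counts follow
    a zero-truncated Poisson(lam) weighted by (1 - pi0)."""
    k = np.asarray(k)
    truncated = poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))
    return np.where(k == 0, pi0, (1.0 - pi0) * truncated)


ks = np.arange(0, 200)
p = hurdle_poisson_pmf(ks, pi0=0.3, lam=2.5)
print(p[0])          # exactly 0.3: the zero probability is modeled directly
print(p.sum())       # sums to 1 (up to truncation of the support)
```

    Letting `pi0` vary across areal units, e.g. through its own regression, is the kind of explicit zero-probability modeling the abstract argues for, in contrast to a constant inflation weight.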

  8. Combinatorial interpretations of binomial coefficient analogues related to Lucas sequences

    OpenAIRE

    Sagan, Bruce; Savage, Carla

    2009-01-01

    Let s and t be variables. Define polynomials {n} in s, t by {0}=0, {1}=1, and {n}=s{n-1}+t{n-2} for n >= 2. If s, t are integers then the corresponding sequence of integers is called a Lucas sequence. Define an analogue of the binomial coefficients by C{n,k}={n}!/({k}!{n-k}!) where {n}!={1}{2}...{n}. It is easy to see that C{n,k} is a polynomial in s and t. The purpose of this note is to give two combinatorial interpretations for this polynomial in terms of statistics on integer partitions in...
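    The construction above can be computed directly for integer s, t; the short script below uses exact rational arithmetic so that the integrality of C{n,k} (which the combinatorial interpretations explain) is visible. The helper names are our own:

```python
from fractions import Fraction


def lucas_seq(n, s, t):
    """First n+1 terms of {k}: {0}=0, {1}=1, {k}=s*{k-1}+t*{k-2}."""
    seq = [0, 1]
    for _ in range(2, n + 1):
        seq.append(s * seq[-1] + t * seq[-2])
    return seq[:n + 1]


def lucanomial(n, k, s, t):
    """C{n,k} = {n}!/({k}!{n-k}!) built from the Lucas sequence, computed as
    a product of ratios {n-i+1}/{i} for i = 1..k."""
    seq = lucas_seq(n, s, t)
    out = Fraction(1)
    for i in range(1, k + 1):
        out *= Fraction(seq[n - i + 1], seq[i])
    return out


# s = t = 1 gives the Fibonacci numbers, so C{n,k} are the fibonomials.
print(lucanomial(6, 3, 1, 1))   # 60 = F6*F5*F4 / (F1*F2*F3)
print(lucanomial(6, 3, 2, 1))   # 2436, built from the Pell numbers
```

    Although each factor {n-i+1}/{i} need not be an integer on its own, the full product always is, which is exactly what the partition-statistic interpretations certify.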

  9. “Micro” Phraseology in Action: A Look at Fixed Binomials

    Directory of Open Access Journals (Sweden)

    Dušan Gabrovšek

    2011-05-01

    Full Text Available Multiword items in English are a motley crew, as they are not only numerous but also structurally, semantically, and functionally diverse. The paper offers a fresh look at fixed binomials, an intriguing and unexpectedly heterogeneous phraseological type prototypically consisting of two lexical components with the coordinating conjunction and – less commonly but, or, (n)either/(n)or – acting as the connecting element, as e.g. in body and soul, slowly but surely, sooner or later, neither fish nor fowl. In particular, their idiomaticity and lexicographical significance are highlighted, while the cross-linguistic perspective is only outlined.

  10. Extending the Binomial Checkpointing Technique for Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    2016-10-10

    In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional with the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massive parallel simulations and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.
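    For orientation, the classical binomial checkpointing result underlying the approach extended here (Griewank's revolve bound) states that s checkpoints and at most r forward re-evaluations of each step suffice to reverse C(s + r, s) time steps. A one-line sketch of that bound (the resilience extension itself is not reproduced):

```python
from math import comb


def max_steps(snaps, reps):
    """Maximum number of time steps whose adjoint can be computed with
    `snaps` checkpoints and at most `reps` forward re-evaluations per step:
    the binomial bound beta(s, r) = C(s + r, s) attained by revolve."""
    return comb(snaps + reps, snaps)


for s in (2, 5, 10):
    print(s, [max_steps(s, r) for r in (1, 2, 3)])
# e.g. 10 checkpoints and 3 sweeps cover C(13, 10) = 286 steps
```

    The bound grows binomially in the number of checkpoints, which is why modest memory suffices for very long adjoint computations; resilience then asks how much of this capacity survives node failures.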

  11. A Record-Holding Female Athlete's Training Loads and Their Distribution in Women's 400 m Hurdle Running

    Directory of Open Access Journals (Sweden)

    Sibel DÜNDAR

    2015-07-01

    Full Text Available The purpose of this study was to investigate the characteristics and distribution of the training loads used by a female 400 m hurdler, and their effects on performance, by training period. The athlete, who won second place at the European Champion Clubs' Cup competitions and broke her national record six times, was followed over five years; her training loads and performance times between 1982 and 1987 were analyzed. Her best 100 m, 200 m, 400 m and 400 m hurdle times and selected training loads (sprint, endurance, weightlifting) were summarized as means and standard deviations: 100 m (11.96 ± 0.17), 200 m (25.0 ± 0.38), 400 m (56.07 ± 0.91) and 400 m hurdles (60.38 ± 1.88), with training loads of 0-150 m sprint (66.85 ± 18.95 m), endurance (202.55 ± 57.56 miles) and weightlifting (151.88 ± 68.2 tons). The percentage changes in training loads and in running performance did not increase at the same rate. Regarding the relationship between best running times and training loads, a significant relationship was found only between 60 m sprint and 150-450 m sprint times.

  12. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
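    A matching-moments estimator of the kind compared above can be sketched for the case of a common number of demands per component; the formulas are the standard beta-binomial moment relations, and all numbers below are illustrative, not from the report:

```python
import numpy as np

rng = np.random.default_rng(1)


def beta_binomial_mom(k, n):
    """Matching-moments estimates of the beta prior parameters (alpha, beta)
    from failure counts k, each out of n demands, using the beta-binomial
    mean and variance: mean = n*pi, var = n*pi*(1-pi)*(1 + (n-1)*rho) with
    pi = alpha/(alpha+beta) and rho = 1/(alpha+beta+1)."""
    k = np.asarray(k, dtype=float)
    pi = k.mean() / n
    v = k.var(ddof=1)
    rho = (v / (n * pi * (1 - pi)) - 1) / (n - 1)   # intra-class correlation
    theta = 1 / rho - 1                             # alpha + beta
    return pi * theta, (1 - pi) * theta


# Simulate low-failure-probability components: p ~ Beta(0.5, 10), k ~ Bin(20, p).
alpha, beta, n = 0.5, 10.0, 20
p = rng.beta(alpha, beta, size=20_000)
k = rng.binomial(n, p)
print(beta_binomial_mom(k, n))   # roughly recovers (0.5, 10)
```

    With realistic small samples (N ≤ 10, as in the report) these estimates become highly variable, which is exactly the regime in which the report compares the five estimators.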

  13. Spiritual and ceremonial plants in North America: an assessment of Moerman's ethnobotanical database comparing Residual, Binomial, Bayesian and Imprecise Dirichlet Model (IDM) analysis.

    Science.gov (United States)

    Turi, Christina E; Murch, Susan J

    2013-07-09

    Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. Spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analysis were performed to identify over- and under-utilized families. The 4 statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analysis identified similar families as over-utilized in the medium (92-392 species) and small (<92 species) classes. The families Apiaceae, Asteraceae, Ericaceae, Pinaceae and Salicaceae were identified as significantly over-utilized as ceremonial medicines in medium and large sized families. Analysis of genera within the Apiaceae and Asteraceae suggests that the genera Ligusticum and Artemisia are good candidates for facilitating the discovery of novel psychoactive compounds. The 4 statistical approaches were not consistent in the selection of over-utilization of flora. 
Residual analysis revealed overall trends that were supported

  14. [The reentrant binomial model of nuclear anomalies growth in rhabdomyosarcoma RA-23 cell populations under increasing doses of sparsely ionizing radiation].

    Science.gov (United States)

    Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I

    2008-01-01

    Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation from 0 to 45 Gy. Internuclear bridges, nuclear protrusions and dumbbell-shaped nuclei were counted as morphological anomalies, using empirical distributions of the number of anomalies per 100 nuclei. An adequate model was found in the reentrant binomial distribution: the distribution of a sum of binomial random variables whose number of summands is itself binomial. The averages of these random variables were named the internal and external average reentrant components, respectively, and their maximum likelihood estimates were obtained. The statistical properties of these estimates were investigated by means of statistical modeling. It was found that, at an equally significant correlation between radiation dose and the average number of nuclear anomalies, in cell populations two to three cell cycles after irradiation in vivo the dose correlates significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlates with the external one.
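    The reentrant binomial construction described above is easy to simulate, because a sum of a binomial number of iid binomial variables collapses to a single binomial draw with a random number of trials. An illustrative sketch with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)


def reentrant_binomial(n_outer, p_outer, n_inner, p_inner, size):
    """Draw from the reentrant binomial distribution: a sum of
    Binomial(n_inner, p_inner) variables whose number of summands is itself
    Binomial(n_outer, p_outer)."""
    m = rng.binomial(n_outer, p_outer, size=size)   # number of summands
    # Sum of m iid Binomial(n_inner, p_inner) = Binomial(m * n_inner, p_inner).
    return rng.binomial(m * n_inner, p_inner)


x = reentrant_binomial(100, 0.2, 5, 0.3, size=200_000)
# The mean factorises into the external and internal components:
#   E[X] = (n_outer * p_outer) * (n_inner * p_inner) = 20 * 1.5 = 30
print(x.mean())
```

    This factorisation of the mean into an "external" and an "internal" component mirrors the two average reentrant components that the dose is found to correlate with in the two experimental settings.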

  15. Cost-offsets of prescription drug expenditures: data analysis via a copula-based bivariate dynamic hurdle model.

    Science.gov (United States)

    Deb, Partha; Trivedi, Pravin K; Zimmer, David M

    2014-10-01

    In this paper, we estimate a copula-based bivariate dynamic hurdle model of prescription drug and nondrug expenditures to test the cost-offset hypothesis, which posits that increased expenditures on prescription drugs are offset by reductions in other nondrug expenditures. We apply the proposed methodology to data from the Medical Expenditure Panel Survey, which have the following features: (i) the observed bivariate outcomes are a mixture of zeros and continuously measured positives; (ii) both the zero and positive outcomes show state dependence and inter-temporal interdependence; and (iii) the zeros and the positives display contemporaneous association. The point mass at zero is accommodated using a hurdle or a two-part approach. The copula-based approach to generating joint distributions is appealing because the contemporaneous association involves asymmetric dependence. The paper studies samples categorized by four health conditions: arthritis, diabetes, heart disease, and mental illness. There is evidence of greater than dollar-for-dollar cost-offsets of expenditures on prescribed drugs for relatively low levels of spending on drugs and less than dollar-for-dollar cost-offsets at higher levels of drug expenditures. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Child Schooling in Ethiopia: The Role of Maternal Autonomy.

    Science.gov (United States)

    Gebremedhin, Tesfaye Alemayehu; Mohanty, Itismita

    2016-01-01

    This paper examines the effects of maternal autonomy on child schooling outcomes in Ethiopia using a nationally representative Ethiopian Demographic and Health survey for 2011. The empirical strategy uses a Hurdle Negative Binomial Regression model to estimate years of schooling. An ordered probit model is also estimated to examine age grade distortion using a trichotomous dependent variable that captures three states of child schooling. The large sample size and the range of questions available in this dataset allow us to explore the influence of individual and household level social, economic and cultural factors on child schooling. The analysis finds statistically significant effects of maternal autonomy variables on child schooling in Ethiopia. The roles of maternal autonomy and other household-level factors on child schooling are important issues in Ethiopia, where health and education outcomes are poor for large segments of the population.

  17. Multilevel binomial logistic prediction model for malignant pulmonary nodules based on texture features of CT image

    International Nuclear Information System (INIS)

    Wang Huan; Guo Xiuhua; Jia Zhongwei; Li Hongkai; Liang Zhigang; Li Kuncheng; He Qian

    2010-01-01

    Purpose: To introduce a multilevel binomial logistic prediction model-based computer-aided diagnostic (CAD) method for small solitary pulmonary nodules (SPNs), combining patient characteristics with textural features of CT images. Materials and methods: Fourteen gray-level co-occurrence matrix textural features were obtained from 2171 benign and malignant small solitary pulmonary nodules belonging to 185 patients. A multilevel binomial logistic model was applied to gain initial insights. Results: Five texture features (Inertia, Entropy, Correlation, Difference-mean and Sum-Entropy) and patient age showed aggregation at the patient level and were statistically different (P < 0.05) between benign and malignant small solitary pulmonary nodules. Conclusion: Some gray-level co-occurrence matrix textural features are efficient descriptors of CT images of small solitary pulmonary nodules, and can benefit the diagnosis of early-stage lung cancer when combined with patient-level characteristics.

  18. Constructing Binomial Trees Via Random Maps for Analysis of Financial Assets

    Directory of Open Access Journals (Sweden)

    Antonio Airton Carneiro de Freitas

    2010-04-01

    Full Text Available Random maps can be constructed from a priori knowledge of the financial assets. The reverse problem is also addressed, i.e. from a function of an empirical stationary probability density function we set up a random map that naturally leads to an implied binomial tree, allowing the adjustment of models, including the ability to incorporate jumps. An application related to the options market is presented. It is emphasized that the quality of the model in incorporating a priori knowledge of the financial asset may be affected, for example, by the skewed vision of the analyst.

  19. Binomial tree method for pricing a regime-switching volatility stock loans

    Science.gov (United States)

    Putri, Endah R. M.; Zamani, Muhammad S.; Utomo, Daryono B.

    2018-03-01

    A binomial model with regime switching may represent the price of a stock loan, which follows a stochastic process. A stock loan is an alternative that appeals to investors seeking liquidity without selling their stock. The stock loan mechanism resembles that of an American call option, in that the holder can exercise at any time during the contract period. Given this resemblance, the price of a stock loan can be derived from the American call option model. The simulation results show the behavior of the stock loan price under regime-switching volatility with respect to various interest rates and maturities.

  20. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling, while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling. By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
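    The basic binomial mixture (N-mixture) likelihood that the authors extend can be sketched by marginalising the repeated counts over the unobserved site abundance. The toy version below is a minimal single-season model with invented data, without the temporary-emigration (availability) layer the paper adds:

```python
import numpy as np
from scipy.stats import binom, poisson


def nmixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of a basic binomial mixture (N-mixture) model:
    site abundance N_i ~ Poisson(lam), and each repeated count at site i
    is Binomial(N_i, p), with detection probability p < 1."""
    N = np.arange(n_max + 1)
    log_prior = poisson.logpmf(N, lam)
    ll = 0.0
    for y in counts:                     # rows: sites; columns: repeat surveys
        # P(y_i | N) for every candidate abundance, marginalised over N.
        log_obs = binom.logpmf(np.asarray(y)[:, None], N, p).sum(axis=0)
        ll += np.logaddexp.reduce(log_prior + log_obs)
    return ll


counts = [[3, 2, 4], [0, 1, 0], [5, 5, 3]]       # toy data: 3 sites x 3 surveys
print(nmixture_loglik(counts, lam=8.0, p=0.5))
print(nmixture_loglik(counts, lam=8.0, p=0.05))  # implausible detection fits worse
```

    Adding the paper's extension would insert an availability probability between N_i and the counts, so that detection conditional on presence and temporary emigration are no longer confounded.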

  1. Beta-Binomial Model for the Detection of Rare Mutations in Pooled Next-Generation Sequencing Experiments.

    Science.gov (United States)

    Jakaitiene, Audrone; Avino, Mariano; Guarracino, Mario Rosario

    2017-04-01

    Despite diminishing costs, next-generation sequencing (NGS) still remains expensive for studies with a large number of individuals. As a cost-saving measure, the genomes of pools containing multiple samples might be sequenced. Currently, many software tools are available for the detection of single-nucleotide polymorphisms (SNPs). Sensitivity and specificity depend on the model used and the data analyzed, indicating that all tools have room for improvement. We use a beta-binomial model to detect rare mutations in untagged pooled NGS experiments. We propose a multireference framework for pooled data that remains specific with up to two patients affected by neuromuscular disorders (NMD). We assessed the results by comparison with The Genome Analysis Toolkit (GATK), CRISP, SNVer, and FreeBayes. Our results show that the multireference approach applying the beta-binomial model is accurate in predicting rare mutations at a 0.01 fraction. Finally, we explored the concordance of mutations between the model and the other tools, checking their involvement in any NMD-related gene. We detected seven novel SNPs, for which the functional analysis produced enriched terms related to locomotion and musculature.

  2. THERAPEUTIC ANTISENSE OLIGONUCLEOTIDES AGAINST CANCER: HURDLING TO THE CLINIC

    Directory of Open Access Journals (Sweden)

    Pedro Miguel Duarte Moreno

    2014-10-01

    Full Text Available Under clinical development since the early 90's and with two successfully approved drugs (Fomivirsen and Mipomersen), oligonucleotide-based therapeutics have not yet delivered a clinical drug to the market in the cancer field. While much pre-clinical data has been generated, a lack of understanding still exists on how to efficiently tackle all the different challenges presented by cancer targeting in a clinical setting. Namely, effective drug vectorization, careful choice of target gene, and synergistic multi-gene targeting are surely decisive, while caution must be exerted to avoid potentially toxic, often misleading off-target effects. Here a brief overview is given of the nucleic acid chemistry advances that established oligonucleotide technologies as a promising therapeutic alternative, and of ongoing cancer-related clinical trials. Special attention is given to the hurdles encountered specifically in the cancer field by this class of therapeutic oligonucleotides, and a view on possible avenues for success is presented, with particular focus on the contribution of nanotechnology to the field.

  3. Covering Resilience: A Recent Development for Binomial Checkpointing

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    2016-09-12

    In terms of computing time, adjoint methods offer a very attractive alternative for computing gradient information required, e.g., for optimization purposes. However, together with this very favorable temporal complexity comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to also cover possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation, and discuss first numerical results.
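    The classical result behind binomial checkpointing (Griewank's revolve schedules) is that s checkpoints with at most r repeated forward sweeps per step can cover beta(s, r) = C(s + r, s) time steps; the resilient extension in the paper modifies this scheme. A minimal sketch of the baseline quantity and its defining recurrence:

```python
# beta(s, r): maximal number of time steps an adjoint sweep can cover
# with s checkpoints and at most r forward re-computations of each step.
# This is the non-resilient baseline, not the paper's extension.
from math import comb

def beta(s: int, r: int) -> int:
    return comb(s + r, s)

# Defining recurrence: place one checkpoint, then split the range into the
# part re-advanced from it (one fewer repeat) and the rest (one fewer slot).
for s in range(1, 6):
    for r in range(1, 6):
        assert beta(s, r) == beta(s - 1, r) + beta(s, r - 1)

print(beta(10, 3))  # 286 steps coverable with 10 checkpoints and 3 repeats
```

    The binomial growth of beta(s, r) is what makes the memory/recomputation trade-off so favorable in practice.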

  4. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurement of on-road vehicles. In this regard, the research develops a failure probability model of vehicle emission tests for passenger cars which utilizes a binomial logit model. The model focuses on failure of CO and HC emission tests for the gasoline car category and opacity emission tests for the diesel-fuel car category as dependent variables, while vehicle age, engine size, brand and type of the cars serve as independent variables. In order to imp...

  5. Ophthalmic Start-Up Chief Executive Officers' Perceptions of Development Hurdles.

    Science.gov (United States)

    Stewart, William C; Nelson, Lindsay A; Kruft, Bonnie; Stewart, Jeanette A

    2018-01-01

    To identify current challenges facing ophthalmic pharmaceutical start-ups in developing new products. Surveys were distributed to the chief executive officer (CEO) or president of ophthalmic start-ups. The survey attracted 24 responses from 78 surveys distributed (31%). The CEOs stated that a lack of financial capital (n = 18, 75%), FDA regulations (n = 6, 25%), and failure to meet clinical endpoints (n = 6, 25%) were their greatest development hurdles. Risk aversion to medicines in early development (n = 18, 75%), mergers and acquisitions reducing corporate choice for licensing agreements (n = 7, 29%), the emergence of large pharmaceutical-based venture capital funding groups (n = 12, 50%), and the failure of many large pharmaceutical companies to develop their own medicines (n = 10, 42%) were noted as recent prominent trends affecting fundraising. The study suggests that development funding, regulatory burden, and meeting clinical endpoints are the greatest development challenges faced by ophthalmic start-up CEOs. © 2017 S. Karger AG, Basel.

  6. A Bayesian equivalency test for two independent binomial proportions.

    Science.gov (United States)

    Kawasaki, Yohei; Shimokawa, Asanao; Yamada, Hiroshi; Miyaoka, Etsuo

    2016-01-01

    In clinical trials, it is often necessary to perform an equivalence study. An equivalence study requires actively demonstrating equivalence between two different drugs or treatments. Since equivalence cannot be asserted merely because it is not rejected by a superiority test, statistical methods known as equivalency tests have been suggested. These methods are based on the frequentist framework; however, there are few such methods in the Bayesian framework. Hence, this article proposes a new index that suggests the equivalency of binomial proportions, constructed within the Bayesian framework. In this study, we provide two methods for calculating the index and compare the probabilities calculated by these two methods. Moreover, we apply this index to the results of actual clinical trials to demonstrate its utility.
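    One natural Bayesian index of this kind, the posterior probability that two binomial proportions lie within a margin delta of each other, can be computed by Monte Carlo (the priors, margin, and counts below are illustrative; the paper's exact index may differ):

```python
# Posterior P(|p1 - p2| < delta) under independent Beta(1, 1) priors,
# estimated by Monte Carlo. Counts and margin are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def equivalence_index(x1, n1, x2, n2, delta=0.10, draws=200_000):
    p1 = rng.beta(1 + x1, 1 + n1 - x1, size=draws)  # Beta posterior, arm 1
    p2 = rng.beta(1 + x2, 1 + n2 - x2, size=draws)  # Beta posterior, arm 2
    return np.mean(np.abs(p1 - p2) < delta)

# 40/100 vs 43/100 successes: substantial posterior support for equivalence
print(round(float(equivalence_index(40, 100, 43, 100)), 2))
```

    A closed-form alternative would integrate the product of the two Beta densities over the band |p1 - p2| < delta, which is presumably closer to the paper's exact calculation method.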

  7. Binomial mitotic segregation of MYCN-carrying double minutes in neuroblastoma illustrates the role of randomness in oncogene amplification.

    Directory of Open Access Journals (Sweden)

    Gisela Lundberg

    2008-08-01

    Full Text Available Amplification of the oncogene MYCN in double minutes (DMs) is a common finding in neuroblastoma (NB). Because DMs lack centromeric sequences, it has been unclear how NB cells retain and amplify extrachromosomal MYCN copies during tumour development. We show that MYCN-carrying DMs in NB cells translocate from the nuclear interior to the periphery of the condensing chromatin at the transition from interphase to prophase and are preferentially located adjacent to the telomere repeat sequences of the chromosomes throughout cell division. However, DM segregation was not affected by disruption of the telosome nucleoprotein complex, and DMs readily migrated from human to murine chromatin in human/mouse cell hybrids, indicating that they do not bind to specific positional elements in human chromosomes. Scoring DM copy numbers in ana/telophase cells revealed that DM segregation could be closely approximated by a binomial random distribution. A colony-forming assay demonstrated a strong growth advantage for NB cells with high DM (MYCN) copy numbers compared to NB cells with lower copy numbers. In fact, the overall distribution of DMs in growing NB cell populations could be readily reproduced by a mathematical model assuming binomial segregation at cell division combined with a proliferative advantage for cells with high DM copy numbers. Binomial segregation at cell division explains the high degree of MYCN copy-number variability in NB. Our findings also provide a proof-of-principle for oncogene amplification through creation of genetic diversity by random events followed by Darwinian selection.
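    The paper's two ingredients, binomial segregation of DMs at division plus a proliferative advantage for high-copy cells, can be put into a toy simulation (the population size, number of generations, and fitness function below are illustrative assumptions, not the paper's calibrated model):

```python
# Toy model: DMs double before division and each copy independently goes to
# one of the two daughters (binomial segregation); cells with more copies
# divide more often. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)

cells = np.full(200, 20)                     # 200 cells, 20 DM copies each
for _ in range(12):
    # selection: division probability increases with DM copy number
    divide = rng.random(cells.size) < 0.5 + 0.4 * cells / (cells + 20.0)
    parents = cells[divide]
    # replication doubles the DMs; each copy picks a daughter at random
    d1 = rng.binomial(2 * parents, 0.5)
    cells = np.concatenate([cells[~divide], d1, 2 * parents - d1])

# segregation broadens the copy-number distribution; selection lifts the mean
print(cells.size, round(float(cells.mean()), 1), round(float(cells.std()), 1))
```

    Binomial segregation alone is mean-preserving but variance-increasing, so copy-number diversity appears even before selection acts; the selection term then skews the population toward high-copy cells, qualitatively matching the paper's conclusion.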

  8. Trojan Horse Antibiotics-A Novel Way to Circumvent Gram-Negative Bacterial Resistance?

    Science.gov (United States)

    Tillotson, Glenn S

    2016-01-01

    Antibiotic resistance has emerged as a major global health problem. In particular, gram-negative species pose a significant clinical challenge as bacteria develop or acquire more resistance mechanisms. Often, these bacteria possess multiple resistance mechanisms, thus nullifying most of the major classes of drugs. Novel approaches to this issue are urgently required. However, the challenges of developing new agents are immense. Introducing novel agents is fraught with hurdles, so adapting known antibiotic classes by altering their chemical structure could be a way forward. A chemical addition to existing antibiotics known as a siderophore could be a solution to the gram-negative resistance issue. Siderophore molecules exploit the bacterial innate need for iron ions and can thus use a Trojan Horse approach to gain access to the bacterial cell. The current approaches to using this potential method are reviewed.

  9. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    Science.gov (United States)

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually accommodated through a hierarchical approach, combining a Poisson model with gamma-distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model handles zero observations and positive counts separately, using a truncated-at-zero count distribution for the non-zero state. In practice, however, all three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is
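    The hurdle structure described here can be illustrated in its simplest form, with a Poisson rather than negative binomial count part and simulated rather than real data (all values below are assumptions for the sketch):

```python
# Two-part hurdle fit: a Bernoulli part for zero vs. non-zero, and a
# zero-truncated Poisson MLE for the positive counts. Hand-rolled sketch.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(7)

# Simulate counts whose zeros come both from a structural process and from
# the count distribution itself; lambda = 3 for the count part.
n = 2000
y = np.where(rng.random(n) < 0.6, 0, rng.poisson(3.0, n))
y_pos = y[y > 0]

# Part 1: the probability of clearing the hurdle has a closed-form MLE
pi_hat = float(np.mean(y > 0))

# Part 2: zero-truncated Poisson log-likelihood for the positive counts
def negloglik(lam):
    return -np.sum(y_pos * np.log(lam) - gammaln(y_pos + 1)
                   - lam - np.log1p(-np.exp(-lam)))

lam_hat = minimize_scalar(negloglik, bounds=(0.1, 20), method="bounded").x
print(round(pi_hat, 3), round(lam_hat, 2))  # lam_hat recovers ~3
```

    Because the two likelihood parts share no parameters, they can be maximized separately; the marginalized versions in the paper modify this conditional formulation to obtain population-averaged effects.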

  10. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    Science.gov (United States)

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
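    Wald's SPRT for a binary impairment indicator can be sketched in a few lines; the hypothesized rates p0 and p1 and the error levels below are illustrative choices, not California's regulatory values:

```python
# Wald SPRT for H0: p = p0 vs. H1: p = p1 on a stream of binary samples.
# Thresholds use the classic approximations A = (1-beta)/alpha, B = beta/(1-alpha).
import math

def sprt(samples, p0=0.05, p1=0.25, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1 ("impaired")
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        # accumulate the log-likelihood ratio one sample at a time
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "impaired", i
        if llr <= lower:
            return "not impaired", i
    return "undecided", len(samples)

print(sprt([0] * 13))        # clean run: H0 accepted after 13 samples
print(sprt([1, 1, 1, 1]))    # exceedances: H1 accepted after only 2 samples
```

    The early stopping on informative streams is exactly the source of the sample-size savings the paper reports over the fixed-sample exact binomial test.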

  11. An algorithm for sequential tail value at risk for path-independent payoffs in a binomial tree

    NARCIS (Netherlands)

    Roorda, Berend

    2010-01-01

    We present an algorithm that determines Sequential Tail Value at Risk (STVaR) for path-independent payoffs in a binomial tree. STVaR is a dynamic version of Tail-Value-at-Risk (TVaR) characterized by the property that risk levels at any moment must be in the range of risk levels later on. The
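    The static building block of such an algorithm, Tail-Value-at-Risk over the terminal layer of a binomial tree for a path-independent payoff, can be sketched as follows (the tree size, probability, and payoff are illustrative assumptions; the sequential STVaR of the paper adds a backward recursion with the range-of-risk-levels constraint on top of this):

```python
# TVaR_alpha of a path-independent payoff on a binomial tree's terminal
# layer: the expected payoff over the worst alpha-tail of outcomes.
from math import comb

def terminal_tvar(n, p, payoff, alpha):
    # terminal node with k up-moves has probability C(n,k) p^k (1-p)^(n-k)
    outcomes = sorted(
        (payoff(k), comb(n, k) * p**k * (1 - p)**(n - k)) for k in range(n + 1)
    )
    tail, acc = 0.0, 0.0
    for value, prob in outcomes:            # accumulate worst outcomes first
        take = min(prob, alpha - acc)
        if take <= 0:
            break
        tail += value * take
        acc += take
    return tail / alpha

# payoff grows with the number of up-moves; worst 10% of a 10-step tree
print(round(terminal_tvar(10, 0.5, lambda k: 100 * k, 0.10), 2))
```

    At alpha = 1 this reduces to the plain expected payoff, which gives a convenient sanity check on the tail accumulation.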

  12. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    Science.gov (United States)

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study comparing three copula models and two implementations of the standard model, the Plackett and Gauss copulas rarely perform worse and frequently perform better than the standard model. For illustration, we use an example from a meta-analysis judging the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Planning Training Loads for the 400 M Hurdles in Three-Month Mesocycles using Artificial Neural Networks.

    Science.gov (United States)

    Przednowek, Krzysztof; Iskra, Janusz; Wiktorowicz, Krzysztof; Krzeszowski, Tomasz; Maszczyk, Adam

    2017-12-01

    This paper presents a novel approach to planning training loads in hurdling using artificial neural networks. The neural models performed the task of generating loads for athletes' training for the 400 meters hurdles. All the models were calculated based on the training data of 21 Polish National Team hurdlers, aged 22.25 ± 1.96, competing between 1989 and 2012. The analysis included 144 training plans that represented different stages in the annual training cycle. The main contribution of this paper is to develop neural models for planning training loads for the entire career of a typical hurdler. In the models, 29 variables were used, where four characterized the runner and 25 described the training process. Two artificial neural networks were used: a multi-layer perceptron and a network with radial basis functions. To assess the quality of the models, the leave-one-out cross-validation method was used in which the Normalized Root Mean Squared Error was calculated. The analysis shows that the method generating the smallest error was the radial basis function network with nine neurons in the hidden layer. Most of the calculated training loads demonstrated a non-linear relationship across the entire competitive period. The resulting model can be used as a tool to assist a coach in planning training loads during a selected training period.

  14. Planning Training Loads for The 400 M Hurdles in Three-Month Mesocycles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Przednowek Krzysztof

    2017-12-01

    Full Text Available This paper presents a novel approach to planning training loads in hurdling using artificial neural networks. The neural models performed the task of generating loads for athletes’ training for the 400 meters hurdles. All the models were calculated based on the training data of 21 Polish National Team hurdlers, aged 22.25 ± 1.96, competing between 1989 and 2012. The analysis included 144 training plans that represented different stages in the annual training cycle. The main contribution of this paper is to develop neural models for planning training loads for the entire career of a typical hurdler. In the models, 29 variables were used, where four characterized the runner and 25 described the training process. Two artificial neural networks were used: a multi-layer perceptron and a network with radial basis functions. To assess the quality of the models, the leave-one-out cross-validation method was used in which the Normalized Root Mean Squared Error was calculated. The analysis shows that the method generating the smallest error was the radial basis function network with nine neurons in the hidden layer. Most of the calculated training loads demonstrated a non-linear relationship across the entire competitive period. The resulting model can be used as a tool to assist a coach in planning training loads during a selected training period.

  15. The Explicit Identities for Spectral Norms of Circulant-Type Matrices Involving Binomial Coefficients and Harmonic Numbers

    Directory of Open Access Journals (Sweden)

    Jianwei Zhou

    2014-01-01

    Full Text Available The explicit formulae of spectral norms for circulant-type matrices are investigated; the matrices considered are the circulant, skew-circulant, and g-circulant matrices, respectively. The entries are products of binomial coefficients with harmonic numbers. Explicit identities for these spectral norms are obtained. Employing these approaches, some numerical tests are listed to verify the results.
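    Identities of this kind rest on the fact that a circulant matrix is normal with eigenvalues given by the DFT of its generating vector, so its spectral norm equals the largest modulus of that DFT. A quick numerical check (with an arbitrary vector rather than the paper's binomial-coefficient/harmonic-number entries):

```python
# For a circulant matrix C generated by vector c, the eigenvalues are
# FFT(c); since C is normal, its spectral norm is max_k |FFT(c)_k|.
import numpy as np
from scipy.linalg import circulant

c = np.array([5.0, 1.0, 2.0, 4.0, 3.0])
C = circulant(c)                        # first column of C is c

spectral_norm = np.linalg.norm(C, 2)    # largest singular value, via SVD
fft_norm = np.max(np.abs(np.fft.fft(c)))

print(round(float(spectral_norm), 6), round(float(fft_norm), 6))
```

    For nonnegative entries the maximum is attained at the zero frequency, i.e. the spectral norm is simply the row sum (here 15).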

  16. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  17. Use of the Beta-Binomial Model for Central Statistical Monitoring of Multicenter Clinical Trials

    OpenAIRE

    Desmet, Lieven; Venet, David; Doffagne, Erik; Timmermans, Catherine; Legrand, Catherine; Burzykowski, Tomasz; Buyse, Marc

    2017-01-01

    As part of central statistical monitoring of multicenter clinical trial data, we propose a procedure based on the beta-binomial distribution for the detection of centers with atypical values for the probability of some event. The procedure makes no assumptions about the typical event proportion and uses the event counts from all centers to derive a reference model. The procedure is shown through simulations to have high sensitivity and high specificity if the contamination rate is small and t...

  18. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absences of species are inevitable, due either to detection faults in samples or to uninhabitable conditions for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. New discrete models accounting for zero abundances, namely zero-inflated regression (ZIP and ZINB), Hurdle-Poisson (HP), and Hurdle-Negative Binomial (HNB) models, amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performances of these four models in determining the significant biotic and abiotic factors in the occurrences of the Nematopsis legeri parasite harming the existence of Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that, on average, more than 50% of parasite counts are zero-valued, and model comparisons were based on information criteria. The results showed that the probability of the occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with ecological differences between the regions.
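    The motivation for such models shows up in a small sketch: on counts with excess zeros, a zero-inflated Poisson beats a plain Poisson decisively on AIC. The comparison below uses intercept-only, hand-rolled likelihoods on simulated data, not the study's mussel counts or any particular package:

```python
# AIC comparison of Poisson vs. zero-inflated Poisson on counts with
# ~50% structural zeros (simulated; intercept-only models).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
y = np.where(rng.random(500) < 0.5, 0, rng.poisson(4.0, 500))

def nll_poisson(params):
    lam = np.exp(params[0])
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def nll_zip(params):
    lam, pi = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    logp_zero = np.log(pi + (1 - pi) * np.exp(-lam))     # mixture zero mass
    logp_pos = np.log(1 - pi) + y * np.log(lam) - lam - gammaln(y + 1)
    return -np.sum(np.where(y == 0, logp_zero, logp_pos))

aic_pois = 2 * 1 + 2 * minimize(nll_poisson, [0.0]).fun
aic_zip = 2 * 2 + 2 * minimize(nll_zip, [0.0, 0.0]).fun
print(aic_zip < aic_pois)  # the zero-inflated model wins decisively
```

    A hurdle variant differs only in replacing the positive-count term by a zero-truncated distribution, which is why information criteria are the natural way to choose among ZIP, ZINB, HP, and HNB, as in the study.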

  19. Negative Binomial charts for monitoring high-quality processes

    NARCIS (Netherlands)

    Albers, Willem/Wim

    Good control charts for high quality processes are often based on the number of successes between failures. Geometric charts are simplest in this respect, but slow in recognizing moderately increased failure rates p. Improvement can be achieved by waiting until r > 1 failures have occurred, i.e. by

  20. Evaluation of Single or Double Hurdle Sanitizer Applications in Simulated Field or Packing Shed Operations for Cantaloupes Contaminated with Listeria monocytogenes

    Directory of Open Access Journals (Sweden)

    Cathy C. Webb

    2015-04-01

    Full Text Available Listeria monocytogenes contamination of cantaloupes has become a serious concern since contaminated cantaloupes led to a deadly outbreak in the United States in 2011. To reduce cross-contamination between cantaloupes and to reduce resident populations on contaminated melons, application of sanitizers in packing shed wash water is recommended. The sanitizing agent of 5% levulinic acid and 2% sodium dodecyl sulfate (SDS) applied as a single hurdle in either a simulated dump or dip treatment significantly reduced L. monocytogenes to lower levels at the stem scar compared to a simulated dump treatment employing 200 ppm chlorine; however, pathogen reductions on the rind tissue were not significantly different. Double-hurdle approaches employing two sequential packing plant treatments with different sanitizers revealed decreased reduction of L. monocytogenes at the stem scar. In contrast, application of sanitizers both in the field and at the packing plant led to greater L. monocytogenes population reductions than if sanitizers were applied only at the packing plant.

  1. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    International Nuclear Information System (INIS)

    Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao

    2016-01-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, a physical and statistical model of the random telegraph signal noise in the pixel source follower, based on the binomial distribution, is set up. The number of electrons captured or released by the oxide traps in unit time is described by random variables that obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics to the existing models under the effect of the channel length and the density of the oxide traps. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
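    A minimal numerical sketch of this setup, with trapped-electron counts as binomial draws and correlated double sampling as the difference of two samples (all parameter values are assumptions, not the paper's device model):

```python
# Trap occupancy per sampling interval ~ Binomial(n_traps, p_capture);
# the CDS output is the difference of the two samples, so its variance
# is 2 * n * p * (1 - p) when the samples are independent.
import numpy as np

rng = np.random.default_rng(11)

n_pixels = 100_000
n_traps = 4          # oxide traps in the source-follower channel (assumed)
p_capture = 0.3      # capture probability within one sampling interval
e_to_volts = 1.0     # signal per trapped electron, arbitrary units

sample_1 = rng.binomial(n_traps, p_capture, n_pixels)
sample_2 = rng.binomial(n_traps, p_capture, n_pixels)
cds_out = e_to_volts * (sample_2 - sample_1)   # correlated double sampling

print(round(float(cds_out.mean()), 3), round(float(cds_out.std()), 3))
```

    The independent-samples assumption is the simplification here; in the paper's model the two CDS samples are correlated through the trap state, which reshapes the output histogram.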

  2. A Genetic Analysis of Mortality in Pigs

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel

    2010-01-01

    An analysis of mortality is undertaken in two breeds of pigs: Danish Landrace and Yorkshire. Zero-inflated and standard versions of hierarchical Poisson, binomial, and negative binomial Bayesian models were fitted using Markov chain Monte Carlo (MCMC). The objectives of the study were to investigate whether there is support for genetic variation for mortality and to study the quality of fit and predictive properties of the various models. In both breeds, the model that provided the best fit to the data was the standard binomial hierarchical model. The model that performed best in terms of the ability to predict the distribution of stillbirths was the hierarchical zero-inflated negative binomial model. The best fit of the binomial hierarchical model and of the zero-inflated hierarchical negative binomial model was obtained when genetic variation was included as a parameter. For the hierarchical...

  3. Semiparametric Allelic Tests for Mapping Multiple Phenotypes: Binomial Regression and Mahalanobis Distance.

    Science.gov (United States)

    Majumdar, Arunabha; Witte, John S; Ghosh, Saurabh

    2015-12-01

    Binary phenotypes commonly arise due to multiple underlying quantitative precursors, and genetic variants may impact multiple traits in a pleiotropic manner. Hence, simultaneously analyzing such correlated traits may be more powerful than analyzing individual traits. Various genotype-level methods, e.g., MultiPhen (O'Reilly et al. []), have been developed to identify genetic factors underlying a multivariate phenotype. For univariate phenotypes, the usefulness and applicability of allele-level tests have been investigated. The test of allele frequency difference among cases and controls is commonly used for mapping case-control association. However, allelic methods for multivariate association mapping have not been studied much. In this article, we explore two allelic tests of multivariate association: one using a binomial regression model based on inverted regression of genotype on phenotype (Binomial regression-based Association of Multivariate Phenotypes [BAMP]), and the other employing the Mahalanobis distance between two sample means of the multivariate phenotype vector for the two alleles at a single-nucleotide polymorphism (Distance-based Association of Multivariate Phenotypes [DAMP]). These methods can incorporate both discrete and continuous phenotypes. Some theoretical properties of BAMP are studied. Using simulations, the power of the methods for detecting multivariate association is compared with that of the genotype-level test MultiPhen. The allelic tests yield marginally higher power than MultiPhen for multivariate phenotypes. For one or two binary traits under a recessive mode of inheritance, the allelic tests are found to be substantially more powerful. All three tests are applied to two real datasets, and the results offer some support for the simulation study. We propose a hybrid approach for testing multivariate association that implements MultiPhen when Hardy-Weinberg Equilibrium (HWE) is violated and BAMP otherwise, because the allelic approaches assume HWE.

  4. Prostate cancer metastasis-driving genes: hurdles and potential approaches in their identification

    Directory of Open Access Journals (Sweden)

    Yan Ting Chiang

    2014-08-01

    Full Text Available Metastatic prostate cancer is currently incurable. Metastasis is thought to result from changes in the expression of specific metastasis-driving genes in nonmetastatic prostate cancer tissue, leading to a cascade of activated downstream genes that set the metastatic process in motion. Such genes could potentially serve as effective therapeutic targets for improved management of the disease. They could be identified by comparative analysis of gene expression profiles of patient-derived metastatic and nonmetastatic prostate cancer tissues to pinpoint genes showing altered expression, followed by determining whether silencing of such genes can lead to inhibition of metastatic properties. Various hurdles encountered in this approach are discussed, including (i) the need for clinically relevant, nonmetastatic and metastatic prostate cancer tissues, such as xenografts of patients' prostate cancers developed via subrenal capsule grafting technology, and (ii) limitations in the currently available methodology for identification of master regulatory genes.

  5. Criterios sobre el uso de la distribución normal para aproximar la distribución binomial

    OpenAIRE

    Ortiz Pinilla, Jorge; Castro, Amparo; Neira, Tito; Torres, Pedro; Castañeda, Javier

    2012-01-01

    The two best-known empirical rules for accepting the normal approximation to the binomial distribution lack regularity in controlling the margin of error incurred when the normal approximation is used. A criterion and some formulas are proposed for controlling this error near certain values chosen for the maximum error.
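    A numerical check in the spirit of this proposal: compute the actual worst-case CDF error of the normal approximation (with continuity correction) and compare cases that an empirical rule such as "np >= 5 and n(1-p) >= 5" treats identically (the cases below are illustrative):

```python
# Worst-case absolute CDF error of the continuity-corrected normal
# approximation to Binomial(n, p), checked over all support points.
import numpy as np
from scipy.stats import binom, norm

def max_cdf_error(n, p):
    k = np.arange(n + 1)
    exact = binom.cdf(k, n, p)
    approx = norm.cdf(k + 0.5, n * p, np.sqrt(n * p * (1 - p)))
    return float(np.max(np.abs(exact - approx)))

# Both cases satisfy np >= 5, yet their errors differ markedly:
for n, p in [(100, 0.5), (100, 0.05), (1000, 0.005)]:
    print(n, p, round(max_cdf_error(n, p), 4))
```

    The symmetric case is approximated far better than the skewed ones, which is exactly the irregularity in the empirical rules that the abstract criticizes.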

  6. Hurdling barriers through market uncertainty: Case studies ininnovative technology adoption

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Christopher T.; Radspieler Jr., Anthony; Payne, Jack

    2002-08-18

    The crisis atmosphere surrounding electricity availability in California during the summer of 2001 produced two distinct phenomena in commercial energy consumption decision-making: desires to guarantee energy availability while blackouts were still widely anticipated, and desires to avoid or mitigate significant price increases when higher commercial electricity tariffs took effect. The climate of increased consideration of these factors seems to have led, in some cases, to greater willingness on the part of business decision-makers to consider highly innovative technologies. This paper examines three case studies of innovative technology adoption: retrofit of time-and-temperature signs on an office building; installation of fuel cells to supply power, heating, and cooling to the same building; and installation of a gas-fired heat pump at a microbrewery. We examine the decision process that led to adoption of these technologies. In each case, specific constraints had made more conventional energy-efficient technologies inapplicable. We examine how these barriers to technology adoption developed over time, how the California energy decision-making climate combined with the characteristics of these innovative technologies to overcome the barriers, and what the implications of hurdling these barriers are for future energy decisions within the firms.

  7. Effects of sonication and ultraviolet-C treatment as a hurdle concept on quality attributes of Chokanan mango (Mangifera indica L.) juice.

    Science.gov (United States)

    Santhirasegaram, Vicknesha; Razali, Zuliana; Somasundram, Chandran

    2015-04-01

    The growing demand for fresh-like food products has encouraged the development of hurdle technology of non-thermal processing. In this study, freshly squeezed Chokanan mango juice was treated by paired combinations of sonication (for 15 and 30 min at 25 ℃, 40 kHz frequency) and UV-C treatment (for 15 and 30 min at 25 ℃). Selected physicochemical properties, antioxidant activities, microbial inactivation and other quality parameters of combined treated juice were compared to conventional thermal treatment (at 90 ℃ for 60 s). After thermal and combined treatment, no significant changes occurred in physicochemical properties. A significant increase in extractability of carotenoids (15%), polyphenols (37%), flavonoids (35%) and enhancement in antioxidant capacity was observed after combined treatment. Thermal and combined treatment exhibited significant reduction in microbial load. Results obtained support the use of sonication and UV-C in a hurdle technology to improve the quality of Chokanan mango juice along with safety standards. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Addressing the negative impact of scholarship on dental education.

    Science.gov (United States)

    Mackenzie, R S

    1984-09-01

    Defined broadly, scholarship is the essence of academic and professional life. In several ways, however, scholarship as defined, perceived, and applied within the university has a negative impact on dental education. When scholarship is defined in terms of numbers of publications, faculty efforts are turned away from other important forms of scholarship. The review process for publication quality is unreliable, and the focus on numbers of publications encourages multiple authorship and papers of less practical significance. The proposed solution of nontenure tracks for clinicians creates its own difficulties. Broadening the definition of scholarship will encourage better clinical teaching, clinical judgment, and clinical assessment of student performance, and will result in more satisfied teachers, students, and alumni, and ultimately in better health care through improved judgments and decision processes. The perception that scholarship is a meaningless university hurdle for clinicians must be dispelled.

  9. Interrelationships Between Receiver/Relative Operating Characteristics Display, Binomial, Logit, and Bayes' Rule Probability of Detection Methodologies

    Science.gov (United States)

    Generazio, Edward R.

    2014-01-01

    Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
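As a toy illustration of the logistic-regression branch of these POD methodologies, the sketch below fits a logistic POD curve to simulated Hit-Miss data and extracts the commonly reported a90 flaw size (the size detected with 90% probability). The flaw-size range, coefficients, and data are all invented for illustration; nothing here reproduces the paper's procedures or validation requirements.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical Hit-Miss data: flaw sizes (mm) and detection outcomes (1 = hit).
rng = np.random.default_rng(0)
size = rng.uniform(0.5, 5.0, 200)
true_p = expit(2.0 * (size - 2.0))          # assumed true POD curve (illustrative)
hit = rng.binomial(1, true_p)

def nll(theta):
    """Negative log-likelihood of the logistic POD model p(a) = expit(b0 + b1*a)."""
    b0, b1 = theta
    p = expit(b0 + b1 * size)
    eps = 1e-12                              # guard against log(0)
    return -np.sum(hit * np.log(p + eps) + (1 - hit) * np.log(1 - p + eps))

fit = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x
# a90: flaw size with 90% detection probability, from logit(0.9) = b0 + b1*a90
a90 = (np.log(0.9 / 0.1) - b0) / b1
print(f"b0={b0:.2f}, b1={b1:.2f}, a90={a90:.2f} mm")
```

A fuller analysis would add a confidence bound on a90 (the usual a90/95), which is where the binomial and Bayesian machinery discussed in the record enters.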

  10. Multifactorial effects of ambient temperature, precipitation, farm management, and environmental factors determine the level of generic Escherichia coli contamination on preharvested spinach.

    Science.gov (United States)

    Park, Sangshin; Navratil, Sarah; Gregory, Ashley; Bauer, Arin; Srinath, Indumathi; Szonyi, Barbara; Nightingale, Kendra; Anciso, Juan; Jun, Mikyoung; Han, Daikwon; Lawhon, Sara; Ivanek, Renata

    2015-04-01

    A repeated cross-sectional study was conducted to identify farm management, environment, weather, and landscape factors that predict the count of generic Escherichia coli on spinach at the preharvest level. E. coli was enumerated for 955 spinach samples collected on 12 farms in Texas and Colorado between 2010 and 2012. Farm management and environmental characteristics were surveyed using a questionnaire. Weather and landscape data were obtained from National Resources Information databases. A two-part mixed-effect negative binomial hurdle model, consisting of a logistic and zero-truncated negative binomial part with farm and date as random effects, was used to identify factors affecting E. coli counts on spinach. Results indicated that the odds of a contamination event (non-zero versus zero counts) vary by state (odds ratio [OR] = 108.1). Odds of contamination decreased with implementation of hygiene practices (OR = 0.06) and increased with an increasing average precipitation amount (mm) in the past 29 days (OR = 3.5) and the application of manure (OR = 52.2). On contaminated spinach, E. coli counts increased with the average precipitation amount over the past 29 days. The relationship between E. coli count and the average maximum daily temperature over the 9 days prior to sampling followed a quadratic function with the highest bacterial count at around 24°C. These findings indicate that the odds of a contamination event in spinach are determined by farm management, environment, and weather factors. However, once the contamination event has occurred, the count of E. coli on spinach is determined by weather only. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
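The two-part model described here can be sketched in miniature: a logistic part for the contamination event (non-zero versus zero counts) and a zero-truncated negative binomial part for counts on contaminated samples. The sketch below fits both parts by maximum likelihood to simulated data with a single covariate; variable names and parameter values are illustrative and do not come from the spinach study, which also includes farm and date random effects omitted here.

```python
import numpy as np
from scipy import stats, optimize
from scipy.special import expit

# Simulated stand-in for the data: one covariate (think "precipitation"),
# zeros from a logistic hurdle, positives from a truncated NB2.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
nonzero = rng.binomial(1, expit(-0.5 + 1.0 * x))   # hurdle part
mu, alpha = np.exp(0.8 + 0.5 * x), 0.7             # NB2 mean and dispersion
r, p = 1.0 / alpha, 1.0 / (1.0 + alpha * mu)
y = np.zeros(n, dtype=int)
for i in np.nonzero(nonzero)[0]:                   # draw positives by rejection
    draw = 0
    while draw == 0:
        draw = stats.nbinom.rvs(r, p[i], random_state=rng)
    y[i] = draw

def logit_nll(b):
    """Logistic part: models P(y > 0)."""
    eta = b[0] + b[1] * x
    return -np.sum((y > 0) * eta - np.log1p(np.exp(eta)))

def trunc_nb_nll(theta):
    """Count part: zero-truncated NB2 likelihood on the positive counts only."""
    b0, b1, la = theta
    m, a = np.exp(b0 + b1 * x[y > 0]), np.exp(la)
    rr, pp = 1.0 / a, 1.0 / (1.0 + a * m)
    yy = y[y > 0]
    ll = stats.nbinom.logpmf(yy, rr, pp) - np.log1p(-stats.nbinom.pmf(0, rr, pp))
    return -np.sum(ll)

hurdle = optimize.minimize(logit_nll, [0.0, 0.0], method="BFGS")
count = optimize.minimize(trunc_nb_nll, [0.0, 0.0, 0.0], method="Nelder-Mead")
print("hurdle (logit) coefs:", hurdle.x)
print("truncated-NB coefs:", count.x[:2], "alpha:", np.exp(count.x[2]))
```

The two parts are estimated independently, which is what makes hurdle models attractive: the "whether" and "how much" questions can be answered by different covariates, exactly as in the study's findings.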

  11. proportion: A comprehensive R package for inference on single Binomial proportion and Bayesian computations

    Directory of Open Access Journals (Sweden)

    M. Subbiah

    2017-01-01

Full Text Available Extensive statistical practice has shown the importance and relevance of the inferential problem of estimating probability parameters in a binomial experiment; especially on the issues of competing intervals from frequentist, Bayesian, and Bootstrap approaches. The package written in the free R environment and presented in this paper tries to take care of the issues just highlighted, by pooling a number of widely available and well-performing methods and applying essential variations to them. A wide range of functions helps users with differing skills to estimate, evaluate, and summarize, numerically and graphically, various measures adopting either the frequentist or the Bayesian paradigm.
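The package described is for R, but the competing frequentist and Bayesian intervals it pools can be illustrated in a few lines. The sketch below computes three standard intervals for a single binomial proportion (Wald, Wilson score, and a Jeffreys-prior Bayesian interval) using textbook formulas, not the package's code.

```python
import numpy as np
from scipy import stats

def proportion_intervals(x, n, conf=0.95):
    """Wald, Wilson (score), and Jeffreys Bayesian intervals for x successes in n trials."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    phat = x / n
    wald = (phat - z * np.sqrt(phat * (1 - phat) / n),
            phat + z * np.sqrt(phat * (1 - phat) / n))
    centre = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    wilson = (centre - half, centre + half)
    post = stats.beta(x + 0.5, n - x + 0.5)        # Jeffreys Beta(1/2, 1/2) prior
    jeffreys = (post.ppf((1 - conf) / 2), post.ppf(1 - (1 - conf) / 2))
    return wald, wilson, jeffreys

for name, iv in zip(["Wald", "Wilson", "Jeffreys"], proportion_intervals(7, 50)):
    print(f"{name}: ({iv[0]:.3f}, {iv[1]:.3f})")
```

For small n or extreme proportions the Wald interval is known to behave poorly, which is one motivation for packages that expose many alternatives side by side.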

  12. Clearing up the hazy road from bench to bedside: A framework for integrating the fourth hurdle into translational medicine

    Directory of Open Access Journals (Sweden)

    John Jürgen H

    2008-09-01

    Full Text Available Abstract Background New products evolving from research and development can only be translated to medical practice on a large scale if they are reimbursed by third-party payers. Yet the decision processes regarding reimbursement are highly complex and internationally heterogeneous. This study develops a process-oriented framework for monitoring these so-called fourth hurdle procedures in the context of product development from bench to bedside. The framework is suitable both for new drugs and other medical technologies. Methods The study is based on expert interviews and literature searches, as well as an analysis of 47 websites of coverage decision-makers in England, Germany and the USA. Results Eight key steps for monitoring fourth hurdle procedures from a company perspective were determined: entering the scope of a healthcare payer; trigger of decision process; assessment; appraisal; setting level of reimbursement; establishing rules for service provision; formal and informal participation; and publication of the decision and supplementary information. Details are given for the English National Institute for Health and Clinical Excellence, the German Federal Joint Committee, Medicare's National and Local Coverage Determinations, and for Blue Cross Blue Shield companies. Conclusion Coverage determination decisions for new procedures tend to be less formalized than for novel drugs. The analysis of coverage procedures and requirements shows that the proof of patient benefit is essential. Cost-effectiveness is likely to gain importance in future.

  13. A comparison of LMC and SDL complexity measures on binomial distributions

    Science.gov (United States)

    Piqueira, José Roberto C.

    2016-02-01

The concept of complexity has been widely discussed over the last forty years, with contributions from many areas of human knowledge, including Philosophy, Linguistics, History, Biology, Physics, Chemistry and many others, and with mathematicians trying to give it a rigorous formulation. In this sense, thermodynamics meets information theory and, by using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition of complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL are satisfactory measures of complexity for many problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, to clarify how the length of the data set and the success probability of the repeated trials determine how complex the whole set is.
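Under common conventions, the LMC measure is the product of a normalized Shannon entropy H and the disequilibrium D (the squared distance of the distribution from the uniform one), while the simplest SDL variant is H(1 - H). The sketch below evaluates both on binomial distributions for a few values of n and the success probability; the normalization choices are assumptions, since the literature uses several.

```python
import numpy as np
from scipy import stats

def lmc_sdl(p):
    """LMC (H*D) and simplest SDL (H*(1-H)) complexities of a discrete distribution p."""
    N = len(p)                                  # size of the support
    q = p[p > 0]
    H = -np.sum(q * np.log(q)) / np.log(N)      # normalized Shannon entropy in [0, 1]
    D = np.sum((p - 1.0 / N) ** 2)              # disequilibrium from the uniform law
    return H * D, H * (1.0 - H)

for n in (10, 50, 200):
    for prob in (0.1, 0.5):
        pmf = stats.binom.pmf(np.arange(n + 1), n, prob)
        lmc, sdl = lmc_sdl(pmf)
        print(f"n={n:4d} p={prob}: LMC={lmc:.4f} SDL={sdl:.4f}")
```

Both measures vanish for the uniform distribution (H = 1, D = 0) and for a degenerate one (H = 0), so a binomial distribution, sitting between those extremes, always scores positive complexity.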

  14. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for 11.79% of the remaining highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for 15.87% of the remaining highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
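The fixed-parameters negative binomial baseline in such comparisons can be fitted directly by maximum likelihood. The sketch below does so for simulated crash counts with a single covariate (labelled after the vertical-curves variable purely for illustration); it implements the standard NB2 log-likelihood, not the random-parameters estimator, and the data are not the study's.

```python
import numpy as np
from scipy import optimize, special

# Illustrative crash-frequency data: NB2 counts with one covariate.
rng = np.random.default_rng(2)
n_seg = 400
curves = rng.uniform(0, 10, n_seg)                 # vertical curves per mile (invented)
mu_true = np.exp(0.5 + 0.15 * curves)
alpha_true = 0.8
y = rng.negative_binomial(1 / alpha_true, 1 / (1 + alpha_true * mu_true))

def nb2_nll(theta):
    """Negative log-likelihood of the NB2 model with mean exp(b0 + b1*curves)."""
    b0, b1, log_a = theta
    mu, a = np.exp(b0 + b1 * curves), np.exp(log_a)
    r = 1.0 / a
    ll = (special.gammaln(y + r) - special.gammaln(r) - special.gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -np.sum(ll)

fit = optimize.minimize(nb2_nll, [0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, alpha = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"b0={b0:.3f} b1={b1:.3f} alpha={alpha:.3f}")
```

A random-parameters version would let b1 vary across segments (e.g. normally distributed), which is what produces the "84.13% increase / 15.87% decrease" split reported in the record.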

  15. A Flexible, Efficient Binomial Mixed Model for Identifying Differential DNA Methylation in Bisulfite Sequencing Data

    Science.gov (United States)

    Lea, Amanda J.

    2015-01-01

    Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and because of the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data, as well as genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596

  16. Climate change induced occupational stress and reported morbidity among cocoa farmers in South-Western Nigeria

    Directory of Open Access Journals (Sweden)

    Abayomi Samuel Oyekale

    2015-05-01

Full Text Available Introduction and objective. Climate change is one of the major development hurdles in many developing countries. The health outcomes of farm households are related to climate change, which connects to several external and internal health-related issues, such as the management of occupational stressors. This study seeks, inter alia, to determine climate-related occupational stress and the factors influencing reported sick times among cocoa farmers. Material and Method. Data were collected from selected cocoa farmers in South-Western Nigeria. Descriptive statistics and Negative Binomial regression were used for data analyses. Results. The results showed that cocoa farmers were ageing, and that the majority had been cultivating cocoa for most of their years of farming. Cocoa was the primary crop for the majority of the farmers, while 92.00% of the farmers in Osun state owned the cultivated cocoa farms. The forms of reported climate change induced occupational stresses were increase in pest infestation (74.5% in Ekiti state), difficulties in weed control (82.1% in Ekiti state), missing regular times scheduled for spraying cocoa pods (45.7% in Ondo state), inability to spray cocoa effectively (58.5% in Ondo state), and reduction in cocoa yield (71.7% in Ekiti state). The Negative Binomial regression results showed that the age of farmers (0.0103), their education (-0.0226), years of cocoa farming (-0.0112), malaria infection (0.4901), missed spraying (0.5061), re-spraying of cocoa (0.2630), reduction in cocoa yield (0.20154), contact with extension (0.2411) and residence in Ondo state (-0.2311) were statistically significant (p<0.05). Conclusion. Climate change influences the farm operations of cocoa farmers with resultant occupational stresses. Efforts to assist cocoa farmers should include, among others, provision of weather forecasts and some form of insurance.

  17. Training loads of athletes specializing in the 100 meters hurdles in their annual training cycle

    Directory of Open Access Journals (Sweden)

    Radosław Muszkieta

    2016-10-01

Full Text Available The aim of the study was to present the annual training macrocycle of an athlete specializing in the 100 meters hurdles, with a detailed discussion of the structure and an analysis of the training loads in the individual periods of the annual cycle. The training variables of the athlete Marlene Morton, which allowed her to win the bronze medal at the Polish Youth Championships in Bialystok, are analyzed and discussed. The study was based on a detailed analysis of the training diary kept by the athlete and on daily interviews with her coach, Arthur Kohutek. The individual training periods and sub-periods are presented and analyzed. The summary offers conclusions that can be helpful when planning training loads for athletes who specialize in hurdle racing.

  18. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter, and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower and appreciably different from results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
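With a conjugate Gamma prior, the exact Bayesian interval for a Poisson frequency described here reduces to quantiles of the Gamma posterior, which modern numerical libraries evaluate directly — the "involved numerical analyses" of 1976 are now one library call. The sketch below computes an equal-tailed credible interval for an observed count c under a Gamma(a, rate b) prior; the prior parameters are illustrative.

```python
from scipy import stats

def poisson_bayes_interval(c, a, b, conf=0.95):
    """Equal-tailed credible interval for a Poisson rate lambda, given an
    observed count c and a conjugate Gamma(a, rate=b) prior: the posterior
    is Gamma(a + c, rate=b + 1)."""
    post = stats.gamma(a + c, scale=1.0 / (b + 1.0))
    lo = post.ppf((1 - conf) / 2)
    hi = post.ppf(1 - (1 - conf) / 2)
    return lo, hi

lo, hi = poisson_bayes_interval(c=4, a=2.0, b=1.0)
print(f"95% interval: [{lo:.3f}, {hi:.3f}]")
```

By construction the interval contains exactly 95% of the posterior probability, which is the sense in which such intervals are "exact" rather than conservative.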

  19. Non-binomial distribution of palladium Kα Ln X-ray satellites emitted after excitation by 16O ions

    International Nuclear Information System (INIS)

    Rymuza, P.; Sujkowski, Z.; Carlen, M.; Dousse, J.C.; Gasser, M.; Kern, J.; Perny, B.; Rheme, C.

    1988-02-01

The palladium Kα Ln X-ray satellite spectrum emitted after excitation by 5.4 MeV/u 16O ions has been measured. The distribution of the satellite yields is found to be significantly narrower than the binomial one. The deviations can be accounted for by assuming that the L-shell ionization is due to two uncorrelated processes: the direct ionization by impact and the electron capture to the K-shell of the projectile. 12 refs., 1 fig., 1 tab. (author)

  20. Hurdle technology applied to prickly pear beverages for inhibiting Saccharomyces cerevisiae and Escherichia coli.

    Science.gov (United States)

    García-García, R; Escobedo-Avellaneda, Z; Tejada-Ortigoza, V; Martín-Belloso, O; Valdez-Fragoso, A; Welti-Chanes, J

    2015-06-01

The effect of pH reduction (from 6·30-6·45 to 4·22-4·46) and the addition of antimicrobial compounds (sodium benzoate and potassium sorbate) on the inhibition of Saccharomyces cerevisiae and Escherichia coli in prickly pear beverages formulated with the pulp and peel of Villanueva (V, Opuntia albicarpa) and Rojo Vigor (RV, Opuntia ficus-indica) varieties during 14 days of storage at 25°C was evaluated. The RV variety presented the highest microbial inhibition. By combining pH reduction and preservatives, reductions of 6·2-log10 and 2·3-log10 for E. coli and S. cerevisiae were achieved, respectively. Due to the low reduction of S. cerevisiae, pulsed electric fields (PEF) (11-15 μs/25-50 Hz/27-36 kV cm⁻¹) were applied as another preservation factor. The combination of preservatives, pH reduction and PEF at 13-15 μs/25-50 Hz for the V variety, and 11 μs/50 Hz or 13-15 μs/25-50 Hz for RV, had a synergistic effect on S. cerevisiae inhibition, achieving at least 3·4-log10 of microbial reduction immediately after processing and more than 5-log10 by the fourth day of storage at 25°C; this reduction was maintained during 21 days of storage (P > 0·05). Hurdle technology using PEF in combination with other factors is adequate to maintain stable prickly pear beverages during 21 days/25°C. Significance and impact of the study: Prickly pear is a fruit with functional value, with a high content of nutraceuticals and antioxidant activity. Functional beverages formulated with the pulp and peel of this fruit represent an alternative for its consumption. Escherichia coli and Saccharomyces cerevisiae are micro-organisms that typically affect fruit beverage quality and safety. The food industry is looking for processing technologies that maintain quality without compromising safety. Hurdle technology, including pulsed electric fields (PEF), could be an option to achieve this. The combination of PEF, pH reduction and preservatives is an alternative to obtain safe and minimally processed

  1. Resources predicting positive and negative affect during the experience of stress: a study of older Asian Indian immigrants in the United States.

    Science.gov (United States)

    Diwan, Sadhna; Jonnalagadda, Satya S; Balaswamy, Shantha

    2004-10-01

    Using the life stress model of psychological well-being, in this study we examined risks and resources predicting the occurrence of both positive and negative affect among older Asian Indian immigrants who experienced stressful life events. We collected data through a telephone survey of 226 respondents (aged 50 years and older) in the Southeastern United States. We used hierarchical, negative binomial regression analyses to examine correlates of positive and negative affect. Different coping resources influenced positive and negative affect when stressful life events were controlled for. Being female was a common risk factor for poorer positive and increased negative affect. Satisfaction with friendships and a cultural or ethnic identity that is either bicultural or more American were predictive of greater positive affect. Greater religiosity and increased mastery were resources predicting less negative affect. Cognitive and structural interventions that increase opportunities for social integration, increasing mastery, and addressing spiritual concerns are discussed as ways of coping with stress to improve the well-being of individuals in this immigrant community.

  2. Temporary disaster debris management site identification using binomial cluster analysis and GIS.

    Science.gov (United States)

    Grzeda, Stanislaw; Mazzuchi, Thomas A; Sarkani, Shahram

    2014-04-01

    An essential component of disaster planning and preparation is the identification and selection of temporary disaster debris management sites (DMS). However, since DMS identification is a complex process involving numerous variable constraints, many regional, county and municipal jurisdictions initiate this process during the post-disaster response and recovery phases, typically a period of severely stressed resources. Hence, a pre-disaster approach in identifying the most likely sites based on the number of locational constraints would significantly contribute to disaster debris management planning. As disasters vary in their nature, location and extent, an effective approach must facilitate scalability, flexibility and adaptability to variable local requirements, while also being generalisable to other regions and geographical extents. This study demonstrates the use of binomial cluster analysis in potential DMS identification in a case study conducted in Hamilton County, Indiana. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.

  3. Entanglement and Other Nonclassical Properties of Two Two-Level Atoms Interacting with a Two-Mode Binomial Field: Constant and Intensity-Dependent Coupling Regimes

    International Nuclear Information System (INIS)

    Tavassoly, M.K.; Hekmatara, H.

    2015-01-01

In this paper, we consider the interaction between two two-level atoms and a two-mode binomial field with a general intensity-dependent coupling regime. The outlined dynamical problem has an explicit analytical solution, by which we can evaluate a few of its physical features of interest. To achieve the purpose of the paper, after choosing a particular nonlinearity function, we investigate the quantum statistics, the atomic population inversion and, finally, the linear entropy of the atom-field system, which is a good measure for the degree of entanglement. In detail, the effects of the binomial field parameters, in addition to different initial atomic states, on the temporal behavior of the mentioned quantities have been analyzed. The results show that the values of the binomial field parameters and the initial state of the two atoms influence the nonclassical effects in the obtained states, through which one can tune the nonclassicality criteria appropriately. Setting the intensity-dependent coupling function equal to 1 reduces the results to the constant coupling case. By comparing the latter case with the nonlinear regime, we observe that the nonlinearity destroys the collapse-revival pattern in the evolution of the Mandel parameter and the population inversion (which can be seen in the linear case with constant coupling); however, more typical collapse-revivals appear for the cross-correlation function in the nonlinear case. Finally, in both the linear and nonlinear regimes, the entropy remains less than (but close to) 0.5. In other words, the particular chosen nonlinearity does not critically affect the entropy of the system. (paper)
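One of the nonclassicality criteria mentioned, the Mandel parameter, is easy to evaluate for a binomial photon-number distribution: a binomial field state with parameters (M, η) gives Q = -η, i.e. sub-Poissonian statistics. The sketch below checks this numerically; it addresses only the field statistics, not the atom-field dynamics of the paper, and the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def mandel_q(pn):
    """Mandel Q = (<n^2> - <n>^2)/<n> - 1 for a photon-number distribution pn."""
    n = np.arange(len(pn))
    mean = np.sum(n * pn)
    var = np.sum(n**2 * pn) - mean**2
    return var / mean - 1.0

M, eta = 20, 0.3                               # binomial field parameters (illustrative)
pn = stats.binom.pmf(np.arange(M + 1), M, eta)
print(f"Mandel Q = {mandel_q(pn):.3f}")        # equals -eta for a binomial state
```

Q < 0 signals sub-Poissonian light; a coherent state gives Q = 0 and thermal light Q > 0, which is why the Mandel parameter is a convenient single-number nonclassicality witness.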

  4. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Directory of Open Access Journals (Sweden)

    James A Wiley

Full Text Available We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.
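The model's core ingredients — a Beta-distributed latent response probability and binomial item responses — can be simulated directly, which makes the resulting overdispersion visible. The sketch below compares the variance of simulated test scores against the plain binomial variance at the mean ability; the Beta parameters and test length are arbitrary, chosen only for illustration.

```python
import numpy as np

# Latent ability pi ~ Beta(a, b); the observed score on k dichotomous items is
# Binomial(k, pi), so marginal scores follow a beta-binomial distribution and
# are overdispersed relative to a plain binomial.
rng = np.random.default_rng(3)
a, b, k, n_resp = 4.0, 2.0, 20, 5000
pi = rng.beta(a, b, n_resp)
scores = rng.binomial(k, pi)

mean = scores.mean()
var = scores.var()
p_bar = a / (a + b)
binom_var = k * p_bar * (1 - p_bar)            # variance if pi were fixed at its mean
print(f"mean={mean:.2f}, var={var:.2f}, plain-binomial var={binom_var:.2f}")
```

The excess variance is exactly what a single-parameter binomial error model cannot capture and what the two-parameter Beta latent distribution is designed to absorb.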

  5. The binomial work-health in the transit of Curitiba city.

    Science.gov (United States)

    Tokars, Eunice; Moro, Antonio Renato Pereira; Cruz, Roberto Moraes

    2012-01-01

Working in the traffic of big cities involves a complex interaction with an environment that is often unsafe and unhealthy, an imbalance that strains the work-health binomial. The aim of this paper was to analyze the relationship between work and health of taxi drivers in Curitiba, Brazil. This cross-sectional observational study of 206 individuals used a questionnaire on the organization's profile and perception of the environment, together with direct observation of work. It was found that the majority are male, aged between 26 and 49 years, with a high school degree. They are sedentary and typically work shifts of 8 to 12 hours. They consider the profession stressful, report low back pain and are concerned about safety and accidents. 40% are smokers and consume alcoholic drink, and 65% do not have or do not use comfort devices. Risk factors present in the daily routine of taxi driving impose physical, cognitive and organizational constraints that can affect performance. It is concluded that taxi drivers must change their unhealthy lifestyle, and that more efficient action by government authorities is required for this work to be healthy and safe for all involved.

  6. Dynamic prediction of cumulative incidence functions by direct binomial regression.

    Science.gov (United States)

    Grand, Mia K; de Witte, Theo J M; Putter, Hein

    2018-03-25

    In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risk setting. These models enable the predictions to be updated as time progresses and more information becomes available, for example when a patient comes back for a follow-up visit after completing a year of treatment, the risk of death, and adverse events may have changed since treatment initiation. One approach to model the cumulative incidence function in competing risks is by direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Falls in Swedish hurdle and steeplechase racing and the condition of the track surface

    DEFF Research Database (Denmark)

    Gottlieb-Vedi, M.; Pipper, Christian Bressen

    2015-01-01

Falls in National Hunt races are a tragic part of the sport. The present study focuses on the relation between racing track conditions and the number of falls in Swedish jump racing. The assumption was that more horses fell on heavy or soft going than on good or firm going. Results from all jump races at Täby Racecourse (1992-2001) were recorded. Parameters registered were: type and number of races, racing surface and condition, total time to finish the race, number of starting horses and number of falls. In this period 212 races, corresponding to 1,556 horse starts, were registered. Falls were registered in 42 races and in total 61 horses fell. The fall frequency at horse level was significantly higher in steeplechases than in hurdle races (odds ratio = 3.69; 95% confidence interval (CI) = 1.99-6.85). For the steeplechases recorded in this study, significantly more falls were seen in long distance...

  8. The Compound Binomial Risk Model with Randomly Charging Premiums and Paying Dividends to Shareholders

    Directory of Open Access Journals (Sweden)

    Xiong Wang

    2013-01-01

Full Text Available Based on characteristics of the nonlife joint-stock insurance company, this paper presents a compound binomial risk model that randomizes the premium income per unit time and sets a threshold for paying dividends to shareholders. In this model, the insurance company obtains the insurance policy in unit time with probability and pays dividends to shareholders with probability when the surplus is no less than . We then derive the recursive formulas for the expected discounted penalty function and an asymptotic estimate for it, as well as recursive formulas and asymptotic estimates for the ruin probability and the distribution function of the deficit at ruin. Numerical examples are shown to illustrate the accuracy of the asymptotic estimates.
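The recursive structure of such models can be illustrated on the classical compound binomial model (unit premium per period, at most one claim per period with fixed probability), dropping the randomized premiums and dividend threshold of the paper. The sketch below computes a finite-horizon ruin probability by propagating the surplus distribution forward in time; all parameter values are illustrative.

```python
import numpy as np

def finite_ruin_prob(u0, q, claim_pmf, horizon):
    """Finite-horizon ruin probability in the classical compound binomial model:
    each period the insurer receives premium 1 and, with probability q, pays an
    integer claim X with claim_pmf[x] = P(X = x), x >= 1. Ruin means the surplus
    becomes negative. Computed by dynamic programming over the surplus distribution."""
    max_u = u0 + horizon + 1
    surv = np.zeros(max_u + 1)             # surv[u] = P(no ruin yet, surplus == u)
    surv[u0] = 1.0
    ruin = 0.0
    for _ in range(horizon):
        new = np.zeros_like(surv)
        for u, pu in enumerate(surv):
            if pu == 0.0:
                continue
            w = u + 1                      # surplus after premium income
            new[w] += pu * (1 - q)         # no claim this period
            for x, px in enumerate(claim_pmf):
                if px == 0.0:
                    continue
                if w - x < 0:
                    ruin += pu * q * px    # claim exceeds surplus: ruin
                else:
                    new[w - x] += pu * q * px
        surv = new
    return ruin

claim = np.array([0.0, 0.5, 0.3, 0.2])     # P(X=1)=0.5, P(X=2)=0.3, P(X=3)=0.2
print(f"psi(u=2, T=20) = {finite_ruin_prob(2, 0.4, claim, 20):.4f}")
```

The positive safety loading here (expected claim outgo 0.4 × 1.7 = 0.68 per period against premium 1) keeps the ultimate ruin probability below one; the analytic recursions in the paper serve the same quantity in closed recursive form.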

  9. About ATMPs, SOPs and GMP: The Hurdles to Produce Novel Skin Grafts for Clinical Use.

    Science.gov (United States)

    Hartmann-Fritsch, Fabienne; Marino, Daniela; Reichmann, Ernst

    2016-09-01

    The treatment of severe full-thickness skin defects represents a significant and common clinical problem worldwide. A bio-engineered autologous skin substitute would significantly reduce the problems observed with today's gold standard. Within 15 years of research, the Tissue Biology Research Unit of the University Children's Hospital Zurich has developed autologous tissue-engineered skin grafts based on collagen type I hydrogels. Those products are considered as advanced therapy medicinal products (ATMPs) and are routinely produced for clinical trials in a clean room facility following the guidelines for good manufacturing practice (GMP). This article focuses on hurdles observed for the translation of ATMPs from research into the GMP environment and clinical application. Personalized medicine in the field of rare diseases has great potential. However, ATMPs are mainly developed and promoted by academia, hospitals, and small companies, which face many obstacles such as high financial burdens.

  10. Skewness and kurtosis in the binomial model for valuing real options: an application to technology-based firms

    Directory of Open Access Journals (Sweden)

    Gastón Silverio Milanesi

    2013-01-01

    Full Text Available This paper proposes a real options valuation model based on the binomial model, using the Edgeworth transformation (Rubinstein, 1998) to incorporate higher-order stochastic moments. This is especially relevant for certain types of organizations, such as technology-based firms, for which no twin financial asset portfolio, market comparables or Gaussian stochastic processes are available. First, the formal development of the model is presented; it is then applied to the valuation of a university technology spin-off, running sensitivity analyses on skewness and kurtosis and showing their impact on project value. Finally, conclusions are drawn on the limitations and advantages of the proposed valuation approach, which retains the simplicity of the binomial model while incorporating higher-order moments for underlying assets with non-Gaussian processes.

  12. Hurdles overcome in technology transfer for AIET and Positive outcome in Indian patients

    Directory of Open Access Journals (Sweden)

    Dedeepiya V

    2012-01-01

    Full Text Available Introduction: Cell-based immunotherapies have been in practice in Japan for the past two decades, with established clinical trials on their efficacy in both solid tumours and hematological malignancies, including gastric cancer, ovarian cancer, lung cancer and liver cancer. [1,2,3,4] In India, NCRM has been providing Autologous Immune Enhancement Therapy (AIET) using autologous Natural Killer (NK) cells and activated T lymphocytes for cancer since 2005, following the established protocols practiced by the Biotherapy Institute of Japan. Significant outcomes achieved after AIET in advanced pancreatic cancer and Acute Myeloid Leukemia (AML) in Indian patients have already been reported. [5, 6] Here we report our experience in a few more patients and present the hurdles overcome and lessons learned in translating the technology from Japan to India. Case Details: Case 1: A 54-year-old female presented with Stage IV recurrent ovarian malignancy in 2010, with a history of previous surgery and chemotherapy for ovarian malignancy in June 2009. The CA-125 level was 243 U/ml. CT scan revealed lesions in the liver, spleen, along the greater curvature of the body of the stomach and in the perisplenic region, between the medial aspect of the liver and stomach, and in the right inguinal region. She was advised six cycles of chemotherapy with Doxorubicin (50 mg) and Carboplatin (450 mg) along with AIET. After proper informed consent, peripheral blood was withdrawn and in vitro expansion of the NK cells and activated T lymphocytes from the peripheral blood was performed using the protocol reported earlier. [7] The average cell count after in vitro expansion was 1.2 × 10^8 cells. Six transfusions of the in vitro expanded NK cells and activated T lymphocytes were administered, following which the CA-125 decreased to 4.7 U/mL. CT scan taken in December 2010 showed a regression of the lesions in the spleen and perisplenic peritoneal deposits, stable hepatic lesions and resolution of

  13. The impact of social grant dependency on smallholder maize producers’ market participation in South Africa: Application of the double-hurdle model

    Directory of Open Access Journals (Sweden)

    Sikhulumile Sinyolo

    2017-05-01

    Full Text Available Background: Social grants have become an increasingly popular means of improving the welfare of poor households in South Africa and beyond. While the goals of these transfers are to alleviate current poverty as well as to improve human capital capacity, they also have unintended effects, positive or negative, on beneficiary households. A question that has not been adequately addressed in the literature is the role that social grants play in efforts to commercialise smallholder farming. Aim: The aim of this study was to examine the impact of social grant dependency on the incentives of smallholder maize producers to participate in the market. Setting: The study was done in the rural areas of four districts (Harry Gwala, Umzinyathi, Umkhanyakude and Uthukela) in the KwaZulu-Natal province, South Africa. Methods: The study adopted a quantitative research design. A total of 984 households were randomly selected from the four districts, of which 774 had planted maize in the previous season; the analysis was done on these 774 farmers. The double-hurdle model was used for statistical analysis. Results: The results show a negative association between social grant dependency and market participation, suggesting that social grant-dependent households are more subsistence-oriented, producing less marketable surplus. Moreover, households with access to social grants sold smaller quantities of maize in the market, indicating reduced selling incentives. Conclusion: The study indicates that social grants reduce the incentives of smallholder farmers to commercialise their production activities. The results suggest that, while policies aimed at reducing transaction costs would increase smallholder market participation, attention should be paid to how to reduce social grants' disincentive effects. To reduce spillover effects to unintended household members, the study recommends offering part of the grant as 'in-kind support', which is

  14. A novel series of conferences tackling the hurdles confronting the translation of novel cancer immunotherapies

    Directory of Open Access Journals (Sweden)

    Bot Adrian

    2012-11-01

    Full Text Available Abstract While there has been significant progress in advancing novel immune therapies to the bedside, much more needs to be done to fully tap into the potential of the immune system. It has become increasingly clear that besides practical and operational challenges, the heterogeneity of cancer and the limited efficacy profile of current immunotherapy platforms are the two main hurdles. Nevertheless, the promising clinical data of several approaches point to a roadmap that carries the promise to significantly advance cancer immunotherapy. A new annual series sponsored by Arrowhead Publishers and Conferences aims at bringing together scientific and business leadership from academia and industry to identify, share and discuss the most current priorities in research and translation of novel immune interventions. This Editorial provides highlights of the first event held earlier this year and outlines the focus of the second meeting, to be held in 2013, which will be dedicated to stem cells and immunotherapy.

  15. Higher order antibunching in intermediate states

    International Nuclear Information System (INIS)

    Verma, Amit; Sharma, Navneet K.; Pathak, Anirban

    2008-01-01

    Since the introduction of the binomial state as an intermediate state, different intermediate states have been proposed, and different nonclassical effects have been reported in them. Until now, however, higher-order antibunching had been predicted in only one type of intermediate state, known as the shadowed negative binomial state. Recently we have shown that higher-order antibunching is not a rare phenomenon [P. Gupta, P. Pandey, A. Pathak, J. Phys. B 39 (2006) 1137]. To establish our earlier claim further, here we show that higher-order antibunching can be seen in different intermediate states, such as the binomial state, reciprocal binomial state, hypergeometric state, generalized binomial state, negative binomial state and photon-added coherent state. We study the possibility of observing higher-order sub-Poissonian photon statistics in different limits of intermediate states. The effects of different control parameters on the depth of nonclassicality are also studied in this connection, and it is shown that the depth of nonclassicality can be tuned by controlling various physical parameters.

  16. Semantic Web for Chemical Genomics – need, how to, and hurdles

    Directory of Open Access Journals (Sweden)

    Talapady Bhat

    2007-08-01

    Full Text Available The Semantic Web has often been suggested as the information technology solution to the growing problem of managing the millions of data points generated by modern science, such as nanotechnology and high-throughput screening for drugs. However, progress towards this vision envisaged by the W3C has been very limited. Here we discuss some of the obstacles to the realization of this vision and present thoughts on an alternative method to the Semantic Web that is less drastic in its requirements. This method does not require the use of RDF or Protégé, and it works in an environment currently used by chemical and biological database providers. In our method one attempts to use as many components as possible from the tools already used by the database providers, bringing in far fewer new tools and techniques than methods that use RDF or Protégé. Our method uses a standard database environment and web tools rather than RDF and Protégé to manage the user interface, and the data are held in a database rather than in RDF. This method shifts the task of building a semantic knowledge base and ontology from RDF and Protégé to a SQL-based database environment.

  17. Hurdles in tissue engineering/regenerative medicine product commercialization: a pilot survey of governmental funding agencies and the financial industry.

    Science.gov (United States)

    Bertram, Timothy A; Tentoff, Edward; Johnson, Peter C; Tawil, Bill; Van Dyke, Mark; Hellman, Kiki B

    2012-11-01

    The Tissue Engineering and Regenerative Medicine International Society of the Americas (TERMIS-AM) Industry Committee conducted a semiquantitative opinion survey in 2010 to delineate potential hurdles to commercialization perceived by the TERMIS constituency groups that participate in the stream of technology commercialization (academia, start-up companies, development-stage companies, and established companies). A significant hurdle identified consistently by each group was access to capital for advancing potential technologies into development pathways leading to commercialization. A follow-on survey was developed by the TERMIS-AM Industry Committee to evaluate the financial industry's perspectives on investing in regenerative medical technologies. The survey, composed of 15 questions, was developed and provided to 37 investment organizations in one of three sectors (governmental, private, and public investors). The survey was anonymous and confidential, with sector designation the only identifying feature of each respondent's organization. Approximately 80% of the respondents were from the public (n=14) and private (n=15) sectors. Each respondent represents one investment organization, with potentially multiple participants contributing to the organization's response. The remaining organizations represented governmental agencies (n=8). Results from this survey indicate that a high percentage ($2MM into regenerative medical companies at the different stages of a company's life cycle. Investors recognized major hurdles to this emerging industry, including the regulatory pathway, clinical translation, and reimbursement of these new products. Investments in regenerative technologies have been cyclical over the past 10-15 years, but investors recognized a 1-5-year investment period before the exit via merger and acquisition (M&A). Investors considered musculoskeletal products as their top technology choice, with companies in the clinical stage

  18. Assessing the Option to Abandon an Investment Project by the Binomial Options Pricing Model

    Directory of Open Access Journals (Sweden)

    Salvador Cruz Rambaud

    2016-01-01

    Full Text Available Usually, traditional methods for investment project appraisal, such as the net present value (hereinafter NPV), do not incorporate in their values the operational flexibility offered by a real option included in the project. In this paper, real options, and more specifically the option to abandon, are analysed as a complement to the cash flow sequence that quantifies the project. In this way, by considering the existing analogy with financial options, a mathematical expression is derived using the binomial options pricing model. This methodology provides the value of the option to abandon the project within one, two, and in general n periods. Therefore, this paper aims to be a useful tool in determining the value of the option to abandon according to its residual value, thus making it easier to control the uncertainty element within the project.
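
The backward-induction valuation this abstract describes can be sketched in a few lines. The following is an illustrative implementation of a Cox-Ross-Rubinstein binomial lattice with an abandonment (salvage) floor; all parameter values are hypothetical and not taken from the paper.

```python
import math

def abandonment_option_value(v0, salvage, r, sigma, years, steps):
    """Value a project with the option to abandon for a fixed salvage value
    on a Cox-Ross-Rubinstein binomial lattice (illustrative sketch)."""
    dt = years / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)                 # one-step discount factor
    # terminal nodes: keep the project or abandon for the salvage value
    values = [max(v0 * u**j * d**(steps - j), salvage) for j in range(steps + 1)]
    # backward induction: at each node, compare continuing with abandoning
    for step in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]), salvage)
                  for j in range(step + 1)]
    return values[0]

# Hypothetical project: worth 100 now, salvage value 80, 3 annual steps
project_with_option = abandonment_option_value(100.0, 80.0, 0.05, 0.30, 3.0, 3)
option_value = project_with_option - 100.0   # value added by the flexibility
```

With a salvage value of zero the floor never binds and the lattice returns the initial project value, which is a quick sanity check on the construction.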

  19. Effects of computational simulation on the understanding of the binomial distribution and the distribution of proportions

    OpenAIRE

    Martínez, Johanna; Yáñez, Gabriel

    2014-01-01

    This paper presents a research project, based on the instrumental approach, that describes the effect of computational simulation on the understanding of the binomial distribution and the distribution of proportions.

  20. Non-uniform approximations for sums of discrete m-dependent random variables

    OpenAIRE

    Vellaisamy, P.; Cekanavicius, V.

    2013-01-01

    Non-uniform estimates are obtained for Poisson, compound Poisson, translated Poisson, negative binomial and binomial approximations to sums of m-dependent integer-valued random variables. Estimates for the Wasserstein metric also follow easily from our results. The results are then exemplified by the approximation of the Poisson binomial distribution, 2-runs and $m$-dependent $(k_1,k_2)$-events.

  1. Analytic degree distributions of horizontal visibility graphs mapped from unrelated random series and multifractal binomial measures

    Science.gov (United States)

    Xie, Wen-Jie; Han, Rui-Qi; Jiang, Zhi-Qiang; Wei, Lijian; Zhou, Wei-Xing

    2017-08-01

    Complex networks are not only a powerful tool for the analysis of complex systems, but also a promising way to analyze time series. The horizontal visibility graph (HVG) algorithm maps time series into graphs, whose degree distributions are numerically and analytically investigated for certain time series. We derive the degree distributions of HVGs through an iterative construction process. The degree distributions of the HVG and the directed HVG for random series are derived to be exponential, which confirms the analytical results from other methods. We also obtain analytical expressions for the degree distributions of HVGs and the in-degree and out-degree distributions of directed HVGs transformed from multifractal binomial measures, which agree excellently with numerical simulations.
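
The HVG mapping itself is simple to state: two observations are linked when every observation between them lies strictly below both. Below is a minimal sketch (a direct O(n^2) construction; the series length and seed are arbitrary choices, not from the paper) that also checks the known asymptotic mean degree of 4 for an i.i.d. random series.

```python
import random
from collections import Counter

def horizontal_visibility_graph(series):
    """Edges (i, j), i < j, of the HVG: linked when every point strictly
    between them is lower than both endpoints (direct O(n^2) sketch)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

random.seed(0)
series = [random.random() for _ in range(1000)]   # i.i.d. random series
edges = horizontal_visibility_graph(series)

degree = Counter()
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
# for an i.i.d. series the mean HVG degree tends to 4 as n grows
mean_degree = sum(degree.values()) / len(series)
```

For an i.i.d. series the analytic degree distribution is exponential, P(k) = (1/3)(2/3)^(k-2), consistent with the exponential form the abstract derives.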

  2. Application of Binomial Model and Market Asset Disclaimer Methodology for Valuation of Abandon and Expand Options. The Case Study

    Directory of Open Access Journals (Sweden)

    Paweł Mielcarz

    2007-06-01

    Full Text Available The article presents a case study of the valuation of real options included in an investment project. The main goal of the article is to present the calculation and methodological issues in applying real option valuation. To do this, the binomial model and the Market Asset Disclaimer methodology are used. The project presented in the article concerns the introduction of a radio station to a new market. It includes two valuable real options: to abandon the project and to expand.

  3. Road Fatality Model Based on Over-Dispersion Data Along Federal Route F0050

    Directory of Open Access Journals (Sweden)

    Musa Wan Zahidah

    2017-01-01

    Full Text Available According to the World Health Ranking 2011, Malaysia is ranked 20th in the list of countries with the most deaths caused by road accidents. Road accidents have also been identified as a prime cause of death in Malaysia after heart disease, stroke, influenza and pneumonia. To date, research from the Malaysian Institute of Road Safety Research (MIROS) has reported that an average of 18 people are killed on Malaysian roads daily. Many kinds of models have been developed for modelling the circumstances of accidents. The most widely applied are the Poisson and Negative Binomial regression models, while the Zero-inflated Poisson and Zero-inflated Negative Binomial models are modifications of them. This study focuses on road F0050, as statistics from the Royal Malaysian Police in 2014 list F0050, from kilometre 0 to kilometre 58, as one of the high-accident roads in Malaysia. The R programming language was chosen to analyse the relationship between road fatalities and their factors (annual average daily traffic (AADT), speed, shoulder width and lane width). The Negative Binomial and Zero-inflated Negative Binomial (ZINB) models were shown to be the preferred modelling methods for this study. Significant positive relationships were also identified between road fatalities and annual average daily traffic (AADT) and lane width. These relationships can helpfully support decision-making in accident management for road F0050.
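
The overdispersion that motivates the negative binomial over the Poisson in studies like this one is easy to demonstrate: a negative binomial count can be drawn as a gamma-mixed Poisson, giving Var = mean + alpha * mean^2 rather than the Poisson's Var = mean. A self-contained sketch (the mean and dispersion values are illustrative, not the paper's estimates):

```python
import math
import random

def negative_binomial_draw(mean, alpha, rng):
    """One negative binomial count drawn as a gamma-mixed Poisson:
    lam ~ Gamma(shape=1/alpha, scale=mean*alpha), count ~ Poisson(lam),
    which gives Var = mean + alpha * mean**2 (overdispersed)."""
    lam = rng.gammavariate(1.0 / alpha, mean * alpha)
    # inverse-transform Poisson draw (adequate for small lam)
    u, count = rng.random(), 0
    p = math.exp(-lam)
    cdf = p
    while u > cdf:
        count += 1
        p *= lam / count
        cdf += p
    return count

rng = random.Random(42)
mean, alpha = 5.0, 0.8                      # illustrative values
draws = [negative_binomial_draw(mean, alpha, rng) for _ in range(20000)]
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / len(draws)
# v should land near mean + alpha*mean**2 = 25, far above the Poisson's 5
```

The empirical variance being several times the empirical mean is exactly the pattern that makes a plain Poisson regression inappropriate for accident counts.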

  4. Government capacities and stakeholders: what facilitates ehealth legislation?

    Science.gov (United States)

    2014-01-01

    Background Newly established high-technology areas such as eHealth require regulations regarding the interoperability of health information infrastructures and data protection. It is argued that government capacities as well as the extent to which public and private organizations participate in policy-making determine the level of eHealth legislation. Both explanatory factors are influenced by international organizations that provide knowledge transfer and encourage private actor participation. Methods Data analysis is based on the Global Observatory for eHealth - ATLAS eHealth country profiles, which summarize eHealth policies in 114 countries. Data analysis was carried out using two-component hurdle models, with a binomial hurdle component for zero versus positive counts and a truncated Poisson model for the positive counts. Results The analysis reveals that the participation of private organizations such as donors has negative effects on the level of eHealth legislation. The impact of public-private partnerships (PPPs) depends on the degree of government capacities already available and on democratic regimes. Democracies are more responsive to these new regulatory demands than autocracies, and find it easier to transfer knowledge out of PPPs. Government capacities increase the knowledge transfer effect of PPPs, thus leading to more eHealth legislation. Conclusions All international regimes - the WHO, the EU, and the OECD - promote PPPs in order to ensure the construction of a national eHealth infrastructure. This paper shows that the development of government capacities in the eHealth domain has to be given a higher priority than the establishment of PPPs, since the existence of some (initial) capacities is the sine qua non of further capacity building. PMID:24410989
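
A hurdle model of the kind described combines a binary part for whether any legislation exists with a zero-truncated count part for how much. A minimal intercept-only sketch of that two-part structure (the counts below are made up for illustration, not the ATLAS data):

```python
import math

def fit_intercept_hurdle(counts):
    """Intercept-only hurdle model fit (illustrative sketch): a Bernoulli
    part for zero vs. positive, and a zero-truncated Poisson part whose
    mean lam / (1 - exp(-lam)) is matched to the mean positive count."""
    positives = [c for c in counts if c > 0]
    p_positive = len(positives) / len(counts)     # MLE of the hurdle part
    target = sum(positives) / len(positives)      # mean of the positive counts
    lo, hi = 1e-9, max(target, 1.0) + 10.0        # bracket for bisection
    for _ in range(200):                          # lam/(1-e^-lam) is increasing
        lam = (lo + hi) / 2.0
        if lam / (1.0 - math.exp(-lam)) < target:
            lo = lam
        else:
            hi = lam
    return p_positive, lam

# made-up legislation counts: many zeros plus a handful of positive values
counts = [0] * 60 + [1] * 15 + [2] * 12 + [3] * 8 + [4] * 5
p_hat, lam_hat = fit_intercept_hurdle(counts)
```

The key design point of hurdle models is visible here: the two parts are estimated from disjoint pieces of the data (the zero/positive split and the positive counts), so covariates can later be attached to each part separately.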

  5. Problem drinking among Flemish students: beverage type, early drinking onset and negative personal & social consequences.

    Science.gov (United States)

    De Bruyn, Sara; Wouters, Edwin; Ponnet, Koen; Van Damme, Joris; Maes, Lea; Van Hal, Guido

    2018-02-12

    Although alcohol is socially accepted in most Western societies, studies are clear about its associated negative consequences, especially among university and college students. Studies on the relationship between alcohol-related consequences and both beverage type and drinking onset, however, are scarce, especially in a European context. The aim of this research was, therefore, twofold: (1) What is the relationship between beverage type and the negative consequences experienced by students? and (2) Are these consequences determined by early drinking onset? We will examine these questions within the context of a wide range of alcohol-related consequences. The analyses are based on data collected by the inter-university project 'Head in the clouds?', measuring alcohol use among students in Flanders (Belgium). In total, a large dataset consisting of information from 19,253 anonymously participating students was available. Negative consequences were measured using a shortened version of the Core Alcohol and Drug Survey (CADS_D). Data were analysed using negative binomial regression. Results vary depending on the type of alcohol-related consequences: Personal negative consequences occur frequently among daily beer drinkers. However, a high rate of social negative consequences was recorded for both daily beer drinkers and daily spirits drinkers. Finally, early drinking onset was significantly associated with both personal and social negative consequences, and this association was especially strong between beer and spirits drinking onset and social negative consequences. Numerous negative consequences, both personal and social, are related to frequent beer and spirits drinking. Our findings indicate a close association between drinking beer and personal negative consequences as well as between drinking beer and/or spirits and social negative consequences. 
Similarly, early drinking onset has a major influence on the rates of both personal and social negative consequences.

  6. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
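
One way to see the efficiency issue this abstract addresses: if counts are Poisson with mean exp(a + b*x), the dichotomized outcome follows a complementary log-log model, P(Y > 0) = 1 - exp(-exp(a + b*x)), not a logistic one. A quick Monte Carlo sketch of that identity (a, b and x are arbitrary illustrative values, not from the trial data):

```python
import math
import random

def poisson_draw(lam, rng):
    """Inverse-transform draw from Poisson(lam); adequate for small lam."""
    u, k = rng.random(), 0
    p = math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

# Under a Poisson count model with mean exp(a + b*x), the chance of a
# positive count is 1 - exp(-exp(a + b*x)): a complementary log-log
# model, which is one reason ordinary logistic regression on the
# dichotomized counts can lose efficiency.
a, b, x = -0.5, 1.0, 0.7
lam = math.exp(a + b * x)
closed_form = 1.0 - math.exp(-lam)

rng = random.Random(1)
n = 200000
frac_positive = sum(poisson_draw(lam, rng) > 0 for _ in range(n)) / n
```

The simulated fraction of positive counts matches the closed form, illustrating how the count-generating process pins down the dichotomous model that the shared-parameter approach exploits.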

  7. CRISPR-Cas9 for in vivo Gene Therapy: Promise and Hurdles

    Directory of Open Access Journals (Sweden)

    Wei-Jing Dai

    2016-01-01

    Full Text Available Owing to its ease of use and multiplexing nature, the genome editing tool CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats (CRISPR)-associated nuclease 9) is revolutionizing many areas of medical research, and one of the most promising areas is its gene therapy potential. Previous explorations into the therapeutic potential of CRISPR-Cas9 were mainly conducted in vitro or in animal germlines, the translatability of which is either limited (to tissues with adult stem cells amenable to culture and manipulation) or currently impermissible (due to ethical concerns). Recently, important progress has been made in this regard. Several studies have demonstrated the ability of CRISPR-Cas9 for in vivo gene therapy in adult rodent models of human genetic diseases, delivered by methods that are potentially translatable to human use. Although these recent advances represent a significant step toward the eventual application of CRISPR-Cas9 in the clinic, there are still many hurdles to overcome, such as the off-target effects of CRISPR-Cas9, the efficacy of homology-directed repair, the fitness of edited cells, the immunogenicity of therapeutic CRISPR-Cas9 components, and the efficiency, specificity, and translatability of in vivo delivery methods. In this article, we introduce the mechanisms and merits of CRISPR-Cas9 in genome editing, briefly review the applications of CRISPR-Cas9 in gene therapy explorations and highlight recent advances, and then discuss in detail the challenges to its translatability, propose possible solutions, and outline future research directions.

  8. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    OpenAIRE

    Ndayiragije, François; Van Assche, Walter

    2013-01-01

    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to $r>1$ different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Followi...

  9. Design of Ultra-Wideband Tapered Slot Antenna by Using Binomial Transformer with Corrugation

    Science.gov (United States)

    Chareonsiri, Yosita; Thaiwirot, Wanwisa; Akkaraekthalin, Prayoot

    2017-05-01

    In this paper, a tapered slot antenna (TSA) with corrugation is proposed for UWB applications. A multi-section binomial transformer is used to design the taper profile of the proposed TSA, which does not involve time-consuming optimization. A step-by-step procedure for synthesis of the step impedance values, related to the step slot widths of the taper profile, is presented. A smooth taper can be achieved by fitting a smoothing curve to the entire step slot. The design of the TSA based on this method yields a quite flat gain and a wide impedance bandwidth covering the UWB spectrum from 3.1 GHz to 10.6 GHz. To further improve the radiation characteristics, corrugation is added on both edges of the proposed TSA. The effects of different corrugation shapes on the improvement of antenna gain and front-to-back ratio (F-to-B ratio) are investigated. To demonstrate the validity of the design, prototypes of the TSA without and with corrugation are fabricated and measured. The results show good agreement between simulation and measurement.
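
The multi-section binomial transformer underlying such a taper design follows the standard maximally flat (small-reflection) rule, in which the logarithmic impedance steps are proportional to binomial coefficients. A sketch of that synthesis step (the 50-to-150 ohm values are hypothetical, not the antenna's actual impedances, and the mapping from step impedances to slot widths is the paper's separate contribution):

```python
import math

def binomial_transformer_impedances(z0, zl, sections):
    """Characteristic impedances of an N-section binomial (maximally flat)
    multi-section transformer, using the standard small-reflection rule
    ln(Z_{n+1}/Z_n) = 2**(-N) * C(N, n) * ln(ZL/Z0)."""
    big_n = sections
    z = [z0]
    for n in range(big_n + 1):
        step = 2.0 ** -big_n * math.comb(big_n, n) * math.log(zl / z0)
        z.append(z[-1] * math.exp(step))
    return z  # [Z0, Z1, ..., ZN, ZL]; Z1..ZN are the section impedances

# hypothetical example: match a 50-ohm feed to a 150-ohm load in 3 sections
impedances = binomial_transformer_impedances(50.0, 150.0, 3)
```

Because the binomial coefficients sum to 2^N, the log steps add up to ln(ZL/Z0) exactly, so the last entry closes on the load impedance; the interior values are then translated into step slot widths.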

  10. The multi-class binomial failure rate model for the treatment of common-cause failures

    International Nuclear Information System (INIS)

    Hauptmanns, U.

    1995-01-01

    The impact of common cause failures (CCF) on PSA results for NPPs is in sharp contrast with the limited quality which can be achieved in their assessment. This is due to the dearth of observations and cannot be remedied in the short run. Therefore the methods employed for calculating failure rates should be devised so as to make the best use of the few available observations on CCF. The Multi-Class Binomial Failure Rate (MCBFR) model achieves this by assigning observed failures to different classes according to their technical characteristics and applying the BFR formalism to each of these. The results are hence determined by a superposition of BFR-type expressions for each class, each with its own coupling factor. The model thus obtained flexibly reproduces the dependence of CCF rates on failure multiplicity suggested by the observed failure multiplicities. This is demonstrated by evaluating CCFs observed for combined impulse pilot valves in German NPPs. (orig.) [de]

  11. Extended moment series and the parameters of the negative binomial distribution

    International Nuclear Information System (INIS)

    Bowman, K.O.

    1984-01-01

    Recent studies indicate that, for finite sample sizes, moment estimators may be superior to maximum likelihood estimators in some regions of parameter space. In this paper a statistic based on the central moments of the sample is expanded in a Taylor series using 24 derivatives and many more terms than previous expansions. A summary algorithm is required to find meaningful approximants using the higher-order coefficients. An example is presented and a comparison between theoretical assessment and simulation results is made.

  12. Statistical Methods for Unusual Count Data: Examples From Studies of Microchimerism

    Science.gov (United States)

    Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads; Tjønneland, Anne; Gadi, Vijayakrishna K.; Nelson, J. Lee; Leisenring, Wendy

    2016-01-01

    Natural acquisition of small amounts of foreign cells or DNA, referred to as microchimerism, occurs primarily through maternal-fetal exchange during pregnancy. Microchimerism can persist long-term and has been associated with both beneficial and adverse human health outcomes. Quantitative microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per total cell equivalents tested utilizes all available data and facilitates a comparison of rates between groups. We found that both the marginalized zero-inflated Poisson model and the negative binomial model can provide unbiased and consistent estimates of the overall association of exposure or study group with microchimerism detection rates. The negative binomial model remains the more accessible of these 2 approaches; thus, we conclude that the negative binomial model may be most appropriate for analyzing quantitative microchimerism data. PMID:27769989
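
The rate formulation described above, microchimerism genome equivalents per total cell equivalents tested, together with a quick Pearson-style overdispersion check, can be sketched as follows (the counts and cell totals are invented for illustration, not study data):

```python
def rate_and_dispersion(counts, cells_tested):
    """Detection rate as total genome equivalents over total cell
    equivalents tested (the offset formulation), plus a crude Pearson
    overdispersion check: values well above 1 under a Poisson model
    suggest moving to the negative binomial."""
    rate = sum(counts) / sum(cells_tested)
    expected = [rate * m for m in cells_tested]
    pearson = sum((y - mu) ** 2 / mu for y, mu in zip(counts, expected))
    dispersion = pearson / (len(counts) - 1)
    return rate, dispersion

# invented example: zero-heavy counts with one large value, varying exposure
counts = [0, 0, 0, 1, 0, 2, 0, 0, 14, 0, 1, 0]
cells = [5e4, 8e4, 6e4, 7e4, 5e4, 9e4, 6e4, 8e4, 7e4, 5e4, 6e4, 7e4]
rate, dispersion = rate_and_dispersion(counts, cells)
```

The skew, excess zeros and occasional large value drive the dispersion statistic far above 1, which is the pattern that leads the authors to prefer the negative binomial model for these data.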

  13. Deconfinement and nuclear collisions

    International Nuclear Information System (INIS)

    Sarma, Nataraja

    1992-01-01

    Expensive experiments to detect a deconfined parton phase have been done and are being planned. In these experiments it is hoped that nuclear collisions at relativistic energies will exhibit signals of this new phase. So far all the results may be interpreted in terms of independent nucleon-nucleon interactions. These elementary collisions at very high energies are therefore worth examining, since each such collision produces a highly excited entity which emits a large number of hadrons. In the hadronic phase this results in the GS multiplicity distribution; in the parton phase, parton branching results in the popular negative binomial distribution. Though neither the GS nor the NB distribution alone agrees with the data beyond 200 GeV, the data are fitted exceedingly well by a weighted sum of the two distributions. Since the negative binomial distribution arises from the branching of partons, we interpret the increase with energy of the negative binomial component in the weighted sum as the onset of a deconfined phase. The rising cross section for the negative binomial component parallels very closely the inclusive cross section for hadron jets, which is also considered a consequence of parton branching. The consequences of this picture for nuclear collisions are discussed. (author). 8 refs., 9 figs., 3 tabs
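
The negative binomial multiplicity distribution invoked for the parton phase has the standard two-parameter form below; this sketch just verifies its normalization and mean (the values of nbar and k are illustrative, not fitted to any data):

```python
import math

def nbd_multiplicity(n, nbar, k):
    """Negative binomial multiplicity distribution with mean nbar and
    shape parameter k (taken as an integer here for simplicity):
    P(n) = C(n+k-1, n) * (nbar/k)**n / (1 + nbar/k)**(n+k)."""
    x = nbar / k
    return math.comb(n + k - 1, n) * x ** n / (1 + x) ** (n + k)

nbar, k = 10.0, 3                                   # illustrative parameters
probs = [nbd_multiplicity(n, nbar, k) for n in range(400)]
total = sum(probs)                                  # should be ~1
mean = sum(n * p for n, p in enumerate(probs))      # should be ~nbar
```

A weighted sum of this NB component and a hadronic-phase component, with the NB weight growing with energy, is the picture the abstract uses to interpret the multiplicity data.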

  14. Hurdles to the global antipolio campaign in Pakistan: an outline of the current status and future prospects to achieve a polio free world.

    Science.gov (United States)

    Khan, Tariq; Qazi, Javaria

    2013-08-01

    The Global Polio Eradication Initiative to eradicate polio completely by the year 2000 has been successful, except in three endemic and some non-endemic countries. Pakistan, one of the three endemic polio reservoirs, poses a serious threat to the success of the initiative. Currently, the Expanded Programme on Immunisation has been geared to win the race against the polio virus in Pakistan. After the remarkable decrease in polio cases from 198 in 2011 to only 58 in 2012, Pakistan seemed to be on the verge of success. However, hurdles continue to retard the campaign. The war against terrorism, misconceptions about the polio vaccine, religious misinterpretations, frustration among vaccinators, lack of awareness, social considerations, natural calamities, inaccessibility, and inefficient vaccines, among other factors, are continually eroding the foundations of the worldwide initiative in the country. Weak health management lies at the heart of the majority of these challenges. Stricter policies, well-managed and supervised plans and strategic actions, risk analysis and enhanced communication may help deliver the final blow to the polio virus in the country. Analysis suggested that some literature is available on the challenges to polio elimination, yet to date there is not a single publication that considers all the possible hurdles in one manuscript. This paper sorts out the breaches that hamper the goal of eliminating polio from Pakistan. We have evaluated all the possible barriers and explained them from a perspective that will help develop area-specific strategies against the polio virus and thus eradicate it from the world.

  15. CPO e MID: alguns resultados obtidos em meninos brancos, de 8 a 12 anos DMF and RLM: some results in white children, eight to twelve years old

    Directory of Open Access Journals (Sweden)

    José Maria Pacheco de Souza

    1973-06-01

    Full Text Available Data on DMF and on the attack index of the right lower first permanent molar (RLM) are presented. The distributions of RLM, and of DMF given RLM = 0, are studied by fitting the binomial and negative binomial distributions, respectively; the goodness-of-fit test indicated a good fit in 10 of the 12 cases tested.

  16. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process by comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, the Akaike information criterion, and Pearson residuals, the hurdle model was here judged to be the most appropriate.
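The kind of model comparison described above can be illustrated on simulated data. The sketch below (all numbers invented) fits an intercept-only Poisson model and an intercept-only hurdle negative binomial model by maximum likelihood and compares their AICs; on data with genuine excess zeros the hurdle model should win clearly.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(0)
n = 500
# simulate hurdle-NB data: structural zeros plus zero-truncated NB positives
is_pos = rng.random(n) < 0.6
k_true, mu_true = 1.0, 3.0
p_true = k_true / (k_true + mu_true)
draws = nbinom.rvs(k_true, p_true, size=3 * n, random_state=rng)
y = np.zeros(n, dtype=int)
y[is_pos] = draws[draws > 0][: is_pos.sum()]   # rejection-sample the truncated part

# Poisson fit: the MLE of the rate is just the sample mean
lam = y.mean()
ll_pois = poisson.logpmf(y, lam).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# Hurdle NB fit: binary zero part + zero-truncated NB part, fit separately
pi0 = (y == 0).mean()
yp = y[y > 0]

def nll_trunc(params):
    # negative log-likelihood of the zero-truncated NB for the positive counts
    k, mu = np.exp(params)
    p = k / (k + mu)
    return -(nbinom.logpmf(yp, k, p) - np.log1p(-nbinom.pmf(0, k, p))).sum()

fit = minimize(nll_trunc, [0.0, 1.0], method="Nelder-Mead")
ll_hurdle = (y == 0).sum() * np.log(pi0) + (y > 0).sum() * np.log(1 - pi0) - fit.fun
aic_hurdle = 2 * 3 - 2 * ll_hurdle
print(aic_pois, aic_hurdle)   # lower AIC indicates the better-fitting model
```

The hurdle likelihood factors into a binary part and a truncated count part, which is why the two pieces can be fitted independently.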

  17. Imported dengue cases, weather variation and autochthonous dengue incidence in Cairns, Australia.

    Directory of Open Access Journals (Sweden)

    Xiaodong Huang

    Full Text Available BACKGROUND: Dengue fever (DF) outbreaks often arise from imported DF cases in Cairns, Australia. Few studies have incorporated imported DF cases in the estimation of the relationship between weather variability and incidence of autochthonous DF. The study aimed to examine the impact of weather variability on autochthonous DF infection after accounting for imported DF cases, and then to explore the possibility of developing an empirical forecast system. METHODOLOGY/PRINCIPAL FINDINGS: Data on weather variables, notified DF cases (including those acquired locally and overseas) and population size in Cairns were supplied by the Australian Bureau of Meteorology, Queensland Health, and the Australian Bureau of Statistics. A time-series negative-binomial hurdle model was used to assess the effects of imported DF cases and weather variability on autochthonous DF incidence. Our results showed that monthly autochthonous DF incidence was significantly associated with monthly imported DF cases (relative risk (RR): 1.52; 95% confidence interval (CI): 1.01-2.28), monthly minimum temperature (°C) (RR: 2.28; 95% CI: 1.77-2.93), monthly relative humidity (%) (RR: 1.21; 95% CI: 1.06-1.37), monthly rainfall (mm) (RR: 0.50; 95% CI: 0.31-0.81) and the monthly standard deviation of daily relative humidity (%) (RR: 1.27; 95% CI: 1.08-1.50). In the zero-hurdle component, the occurrence of monthly autochthonous DF cases was significantly associated with monthly minimum temperature (odds ratio (OR): 1.64; 95% CI: 1.01-2.67). CONCLUSIONS/SIGNIFICANCE: Our research suggests that monthly autochthonous DF incidence was strongly positively associated with monthly imported DF cases, local minimum temperature and inter-month relative humidity variability in Cairns. Moreover, DF outbreaks in Cairns were driven by imported DF cases only under favourable seasons and weather conditions.
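Relative risks of the kind reported above come from exponentiating count-model coefficients, with the 95% CI obtained by exponentiating the coefficient ± 1.96 standard errors. A minimal sketch (the coefficient and standard error below are invented, chosen only to land near the imported-cases RR in the abstract):

```python
import numpy as np

beta, se = 0.42, 0.21          # hypothetical log-RR and its standard error
rr = np.exp(beta)              # point estimate of the relative risk
lo = np.exp(beta - 1.96 * se)  # lower 95% confidence limit
hi = np.exp(beta + 1.96 * se)  # upper 95% confidence limit
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

Because the model is log-linear, a CI that excludes RR = 1 corresponds to a coefficient CI that excludes 0.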

  18. A binomial truncation function proposed for the second-moment approximation of tight-binding potential and application in the ternary Ni-Hf-Ti system

    International Nuclear Information System (INIS)

    Li, J H; Dai, X D; Wang, T L; Liu, B X

    2007-01-01

    We propose a two-parameter binomial truncation function for the second-moment approximation of the tight-binding (TB-SMA) interatomic potential and illustrate in detail the procedure of constructing the potentials for binary and ternary transition metal systems. For the ternary Ni-Hf-Ti system, the lattice constants, cohesive energies, elastic constants and bulk moduli of six binary compounds, i.e. L12 Ni3Hf, NiHf3, Ni3Ti, NiTi3, Hf3Ti and HfTi3, are first acquired by ab initio calculations and then employed to derive the binomial-truncated TB-SMA Ni-Hf-Ti potential. Applying the ab initio derived Ni-Hf-Ti potential, the lattice constants, cohesive energies, elastic constants and bulk moduli of another six binary compounds, i.e. D03 NiHf3, NiTi3 and HfTi3, and B2 NiHf, NiTi and HfTi, and of two ternary compounds, i.e. C1b NiHfTi and L21 Ni2HfTi, are calculated, respectively. It is found that, for the eight binary compounds studied, the calculated lattice constants and cohesive energies are in excellent agreement with those directly acquired from ab initio calculations, and that the elastic constants and bulk moduli calculated from the potential are also qualitatively consistent with the results from ab initio calculations

  19. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
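The binomial, heteroscedastic SIMEX variant developed in the paper is more involved, but the core simulation-extrapolation idea, adding progressively more measurement error and extrapolating back to the zero-error case, can be illustrated with the standard additive-error version. This is a sketch with invented data, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_u = 2000, 0.5
x = rng.normal(0, 1, n)                        # true predictor (unobserved)
w = x + rng.normal(0, sigma_u, n)              # observed with additive error
yv = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)     # outcome; true slope is 2

def slope(pred, resp):
    return np.polyfit(pred, resp, 1)[0]        # OLS slope

naive = slope(w, yv)                           # attenuated toward zero
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    # inflate the measurement error variance by (1 + lam), refit, average
    sims = [slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n), yv)
            for _ in range(50)]
    est.append(np.mean(sims))
coef = np.polyfit(lams, est, 2)                # quadratic extrapolant in lambda
simex = np.polyval(coef, -1.0)                 # extrapolate to lambda = -1 (no error)
print(naive, simex)                            # SIMEX estimate is closer to 2
```

The λ = -1 extrapolation corresponds to subtracting out the original error variance, which is why the SIMEX estimate partially undoes the attenuation seen in the naive fit.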

  20. A comparison of Poisson-one-inflated power series distributions for ...

    African Journals Online (AJOL)

    A class of Poisson-one-inflated power series distributions (the binomial, the Poisson, the negative binomial, the geometric, the log-series and the misrecorded Poisson) are proposed for modeling rural out-migration at the household level. The probability mass functions of the mixture distributions are derived and fitted to the ...

  1. Modeling forest fire occurrences using count-data mixed models in Qiannan autonomous prefecture of Guizhou province in China.

    Science.gov (United States)

    Xiao, Yundan; Zhang, Xiongqing; Ji, Ping

    2015-01-01

    Forest fires can cause catastrophic damage to natural resources, and they can also bring serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire. Effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be dispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as the Poisson model, negative binomial (NB) model, zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (the spring fire season) for the years 1996 through 2007, we introduced random effects to the count data models. The results indicated that the prediction achieved through the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and that the mixed-effects model performed better than the corresponding fixed-effects model in forest fire forecasting. Besides, among all meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence.

  2. Estimating the burden of malaria in Senegal: Bayesian zero-inflated binomial geostatistical modeling of the MIS 2008 data.

    Directory of Open Access Journals (Sweden)

    Federica Giardina

    Full Text Available The Research Center for Human Development in Dakar (CRDH), with the technical assistance of ICF Macro and the National Malaria Control Programme (NMCP), conducted in 2008/2009 the Senegal Malaria Indicator Survey (SMIS), the first nationally representative household survey collecting parasitological data and malaria-related indicators. In this paper, we present spatially explicit parasitaemia risk estimates and the number of infected children below 5 years of age. Geostatistical zero-inflated binomial (ZIB) models were developed to take into account the large number of zero-prevalence survey locations (70%) in the data. Bayesian variable selection methods were incorporated within a geostatistical framework in order to choose the best set of environmental and climatic covariates associated with parasitaemia risk. Model validation confirmed that the ZIB model had a better predictive ability than the standard binomial analogue. Markov chain Monte Carlo (MCMC) methods were used for inference. Several insecticide-treated net (ITN) coverage indicators were calculated to assess the effectiveness of interventions. After adjusting for climatic and socio-economic factors, the presence of at least one ITN per every two household members and living in urban areas reduced the odds of parasitaemia by 86% and 81%, respectively. Posterior estimates of the ORs related to the wealth index show a decreasing trend with the quintiles. Infection odds appear to increase with age. The population-adjusted prevalence ranges from 0.12% in Thillé-Boubacar to 13.1% in Dabo. Tambacounda has the highest population-adjusted predicted prevalence (8.08%), whereas the region with the highest estimated number of infected children under the age of 5 years is Kolda (13,940). The contemporary map and estimates of malaria burden identify the priority areas for future control interventions and provide baseline information for monitoring and evaluation. Zero-inflated formulations are more appropriate for prevalence data with a large proportion of zero-prevalence locations.

  3. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    Science.gov (United States)

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size; in post-market drug and vaccine safety surveillance, that is not the case. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
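The convex/concave contrast above can be made concrete with the power family of spending functions, a(t) = α·t^ρ over the information fraction t, where ρ > 1 gives a convex shape and ρ < 1 a concave one. A minimal sketch (the ρ values are illustrative, not from the paper):

```python
import numpy as np

alpha = 0.05
t = np.linspace(0.1, 1.0, 10)          # ten equally spaced interim looks
convex = alpha * t ** 3.0              # clinical-trial-style shape (rho > 1)
concave = alpha * t ** 0.5             # surveillance-style shape (rho < 1)
# per-look alpha increments: how much Type I error each look is allowed to spend
inc_convex = np.diff(np.concatenate([[0.0], convex]))
inc_concave = np.diff(np.concatenate([[0.0], concave]))
print(inc_convex[0], inc_concave[0])   # concave spends far more alpha early
```

Both shapes exhaust the same total α by the final look; the concave one front-loads the spending, which is what allows early signal detection in surveillance.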

  4. Spatial modeling of rat bites and prediction of rat infestation in Peshawar valley using binomial kriging with logistic regression.

    Science.gov (United States)

    Ali, Asad; Zaidi, Farrah; Fatima, Syeda Hira; Adnan, Muhammad; Ullah, Saleem

    2018-03-24

    In this study, we propose to develop a geostatistical computational framework to model the distribution of rat bite infestation of epidemic proportions in Peshawar valley, Pakistan. Two species, Rattus norvegicus and Rattus rattus, are suspected to spread the infestation. The framework combines the strengths of a maximum entropy algorithm and binomial kriging with logistic regression to spatially model the distribution of infestation and to determine the individual role of environmental predictors in modeling the distribution trends. Our results demonstrate the significance of a number of social and environmental factors in rat infestations, such as (i) high human population density; (ii) greater dispersal ability of rodents due to the availability of better connectivity routes such as roads; and (iii) temperature and precipitation influencing rodent fecundity and life cycle.

  5. Statistical analysis of exacerbation rates in COPD: TRISTAN and ISOLDE revisited

    DEFF Research Database (Denmark)

    Keene, O N; Calverley, P M A; Jones, P W

    2008-01-01

    different analysis methods, we have reanalysed data from two large studies which, among other objectives, investigated the effectiveness of inhaled corticosteroids in reducing COPD exacerbation rates. Using the negative binomial model to reanalyse data from the TRISTAN and ISOLDE studies, the overall estimates of exacerbation rates on each treatment arm are higher and the confidence intervals for comparisons between treatments are wider, but the overall conclusions of TRISTAN and ISOLDE regarding reduction of exacerbations remain unchanged. The negative binomial approach appears to provide a better fit...

  6. Clan structure as a source of intermittency in high energy multihadron production

    International Nuclear Information System (INIS)

    Hove, L. van

    1989-01-01

    Recently the systematic study of charged hadron multiplicity distributions in high energy hadronic and leptonic collisions has revealed the general occurrence of negative binomial regularities in symmetric CM rapidity windows and of intermittency in very small windows in the central region. The negative binomial regularities imply a particular form of clustering called clan structure, which embodies all Mueller multiparticle correlations. We show that this clan structure leads to intermittency when the simplest assumptions are made on the behaviour of the clans in very small windows. (orig.)

  7. Property rating potentials and hurdles: what can be done to boost property rating in Ghana?

    Directory of Open Access Journals (Sweden)

    Elias Danyi Kuusaana

    2015-06-01

    Full Text Available Population growth in many of Africa's towns and cities has outpaced local authority capacity to provide efficient management, infrastructure and financing. There is already debate over the capability and capacity of urban local governments to provide basic services to a growing population, due to budget constraints and inability to raise the required local-level revenue. This paper looks at how the potential of property rating can be harnessed to generate the bulk of the revenue needed for local-level development, despite the huge default rates across Ghana. Focusing on Wa Municipality as a case study, the study finds that the major hurdles to property rating are poor property data systems, political interference, non-enforcement of the law, budget shortfalls in financing revaluation, insufficient staffing, and insufficient technical capacity of the few staff available at the municipal valuation and rating divisions. Despite these constraints, however, field data still indicate that property rating in Ghana, and especially in Wa Municipality, can generate up to 30% of the local government revenue needed. This is conditional on addressing the current challenges and improving resources for the rating and valuation units. There is extensive non-payment of property rates in Wa Municipality due to lack of awareness of the purpose of this tax, of how to pay and of the penalties for defaulting payees.

  8. Effects of gamma irradiation and silver nano particles on microbiological characteristics of saffron, using hurdle technology.

    Science.gov (United States)

    Hamid Sales, E; Motamedi Sedeh, F; Rajabifar, S

    2012-03-01

    Saffron, a plant from the Iridaceae family, is the world's most expensive spice. Gamma irradiation and silver nanoparticles, whose uses are gradually increasing worldwide, have positive effects on preventing decay by sterilizing microorganisms and by improving safety without compromising the nutritional properties and sensory quality of foods. In the present study, the combined effects of gamma irradiation and silver nanoparticle packaging on the microbial contamination of saffron were considered during storage; a combination of hurdles can ensure the stability and microbial safety of foods. For this purpose, saffron samples were packaged in polyethylene films containing up to 300 ppm nanosilver particles as antimicrobial agents and then irradiated in a cobalt-60 irradiator (gamma cell PX30, dose rate 0.55 Gy/s) to 0, 1, 2, 3 and 4 kGy at room temperature. The antimicrobial activities against total aerobic mesophilic bacteria, Enterobacteriaceae, Escherichia coli and Clostridium perfringens were higher in the irradiated samples, demonstrating an inhibition zone for their growth. Irradiation of the saffron samples packaged in polyethylene films with nanosilver particles showed the best results for decreasing microbial contamination at 2 kGy; for polyethylene films without silver nanoparticles, it was 4 kGy.

  9. Asimetría y curtosis en el modelo binomial para valorar opciones reales: caso de aplicación para empresas de base tecnológica

    OpenAIRE

    Gastón Silverio Milanesi

    2013-01-01

    This paper proposes a real options valuation model based on the binomial model, using the Edgeworth transformation (Rubinstein, 1998) to incorporate higher-order stochastic moments, especially for certain types of organizations, such as technology-based firms, for which no portfolio of twin financial assets or market comparables is available and the stochastic processes are non-Gaussian. First, the formal development of the model is presented; then its applicatio...
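For background, the plain Cox-Ross-Rubinstein binomial tree that the Edgeworth-transformed model extends can be sketched as follows. This is the standard textbook construction, not the paper's implementation; the parameter values are illustrative.

```python
import math

def crr_call(S, K, r, sigma, T, steps):
    """European call priced on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))        # up factor
    d = 1.0 / u                                # down factor
    q = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at the terminal nodes
    vals = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):                     # backward induction to the root
        vals = [disc * (q * vals[j + 1] + (1 - q) * vals[j])
                for j in range(len(vals) - 1)]
    return vals[0]

print(round(crr_call(100, 100, 0.05, 0.2, 1.0, 500), 3))
```

With many steps the tree converges to the Black-Scholes price (about 10.45 for these inputs); the Edgeworth transformation modifies the node probabilities so the implied terminal distribution can carry skewness and kurtosis.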

  10. Modelo binomial para la valoración de empresas y los efectos de la deuda: escudo fiscal y liquidación de la firma

    OpenAIRE

    Gastón Silverio Milanesi

    2014-01-01

    This paper proposes a binomial model for valuing firms, projecting and conditioning on scenarios of continuity or liquidation of the firm. The model is based on real options theory to estimate the value of the firm, which results from an explicit balancing of the advantages and risks of taking on debt. The paper is structured as follows: first, the introduction and development of the theoretical model; then it is illustrated with an application case, comparing the resul...

  11. Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments

    Science.gov (United States)

    2014-01-01

    Background Whole-genome bisulfite sequencing currently provides the highest-precision view of the epigenome, with quantitative information about populations of cells down to single nucleotide resolution. Several studies have demonstrated the value of this precision: meaningful features that correlate strongly with biological functions can be found associated with only a few CpG sites. Understanding the role of DNA methylation, and more broadly the role of DNA accessibility, requires that methylation differences between populations of cells are identified with extreme precision and in complex experimental designs. Results In this work we investigated the use of beta-binomial regression as a general approach for modeling whole-genome bisulfite data to identify differentially methylated sites and genomic intervals. Conclusions The regression-based analysis can handle medium- and large-scale experiments where it becomes critical to accurately model variation in methylation levels between replicates and account for influence of various experimental factors like cell types or batch effects. PMID:24962134
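The reason the beta-binomial is a natural choice here is that at each CpG site the methylated-read count is binomial given a methylation level, but that level itself varies across cells and replicates; marginally this produces more variance than a plain binomial. A small sketch with invented coverage and Beta parameters:

```python
from scipy.stats import betabinom, binom

n = 30                       # read coverage at one CpG site (illustrative)
a, b = 8.0, 2.0              # Beta(8, 2) prior: mean methylation level 0.8
m = a / (a + b)

var_bb = betabinom.var(n, a, b)   # beta-binomial (overdispersed) variance
var_bin = binom.var(n, m)         # binomial variance at the same mean
print(var_bb, var_bin)            # beta-binomial variance is much larger
```

A binomial model applied to such data would understate the replicate-to-replicate variance and thus overstate the significance of methylation differences, which is exactly what the regression framework above is designed to avoid.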

  12. Hurdle technology to preserve the 'Golden' papaya post harvest quality

    International Nuclear Information System (INIS)

    Molinari, Andrea Cristina Fialho

    2007-01-01

    differences among papayas submitted to different treatments. A synergism was verified in the combination of techniques, with the best results obtained from the association of CP + PEabs + gamma irradiation at 0.4 kGy, which reached a total storage period of 35 days. Thus, that is the post-harvest hurdle technology recommended for exporting 'Golden' papayas to markets with quarantine restrictions against fruit flies (author)

  13. A new phase modulated binomial-like selective-inversion sequence for solvent signal suppression in NMR.

    Science.gov (United States)

    Chen, Johnny; Zheng, Gang; Price, William S

    2017-02-01

    A new 8-pulse phase-modulated binomial-like selective inversion pulse sequence, dubbed '8PM', was developed by optimizing the nutation and phase angles of the constituent radio-frequency pulses so that the inversion profile resembled a target profile. Suppression profiles were obtained for both the 8PM and W5 based excitation sculpting sequences with equal inter-pulse delays. Significant distortions were observed in both profiles because of the offset effect of the radio-frequency pulses. These distortions were successfully reduced by adjusting the inter-pulse delays. With adjusted inter-pulse delays, the 8PM and W5 based excitation sculpting sequences were tested on an aqueous lysozyme solution. The 8PM based sequence provided higher suppression selectivity than the W5 based sequence. Two-dimensional nuclear Overhauser effect spectroscopy experiments were also performed on the lysozyme sample with 8PM and W5 based water signal suppression. The 8PM based suppression provided a spectrum with significantly increased (approximately doubled) cross-peak intensity around the suppressed water resonance compared to the W5 based suppression. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Elementary Bayesian biostatistics

    CERN Document Server

    Moyé, Lemuel A

    2007-01-01

    PREFACE. INTRODUCTION. PROLOGUE: OPENING SALVOS. BASIC PROBABILITY AND BAYES THEOREM: Probability's Role; Objective and Subjective Probability; Relative Frequency and Collections of Events; Counting and Combinatorics; Simple Rules in Probability; Law of Total Probability and Bayes Theorem. COMPOUNDING AND THE LAW OF TOTAL PROBABILITY: Introduction; The Law of Total Probability: Compounding; Proportions and the Binomial Distribution; Negative Binomial Distribution; The Poisson Process; The Uniform Distribution; Exponential Distribution; Proble...

  15. Distribution pattern of public transport passenger in Yogyakarta, Indonesia

    Science.gov (United States)

    Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya

    2018-03-01

    The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014. No policy changing the operation system has affected the nature of this pattern to date; the roads, buses, land uses, schedule, and people are still relatively the same. The data were then categorized by direction, day, and location, and each category was fitted to several well-known discrete distributions. The distributions were compared based on their AIC and BIC values; the chosen distribution model has the smallest AIC and BIC values, and the negative binomial distribution was found to have the smallest of both. Probability mass function (PMF) plots of these models were compared to derive a generic model from the categorical negative binomial distribution models. The accepted generic negative binomial distribution has a size parameter of 0.7064 and a mu of 1.4504. The minimum and maximum observed passenger counts of the distribution are 0 and 41.
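The (size, mu) parameterization reported above converts to scipy's (n, p) form via p = size / (size + mu). A quick sketch using the paper's fitted values:

```python
from scipy.stats import nbinom

size, mu = 0.7064, 1.4504        # fitted parameters reported in the abstract
p = size / (size + mu)           # scipy's success-probability parameter
dist = nbinom(size, p)
print(dist.mean(), dist.pmf(0))  # mean recovers mu; P(0 passengers) is sizable
```

A size parameter below 1 implies heavy overdispersion, consistent with many zero-passenger observations alongside occasional counts as high as 41.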

  16. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
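Objective selection criteria of the kind recommended above can be illustrated on simulated counts with excess zeros. The sketch below (all numbers invented) fits a zero-inflated Poisson model by maximum likelihood and compares its AIC against a plain Poisson fit.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(7)
n = 400
# simulate zero-inflated Poisson counts: 50% structural zeros, else Poisson(4)
y = np.where(rng.random(n) < 0.5, 0, rng.poisson(4.0, n))

def nll_zip(params):
    # ZIP likelihood: zeros come from inflation or the Poisson component
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    p0 = pi + (1 - pi) * np.exp(-lam)
    ll = np.where(y == 0, np.log(p0),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

zip_fit = minimize(nll_zip, [0.0, 1.0], method="Nelder-Mead")
aic_zip = 2 * 2 + 2 * zip_fit.fun

lam_hat = y.mean()                               # Poisson MLE
aic_pois = 2 * 1 - 2 * poisson.logpmf(y, lam_hat).sum()
print(aic_pois, aic_zip)                         # ZIP should have the lower AIC
```

The same AIC/BIC comparison extends directly to the negative binomial and zero-inflated negative binomial models discussed in the abstract.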

  17. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    Science.gov (United States)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by application of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
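The 90/95 criterion can be checked with a one-sided Clopper-Pearson lower confidence bound on the binomial detection probability; the classic result is that 29 hits in 29 trials just clears 0.90 at 95% confidence, while 28 of 29 does not. A sketch of the standard formula (not DOEPOD itself):

```python
from scipy.stats import beta

def pod_lower_bound(hits, trials, conf=0.95):
    """One-sided Clopper-Pearson lower confidence bound on detection probability."""
    if hits == 0:
        return 0.0
    return beta.ppf(1 - conf, hits, trials - hits + 1)

lb_29 = pod_lower_bound(29, 29)   # the classic 29-of-29 demonstration
lb_28 = pod_lower_bound(28, 29)   # one miss drops the bound below 0.90
print(lb_29, lb_28)
```

For a perfect record the bound reduces to (1 - conf)^(1/n), which is why 29 is the smallest all-hit sample demonstrating 90/95 POD.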

  18. Spatial distribution of adult Anthonomus grandis Boheman (Coleoptera: Curculionidae) and buds with feeding punctures on conventional and Bt cotton = Distribuição espacial de adultos e botões com orifício de alimentação de Anthonomus grandis Boheman (Coleoptera: Curculionidae) em algodoeiro convencional e Bt

    Directory of Open Access Journals (Sweden)

    Paulo Rogerio Beltramin da Fonseca

    2013-06-01

    The field experiment was conducted in two experimental areas; each area had 100 plots composed of seven rows, each seven meters long. Between January and May 2010, 16 samplings were made; in each, five plants were evaluated per plot by counting the adults and damaged squares. The dispersion indices (ratio of the variance to the mean, the Morisita index, and exponent k of the negative binomial distribution) and the theoretical frequency distributions (Poisson, Negative Binomial and Positive Binomial) were calculated. No differences between cotton genotypes were found. The spatial distribution of A. grandis adults fit the Negative Binomial (aggregated) and Positive Binomial (uniform) distributions, depending on the number of days after cotton emergence. The dispersion analyses of the feeding-damaged squares on the Bt and conventional cotton revealed Poisson (random), Negative Binomial (aggregated) and Positive Binomial (uniform) distribution patterns in sequence during the crop cycle.
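The first two dispersion indices used in this record (variance/mean ratio and the Morisita index) are simple to compute. A minimal sketch on a toy aggregated sample, not the authors' data:

```python
def dispersion_indices(counts):
    # variance/mean ratio: > 1 suggests aggregation, ~1 randomness, < 1 uniformity
    n = len(counts)
    total = sum(counts)
    mean = total / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    vmr = var / mean
    # Morisita index: n * sum x(x-1) / (N(N-1)), > 1 indicates aggregation
    morisita = n * sum(x * (x - 1) for x in counts) / (total * (total - 1))
    return vmr, morisita
```

A sample with all individuals in one plot, e.g. `[0, 0, 0, 10]`, gives a variance/mean ratio of 10 and a Morisita index of 4 (its maximum for four plots), both signalling strong aggregation.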

  19. Struktura makrocyklu treningowego w biegu na 100 m przez płotki = The structure of the training macrocycle in the 100 m hurdles

    Directory of Open Access Journals (Sweden)

    Maria Kamrowska-Nowak

    2015-12-01

    UKW w Bydgoszczy, IKF. The aim of the research was to analyze the time structure and the training loads completed in an annual cycle by an athlete specializing in the 100 m hurdles. The research material consisted of the training records and training loads of athlete S.G., junior champion of Poland in 1999 and youth champion of Poland in 2000. The collected training-load material was catalogued according to the classical division of exercises by motor ability. Preparation for the 1999/2000 training cycle lasted 336 days, during which 367 training units were completed. In strength preparation, the greatest load was of a directed character, dominated by half-squats and by hops and jumps from the half-squat; general strength work had a smaller share, and the greatest strength load was performed from March to June. In the general preparation period (November, December, January), short tempo endurance dominated (150-300 m intervals), followed by speed endurance, which helps maintain the hurdle rhythm from the 7th to the 10th hurdle. Technical speed was trained from November to September, with the greatest volume in March and April. Keywords: athletics, woman, 100 m hurdles.

  20. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
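The asymptotic (Wald) interval that the record says is commonly reported can be sketched as follows; the nine improved methods assessed in the paper (including ADAC and ADAC1) are not reproduced here, and the counts in the test are illustrative.

```python
import math

def arr_wald_ci(events_c, n_c, events_e, n_e, z=1.96):
    # Absolute risk reduction: difference in event rates, control minus experimental
    pc, pe = events_c / n_c, events_e / n_e
    arr = pc - pe
    # Asymptotic (Wald) standard error of a difference of two binomial proportions
    se = math.sqrt(pc * (1 - pc) / n_c + pe * (1 - pe) / n_e)
    return arr, (arr - z * se, arr + z * se)
```

With 20/100 events in the control group and 10/100 in the experimental group, ARR = 0.10 with a 95% Wald interval of roughly (0.002, 0.198); the number needed to treat is then 1/ARR = 10.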

  1. When negation is not negation

    OpenAIRE

    Milicevic, Nataša

    2008-01-01

    In this paper I will discuss the formation of different types of yes/no questions in Serbian (examples in (1)), focusing on the syntactically and semantically puzzling example (1d), which involves the negative auxiliary inversion. Although there is a negative marker on the fronted auxiliary, the construction does not involve sentential negation. This coincides with the fact that the negative quantifying NPIs cannot be licensed. The question formation and sentential negation have similar synta...

  2. An examination of sources of sensitivity of consumer surplus estimates in travel cost models.

    Science.gov (United States)

    Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E

    2015-03-15

    We examine sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative but not a statistically significantly different estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
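In semi-log count-data TCM models such as those compared here, consumer surplus per trip is conventionally computed as -1/β, where β is the (negative) travel-cost coefficient. A minimal sketch; the coefficient value in the usage note is illustrative, not from the study:

```python
def consumer_surplus_per_trip(beta_tc):
    # Semi-log count-data demand: E[trips] = exp(a + beta_tc * travel_cost).
    # Access value (consumer surplus) per trip is -1 / beta_tc for beta_tc < 0.
    if beta_tc >= 0:
        raise ValueError("travel-cost coefficient must be negative")
    return -1.0 / beta_tc
```

For example, a fitted travel-cost coefficient of -0.05 implies a consumer surplus of $20 per trip, which is then scaled by predicted trips to value the site. This dependence on a single coefficient is precisely why the estimate is so sensitive to the distributional and truncation choices the record examines.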

  3. Binomial model for measuring expected credit losses from trade receivables in non-financial sector entities

    Directory of Open Access Journals (Sweden)

    Branka Remenarić

    2018-01-01

    Full Text Available In July 2014, the International Accounting Standards Board (IASB) published International Financial Reporting Standard 9 Financial Instruments (IFRS 9). This standard introduces an expected credit loss (ECL) impairment model that applies to financial instruments, including trade and lease receivables. IFRS 9 applies to annual periods beginning on or after 1 January 2018 in the European Union member states. While the main reason for amending the current model was to require major banks to recognize losses in advance of a credit event occurring, the new model also applies to all receivables, including trade receivables, lease receivables and related party loan receivables in non-financial sector entities. The new impairment model is intended to result in earlier recognition of credit losses. The previous model, described in International Accounting Standard 39 Financial Instruments (IAS 39), was based on incurred losses. One of the major questions now is what models to use to predict expected credit losses in non-financial sector entities. The purpose of this paper is to research the application of the current impairment model, the extent to which the current impairment model can be modified to satisfy the new impairment model requirements, and the applicability of the binomial model for measuring expected credit losses from accounts receivable.
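One way a binomial model can feed an ECL estimate is to treat defaults among n homogeneous receivables as Binomial(n, p). The sketch below is an illustration of that idea, not the model proposed in the paper; all parameter values are hypothetical.

```python
import math

def expected_credit_loss(n, p_default, exposure, lgd):
    # Expected loss for n homogeneous receivables of equal exposure:
    # expected number of defaults (n * p) times loss given default per receivable
    return n * p_default * exposure * lgd

def prob_defaults_at_least(n, p, m):
    # Binomial tail probability that at least m of n receivables default,
    # useful for stress checks beyond the expectation
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))
```

With hypothetical inputs of 100 receivables, a 2% default probability, exposure of 1,000 each and 60% loss given default, the expected credit loss is 1,200.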

  4. Spatial Distribution of Eggs of Alabama argillacea Hübner and Heliothis virescens Fabricius (Lepidoptera: Noctuidae) on Bt and non-Bt Cotton

    Directory of Open Access Journals (Sweden)

    TATIANA R. RODRIGUES

    2015-12-01

    Full Text Available ABSTRACT Among the options to control Alabama argillacea (Hübner, 1818) and Heliothis virescens (Fabricius, 1781) on cotton, insecticide spraying and biological control have been extensively used. GM 'Bt' cotton has been introduced as an extremely viable alternative, but it is not yet known how transgenic plants affect populations of organisms that are interrelated in an agroecosystem. For this reason, it is important to know how the spatial arrangement of pests and beneficial insects is affected, which may call for changes in the methods used for sampling these species. This study was conducted with the goal of investigating the pattern of spatial distribution of eggs of A. argillacea and H. virescens in the DeltaOpalTM (non-Bt) and DP90BTM (Bt) cotton cultivars. Data were collected during the agricultural year 2006/2007 in two areas of 5,000 m2 located in the district of Nova América, Caarapó municipality. In each sampling area, comprising 100 plots of 50 m2, 15 evaluations were performed on two plants per plot. The sampling consisted of counting the eggs. The aggregation indices (variance/mean ratio, Morisita index and exponent k of the negative binomial distribution) and the chi-square fit of the observed and expected values to the theoretical frequency distributions (Poisson, Positive Binomial and Negative Binomial) showed that in both cultivars the eggs of these species are distributed according to the aggregate distribution model, fitting the pattern of the negative binomial distribution.

  5. Empirical Bayes methods in road safety research.

    NARCIS (Netherlands)

    Vogelesang, R.A.W.

    1997-01-01

    Road safety research is a wonderful combination of counting fatal accidents and using a toolkit containing prior, posterior, overdispersed Poisson, negative binomial and Gamma distributions, together with positive and negative regression effects, shrinkage estimators and fierce debates concerning

  6. Modelling alcohol consumption during adolescence using zero inflated negative binomial and decision trees

    Directory of Open Access Journals (Sweden)

    Alfonso Palmer

    2010-07-01

    Full Text Available Alcohol is currently the most consumed substance among the Spanish adolescent population. Some of the variables that bear an influence on this consumption include ease of access, use of alcohol by friends and some personality factors. The aim of this study was to analyze and quantify the predictive value of these variables specifically on alcohol consumption in the adolescent population. The useful sample was made up of 6,145 adolescents (49.8% boys and 50.2% girls) with a mean age of 15.4 years (SE = 1.2). The data were analyzed using the statistical model for a count variable and Data Mining techniques. The results show the influence of ease of access, alcohol consumption by the group of friends, and certain personality factors on alcohol intake, allowing us to quantify the intensity of this influence according to age and gender. Knowing these factors is the starting point in elaborating specific preventive actions against alcohol consumption.
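The zero-inflated negative binomial model named in this record's title mixes a point mass at zero (abstainers) with an NB2 count distribution (drinkers). A minimal probability-mass sketch with illustrative parameter values:

```python
import math

def nb_pmf(y, mu, k):
    # NB2 pmf: mean mu, Var = mu + mu^2 / k, computed in log space for stability
    return math.exp(math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
                    + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))

def zinb_pmf(y, pi_zero, mu, k):
    # Mixture: a structural zero with probability pi_zero,
    # otherwise an ordinary NB count (which can itself be zero)
    p = (1 - pi_zero) * nb_pmf(y, mu, k)
    if y == 0:
        p += pi_zero
    return p
```

With pi_zero = 0.3, mu = 2 and k = 1, the probability of a zero is 0.3 + 0.7 * (1/3) ≈ 0.533, showing how the mixture inflates zeros beyond what the NB part alone would give.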

  7. Multiplicity distributions in a thermodynamical model of hadron production in e+e- collisions

    Energy Technology Data Exchange (ETDEWEB)

    Becattini, F. [Florence Univ. (Italy)]|[Istituto Nazionale di Fisica Nucleare, Florence (Italy); Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica]|[Istituto Nazionale di Fisica Nucleare, Turin (Italy); Lupia, S. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut

    1996-10-01

    Predictions of a thermodynamical model of hadron production for multiplicity distributions in e+e- annihilations at LEP and PEP-PETRA centre-of-mass energies are shown. The production process is described as a two-step process in which primary hadrons emitted from the thermal source decay into final observable particles. The final charged track multiplicity distributions turn out to be of negative binomial type and are in quite good agreement with experimental observations. The average number of clans calculated from the fitted negative binomial coincides with the average number of primary hadrons predicted by the thermodynamical model, suggesting that clans should be identified with primary hadrons. (orig.)

  8. Multiplicity distributions in a thermodynamical model of hadron production in e+e- collisions

    International Nuclear Information System (INIS)

    Becattini, F.; Giovannini, A.; Lupia, S.

    1996-01-01

    Predictions of a thermodynamical model of hadron production for multiplicity distributions in e+e- annihilations at LEP and PEP-PETRA centre-of-mass energies are shown. The production process is described as a two-step process in which primary hadrons emitted from the thermal source decay into final observable particles. The final charged track multiplicity distributions turn out to be of negative binomial type and are in quite good agreement with experimental observations. The average number of clans calculated from the fitted negative binomial coincides with the average number of primary hadrons predicted by the thermodynamical model, suggesting that clans should be identified with primary hadrons. (orig.)
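In the clan picture referred to in these two records, a negative binomial multiplicity distribution with mean n̄ and parameter k corresponds to an average clan number N̄ = k ln(1 + n̄/k). A one-line sketch; the parameter values in the test are illustrative, not fitted LEP values:

```python
import math

def average_clans(nbar, k):
    # Clan decomposition of the negative binomial:
    # average number of clans Nbar = k * ln(1 + nbar / k)
    return k * math.log(1 + nbar / k)
```

In the Poisson limit k → ∞ the clan number approaches the mean multiplicity itself (every particle its own clan), which is the sanity check used below.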

  9. arXiv Describing dynamical fluctuations and genuine correlations by Weibull regularity

    CERN Document Server

    Nayak, Ranjit K.; Sarkisyan-Grinbaum, Edward K.; Tasevsky, Marek

    The Weibull parametrization of the multiplicity distribution is used to describe the multidimensional local fluctuations and genuine multiparticle correlations measured by OPAL in the large statistics $e^{+}e^{-} \to Z^{0} \to hadrons$ sample. The data are found to be well reproduced by the Weibull model up to higher orders. The Weibull predictions are compared to the predictions of two other models, the negative binomial and modified negative binomial distributions, which mostly failed to fit the data. The Weibull regularity, which is found to reproduce the multiplicity distributions along with the genuine correlations, looks to be the optimal model to describe the multiparticle production process.

  10. Analysis of multiplicities in e+e- interactions using 2-jet rates from different jet algorithms

    International Nuclear Information System (INIS)

    Dahiya, S.; Kaur, M.; Dhamija, S.

    2002-01-01

    The shoulder structure of the charged particle multiplicity distribution measured in full phase space in e+e- interactions at various c.m. energies from 91 to 189 GeV has been analysed in terms of a weighted superposition of two negative binomial distributions associated with 2-jet and multi-jet production. The 2-jet rates have been obtained from various jet algorithms. This phenomenological parametrization reproduces the shoulder structure quantitatively and agrees better with the experimental distributions than the conventional negative binomial distribution. Analysis at higher energies, where the shoulder structure appears more prominently, is important for understanding the underlying structure. (author)
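The weighted superposition used in this analysis has the form P(n) = α P_NB(n; n̄₁, k₁) + (1-α) P_NB(n; n̄₂, k₂), with α the 2-jet rate. A sketch with illustrative parameters rather than fitted values:

```python
import math

def nb_pmf(n, mu, k):
    # negative binomial pmf with mean mu and shape k, in log space for stability
    return math.exp(math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
                    + k * math.log(k / (k + mu)) + n * math.log(mu / (k + mu)))

def two_component_nbd(n, alpha, mu1, k1, mu2, k2):
    # weighted superposition of a 2-jet and a multi-jet component;
    # the overlap of the two components produces the shoulder structure
    return alpha * nb_pmf(n, mu1, k1) + (1 - alpha) * nb_pmf(n, mu2, k2)
```

The mixture mean is α n̄₁ + (1-α) n̄₂, so with α = 0.7, n̄₁ = 10 and n̄₂ = 25 the combined distribution has mean 14.5 and a visible shoulder between the two component peaks.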

  11. Negative Assortative Mating Based on Body Coloration in the Freshwater Platyfish (Poeciliidae: Xiphophorus maculatus)

    Directory of Open Access Journals (Sweden)

    Tyler E. Frankel

    2017-04-01

    Full Text Available The ability of individuals within a population to survive and thrive is highly dependent upon the maintenance of genetic variation and phenotypic diversity, thereby ensuring adaptation to dynamic environments. A fundamental method of maintaining such variation is through a negative assortative mating strategy, in which individuals would be expected to reproductively select members of the opposite sex that exhibit dissimilar phenotypes. Employing three uniform body color morphs, red, yellow and blue, of the platyfish (Xiphophorus maculatus), this study was designed to investigate whether X. maculatus females would preferentially be attracted to males exhibiting an alternative color, thereby enabling an examination of the effect of male body coloration on mate choice by adult females. Mate choice was determined based on the initial preference of each female, as well as the amount of time females spent associating with each male. Initial preferences were analyzed using a binomial distribution test, and overall preference data using Wilcoxon signed rank tests. Red females initially selected dissimilarly colored males, and spent a significantly larger amount of time associating with blue and yellow males, as did yellow females with red and blue males. Blue females initially selected and spent a significantly larger amount of time associating with red males but, interestingly, showed no selective preference between blue and yellow males. In these experimental trials, the overall strong mate selection exhibited by female platyfish for males of dissimilar coloration is suggestive of a negative assortative mating strategy and provides evidence for the maintenance of color polymorphism in natural populations.
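The initial-preference analysis above uses a binomial test against chance (p = 0.5). A self-contained exact two-sided version for this symmetric case; the counts in the usage note are made up, not from the study:

```python
import math

def binom_test_half(successes, n):
    # Exact two-sided binomial test against p = 0.5. Because the null is
    # symmetric, the two-sided p-value doubles the smaller tail, capped at 1.
    def cdf(x):
        # P(X <= x) for X ~ Binomial(n, 0.5); exact via integer arithmetic
        return sum(math.comb(n, k) for k in range(x + 1)) / 2 ** n
    tail = min(cdf(successes), 1 - cdf(successes - 1))
    return min(1.0, 2 * tail)
```

For example, 8 of 10 females choosing the dissimilar male gives p ≈ 0.109, not significant at the 0.05 level, illustrating why such choice trials need reasonable sample sizes.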

  12. Exploring the Characteristics of Personal Victims Using the National Crime Victimization Survey

    National Research Council Canada - National Science Library

    Jairam, Shashi

    1998-01-01

    Two statistical methods were used to investigate these hypotheses: logistic regression for victimization prevalence, and negative binomial regression for victimization incidence and concentration...

  13. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    Science.gov (United States)

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p̂, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062

  14. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    Science.gov (United States)

    Vazifedan, Turaj; Shitan, Mahendran

    Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time, etc. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data using a Non-Negative Integer-valued Autoregressive (INAR) process. The modelling of count data is based on the binomial thinning operator. In this paper we illustrate the modelling of count data using the monthly number of Poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
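The binomial thinning operator underlying INAR(1), α∘X = a sum of X independent Bernoulli(α) survivals, can be simulated directly. A sketch with Poisson innovations (using Knuth's small-λ sampler); the parameters are illustrative, not fitted to the polio data:

```python
import random

def binomial_thinning(x, alpha, rng):
    # alpha ∘ x: each of the x current counts survives independently with prob alpha
    return sum(1 for _ in range(x) if rng.random() < alpha)

def simulate_inar1(alpha, lam, steps, seed=0):
    # INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, with eps_t ~ Poisson(lam);
    # the stationary mean is lam / (1 - alpha)
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's algorithm, adequate for small l
        L, k, p = 2.718281828459045 ** (-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    x, path = 0, []
    for _ in range(steps):
        x = binomial_thinning(x, alpha, rng) + poisson(lam)
        path.append(x)
    return path
```

With α = 0.5 and λ = 2 the stationary mean is 4, and a long simulated path stays non-negative and integer-valued by construction, which is exactly what makes the model natural for counts.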

  15. FluBreaks: early epidemic detection from Google flu trends.

    Science.gov (United States)

    Pervaiz, Fahad; Pervaiz, Mansoor; Abdur Rehman, Nabeel; Saif, Umar

    2012-10-04

    The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms. Over the last few years, the trend data produced by this service has shown a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. However, contrary to popular belief, Google Flu Trends is not an early epidemic detection system. Instead, it is designed as a baseline indicator of the trend, or changes, in the number of disease cases. Our objective was to evaluate whether these trends can be used as the basis for an early warning system for epidemics. We present the first detailed algorithmic analysis of how Google Flu Trends can be used as a basis for building a fully automated system for early warning of epidemics in advance of methods used by the CDC. Based on our work, we present a novel early epidemic detection system, called FluBreaks (dritte.org/flubreaks), based on Google Flu Trends data. We compared the accuracy and practicality of three types of algorithms: normal distribution algorithms, Poisson distribution algorithms, and negative binomial distribution algorithms. We explored the relative merits of these methods, and related our findings to changes in Internet penetration and population size for the regions in Google Flu Trends providing data. Across our performance metrics of percentage true-positives (RTP), percentage false-positives (RFP), percentage overlap (OT), and percentage early alarms (EA), Poisson- and negative binomial-based algorithms performed better in all except RFP. Poisson-based algorithms had average values of 99%, 28%, 71%, and 76% for RTP, RFP, OT, and EA, respectively, whereas negative binomial-based algorithms had average values of 97.8%, 17.8%, 60%, and 55% for RTP, RFP, OT, and EA, respectively. Moreover, the EA was also affected by the region's population size.

  16. Skin irritability to sodium lauryl sulfate is associated with increased positive patch test reactions.

    Science.gov (United States)

    Schwitulla, J; Brasch, J; Löffler, H; Schnuch, A; Geier, J; Uter, W

    2014-07-01

    As previous observations have indicated an inter-relationship between irritant and allergic skin reactions, we analysed data from synchronous allergen and sodium lauryl sulfate (SLS) patch tests in terms of a relationship between SLS responsiveness and allergic patch test reactions. To analyse differences in terms of allergen-specific and overall reaction profiles between patients with vs. those without an irritant reaction to SLS. Clinical data of 26 879 patients patch tested from 2008 to 2011 by members of the Information Network of Departments of Dermatology were analysed. After descriptive analyses, including the MOAHLFA index, the positivity ratio and the reaction index, a negative binomial hurdle model was adopted to investigate the correlation between SLS reactivity and positive patch test reactions. Men, patients aged ≥ 40 years and patients with an occupational dermatitis background were over-represented in the SLS-reactive group. Patients with an irritant reaction to SLS showed a higher proportion of weak positive reactions, as well as more questionable and irritant reactions to contact allergens, than patients not reactive to SLS. The risk of an additional positive patch test reaction increased by 22% for SLS-reactive patients compared with those who were SLS negative. The marked association between SLS reactivity and the number of positive reactions in patch test patients may be due to nonspecific increased skin reactivity at the moment of patch testing only. However, increased SLS reactivity could also be due to longer-lasting enhanced skin irritability, which may have promoted (poly-)sensitization. Further studies, for example with longitudinal data on patients repeatedly patch tested with SLS and contact allergens, are necessary. © 2014 British Association of Dermatologists.
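The negative binomial hurdle model adopted in this study splits the count distribution into a zero part and a zero-truncated negative binomial part for the positives. A probability-mass sketch with illustrative parameters, not the fitted clinical model:

```python
import math

def nb_pmf(y, mu, k):
    # NB2 pmf: mean mu, Var = mu + mu^2 / k, computed in log space
    return math.exp(math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
                    + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))

def hurdle_nb_pmf(y, p_zero, mu, k):
    # Hurdle model: all zeros come from the hurdle part (prob p_zero);
    # positive counts follow a zero-truncated negative binomial
    if y == 0:
        return p_zero
    return (1 - p_zero) * nb_pmf(y, mu, k) / (1 - nb_pmf(0, mu, k))
```

Unlike a zero-inflated model, the hurdle model has a single source of zeros, which is why the two parts (whether any positive reaction occurs, and how many) can be estimated separately.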

  17. Influence of Flavors on the Propagation of E-Cigarette–Related Information: Social Media Study

    Science.gov (United States)

    Zhou, Jiaqi; Zeng, Daniel Dajun; Tsui, Kwok Leung

    2018-01-01

    Background Modeling the influence of e-cigarette flavors on information propagation could provide quantitative policy decision support concerning smoking initiation and contagion, as well as e-cigarette regulations. Objective The objective of this study was to characterize the influence of flavors on e-cigarette–related information propagation on social media. Methods We collected a comprehensive dataset of e-cigarette–related discussions from public Pages on Facebook. We identified 11 categories of flavors based on commonly used categorizations. Each post’s frequency of being shared served as a proxy measure of information propagation. We evaluated a set of regression models and chose the hurdle negative binomial model to characterize the influence of different flavors and nonflavor control variables on e-cigarette–related information propagation. Results We found that 5 flavors (sweet, dessert & bakery, fruits, herbs & spices, and tobacco) had significantly negative influences on e-cigarette–related information propagation, indicating the users’ tendency not to share posts related to these flavors. We did not find a positive significance of any flavors, which is contradictory to previous research. In addition, we found that a set of nonflavor–related factors were associated with information propagation. Conclusions Mentions of flavors in posts did not enhance the popularity of e-cigarette–related information. Certain flavors could even have reduced the popularity of information, indicating users’ lack of interest in flavors. Promoting e-cigarette–related information with mention of flavors is not an effective marketing approach. This study implies the potential concern of users about flavorings and suggests a need to regulate the use of flavorings in e-cigarettes. PMID:29572202

  18. Multiplicity distributions of charged hadrons produced in (anti)neutrino-deuterium charged- and neutral-current interactions

    International Nuclear Information System (INIS)

    Jongejans, B.; Tenner, A.G.; Apeldoorn, G.W. van

    1989-01-01

    Results are presented on the multiplicity distributions of charged hadrons produced in νn, νp, anti-νn and anti-νp charged-current interactions for the hadronic energy range 2 GeV ≤ W ≤ 14 GeV (corresponding approximately to the neutrino energy range 5 GeV ≤ E ≤ 150 GeV). The experimental distributions are analysed in terms of binomial distributions. With increasing hadronic energy, a smooth transition is found from an ordinary binomial via a Poissonian to the negative binomial function. KNO scaling holds approximately for the multiplicity distribution over the whole phase space. Data on the multiplicity distributions for neutral-current interactions are also presented.

  19. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
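For a single site, the binomial N-mixture likelihood discussed in this record marginalizes the latent abundance N over a Poisson prior. A direct-summation sketch; the truncation point n_max is a practical assumption, and the values in the test are illustrative:

```python
import math

def site_likelihood(counts, lam, p, n_max=200):
    # Marginal likelihood of repeated counts y_1..y_T at one site:
    # sum over latent abundance N of Poisson(N | lam) * prod_t Binomial(y_t | N, p)
    total = 0.0
    for N in range(max(counts), n_max + 1):
        # Poisson pmf in log space to avoid overflow for large N
        pois = math.exp(-lam + N * math.log(lam) - math.lgamma(N + 1))
        lik = pois
        for y in counts:
            lik *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += lik
    return total
```

The full likelihood multiplies this over sites; abundance (lam) and detection (p) are then estimated jointly, and the identifiability issues the record describes concern whether the data can separate the two, especially under a negative binomial rather than Poisson mixing distribution.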

  20. How to overcome hurdles in opiate substitution treatment? A qualitative study with general practitioners in Belgium.

    Science.gov (United States)

    Fraeyman, Jessica; Symons, Linda; Van Royen, Paul; Van Hal, Guido; Peremans, Lieve

    2016-06-01

    Opiate substitution treatment (OST) is the administration of opioids (methadone or buprenorphine) under medical supervision for opiate addiction. Several studies indicate a large unmet need for OST in general practice in Antwerp, Belgium. Some hurdles remain before GPs engage in OST prescribing. The aim was to formulate recommendations to increase the engagement of GPs in OST, applicable to Belgium and beyond. In 2009, an exploratory qualitative study was performed using focus group discussions and interviews with GPs. During data collection and analysis, purposive sampling and open and axial coding were applied. The script was composed around the advantages, disadvantages and conditions of engaging in OST in general practice. We conducted six focus groups and two interviews, with GPs experienced in prescribing OST (n = 13), inexperienced GPs (n = 13), and physicians from addiction centres (n = 5). Overall, GPs did not seem very willing to prescribe OST for opiate users. A lack of knowledge about OST and misbehaving patients create anxiety and make the GPs reluctant to learn more about OST. The GPs refer to a lack of collaboration with the addiction centres and a need for support (from either addiction centres or experienced GP colleagues for advice). Important conditions for OST are acceptance of only stable opiate users and more support in emergencies. Increasing GPs' knowledge about OST and improving collaboration with addiction centres are essential to increase the uptake of OST in general practice. Special attention could be paid to the role of more experienced colleagues who can act as advising physicians for inexperienced GPs.

  1. Multiple Meixner polynomials and non-Hermitian oscillator Hamiltonians

    International Nuclear Information System (INIS)

    Ndayiragije, F; Van Assche, W

    2013-01-01

    Multiple Meixner polynomials are polynomials in one variable which satisfy orthogonality relations with respect to r > 1 different negative binomial distributions (Pascal distributions). There are two kinds of multiple Meixner polynomials, depending on the selection of the parameters in the negative binomial distribution. We recall their definition and some formulas and give generating functions and explicit expressions for the coefficients in the nearest neighbor recurrence relation. Following a recent construction of Miki, Tsujimoto, Vinet and Zhedanov (for multiple Meixner polynomials of the first kind), we construct r > 1 non-Hermitian oscillator Hamiltonians in r dimensions which are simultaneously diagonalizable and for which the common eigenstates are expressed in terms of multiple Meixner polynomials of the second kind. (paper)

  2. Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars

    Directory of Open Access Journals (Sweden)

    N. Mielenz

    2015-01-01

Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
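The conditional Poisson model with a gamma-distributed multiplicative animal effect, as described above, is marginally negative binomial. A short simulation sketch (illustrative parameters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Marginalizing a Poisson rate over a gamma-distributed (multiplicative)
# animal effect yields a negative binomial: the usual route to overdispersion.
r, mean = 2.0, 4.0                                       # NB size and target mean
lam = rng.gamma(shape=r, scale=mean / r, size=100_000)   # E[lam] = mean
y = rng.poisson(lam)

print(y.mean(), y.var())           # variance exceeds the mean (overdispersion)
# Marginal distribution matches nbinom(r, p) with p = r / (r + mean)
p = r / (r + mean)
print(np.mean(y == 0), stats.nbinom.pmf(0, r, p))
```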

  3. Pilot simulation study using meat inspection data for syndromic surveillance: use of whole carcass condemnation of adult cattle to assess the performance of several algorithms for outbreak detection.

    Science.gov (United States)

    Dupuy, C; Morignat, E; Dorea, F; Ducrot, C; Calavas, D; Gay, E

    2015-09-01

    The objective of this study was to assess the performance of several algorithms for outbreak detection based on weekly proportions of whole carcass condemnations. Data from one French slaughterhouse over the 2005-2009 period were used (177 098 slaughtered cattle, 0.97% of whole carcass condemnations). The method involved three steps: (i) preparation of an outbreak-free historical baseline over 5 years, (ii) simulation of over 100 years of baseline time series with injection of artificial outbreak signals with several shapes, durations and magnitudes, and (iii) assessment of the performance (sensitivity, specificity, outbreak detection precocity) of several algorithms to detect these artificial outbreak signals. The algorithms tested included the Shewart p chart, confidence interval of the negative binomial model, the exponentially weighted moving average (EWMA); and cumulative sum (CUSUM). The highest sensitivity was obtained using a negative binomial algorithm and the highest specificity with CUSUM or EWMA. EWMA sensitivity was too low to select this algorithm for efficient outbreak detection. CUSUM's performance was complementary to the negative binomial algorithm. The use of both algorithms on real data for a prospective investigation of the whole carcass condemnation rate as a syndromic surveillance indicator could be relevant. Shewart could also be a good option considering its high sensitivity and simplicity of implementation.
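A minimal sketch of the one-sided tabular CUSUM used for outbreak detection on a weekly indicator; the reference value k, decision limit h, and data are illustrative, not the study's settings.

```python
# One-sided tabular CUSUM on standardized weekly proportions: accumulate
# excursions above a reference value k and alarm when the sum exceeds h.
def cusum_alarms(series, mu, sigma, k=0.5, h=5.0):
    s, alarms = 0.0, []
    for t, x in enumerate(series):
        z = (x - mu) / sigma
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0            # reset after signalling
    return alarms

# Illustrative data: 10 in-control weeks (~1% condemnations), then a jump to 5%.
weeks = [0.010] * 10 + [0.050] * 5
alarms = cusum_alarms(weeks, mu=0.010, sigma=0.002)
print(alarms)  # first alarm fires in the first out-of-control week (index 10)
```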

  4. Multivariable analysis: a practical guide for clinicians and public health researchers

    National Research Council Canada - National Science Library

    Katz, Mitchell H

    2011-01-01

    "Now in its third edition, this highly successful text has been fully revised and updated with expanded sections on cutting-edge techniques including Poisson regression, negative binomial regression...

  5. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
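A hedged simulation sketch of the qualitative finding: the bias of the arcsine-transformed estimate grows with the intracluster correlation ρ. The beta-binomial is used here as the overdispersed model, with illustrative p, n, and ρ values.

```python
import numpy as np

rng = np.random.default_rng(42)

def arcsine_bias(p, n, rho, nsim=200_000):
    """Bias of arcsin(sqrt(p_hat)) under a beta-binomial with ICC rho."""
    if rho > 0:
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        probs = rng.beta(a, b, nsim)       # cluster-level probabilities
    else:
        probs = np.full(nsim, p)
    x = rng.binomial(n, probs)
    return np.arcsin(np.sqrt(x / n)).mean() - np.arcsin(np.sqrt(p))

b_small = arcsine_bias(0.1, 50, rho=0.05)
b_large = arcsine_bias(0.1, 50, rho=0.30)
print(b_small, b_large)   # bias grows in magnitude with rho
```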

  6. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
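A simplified illustration of the core scoring idea: rate a peptide-spectrum match by the binomial tail probability of matching k of n theoretical peaks by chance. ProVerB's actual score additionally weights by peak intensity; `p_match` here is a hypothetical chance-match probability.

```python
from scipy import stats

# Chance model: each of n theoretical fragment peaks matches a spectrum peak
# independently with probability p_match; a candidate matching k peaks is
# scored by the binomial survival probability P(X >= k).
def match_pvalue(k, n, p_match):
    return stats.binom.sf(k - 1, n, p_match)

# Matching 12 of 20 peaks at p_match = 0.1 is far less probable by chance
# than matching 4 of 20, so it earns a much smaller p-value.
print(match_pvalue(12, 20, 0.1))
print(match_pvalue(4, 20, 0.1))
```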

  7. A retrospective study evaluating the efficacy of identification and ...

    African Journals Online (AJOL)

    order to reduce morbidity and mortality. In developed ... lems such as access to healthcare, cost constraints, lack of resources .... reversed with effective fluid resuscitation. ... data association, logistical regression testing and negative binomial.

  8. Various models for pion probability distributions from heavy-ion collisions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Mekjian, A.Z.; Schlei, B.R.; Strottman, D.; Schlei, B.R.

    1998-01-01

Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description which generates a negative binomial distribution. The approach developed can be used to discuss other cases which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons are made of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(-m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. © 1998 The American Physical Society

  9. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36) (p < 0.005 for all). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth-extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.

  10. A binomial modeling approach for upscaling colloid transport under unfavorable conditions: Emergent prediction of extended tailing

    Science.gov (United States)

    Hilpert, Markus; Rasmuson, Anna; Johnson, William

    2017-04-01

    Transport of colloids in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their down-gradient translation relative to colloids in bulk fluid. Near surface fluid domain colloids may re-enter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via strong primary minimum interactions, or they may move along a grain-to-grain contact to the near surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization onto grain surfaces. Colloid movement is described by a sequence of trials in a series of unit cells, and the binomial distribution is used to calculate the probabilities of each possible sequence. Pore-scale simulations provide mechanistically-determined likelihoods and timescales associated with the above pore-scale colloid mass transfer processes, whereas the network-scale model employs pore and grain topology to determine probabilities of transfer from up-gradient bulk and near-surface fluid domains to down-gradient bulk and near-surface fluid domains. Inter-grain transport of colloids in the near surface fluid domain can cause extended tailing.

  11. Influence of Flavors on the Propagation of E-Cigarette-Related Information: Social Media Study.

    Science.gov (United States)

    Zhou, Jiaqi; Zhang, Qingpeng; Zeng, Daniel Dajun; Tsui, Kwok Leung

    2018-03-23

Modeling the influence of e-cigarette flavors on information propagation could provide quantitative policy decision support concerning smoking initiation and contagion, as well as e-cigarette regulations. The objective of this study was to characterize the influence of flavors on e-cigarette-related information propagation on social media. We collected a comprehensive dataset of e-cigarette-related discussions from public Pages on Facebook. We identified 11 categories of flavors based on commonly used categorizations. Each post's frequency of being shared served as a proxy measure of information propagation. We evaluated a set of regression models and chose the hurdle negative binomial model to characterize the influence of different flavors and nonflavor control variables on e-cigarette-related information propagation. We found that 5 flavors (sweet, dessert & bakery, fruits, herbs & spices, and tobacco) had significantly negative influences on e-cigarette-related information propagation, indicating the users' tendency not to share posts related to these flavors. We did not find a significantly positive influence for any flavor, which contradicts previous research. In addition, we found that a set of nonflavor-related factors were associated with information propagation. Mentions of flavors in posts did not enhance the popularity of e-cigarette-related information. Certain flavors could even have reduced the popularity of information, indicating users' lack of interest in flavors. Promoting e-cigarette-related information with mention of flavors is not an effective marketing approach. This study implies the potential concern of users about flavorings and suggests a need to regulate the use of flavorings in e-cigarettes. ©Jiaqi Zhou, Qingpeng Zhang, Daniel Dajun Zeng, Kwok Leung Tsui. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 23.03.2018.
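A minimal two-part sketch of the hurdle negative binomial model used above: a logistic hurdle for zero versus nonzero counts, and a zero-truncated NB for the positives. Intercept-only for brevity; all parameter values and variable names are illustrative, not from the study.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)

# Simulate hurdle data: cross the hurdle (y > 0) with probability pi_true,
# then draw positives from a zero-truncated negative binomial.
pi_true, r_true, p_true = 0.6, 2.0, 1 / 3          # NB mean r(1-p)/p = 4
n = 3000
crossed = rng.random(n) < pi_true
pos = stats.nbinom.rvs(r_true, p_true, size=4 * n, random_state=rng)
pos = pos[pos > 0][: crossed.sum()]                 # rejection-sample truncation
y = np.zeros(n, dtype=int)
y[crossed] = pos

# Part 1: the hurdle (an intercept-only logistic reduces to the proportion).
pi_hat = (y > 0).mean()

# Part 2: zero-truncated NB maximum likelihood on the positive counts.
def trunc_nb_nll(theta, ypos):
    r, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    ll = stats.nbinom.logpmf(ypos, r, p) - np.log1p(-stats.nbinom.pmf(0, r, p))
    return -ll.sum()

fit = optimize.minimize(trunc_nb_nll, [0.0, 0.0], args=(y[y > 0],),
                        method="Nelder-Mead")
r_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
```

With covariates, both parts become regressions (logit and log links), which is the form the studies on this page fit.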

  12. Change-Point Methods for Overdispersed Count Data

    National Research Council Canada - National Science Library

    Wilken, Brian A

    2007-01-01

    .... Although the Poisson model is often used to model count data, the two-parameter gamma-Poisson mixture parameterization of the negative binomial distribution is often a more adequate model for overdispersed count data...

  13. Pricing American Asian options with higher moments in the underlying distribution

    Science.gov (United States)

    Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng

    2009-01-01

    We develop a modified Edgeworth binomial model with higher moment consideration for pricing American Asian options. With lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] if the number of the time steps increases. If the underlying distribution displays negative skewness and leptokurtosis as often observed for stock index returns, our estimates can work better than those in Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31]. The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.
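For context, a sketch of the standard Cox-Ross-Rubinstein binomial lattice, i.e. the unmodified lognormal baseline such Edgeworth-adjusted algorithms build on (not the authors' method, and without the Asian-option path averaging). Inputs are illustrative.

```python
import math

def crr_price(S0, K, r, sigma, T, steps, kind="put", american=True):
    """Standard Cox-Ross-Rubinstein binomial lattice (lognormal baseline)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
    disc = math.exp(-r * dt)
    payoff = (lambda s: max(K - s, 0.0)) if kind == "put" else (lambda s: max(s - K, 0.0))
    # terminal payoffs, then backward induction with optional early exercise
    values = [payoff(S0 * u**j * d**(steps - j)) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            s = S0 * u**j * d**(i - j)
            values[j] = max(cont, payoff(s)) if american else cont
    return values[0]

am = crr_price(100, 100, 0.05, 0.2, 1.0, 200, american=True)
eu = crr_price(100, 100, 0.05, 0.2, 1.0, 200, american=False)
print(am, eu)   # early-exercise premium: American put >= European put
```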

  14. Aplicação do modelo binomial na formação de preço de títulos de dívida corporativa no Brasil.

    Directory of Open Access Journals (Sweden)

    José Roberto Securato

    2008-05-01

components such as call options, convertibility options, seniority and subordination to a Brazilian company. The major results consist of debt securities valuation and its comparison to secondary market prices in order to identify investment opportunities. The paper evaluated six debt securities which have secondary market prices; three of them presented prices above market, two of them were below market price and one of them had the same price as the market. The presented model and its adjustments to the Brazilian market allow evaluating corporate debt securities and their components, evaluating the impact of new debt issues on the existing ones and comparing debt model and book values. Keywords: binomial model; corporate debt; debt components.

  15. Football goal distributions and extremal statistics

    Science.gov (United States)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  16. Enhancing the lethal effect of high-intensity pulsed electric field in milk by antimicrobial compounds as combined hurdles.

    Science.gov (United States)

    Sobrino-López, A; Martín-Belloso, O

    2008-05-01

High-intensity pulsed electric field (HIPEF) is a nonthermal treatment studied for its wide antimicrobial spectrum on liquid food, including milk and dairy products. Moreover, the antimicrobial effect of HIPEF may be enhanced by combining HIPEF with other treatments as hurdles. Nisin and lysozyme are natural antimicrobial compounds that could be used in combination with HIPEF. Therefore, the purpose of this study was to determine the effect of combining HIPEF with the addition of nisin and lysozyme to milk inoculated with Staphylococcus aureus with regard to different process variables. The individual addition of nisin and lysozyme did not produce any reduction in cell population within the proposed range of concentrations, whereas their combination resulted in a pH-dependent microbial death of Staph. aureus. The addition of nisin and lysozyme to milk combined with HIPEF treatment resulted in a synergistic effect. Applying a 1,200-μs HIPEF treatment time to milk at pH 6.8 containing 1 IU/mL of nisin and 300 IU/mL of lysozyme resulted in a reduction of more than 6.2 log units of Staph. aureus. Final counts resulting from the addition of nisin and lysozyme and applying HIPEF strongly depended on both the sequence of application and the milk pH. Thus, more research is needed to elucidate the mode of action of synergism as well as the role of different process variables, although the use of HIPEF in combination with antimicrobial compounds such as nisin and lysozyme is shown to be potentially useful in processing milk and dairy products.

  17. Urban density, deprivation and road safety

    African Journals Online (AJOL)

    Kirstam

    The findings on deprivation provide new insights to rural-urban variations in ... 2000 and 2030 (World Health Organization, WHO & United Nations HABITAT, UN- ... The authors used negative binomial count models to control for a range of.

  18. A binomial modeling approach for upscaling colloid transport under unfavorable conditions: Emergent prediction of extended tailing

    Science.gov (United States)

    Hilpert, Markus; Rasmuson, Anna; Johnson, William P.

    2017-07-01

    Colloid transport in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their downgradient translation relative to colloids in bulk fluid. Near-surface fluid domain colloids may reenter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via primary minimum interactions, or they may move along a grain-to-grain contact to the near-surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization. Colloid movement is described by a Markov chain, i.e., a sequence of trials in a 1-D network of unit cells, which contain a pore and a grain. Using combinatorial analysis, which utilizes the binomial coefficient, we derive the residence time distribution, i.e., an inventory of the discrete colloid travel times through the network and of their probabilities to occur. To parameterize the network model, we performed mechanistic pore-scale simulations in a single unit cell that determined the likelihoods and timescales associated with the above colloid mass transfer processes. We found that intergrain transport of colloids in the near-surface fluid domain can cause extended tailing, which has traditionally been attributed to hydrodynamic dispersion emanating from flow tortuosity of solute trajectories.
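A toy version of the combinatorial idea above: if a colloid traverses each of N unit cells via the bulk fluid (fast) with probability p, or via the near-surface domain (slow) otherwise, the travel-time inventory follows binomial weights, and the slow passages generate the extended tail. The timescales and probabilities are hypothetical, not the paper's pore-scale values.

```python
from math import comb

# Over N unit cells, suppose a colloid traverses each cell via the bulk fluid
# (fast, time t_fast) with probability p, or via the near-surface domain
# (slow, time t_slow) otherwise.  The travel-time inventory is then binomial.
N, p, t_fast, t_slow = 20, 0.9, 1.0, 10.0

dist = {}
for k in range(N + 1):                       # k = number of slow passages
    prob = comb(N, k) * (1 - p) ** k * p ** (N - k)
    dist[(N - k) * t_fast + k * t_slow] = prob

mean_time = sum(t * pr for t, pr in dist.items())
print(mean_time)            # = N * (p*t_fast + (1-p)*t_slow) = 38.0
```

The rare large-k outcomes carry long travel times at small but nonzero probability, which is exactly the tailing mechanism described above.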

  19. Developing drought impact functions for drought risk management

    Directory of Open Access Journals (Sweden)

    S. Bachmair

    2017-11-01

Drought management frameworks are dependent on methods for monitoring and prediction, but quantifying the hazard alone is arguably not sufficient; the negative consequences that may arise from a lack of precipitation must also be predicted if droughts are to be better managed. However, the link between drought intensity, expressed by some hydrometeorological indicator, and the occurrence of drought impacts has only recently begun to be addressed. One challenge is the paucity of information on ecological and socioeconomic consequences of drought. This study tests the potential for developing empirical drought impact functions based on drought indicators (Standardized Precipitation Index and Standardized Precipitation Evaporation Index) as predictors and text-based reports on drought impacts as a surrogate variable for drought damage. While there have been studies exploiting textual evidence of drought impacts, a systematic assessment of the effect of impact quantification method and different functional relationships for modeling drought impacts is missing. Using Southeast England as a case study we tested the potential of three different data-driven models for predicting drought impacts quantified from text-based reports: logistic regression, zero-altered negative binomial regression (hurdle model), and an ensemble regression tree approach (random forest). The logistic regression model can only be applied to a binary impact/no impact time series, whereas the other two models can additionally predict the full counts of impact occurrence at each time point. While modeling binary data results in the lowest prediction uncertainty, modeling the full counts has the advantage of also providing a measure of impact severity, and the counts were found to be reasonably predictable. However, there were noticeable differences in skill between modeling methodologies.
For binary data the logistic regression and the random forest model performed similarly well based on

  20. Response of selected binomial coefficients to varying degrees of matrix sparseness and to matrices with known data interrelationships

    Science.gov (United States)

    Archer, A.W.; Maples, C.G.

    1989-01-01

Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 × 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired. © 1989 International Association for Mathematical Geology.
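A short sketch of the sparseness sensitivity discussed above: Jaccard and Dice exclude mutual absences (the d cell of the 2 × 2 table), while simple matching and Russell-Rao are driven by it. Counts are illustrative.

```python
# 2 x 2 contingency counts for a pair of samples: a = mutual presences,
# b, c = mismatches, d = mutual absences (the "sparseness" cells).
def jaccard(a, b, c, d):         return a / (a + b + c)
def dice(a, b, c, d):            return 2 * a / (2 * a + b + c)
def simple_matching(a, b, c, d): return (a + d) / (a + b + c + d)
def russell_rao(a, b, c, d):     return a / (a + b + c + d)

dense  = (5, 2, 1, 2)    # few mutual absences
sparse = (5, 2, 1, 92)   # same presences, many added zero cells

# Jaccard and Dice ignore d, so they are unchanged by sparseness;
# simple matching and Russell-Rao are driven by it.
print(jaccard(*dense), jaccard(*sparse))
print(simple_matching(*dense), simple_matching(*sparse))
```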

  1. Investment patterns in Dutch glasshouse horticulture

    NARCIS (Netherlands)

    Goncharova, N.

    2007-01-01

    Keywords: investment, uncertainty, investment spikes, entry, exit, duration model, GMM dynamic panel data estimator, Negative Binomial model, Heckman selection model, moving window ARIMA, Principal Component analysis, horticulture

    This thesis focuses on the analysis of investment

  2. Feasibility analysis in the expansion proposal of the nuclear power plant Laguna Verde: application of real options, binomial model; Analisis de viabilidad en la propuesta de expansion de la central nucleoelectrica Laguna Verde: aplicacion de opciones reales, modelo binomial

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez I, S.; Ortiz C, E.; Chavez M, C., E-mail: lunitza@gmail.com [UNAM, Facultad de Ingenieria, Circuito Interior, Ciudad Universitaria, 04510 Mexico D. F. (Mexico)

    2011-11-15

At present, it is unquestionable that nuclear electrical energy is a topic of vital importance, not only because it eliminates dependence on hydrocarbons and is friendly to the environment, but also because it is a safe and reliable energy source and represents a viable alternative in view of the growing demand for electricity in Mexico. Against this panorama, several scenarios were proposed to raise the capacity of electric generation of nuclear origin with variable participation. One of the contemplated scenarios is the expansion of the Laguna Verde nuclear power plant through the addition of a third reactor, serving as the detonator of an integral program that proposes the installation of more nuclear reactors in the country. Given this possible scenario, the Federal Commission of Electricity, as the organism responsible for supplying energy to the population, should have tools that offer the flexibility to adapt to the changes that will arise along the project and that also assign a value to future risk. The methodology of Real Options, in its binomial-model form, was proposed as an evaluation tool that quantifies the value of the expansion proposal, demonstrating the feasibility of the project through a periodic visualization of its evolution, with the objective of supplying a financial analysis that serves as base and justification in view of the evident rise of nuclear energy expected in future years. (Author)

  3. Negative snakes in JET: evidence for negative shear

    International Nuclear Information System (INIS)

    Gill, R.D.; Alper, B.; Edwards, A.W.

    1994-01-01

The signature of the negative snakes from the soft X-ray cameras is very similar to the more usual snakes except that the localised region of the snake has, compared with its surroundings, decreased rather than increased emission. Circumstances where negative snakes have been seen are reviewed. The negative snake appears as a region of increased resistance and of increased impurity density. The relationship between the shear and the current perturbation is shown, and it seems probable that the magnetic shear is reversed at the point of the negative snake, i.e. that q is decreasing with radius. 6 refs., 6 figs

  4. Negative snakes in JET: evidence for negative shear

    Energy Technology Data Exchange (ETDEWEB)

    Gill, R D; Alper, B; Edwards, A W [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking; Pearson, D [Reading Univ. (United Kingdom)

    1994-07-01

The signature of the negative snakes from the soft X-ray cameras is very similar to the more usual snakes except that the localised region of the snake has, compared with its surroundings, decreased rather than increased emission. Circumstances where negative snakes have been seen are reviewed. The negative snake appears as a region of increased resistance and of increased impurity density. The relationship between the shear and the current perturbation is shown, and it seems probable that the magnetic shear is reversed at the point of the negative snake, i.e. that q is decreasing with radius. 6 refs., 6 figs.

  5. Psychosocial work factors and sickness absence in 31 countries in Europe.

    Science.gov (United States)

    Niedhammer, Isabelle; Chastang, Jean-François; Sultan-Taïeb, Hélène; Vermeylen, Greet; Parent-Thirion, Agnès

    2013-08-01

    The studies on the associations between psychosocial work factors and sickness absence have rarely included a large number of factors and European data. The objective was to examine the associations between a large set of psychosocial work factors, following well-known and emergent concepts, and sickness absence in Europe. The study population consisted of 14,881 male and 14,799 female workers in 31 countries from the 2005 European Working Conditions Survey. Psychosocial work factors included the following: decision latitude, psychological demands, social support, physical violence, sexual harassment, discrimination, bullying, long working hours, shift and night work, job insecurity, job promotion and work-life imbalance. Covariates were as follows: age, occupation, economic activity, employee/self-employed status and physical, chemical, biological and biomechanical exposures. Statistical analysis was performed using multilevel negative binomial hurdle models to study the occurrence and duration of sickness absence. In the models including all psychosocial work factors together, with adjustment for covariates, high psychological demands, discrimination, bullying, low job promotion and work-life imbalance for both genders, and physical violence for women, were observed as risk factors for the occurrence of sickness absence. Bullying and shift work increased the duration of absence among women. Bullying had the strongest association with sickness absence. Various psychosocial work factors were found to be associated with sickness absence. A less conservative analysis exploring each factor separately yielded a still higher number of risk factors. Preventive measures should take the psychosocial work environment more comprehensively into account to reduce sickness absence and improve health at work at the European level.
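The hurdle structure used in this record — a first part deciding zero versus non-zero, and a zero-truncated negative binomial for the positive counts — can be sketched on simulated data. This is an illustrative sketch, not the study's model; all parameter values are hypothetical:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# --- simulate hurdle negative binomial data (hypothetical parameters) ---
n, pi0 = 5000, 0.4               # 40% of observations are hurdle zeros
r_true, p_true = 3.0, 0.5        # NB parameters for the positive part
cross = rng.random(n) > pi0      # which observations cross the hurdle
k = int(cross.sum())
pool = stats.nbinom.rvs(r_true, p_true, size=4 * k, random_state=rng)
pos = pool[pool > 0][:k]         # rejection-sample the zero-truncated NB
y = np.zeros(n, dtype=int)
y[cross] = pos

# --- part 1: the hurdle (zero vs. positive); its MLE is the zero fraction ---
pi_hat = np.mean(y == 0)

# --- part 2: zero-truncated NB likelihood over the positive counts ---
ypos = y[y > 0]
def trunc_nb_nll(params):
    r, p = params
    logpmf = stats.nbinom.logpmf(ypos, r, p)
    # renormalise the NB pmf over {1, 2, ...} by dividing out 1 - P(0)
    return -(logpmf - np.log1p(-stats.nbinom.pmf(0, r, p))).sum()

fit = optimize.minimize(trunc_nb_nll, x0=[1.0, 0.5],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
```

Because the truncated part assigns no mass to zero, the two likelihoods separate and each part can be maximised on its own, which is what makes hurdle models convenient to fit.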

  6. A non-linear beta-binomial regression model for mapping EORTC QLQ-C30 to the EQ-5D-3L in lung cancer patients: a comparison with existing approaches.

    Science.gov (United States)

    Khan, Iftekhar; Morris, Stephen

    2014-11-12

    The performance of the Beta-Binomial (BB) model is compared with several existing models for mapping the EORTC QLQ-C30 (QLQ-C30) onto the EQ-5D-3L using data from lung cancer trials. Data from two separate non-small-cell lung cancer clinical trials (TOPICAL and SOCCAR) are used to develop and validate the BB model. Comparisons with Linear, TOBIT, Quantile, Quadratic and CLAD models are carried out. The mean prediction error, R², proportion predicted outside the valid range, clinical interpretation of coefficients, model fit and estimation of Quality-Adjusted Life Years (QALYs) are reported and compared. Monte Carlo simulation is also used. The Beta-Binomial regression model performed 'best' among all models. For the TOPICAL and SOCCAR trials, respectively, residual mean square error (RMSE) was 0.09 and 0.11; R² was 0.75 and 0.71; observed vs. predicted means were 0.612 vs. 0.608 and 0.750 vs. 0.749. Mean differences in QALYs (observed vs. predicted) were 0.051 vs. 0.053 and 0.164 vs. 0.162 for TOPICAL and SOCCAR, respectively. Models tested on independent data show simulated 95% confidence intervals from the BB model containing the observed mean more often (77% and 59% for TOPICAL and SOCCAR, respectively) compared to the other models. All algorithms over-predict at poorer health states, but the BB model was relatively better, particularly for the SOCCAR data. The BB model may offer superior predictive properties among the mapping algorithms considered and may be more useful when predicting EQ-5D-3L at poorer health states. We recommend the algorithm derived from the TOPICAL data due to better predictive properties and less uncertainty.
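The appeal of the beta-binomial for bounded utility-type scores is its extra-binomial variance. A minimal numeric illustration (parameter values hypothetical; assumes `scipy.stats.betabinom`, available in SciPy ≥ 1.4):

```python
from scipy import stats

# Beta-Binomial(n, a, b) vs. a plain binomial with the same mean:
# the BB's inflated variance is what lets it track overdispersed,
# bounded outcomes that a binomial cannot.
n, a, b = 30, 2.0, 5.0
bb = stats.betabinom(n, a, b)
bino = stats.binom(n, a / (a + b))     # binomial matched on the mean
print(bb.mean(), bino.mean())          # identical means
print(bb.var(), bino.var())            # BB variance larger by (a+b+n)/(a+b+1)
```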

  7. Overcoming the initial investment hurdle for advanced biofuels. An analysis of biofuel-related risks and their impact on project financing. Report of ELOBIO subtask 7

    International Nuclear Information System (INIS)

    Bole, T.; Londo, M.; Van Stralen, J.; Uslu, A.

    2010-04-01

    The ELOBIO research project aims to develop policies that will help achieve a higher share of biofuels in total transport fuel in a low-disturbing and sustainable way. Workpackage 7 of the ELOBIO project addresses the objective of providing a reliable estimate of the potential and costs of biofuels, given the application of low-disturbing policy measures. More specifically, we seek to evaluate the impact of these biofuel policy measures on the investment climate for second-generation technologies. To this end, we try to answer several sub-questions in the following logical sequence: (1) What are the different factors that contribute to investment risk in biofuels, and what are their relative contributions to overall biofuel project risk as perceived by finance providers? (2) How do these risks translate into cost of capital for different biofuel technologies? (3) How does cost of capital influence market penetration rates for the different technologies? (4) What is the best policy (or policy mix) to overcome the initial investment hurdle for advanced biofuels, thus lowering their cost of capital and achieving wider market deployment?

  8. Flare stars and Pascal distribution

    International Nuclear Information System (INIS)

    Muradian, R.

    1994-07-01

    Observed statistics of stellar flares are described by Pascal or Negative Binomial Distribution. The analogy with other classes of chaotic production mechanisms such as hadronic particle multiplicity distributions and photoelectron counts from thermal sources is noticed. (author). 12 refs
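The Pascal (negative binomial) distribution invoked here arises naturally as a Gamma-mixed Poisson, which is one standard way to picture such chaotic production mechanisms. A quick numeric check of that equivalence (parameter values illustrative):

```python
import numpy as np
from scipy import stats

# Negative binomial NB(r, p) as a Gamma-mixed Poisson: draw a Gamma rate,
# then a Poisson count, and compare the empirical pmf with scipy's nbinom.
r, p = 2.5, 0.4
rng = np.random.default_rng(1)
lam = rng.gamma(shape=r, scale=(1 - p) / p, size=200_000)
y = rng.poisson(lam)
emp = np.bincount(y, minlength=10)[:10] / y.size
th = stats.nbinom.pmf(np.arange(10), r, p)
print(np.abs(emp - th).max())        # agreement to Monte Carlo accuracy
```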

  9. Negative ion sources

    International Nuclear Information System (INIS)

    Ishikawa, Junzo; Takagi, Toshinori

    1983-01-01

    Negative ion sources have been originally developed at the request of tandem electrostatic accelerators, and hundreds of nA to several μA negative ion current has been obtained so far for various elements. Recently, the development of large current hydrogen negative ion sources has been demanded from the standpoint of the heating by neutral particle beam injection in nuclear fusion reactors. On the other hand, the physical properties of negative ions are interesting in the thin film formation using ions. Anyway, it is the present status that the mechanism of negative ion action has not been so fully investigated as positive ions because the history of negative ion sources is short. In this report, the many mechanisms about the generation of negative ions proposed so far are described about negative ion generating mechanism, negative ion source plasma, and negative ion generation on metal surfaces. As a result, negative ion sources are roughly divided into two schemes, plasma extraction and secondary ion extraction, and the former is further classified into the PIG ion source and its variation and Duoplasmatron and its variation; while the latter into reflecting and sputtering types. In the second half of the report, the practical negative ion sources of each scheme are described. If the mechanism of negative ion generation will be investigated more in detail and the development will be continued under the unified know-how as negative ion sources in future, the development of negative ion sources with which large current can be obtained for any element is expected. (Wakatsuki, Y.)

  10. Factores determinantes de la utilización de instrumentos públicos para la gestión del riesgo en la industria vitivinícola chilena: un modelo logit binomial

    Directory of Open Access Journals (Sweden)

    Germán Lobos

    2008-12-01

    Full Text Available The main objective of this research was to identify the determining factors of the use of public instruments to manage risk in the Chilean wine industry. A binomial logistic regression model was proposed. Based on a survey of 104 viticulture and winemaking companies, a database was constructed between January and October 2007. The model was fitted using maximum likelihood estimation. The variables that turned out to be statistically significant were: risk of the wine price, availability of external consultancy and number of permanent workers. From the Public Management point of view, the main conclusion suggests that the use of public instruments could be increased if viticulturists and winemakers had more external counseling.

  11. The interplay of parental monitoring and socioeconomic status in predicting minor delinquency between and within adolescents

    NARCIS (Netherlands)

    Rekker, Roderik; Keijsers, L.G.M.T.; Branje, Susan; Koot, Hans; Meeus, W.H.J.

    This six-wave multi-informant longitudinal study on Dutch adolescents (N = 824; age 12 18) examined the interplay of socioeconomic status with parental monitoring in predicting minor delinquency. Fixed-effects negative binomial regression analyses revealed that this interplay is different within

  12. Enhanced adherence of methicillin-resistant Staphylococcus pseudintermedius sequence type 71 to canine and human corneocytes

    DEFF Research Database (Denmark)

    Latronico, Francesca; Moodley, Arshnee; Nielsen, Søren Saxmose

    2014-01-01

    characterized with respect to genetic background and cell wall-anchored protein (CWAP) gene content. Seventy-seven strain-corneocyte combinations were tested using both exponential- and stationary-phase cultures. Negative binomial regression analysis of counts of bacterial cells adhering to corneocytes revealed...

  13. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a

  14. Performance evaluation of generalized M-modeled atmospheric optical communications links

    DEFF Research Database (Denmark)

    Lopez-Gonzalez, Francisco J.; Garrido-Balsellss, José María; Jurado-Navas, Antonio

    2016-01-01

    , the behavior of the atmospheric optical channel is treated as a superposition of a finite number of Generalized-K distributed sub-channels, controlled by a discrete Negative-Binomial distribution dependent on the turbulence parameters. Unlike other studies, here, the closed-form mathematical expressions...

  15. A comparison of various modelling approaches applied to Cholera ...

    African Journals Online (AJOL)

    The analyses are demonstrated on data collected from Beira, Mozambique. Dynamic regression was found to be the preferred forecasting method for this data set. Keywords:Cholera, modelling, signal processing, dynamic regression, negative binomial regression, wavelet analysis, cross-wavelet analysis. ORiON Vol.

  16. The interplay of parental monitoring and socioeconomic status in predicting minor delinquency between and within adolescents

    NARCIS (Netherlands)

    Rekker, Roderik; Keijsers, Loes; Branje, Susan; Koot, Hans M.; Meeus, Wim

    2017-01-01

    This six-wave multi-informant longitudinal study on Dutch adolescents (N = 824; age 12–18) examined the interplay of socioeconomic status with parental monitoring in predicting minor delinquency. Fixed-effects negative binomial regression analyses revealed that this interplay is different within

  17. Journal of Agriculture and Social Research (JASR) Vol. 11, No. 2 ...

    African Journals Online (AJOL)

    MINA-LUAREL

    It is reported that the k-value is influenced by plot size (P < 0.05) and depends on the ... negative binomial distribution for the determination of the cluster size of Pratylenchus penetrans an ..... Review of entomology 29:321-357. Taylor, L.R. ...

  18. Impact of negation salience and cognitive resources on negation during attitude formation.

    Science.gov (United States)

    Boucher, Kathryn L; Rydell, Robert J

    2012-10-01

    Because of the increased cognitive resources required to process negations, past research has shown that explicit attitude measures are more sensitive to negations than implicit attitude measures. The current work demonstrated that the differential impact of negations on implicit and explicit attitude measures was moderated by (a) the extent to which the negation was made salient and (b) the amount of cognitive resources available during attitude formation. When negations were less visually salient, explicit but not implicit attitude measures reflected the intended valence of the negations. When negations were more visually salient, both explicit and implicit attitude measures reflected the intended valence of the negations, but only when perceivers had ample cognitive resources during encoding. Competing models of negation processing, schema-plus-tag and fusion, were examined to determine how negation salience impacts the processing of negations.

  19. Finding the Right Distribution for Highly Skewed Zero-inflated Clinical Data

    Directory of Open Access Journals (Sweden)

    Resmi Gupta

    2013-03-01

    Full Text Available Discrete, highly skewed distributions with excess numbers of zeros often result in biased estimates and misleading inferences if the zeros are not properly addressed. A clinical example of children with electrophysiologic disorders in which many of the children are treated without surgery is provided. The purpose of the current study was to identify the optimal modeling strategy for highly skewed, zero-inflated data often observed in the clinical setting by: (a) simulating skewed, zero-inflated count data; (b) fitting the simulated data with Poisson, Negative Binomial, Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) models; and (c) applying the aforementioned models to actual, highly skewed, clinical data of children with an EP disorder. The ZIP model was observed to be the optimal model based on traditional fit statistics as well as estimates of bias, mean-squared error, and coverage.
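The record's comparison — fitting zero-inflated counts with both a plain Poisson and a ZIP model and judging by fit statistics — can be sketched directly. An illustrative simulation (parameter values hypothetical) in which AIC correctly prefers the ZIP:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
n, pi0, lam_true = 4000, 0.3, 2.5        # hypothetical ZIP parameters
y = rng.poisson(lam_true, n) * (rng.random(n) > pi0)

def zip_nll(params):
    pi, lam = params
    p_zero = pi + (1 - pi) * np.exp(-lam)              # P(Y = 0) under ZIP
    ll = np.where(y == 0, np.log(p_zero),
                  np.log(1 - pi) + stats.poisson.logpmf(y, lam))
    return -ll.sum()

def pois_nll(params):
    return -stats.poisson.logpmf(y, params[0]).sum()

zip_fit = optimize.minimize(zip_nll, [0.5, 1.0],
                            bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
poi_fit = optimize.minimize(pois_nll, [1.0], bounds=[(1e-6, None)])
aic_zip = 2 * 2 + 2 * zip_fit.fun
aic_poi = 2 * 1 + 2 * poi_fit.fun
```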

  20. e+e- hadronic multiplicity distributions

    International Nuclear Information System (INIS)

    Carruthers, P.; Shih, C.C.

    1986-01-01

    The 29 GeV multiplicity data have been analyzed for e+e- → hadrons using the partially coherent laser distribution (PCLD). The latter interpolates between the negative binomial and Poisson distributions as the ratio S/N of coherent/incoherent multiplicity varies from zero to infinity. The negative binomial gives an excellent fit for rather large values of the cell parameter k. Equally good fits (for full and partial rapidity range, and for the forward/backward 2-jet correlation) are obtained for the mostly coherent (almost Poissonian) PCLD with small values of k (equal to the number of jets). The reasons for the existence of this tradeoff are explained in detail. The existence of the resulting ambiguity is traced to the insensitivity of the probability distribution to phase information in the hadronic density matrix. The study of higher order correlations (intensity interferometry) among like-sign particles is recommended to resolve this question
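The interpolation the PCLD exploits rests on a standard limit: a negative binomial with mean μ and cell parameter k tends to a Poisson(μ) as k → ∞. A quick numeric check (values illustrative):

```python
import numpy as np
from scipy import stats

# NB with mean mu and cell parameter k, parameterised so the mean is fixed;
# as k grows the pmf converges to Poisson(mu).
mu, x = 4.0, np.arange(20)
pois = stats.poisson.pmf(x, mu)
diffs = [np.abs(stats.nbinom.pmf(x, k, k / (k + mu)) - pois).max()
         for k in (2, 20, 2000)]
print(diffs)   # shrinks toward zero as k grows
```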

  1. Negative mass

    International Nuclear Information System (INIS)

    Hammond, Richard T

    2015-01-01

    Some physical aspects of negative mass are examined. Several unusual properties, such as the ability of negative mass to penetrate any armor, are analysed. Other surprising effects include the bizarre system of negative mass chasing positive mass, naked singularities and the violation of cosmic censorship, wormholes, and quantum mechanical results as well. In addition, a brief look into the implications for strings is given. (paper)

  2. Positive Effects of Negative Publicity: When Negative Reviews Increase Sales

    OpenAIRE

    Jonah Berger; Alan T. Sorensen; Scott J. Rasmussen

    2010-01-01

    Can negative information about a product increase sales, and if so, when? Although popular wisdom suggests that "any publicity is good publicity," prior research has demonstrated only downsides to negative press. Negative reviews or word of mouth, for example, have been found to hurt product evaluation and sales. Using a combination of econometric analysis and experimental methods, we unify these perspectives to delineate contexts under which negative publicity about a product will have posit...

  3. Reversing a Negative Measurement in Process with Negative Events: A Haunted Negative Measurement and the Bifurcation of Time

    CERN Document Server

    Snyder, D M

    2003-01-01

    Reversing an ordinary measurement in process (a haunted measurement) is noted and the steps involved in reversing a negative measurement in process (a haunted negative measurement) are described. In order to discuss in a thorough manner reversing an ordinary measurement in process, one has to account for how reversing a negative measurement in process would work for the same experimental setup. The reason it is necessary to know how a negative measurement in process is reversed is because for a given experimental setup there is no physical distinction between reversing a negative measurement in process and reversing an ordinary measurement in process. In the absence of the reversal of a negative measurement in process in the same experimental setup that supports the reversal of an ordinary measurement in process, the possibility exists of which-way information concerning the negative measurement that would render theoretically implausible reversing an ordinary measurement in process. The steps in reversing a n...

  4. Statistical interpretation of the correlations between forward and backward hadrons at collider energies

    International Nuclear Information System (INIS)

    Carruthers, P.; Shih, C.C.

    1985-01-01

    Given a multiplicity distribution belonging to the class of probability distributions which are superpositions of Poisson distributions whose two components are independently (binomially) distributed, we derive joint and conditional probabilities for the two components. Specializing to the negative binomial case, we can explain the linearity and magnitude of slope and intercept of the forward-backward correlation in a way compatible with the KNO plot for the multiplicity data provided that the final particles are produced in clusters. Generalization to allow for coherent emission allows one to put limits on the amount of coherence, a result not known from high precision fits to multiplicity. (orig.)
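The linearity of the forward-backward correlation claimed above can be checked by simulation: draw a negative binomial total multiplicity, split it binomially into hemispheres, and regress the conditional backward mean on the forward count. A sketch with illustrative parameters (for this split, B | F = f is again negative binomial with shape r + f and failure probability q/2, so the slope should be (q/2)/(1 − q/2)):

```python
import numpy as np

rng = np.random.default_rng(2)
r, q = 4.0, 0.7                        # N ~ NB(r) with failure prob q
N = rng.negative_binomial(r, 1 - q, size=400_000)
F = rng.binomial(N, 0.5)               # each particle goes forward w.p. 1/2
B = N - F
fs = np.arange(9)
cond_mean = np.array([B[F == f].mean() for f in fs])
slope, intercept = np.polyfit(fs, cond_mean, 1)
print(slope, (q / 2) / (1 - q / 2))    # empirical vs. theoretical slope
```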

  5. The influence of the weather on recreational behaviour: a micro econometric approach

    NARCIS (Netherlands)

    Berkhout, P.H.G.; Brouwer, N.M.

    2005-01-01

    In this article the relationship between the weather and water-based recreational behavior is investigated at the microlevel. The relation is estimated by applying a zero-inflated negative binomial count model on a large dataset consisting of individually reported day-tripping behavior during a

  6. The El Niño Southern Oscillation index and wildfire prediction in British Columbia

    NARCIS (Netherlands)

    Xu, Zhen; Kooten, van G.C.

    2014-01-01

    This study investigates the potential to predict monthly wildfires and area burned in British Columbia's interior using El Niño Southern Oscillation (ENSO). The zero-inflated negative binomial (ZINB) and the generalized Pareto (GP) distributions are used, respectively, to account for uncertainty in

  7. Sur les estimateurs du maximum de vraisemblance dans les modèles ...

    African Journals Online (AJOL)

    Abstract. We are interested in the existence and uniqueness of maximum likelihood estimators of parameters in the two multiplicative regression models, with Poisson or negative binomial probability distributions. Following his work on the multiplicative Poisson model with two factors without repeated measures, Haberman ...

  8. Breeding sites of Culicoides midges in KwaZulu-Natal | Jenkins ...

    African Journals Online (AJOL)

    Catch numbers were correlated to site properties using the generalised linear modelling procedure on untransformed data with a negative binomial distribution and a log link function. Sites with increasing ground moisture, increasing incident radiation and increasing wetness duration were found to positively increase the ...

  9. Variation in rank abundance replicate samples and impact of clustering

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank

  10. Factors associated with cholera in Kenya, 2008-2013 | Cowman ...

    African Journals Online (AJOL)

    The data were analyzed using a zero-inflated negative binomial regression model. Results: Multivariate analysis indicated that the risk of cholera was associated with open defecation, use of unimproved water sources, poverty headcount ratio and the number of health facilities per 100,000 population (p < 0.05).

  11. Diel effects on bottom-trawl survey catch rates of shallow- and deep ...

    African Journals Online (AJOL)

    Fishing in depths shallower than 400 m outside daylight hours should therefore be avoided in order to reduce bias and ensure consistency in abundance estimates from surveys. Keywords: Benguela Current system, consistency of survey indices, efficiency of bottom-trawl surveys, negative binomial GAM, transect survey ...

  12. Racial/ethnic and immigrant differences in early childhood diet quality

    NARCIS (Netherlands)

    de Hoog, Marieke L. A.; Kleinman, Ken P.; Gillman, Matthew W.; Vrijkotte, Tanja G. M.; van Eijsden, Manon; Taveras, Elsie M.

    2014-01-01

    To assess racial/ethnic differences in the diet in young children and the explanatory role of maternal BMI, immigrant status and perception of child's weight. Among white, black and Hispanic 3-year-olds, we used negative binomial and linear regression to examine associations of race/ethnicity with

  13. A Binomial Modeling Approach for Upscaling Colloid Transport Under Unfavorable Attachment Conditions: Emergent Prediction of Nonmonotonic Retention Profiles

    Science.gov (United States)

    Hilpert, Markus; Johnson, William P.

    2018-01-01

    We used a recently developed simple mathematical network model to upscale pore-scale colloid transport information determined under unfavorable attachment conditions. Classical log-linear and nonmonotonic retention profiles, both well-reported under favorable and unfavorable attachment conditions, respectively, emerged from our upscaling. The primary attribute of the network is colloid transfer between bulk pore fluid, the near-surface fluid domain (NSFD), and attachment (treated as irreversible). The network model accounts for colloid transfer to the NSFD of downgradient grains and for reentrainment to bulk pore fluid via diffusion or via expulsion at rear flow stagnation zones (RFSZs). The model describes colloid transport by a sequence of random trials in a one-dimensional (1-D) network of Happel cells, which contain a grain and a pore. Using combinatorial analysis that capitalizes on the binomial coefficient, we derived from the pore-scale information the theoretical residence time distribution of colloids in the network. The transition from log-linear to nonmonotonic retention profiles occurs when the conditions underlying classical filtration theory are not fulfilled, i.e., when an NSFD colloid population is maintained. Then, nonmonotonic retention profiles result potentially both for attached and NSFD colloids. The concentration maxima shift downgradient depending on specific parameter choice. The concentration maxima were also shown to shift downgradient temporally (with continued elution) under conditions where attachment is negligible, explaining experimentally observed downgradient transport of retained concentration maxima of adhesion-deficient bacteria. For the case of zero reentrainment, we develop closed-form, analytical expressions for the shape, and the maximum of the colloid retention profile.
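The qualitative emergence of a nonmonotonic retention profile from bulk/NSFD transfer can be caricatured with a deterministic cell-by-cell mass balance. This is a drastically simplified sketch, not the authors' Happel-cell network model, and the transfer probabilities are invented for illustration:

```python
import numpy as np

# Per-cell transfer probabilities (hypothetical, chosen for illustration):
# bulk -> NSFD entry, NSFD -> irreversible attachment, NSFD -> bulk return.
p_in, p_att, p_ret = 0.30, 0.05, 0.05
L = 60
bulk, nsfd = 1.0, 0.0              # injected mass starts in bulk pore fluid
profile = np.zeros(L)              # attached mass per cell
for i in range(L):
    profile[i] = nsfd * p_att                       # attach from NSFD only
    stay = nsfd * (1 - p_att)
    nsfd, bulk = (stay * (1 - p_ret) + bulk * p_in,
                  bulk * (1 - p_in) + stay * p_ret)
peak = int(np.argmax(profile))
```

Because attachment proceeds only via the NSFD, which starts empty and fills over the first few cells, the attached-mass profile rises to an interior maximum before decaying, rather than declining log-linearly from the inlet.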

  14. e+e- hadronic multiplicity distributions

    International Nuclear Information System (INIS)

    Carruthers, P.; Shih, C.C.

    1987-01-01

    The authors have analyzed the 29 GeV multiplicity data for e+e- → hadrons using the partially coherent laser distribution (PCLD). The latter interpolates between the negative binomial and Poisson distributions as the ratio S/N of coherent/incoherent multiplicity varies from zero to infinity. The negative binomial gives an excellent fit for rather large values of the cell parameter κ. Equally good fits (for full and partial rapidity range, and for the forward/backward 2-jet correlation) are obtained for the mostly coherent (almost Poissonian) PCLD with small values of κ (equal to the number of jets). The reasons for the existence of this tradeoff are explained in detail. The existence of the resulting ambiguity is traced to the insensitivity of the probability distribution to phase information in the hadronic density matrix. They recommend the study of higher order correlations (intensity interferometry) among like-sign particles to resolve this question

  15. Predictors and outcomes of non-adherence in patients receiving maintenance hemodialysis.

    Science.gov (United States)

    Tohme, Fadi; Mor, Maria K; Pena-Polanco, Julio; Green, Jamie A; Fine, Michael J; Palevsky, Paul M; Weisbord, Steven D

    2017-08-01

    Predictors of and outcomes associated with non-adherent behavior among patients on chronic hemodialysis (HD) have been incompletely elucidated. We conducted a post hoc analysis of data from the SMILE trial to identify patient factors associated with non-adherence to dialysis-related treatments and the associations of non-adherence with clinical outcomes. We defined non-adherence as missed HD and abbreviated HD. We used negative binomial regression to model the associations of demographic and clinical factors with measures of non-adherence, and negative binomial and Cox regression to analyze the associations of non-adherence with hospitalizations and mortality, respectively. We followed 286 patients for up to 24 months. Factors independently associated with missing HD included Tuesday/Thursday/Saturday HD schedule [incident rate ratio (IRR) 1.85, p adherence to HD-related treatments, and independent associations of non-adherence with hospitalization and mortality. These findings should inform the development and implementation of interventions to improve adherence and reduce health resource utilization.

  16. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
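The tutorial works in R; the Poisson regression at the base of that model family can equally be written from scratch by maximising the log-likelihood directly. A minimal Python sketch on simulated data (coefficients hypothetical):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
beta_true = np.array([0.5, 0.8])            # intercept, slope (illustrative)
mu = np.exp(beta_true[0] + beta_true[1] * x)
y = rng.poisson(mu)
X = np.column_stack([np.ones(n), x])

def neg_loglik(beta):
    eta = X @ beta
    # Poisson log-likelihood up to an additive constant (log y! dropped)
    return -(y * eta - np.exp(eta)).sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
```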

  17. Feasibility analysis in the expansion proposal of the nuclear power plant Laguna Verde: application of real options, binomial model

    International Nuclear Information System (INIS)

    Hernandez I, S.; Ortiz C, E.; Chavez M, C.

    2011-11-01

    At the present time it is unquestionable that nuclear electric power is a topic of vital importance, not only because it eliminates dependence on hydrocarbons and is friendly to the environment, but also because it is a safe and reliable energy source and represents a viable alternative for meeting the growing demand for electricity in Mexico. Against this background, several scenarios were proposed to raise the nuclear share of electric generating capacity with varying degrees of participation. One of the scenarios contemplated is the expansion project of the Laguna Verde nuclear power plant through the addition of a third reactor, intended to serve as the trigger for an integral program proposing the installation of further nuclear reactors in the country. Faced with this possible scenario, the Federal Commission of Electricity, as the body responsible for supplying energy to the population, should have tools that offer the flexibility to adapt to changes arising over the course of the project and that also place a value on future risk. The methodology known as Real Options, in its binomial-model form, was proposed as an evaluation tool for quantifying the value of the expansion proposal, demonstrating the feasibility of the project through periodic visualization of its evolution, with the objective of supplying a financial analysis that serves as a basis and justification in view of the evident rise of nuclear energy expected in coming years. (Author)
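Binomial-model real-options valuations of this kind build on the Cox-Ross-Rubinstein lattice. A minimal sketch for a European claim (all numbers hypothetical, not figures for the plant; real-options applications replace the payoff with the project's expansion payoff and typically allow early exercise):

```python
import math

def crr_option_value(S0, K, r, sigma, T, steps, kind="call"):
    """Cox-Ross-Rubinstein binomial lattice for a European option."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at each of the steps+1 end nodes
    values = []
    for j in range(steps + 1):
        S = S0 * u**j * d**(steps - j)
        values.append(max(S - K, 0.0) if kind == "call" else max(K - S, 0.0))
    # backward induction through the lattice
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

call = crr_option_value(100, 100, 0.05, 0.2, 1.0, 500, "call")
put = crr_option_value(100, 100, 0.05, 0.2, 1.0, 500, "put")
```

With enough steps the lattice value converges to the continuous-time (Black-Scholes) price, and put-call parity holds exactly on the tree.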

  18. Negative liability

    NARCIS (Netherlands)

    Dari-Mattiacci, G.

    2009-01-01

    Negative and positive externalities pose symmetrical problems to social welfare. The law internalizes negative externalities by providing general tort liability rules. According to such rules, those who cause harm to others should pay compensation. In theory, in the presence of positive

  19. Estimation of heterogeneity in malaria transmission by stochastic modelling of apparent deviations from mass action kinetics

    Directory of Open Access Journals (Sweden)

    Smith Thomas A

    2008-01-01

    Full Text Available Abstract Background Quantifying heterogeneity in malaria transmission is a prerequisite for accurate predictive mathematical models, but the variance in field measurements of exposure overestimates true micro-heterogeneity because it is inflated to an uncertain extent by sampling variation. Descriptions of field data also suggest that the rate of Plasmodium falciparum infection is not proportional to the intensity of challenge by infectious vectors. This appears to violate the principle of mass action that is implied by malaria biology. Micro-heterogeneity may be the reason for this anomaly. It is proposed that the level of micro-heterogeneity can be estimated from statistical models that estimate the amount of variation in transmission most compatible with a mass-action model for the relationship of infection to exposure. Methods The relationship between the entomological inoculation rate (EIR) for falciparum malaria and infection risk was reanalysed using published data for cohorts of children in Saradidi (western Kenya). Infection risk was treated as binomially distributed, and measurement-error (Poisson and negative binomial) models were considered for the EIR. Models were fitted using Bayesian Markov chain Monte Carlo algorithms and model fit compared for models that assume either mass-action kinetics, facilitation, competition or saturation of the infection process with increasing EIR. Results The proportion of inocula that resulted in infection in Saradidi was inversely related to the measured intensity of challenge. Models of facilitation showed, therefore, a poor fit to the data. When sampling error in the EIR was neglected, either competition or saturation needed to be incorporated in the model in order to give a good fit. Negative binomial models for the error in exposure could achieve a comparable fit while incorporating the more parsimonious and biologically plausible mass-action assumption. Models that assume negative binomial micro
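
    The paper's central point, that gamma-distributed (negative binomial) heterogeneity in exposure can mimic saturation under mass action, can be illustrated numerically. The parameter values below are illustrative, not those fitted to the Saradidi data:

```python
import math
import random

random.seed(1)

b = 0.05          # per-inoculation infection probability (illustrative)
mean_eir = 10.0   # mean entomological inoculation rate (illustrative)

def risk(eir):
    """Mass-action infection risk over a period with total exposure eir."""
    return 1.0 - math.exp(-b * eir)

# Homogeneous exposure: everyone receives exactly the mean EIR.
homogeneous = risk(mean_eir)

# Heterogeneous exposure: gamma-distributed EIR with the same mean
# (a gamma-mixed Poisson count is exactly the negative binomial model).
shape = 0.5
samples = [risk(random.gammavariate(shape, mean_eir / shape))
           for _ in range(100_000)]
heterogeneous = sum(samples) / len(samples)

# Because risk() is concave, heterogeneity lowers the average risk, which
# looks like saturation if the variation in exposure is ignored.
print(round(homogeneous, 3), round(heterogeneous, 3))
```

    The average risk under heterogeneous exposure is strictly below the risk at the mean exposure, so an analysis that pools individuals sees an apparent flattening of the infection-exposure curve even though mass action holds for every individual.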

  20. African Safety Promotion: A Journal of Injury and Violence ...

    African Journals Online (AJOL)

    Application of Negative Binomial Regression for Assessing Public Awareness of the Health Effects of Nicotine and Cigarettes. T Zewotir, S Ramroop. http://dx.doi.org/10.4314/asp.v7i1.54600 ...

  1. General Strain Theory as a Basis for the Design of School Interventions

    Science.gov (United States)

    Moon, Byongook; Morash, Merry

    2013-01-01

    The research described in this article applies general strain theory to identify possible points of intervention for reducing delinquency of students in two middle schools. Data were collected from 296 youths, and separate negative binomial regression analyses were used to identify predictors of violent, property, and status delinquency. Emotional…

  2. The Effectiveness of an Electronic Security Management System in a Privately Owned Apartment Complex

    Science.gov (United States)

    Greenberg, David F.; Roush, Jeffrey B.

    2009-01-01

    Poisson and negative binomial regression methods are used to analyze the monthly time series data to determine the effects of introducing an integrated security management system including closed-circuit television (CCTV), door alarm monitoring, proximity card access, and emergency call boxes to a large privately-owned complex of apartment…

  3. Universal description of inelastic and non(single)-diffractive multiplicity distributions in pp collisions at 250, 360 and 800 GeV/c

    International Nuclear Information System (INIS)

    Krasznovszky, S.; Wagner, I.

    1987-06-01

    A distribution function is proposed for multiplicity in accordance with the stochastic number evolution. This function gives a universal description of inelastic and nondiffractive multiplicity distributions in pp collisions at 250, 360 and 800 GeV/c. The negative binomial distribution fails in the description of inelastic data. (author)
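
    For reference, the negative binomial multiplicity distribution commonly fitted to such data, parameterized by the mean multiplicity and a shape parameter k, can be evaluated directly; the parameter values below are illustrative only:

```python
import math

def nb_pmf(n, nbar, k):
    """Negative binomial multiplicity distribution P(n) with mean nbar and
    shape parameter k (variance nbar + nbar**2 / k), the form commonly
    fitted to hadronic multiplicity data.  Evaluated in log space so it
    stays finite at large n."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(nbar / (nbar + k))
             + k * math.log(k / (nbar + k)))
    return math.exp(log_p)

# Illustrative parameters only (not fitted values from the paper):
probs = [nb_pmf(n, nbar=8.0, k=3.5) for n in range(200)]
total = sum(probs)
mean = sum(n * p for n, p in enumerate(probs))
var = sum((n - mean) ** 2 * p for n, p in enumerate(probs))
print(round(total, 6), round(mean, 3), round(var, 3))  # ~1, ~8, ~26.286
```

    Small k gives broad, strongly overdispersed multiplicity distributions; in the limit k goes to infinity, the distribution reduces to a Poisson.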

  4. School Violence: The Role of Parental and Community Involvement

    Science.gov (United States)

    Lesneskie, Eric; Block, Steven

    2017-01-01

    This study utilizes the School Survey on Crime and Safety to identify variables that predict lower levels of violence from four domains: school security, school climate, parental involvement, and community involvement. Negative binomial regression was performed and the findings indicate that statistically significant results come from all four…

  5. What Factors Underlie Vertical and Horizontal Export Diversification

    African Journals Online (AJOL)

    Ritsumeikan Asia Pacific University

    A zero-inflated negative binomial model was used to explain farmer frequency of ... small coffee farmers to deal with income risk ex ante or address adverse income ... of the self-reported data is subject to the usual caveats that apply when.

  6. Negative ... concord?

    NARCIS (Netherlands)

    Giannakidou, A

    The main claim of this paper is that a general theory of negative concord (NC) should allow for the possibility of NC involving scoping of a universal quantifier above negation. I propose that Greek NC instantiates this option. Greek n-words will be analyzed as polarity sensitive universal

  7. Polemic and Descriptive Negations

    DEFF Research Database (Denmark)

    Horslund, Camilla Søballe

    2011-01-01

    to semantics and pragmatics, negations can be used in three different ways, which gives rise to a typology of three different types of negations: 1) the descriptive negation, 2) the polemic negation, and 3) the meta-linguistic negation (Nølke 1999, 4). This typology illuminates the fact that the negation ... common in certain social contexts or genres, while polemic negations are more likely to come up in other genres and social settings. Previous studies have shown a relation between articulatory prominence and register, which may further inform the analysis. Hence, the paper investigates how articulatory prominence and register may either work in concert or oppose each other with respect to the cues they provide for the interpretation.

  8. Determinantes de la utilización de servicios de salud en Costa Rica Determinants of health care utilization in Costa Rica

    Directory of Open Access Journals (Sweden)

    Melvin Morera Salas

    2010-10-01

    Full Text Available Objective: To analyze the determinants of health care utilization (visits to the doctor) in Costa Rica using an econometric approach. Methods: Data were drawn from the National Survey of Health for Costa Rica 2006. We modeled the Grossman approach to the demand for health services by using a standard negative binomial regression, and used a hurdle model for the principal-agent specification. Results: The factors determining health care utilization were level of education, self-assessed health, number of declared chronic diseases and geographic region of residence. Conclusion: The number of outpatient visits to the doctor depends on the proxies for medical need, but we found no multivariate association between the use of outpatient visits and income or insurance status. This result suggests that

  9. Negative Ion Density Fronts

    International Nuclear Information System (INIS)

    Igor Kaganovich

    2000-01-01

    Negative ions tend to stratify in electronegative plasmas with hot electrons (electron temperature Te much larger than ion temperature Ti). The boundary separating a plasma containing negative ions from a plasma without negative ions is usually thin, so that the negative ion density falls rapidly to zero, forming a negative ion density front. We review theoretical, experimental and numerical results on the spatio-temporal evolution of negative ion density fronts during plasma ignition, the steady state, and extinction (afterglow). During plasma ignition, negative ion fronts result from the breaking of smooth plasma density profiles during nonlinear convection. In a steady-state plasma, the fronts are boundary layers in which ion density profiles steepen, also due to nonlinear convection. During plasma extinction, however, the ion fronts are of a completely different nature. Negative ions diffuse freely in the plasma core (no convection), whereas the negative ion front propagates towards the chamber walls with a nearly constant velocity. The concept of fronts turns out to be very effective in the analysis of plasma density profile evolution in strongly non-isothermal plasmas.

  10. Negative Leadership

    Science.gov (United States)

    2013-03-01

    Negative Leadership, by Colonel David M. Oberlander, United States Army, United States Army War College. Project adviser: Dr. Richard C. Bullis, Department of Command, Leadership, and Management.

  11. Author Details

    African Journals Online (AJOL)

    Zewotir, T. Vol 5, No 1 (2007) - Articles Costing national road accidents with partially complete national data: the case of Lesotho Abstract · Vol 7, No 1 (2009) - Articles Application of Negative Binomial Regression for Assessing Public Awareness of the Health Effects of Nicotine and Cigarettes Abstract PDF. ISSN: 1728- ...

  12. Movements of adult Culicoides midges around stables in KwaZulu ...

    African Journals Online (AJOL)

    The catches were identified to species level and regression analysis was performed on untransformed data which followed a negative binomial distribution with a log link function. Midges were found to frequent dung heaps and the interior of stable blocks significantly more than any other site. This occurs most markedly ...

  13. Stochastic evolutions and hadronization of highly excited hadronic matter

    International Nuclear Information System (INIS)

    Carruthers, P.

    1984-01-01

    Stochastic ingredients of high energy hadronic collisions are analyzed, with emphasis on multiplicity distributions. The conceptual simplicity of the k-cell negative binomial distribution is related to the evolution of probability distributions via the Fokker-Planck and related equations. The connection to underlying field theory ideas is sketched. 17 references

  14. Statistical considerations in NRDA studies

    International Nuclear Information System (INIS)

    Harner, E.G.; Parker, K.R.; Skalski, J.R.

    1993-01-01

    Biological, chemical, and toxicological variables are usually modeled with lognormal, Poisson, negative binomial, or binomial error distributions. Species counts and densities often have frequent zeros and overdispersion. Chemical concentrations can have frequent non-detects and a small proportion of high values. The feasibility of making adjustments to these response variables, such as zero-inflated models, is discussed. Toxicity measurements are usually modeled with the binomial distribution. A strategy for determining the most appropriate distribution is presented. Model-based methods, using concomitant variables and interactions, enhance assessment of impacts. Concomitant variable models reduce variability and also reduce bias by adjusting means to a common basis. Variable selection strategies are given for determining the most appropriate set of concomitant variables. Multi-year generalized linear models test impact-by-time interactions, possibly after adjusting for time-dependent concomitant variables. Communities are analyzed to make inferences about overall biological impact and recovery and require non-normal multivariate techniques. Partial canonical correspondence analysis is an appropriate community model for ordinating spatial and temporal shifts due to impact. The Exxon Valdez is used as a case study

  15. Binomial outcomes in dataset with some clusters of size two: can the dependence of twins be accounted for? A simulation study comparing the reliability of statistical methods based on a dataset of preterm infants.

    Science.gov (United States)

    Sauzet, Odile; Peacock, Janet L

    2017-07-20

    The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
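
    Why a shared pair-level effect (a random intercept) induces the sibling dependence discussed above can be sketched by simulating twin outcomes; all parameter values are illustrative:

```python
import math
import random

random.seed(7)

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

beta0, sigma_u = -1.0, 1.5   # fixed intercept and random-intercept SD (illustrative)

pairs = []
for _ in range(50_000):
    u = random.gauss(0.0, sigma_u)       # intercept shared by both twins
    p = inv_logit(beta0 + u)             # pair-specific outcome probability
    pairs.append((int(random.random() < p), int(random.random() < p)))

y1 = [a for a, _ in pairs]
y2 = [b for _, b in pairs]
m1 = sum(y1) / len(y1)
m2 = sum(y2) / len(y2)
cov = sum((a - m1) * (b - m2) for a, b in pairs) / len(pairs)
corr = cov / math.sqrt(m1 * (1 - m1) * m2 * (1 - m2))
print(round(corr, 3))   # clearly positive: twins are not independent
```

    Conditional on the pair, the two outcomes are independent, yet marginally they are correlated; this is the dependence that ordinary logistic regression ignores and that mixed models or generalised estimating equations try to recover.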

  16. Binomial outcomes in dataset with some clusters of size two: can the dependence of twins be accounted for? A simulation study comparing the reliability of statistical methods based on a dataset of preterm infants

    Directory of Open Access Journals (Sweden)

    Odile Sauzet

    2017-07-01

    Full Text Available Abstract Background The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Methods Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. Results The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. Conclusions This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.

  17. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    Science.gov (United States)

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
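
    The two concepts the authors highlight, Jensen's inequality and the delta method, can both be demonstrated numerically; the values below are illustrative and use only the standard library:

```python
import math
import random

random.seed(3)

# Jensen's inequality: for a convex inverse link such as exp,
# E[exp(X)] >= exp(E[X]), so back-transforming a mean computed on the
# link scale understates the mean on the response scale.
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
lhs = sum(math.exp(x) for x in xs) / len(xs)   # E[exp(X)], theory: e^0.5
rhs = math.exp(sum(xs) / len(xs))              # exp(E[X]), theory: 1
print(lhs > rhs)  # True

# Delta method: var(log Y) is approximately var(Y) / E[Y]**2 when the
# coefficient of variation of Y is small.
mu, sd = 50.0, 2.0
ys = [random.gauss(mu, sd) for _ in range(200_000)]
my = sum(ys) / len(ys)
vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
logs = [math.log(y) for y in ys]
ml = sum(logs) / len(logs)
vl = sum((x - ml) ** 2 for x in logs) / (len(logs) - 1)
print(round(vl, 5), round(vy / my ** 2, 5))  # both close to (sd/mu)**2 = 0.0016
```

    The same delta-method step is what lets a link-scale variance component be re-expressed as an observation-scale variance when computing R² or an ICC for a non-Gaussian GLMM.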

  18. Author Details

    African Journals Online (AJOL)

    Ramroop, S. Vol 7, No 1 (2009) - Articles Application of Negative Binomial Regression for Assessing Public Awareness of the Health Effects of Nicotine and Cigarettes Abstract PDF. ISSN: 1728-774X. AJOL African Journals Online. HOW TO USE AJOL... for Researchers · for Librarians · for Authors · FAQ's · More about AJOL ...

  19. The Influence of Television Advertisements on Promoting Calls to Telephone Quitlines

    Science.gov (United States)

    Farrelly, Matthew; Mann, Nathan; Watson, Kimberly; Pechacek, Terry

    2013-01-01

    The aim of the study was to assess the relative effectiveness of cessation, secondhand smoke and other tobacco control television advertisements in promoting quitlines in nine states from 2002 through 2005. Quarterly, the number of individuals who used quitlines per 10 000 adult smokers in a media market are measured. Negative binomial regression…

  20. In a hurdle race to the energy transition. From transformations, reforms and innovations; Im Huerdenlauf zur Energiewende. Von Transformationen, Reformen und Innovationen

    Energy Technology Data Exchange (ETDEWEB)

    Brunnengraeber, Achim; Di Nucci, Maria Rosaria (eds.) [Freie Univ. Berlin (Germany)

    2014-07-01

    The term "Energiewende" (energy transition) is not translated as it travels around the world. It points to the urgency of converting the energy supply for electricity, heat and mobility to renewable energies. Faster than many expected, the energy transition has reached a pace of expansion, especially in the electricity sector, that few had anticipated. However, it is not a voluntary measure but is forced by the crisis-prone character of unsustainable modes of production and ways of life. Nor is it a foregone conclusion. The many new initiatives, measures and programmes are in competition with an old, fossil and nuclear energy system. This book deals with the hurdles that have already been cleared in this race, with the pace of expansion and with innovation, as well as with the necessary reforms and the manifold challenges of the energy transition.