WorldWideScience

Sample records for binomial mixture models

  1. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
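
    The paper's estimator additionally handles zero-truncation (gene families absent from every sequenced genome are never observed), which is what drives the capture-recapture pan-genome estimate. As a minimal sketch of the underlying machinery, the snippet below fits a plain two-component binomial mixture to simulated gene-family detection counts by EM; the genome count, component weights and detection probabilities are invented for illustration.

      import numpy as np
      from scipy.stats import binom

      # Toy data: for each gene family, the number of genomes (out of G) it is
      # detected in; a "core-like" and a "rare" component (invented values).
      rng = np.random.default_rng(1)
      G = 16
      components = [(0.6, 0.95), (0.4, 0.15)]   # (weight, detection probability)
      k = np.concatenate([rng.binomial(G, p, int(1000 * w)) for w, p in components])

      # EM for a two-component binomial mixture.
      w, p = np.full(2, 0.5), np.array([0.8, 0.3])
      for _ in range(200):
          resp = w * binom.pmf(k[:, None], G, p)      # E-step: responsibilities
          resp /= resp.sum(axis=1, keepdims=True)
          w = resp.mean(axis=0)                       # M-step: weights and probabilities
          p = (resp * k[:, None]).sum(axis=0) / (resp.sum(axis=0) * G)
      print("weights:", w.round(3), "detection probabilities:", p.round(3))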

  2. A Binomial Mixture Model for Classification Performance: A Commentary on Waxman, Chambers, Yntema, and Gelman (1989).

    Science.gov (United States)

    Thomas, Hoben

    1989-01-01

    Individual differences in children's performance on a classification task are modeled by a two component binomial mixture distribution. The model accounts for data well, with variance accounted for ranging from 87 to 95 percent. (RJC)

  3. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  4. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders (Plethodon serratus). We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e., availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e., probability of capture, given that an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and...
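
    The authors' model adds an availability (temporary-emigration) layer on top of the standard binomial N-mixture model. As a baseline reference, here is a minimal sketch of the standard N-mixture likelihood (Poisson abundance, constant detection), marginalizing the latent abundance at each site up to a truncation bound K; the site count, lambda and p below are invented.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import binom, poisson

      rng = np.random.default_rng(7)
      R, J, lam_true, p_true = 40, 4, 6.0, 0.4
      N = rng.poisson(lam_true, R)                     # latent abundance per site
      y = rng.binomial(N[:, None], p_true, (R, J))     # repeated counts per site

      def negloglik(theta, K=80):
          lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
          Ns = np.arange(K + 1)
          prior = poisson.pmf(Ns, lam)
          ll = 0.0
          for i in range(R):                           # marginalize latent N at each site
              like = prior * np.prod(binom.pmf(y[i][:, None], Ns, p), axis=0)
              ll += np.log(like.sum())
          return -ll

      fit = minimize(negloglik, x0=[np.log(3.0), 0.0], method="Nelder-Mead")
      print("lambda_hat:", np.exp(fit.x[0]), "p_hat:", 1 / (1 + np.exp(-fit.x[1])))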

  5. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation.
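
    As an illustration of the connection, the following simulates the standard binomial AR(1) process: each of n occupied/empty units stays occupied with probability α or is (re)colonized with probability β, so that X_t is Bin(n, π) at stationarity with lag-1 autocorrelation ρ = α − β. The parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n, pi, rho, T = 20, 0.3, 0.5, 10_000
      beta = pi * (1 - rho)            # colonization probability implied by (pi, rho)
      alpha = beta + rho               # survival probability

      x = np.empty(T, dtype=int)
      x[0] = rng.binomial(n, pi)
      for t in range(1, T):
          survivors = rng.binomial(x[t - 1], alpha)      # occupied units staying occupied
          colonizers = rng.binomial(n - x[t - 1], beta)  # empty units being colonized
          x[t] = survivors + colonizers

      print("mean:", x.mean(), "(expected", n * pi, ")")
      print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1], "(expected", rho, ")")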

  6. Ecological effects of the invasive giant madagascar day gecko on endemic mauritian geckos: applications of binomial-mixture and species distribution models.

    Science.gov (United States)

    Buckland, Steeves; Cole, Nik C; Aguirre-Gutiérrez, Jesús; Gallagher, Laura E; Henshaw, Sion M; Besnard, Aurélien; Tucker, Rachel M; Bachraz, Vishnu; Ruhomaun, Kevin; Harris, Stephen

    2014-01-01

    The invasion of the giant Madagascar day gecko Phelsuma grandis has increased the threats to the four endemic Mauritian day geckos (Phelsuma spp.) that have survived on mainland Mauritius. We had two main aims: (i) to predict the spatial distribution and overlap of P. grandis and the endemic geckos at a landscape level; and (ii) to investigate the effects of P. grandis on the abundance and risks of extinction of the endemic geckos at a local scale. An ensemble forecasting approach was used to predict the spatial distribution and overlap of P. grandis and the endemic geckos. We used hierarchical binomial mixture models and repeated visual estimate surveys to calculate the abundance of the endemic geckos in sites with and without P. grandis. The predicted range of each species varied from 85 km² to 376 km². Sixty percent of the predicted range of P. grandis overlapped with the combined predicted ranges of the four endemic geckos; 15% of the combined predicted ranges of the four endemic geckos overlapped with P. grandis. Levins' niche breadth varied from 0.140 to 0.652 between P. grandis and the four endemic geckos. The abundance of endemic geckos was 89% lower in sites with P. grandis compared to sites without P. grandis, and the endemic geckos had been extirpated at four of ten sites we surveyed with P. grandis. Species Distribution Modelling, together with the breadth metrics, predicted that P. grandis can partly share the equivalent niche with endemic species and survive in a range of environmental conditions. We provide strong evidence that smaller endemic geckos are unlikely to survive in sympatry with P. grandis. This is a cause of concern in both Mauritius and other countries with endemic species of Phelsuma.

  7. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution.

    Science.gov (United States)

    Joe, Harry; Zhu, Rong

    2005-04-01

    We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences, we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better fitting distribution for modelling count data with long right tails. Through a real example of count data with large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions as well as their zero-inflated distributions can be discriminated.
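
    A sketch of the kind of comparison described: the Consul generalized Poisson pmf is P(X=k) = θ(θ+ηk)^(k−1) e^(−θ−ηk)/k!, with mean θ/(1−η) and variance θ/(1−η)³; matching the first two moments of a negative binomial gives r = μ²/(σ²−μ) and p = r/(r+μ). The θ and η values below are invented.

      import numpy as np
      from scipy.special import gammaln
      from scipy.stats import nbinom

      def gp_pmf(k, theta, eta):
          # Consul generalized Poisson pmf, computed on the log scale.
          k = np.asarray(k, dtype=float)
          return np.exp(np.log(theta) + (k - 1) * np.log(theta + eta * k)
                        - theta - eta * k - gammaln(k + 1))

      theta, eta = 3.0, 0.4
      mu = theta / (1 - eta)                 # common mean
      var = theta / (1 - eta) ** 3           # common variance
      r = mu**2 / (var - mu)                 # matched negative binomial parameters
      p = r / (r + mu)

      k = np.arange(20)
      for ki, g, nb in zip(k, gp_pmf(k, theta, eta), nbinom.pmf(k, r, p)):
          print(f"k={ki:2d}  GP={g:.4f}  NB={nb:.4f}")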

  8. A new bivariate negative binomial regression model

    Science.gov (United States)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicate that BNB-1 regression fits better than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.

  9. A Binomial Integer-Valued ARCH Model.

    Science.gov (United States)

    Ristić, Miroslav M; Weiß, Christian H; Janjić, Ana D

    2016-11-01

    We present an integer-valued ARCH model which can be used for modeling time series of counts with under-, equi-, or overdispersion. The introduced model has a conditional binomial distribution, and it is shown to be strictly stationary and ergodic. The unknown parameters are estimated by three methods: conditional maximum likelihood, conditional least squares and maximum likelihood type penalty function estimation. The asymptotic distributions of the estimators are derived. A real application of the novel model to epidemic surveillance is briefly discussed. Finally, a generalization of the introduced model is considered by introducing an integer-valued GARCH model.
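
    A minimal simulation of a conditionally binomial ARCH-type recursion, with the conditional success probability driven linearly by the previous count (one plausible specification; the paper's exact recursion and its GARCH extension may differ). Parameters are invented; a0 + a1 ≤ 1 keeps the probability in (0,1).

      import numpy as np

      rng = np.random.default_rng(42)
      n, a0, a1, T = 30, 0.2, 0.5, 5_000
      x = np.empty(T, dtype=int)
      x[0] = rng.binomial(n, a0 / (1 - a1))       # start near the stationary mean rate
      for t in range(1, T):
          pi_t = a0 + a1 * x[t - 1] / n           # conditional success probability
          x[t] = rng.binomial(n, pi_t)            # X_t | past ~ Bin(n, pi_t)

      print("mean rate:", (x / n).mean(), "(expected", a0 / (1 - a1), ")")
      print("sample mean and variance:", x.mean(), x.var())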

  10. Penggunaan Model Binomial Pada Penentuan Harga Opsi Saham Karyawan [The Use of the Binomial Model in Valuing Employee Stock Options]

    Directory of Open Access Journals (Sweden)

    Dara Puspita Anggraeni

    2015-11-01

    Full Text Available Binomial Model for Valuing Employee Stock Options. Employee stock options (ESO) differ from standard exchange-traded options. A valuation model for employee stock options must capture three main differences: the vesting period, the exit rate, and non-transferability. In this thesis, a model for valuing employee stock options is discussed and implemented with a generalized binomial model.
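
    A sketch of ESO valuation on a binomial tree in the spirit of the Hull-White model (the thesis's generalized binomial model may differ): before vesting the option is forfeited if the employee leaves; after vesting, leaving forces exercise, and the employee exercises voluntarily once the stock reaches a multiple M of the strike. All inputs are invented.

      import numpy as np

      S0, K, r, sigma, T, steps = 100.0, 100.0, 0.05, 0.3, 10.0, 120
      vest, exit_rate, M = 3.0, 0.04, 2.0    # vesting years, annual exit rate, exercise multiple

      dt = T / steps
      u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
      q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
      e = 1 - np.exp(-exit_rate * dt)        # per-step probability the employee leaves
      disc = np.exp(-r * dt)

      j = np.arange(steps + 1)
      V = np.maximum(S0 * u**j * d**(steps - j) - K, 0.0)   # terminal payoff
      for step in range(steps - 1, -1, -1):
          j = np.arange(step + 1)
          S = S0 * u**j * d**(step - j)
          cont = disc * (q * V[1:step + 2] + (1 - q) * V[:step + 1])
          if step * dt >= vest:
              ex = np.maximum(S - K, 0.0)
              V = np.where(S >= M * K, ex,              # voluntary early exercise
                           e * ex + (1 - e) * cont)     # forced exercise if leaving
          else:
              V = (1 - e) * cont                        # forfeited if leaving before vesting
      print(f"ESO value: {V[0]:.2f}")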

  11. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2015-07-01

    Full Text Available Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial...
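
    The Beta-Binomial data-generating process the simulations refer to can be reproduced in a few lines: each observation gets its own success probability drawn from a Beta(a, b), inflating the variance by a factor 1 + (n−1)ρ with ρ = 1/(a+b+1). Values below are invented.

      import numpy as np
      from scipy.stats import betabinom

      rng = np.random.default_rng(3)
      n, a, b, N = 10, 2.0, 3.0, 50_000
      p_i = rng.beta(a, b, N)                  # observation-level probabilities
      y = rng.binomial(n, p_i)                 # overdispersed Binomial counts

      p_bar = a / (a + b)
      print("sample variance:        ", y.var())
      print("plain binomial variance:", n * p_bar * (1 - p_bar))
      print("beta-binomial variance: ", betabinom.var(n, a, b))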

  12. Bayesian Analysis for Binomial Models with Generalized Beta Prior Distributions.

    Science.gov (United States)

    Chen, James J.; Novick, Melvin R.

    1984-01-01

    The Libby-Novick class of three-parameter generalized beta distributions is shown to provide a rich class of prior distributions for the binomial model that removes some restrictions of the standard beta class. A numerical example indicates the desirability of using these wider classes of densities for binomial models. (Author/BW)

  13. Fitting Additive Binomial Regression Models with the R Package blm

    Directory of Open Access Journals (Sweden)

    Stephanie Kovalchik

    2013-09-01

    Full Text Available The R package blm provides functions for fitting a family of additive regression models to binary data. The included models are the binomial linear model, in which all covariates have additive effects, and the linear-expit (lexpit) model, which allows some covariates to have additive effects and other covariates to have logistic effects. Additive binomial regression is a model of event probability, and the coefficients of linear terms estimate covariate-adjusted risk differences. Thus, in contrast to logistic regression, additive binomial regression puts the focus on absolute risk and risk differences. In this paper, we give an overview of the methodology we have developed to fit the binomial linear and lexpit models to binary outcomes from cohort and population-based case-control studies. We illustrate the blm package's methods for additive model estimation, diagnostics, and inference with risk association analyses of a bladder cancer nested case-control study in the NIH-AARP Diet and Health Study.

  14. QUANTUM THEORY FOR THE BINOMIAL MODEL IN FINANCE THEORY

    Institute of Scientific and Technical Information of China (English)

    CHEN Zeqian

    2004-01-01

    In this paper, a quantum model for the binomial market in finance is proposed. We show that its risk-neutral world exhibits an intriguing structure as a disk in the unit ball of R³, whose radius is a function of the risk-free interest rate with two thresholds which prevent arbitrage opportunities from this quantum market. Furthermore, from the quantum mechanical point of view we re-deduce the Cox-Ross-Rubinstein binomial option pricing formula by considering Maxwell-Boltzmann statistics of the system of N distinguishable particles.

  15. Information-estimation relationships over binomial, negative binomial and Poisson models

    OpenAIRE

    Gil Taborda, Camilo

    2014-01-01

    International Mention in the doctoral degree. This thesis presents several relationships between information theory and estimation theory over random transformations that are governed by probability mass functions of the binomial, negative binomial and Poisson type. The pioneering expressions relating these fields date back to the 1960s, when Duncan proved that the input-output mutual information of a channel affected by Gaussian noise can be expressed as a time integral of the c...

  16. Dose-time-response modeling using negative binomial distribution.

    Science.gov (United States)

    Roy, Munmun; Choudhury, Kanak; Islam, M M; Matin, M A

    2013-01-01

    People exposed to certain diseases are required to be treated with a safe and effective dose level of a drug. In epidemiological studies to find out an effective dose level, different dose levels are applied to the exposed and a certain number of cures is observed. Negative binomial distribution is considered to fit overdispersed Poisson count data. This study investigates the time effect on the response at different time points as well as at different dose levels. The point estimation and confidence bands for ED(100p)(t) and LT(100p)(d) are formulated in closed form for the proposed dose-time-response model with the negative binomial distribution. Numerical illustrations are carried out in order to check the performance level of the proposed model.

  17. Low reheating temperatures in monomial and binomial inflationary models

    Energy Technology Data Exchange (ETDEWEB)

    Rehagen, Thomas; Gelmini, Graciela B. [Department of Physics and Astronomy, UCLA,475 Portola Plaza, Los Angeles, CA 90095 (United States)

    2015-06-23

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ² inflationary potential is no longer favored by current CMB data, as well as φ^p with p>2, a φ¹ potential and canonical reheating (w_re=0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10¹⁰ GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ¹ potential. We find that as a subdominant φ² term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.

  18. An efficient binomial model-based measure for sequence comparison and its application.

    Science.gov (United States)

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics; it can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a random Bernoulli variable, and the distribution of the sum of word occurrences is well known to be binomial. Using a recursive formula, we compute the binomial probability of the word count and propose a binomial model-based measure based on the relative entropy. The proposed measure was tested by extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed binomial model-based measure is more efficient.
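
    One plausible instantiation of the idea (the paper's recursion and exact measure differ in detail): count each k-mer over the n = L − k + 1 positions, score it by the binomial probability of that count under a uniform background, and compare the normalized score vectors of two sequences with a symmetrized relative entropy. The sequences and background model below are invented.

      import itertools
      import numpy as np
      from scipy.stats import binom

      def kmer_profile(seq, k=2):
          # Binomial pmf of each k-mer's count under a uniform background model.
          n = len(seq) - k + 1
          words = ["".join(w) for w in itertools.product("ACGT", repeat=k)]
          counts = {w: 0 for w in words}
          for i in range(n):
              counts[seq[i:i + k]] += 1
          p0 = 1.0 / 4**k                      # background probability of any word
          probs = np.array([binom.pmf(counts[w], n, p0) for w in words])
          return probs / probs.sum()           # normalize to a distribution

      def sym_relative_entropy(p, q, eps=1e-12):
          p, q = p + eps, q + eps
          return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

      s1 = "ACGTACGTGGTCACGTAAGGCTTACG"
      s2 = "TTGACCGTAGGTACCAGTTAACCGGT"
      print(sym_relative_entropy(kmer_profile(s1), kmer_profile(s2)))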

  19. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration.

  20. I Remember You: Independence and the Binomial Model

    Science.gov (United States)

    Levine, Douglas W.; Rockhill, Beverly

    2006-01-01

    We focus on the problem of ignoring statistical independence. A binomial experiment is used to determine whether judges could match, based on looks alone, dogs to their owners. The experimental design introduces dependencies such that the probability of a given judge correctly matching a dog and an owner changes from trial to trial. We show how…

  1. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data often show overdispersion, and may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model fits were compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.

  2. Large Deviation Results for Generalized Compound Negative Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    Fan-chao Kong; Chen Shen

    2009-01-01

    In this paper we extend and improve some results on large deviations for random sums of random variables. Let {Xn; n≥1} be a sequence of non-negative, independent and identically distributed random variables with common heavy-tailed distribution function F and finite mean μ∈R+; let {N(n); n≥0} be a sequence of negative binomial distributed random variables with a parameter p∈(0,1); and let {M(n); n≥0} be a Poisson process with intensity λ>0. Suppose {N(n); n≥0}, {Xn; n≥1} and {M(n); n≥0} are mutually independent; under these assumptions we obtain several large deviation results. These results can be applied to certain problems in insurance and finance.

  3. Joint Analysis of Binomial and Continuous Traits with a Recursive Model

    DEFF Research Database (Denmark)

    Varona, Louis; Sorensen, Daniel

    2014-01-01

    This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshire...

  4. Finite Time Ruin Probabilities and Large Deviations for Generalized Compound Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    Yi Jun HU

    2005-01-01

    In this paper, we extend the classical compound binomial risk model to the case where the premium income process is based on a Poisson process, and is no longer a linear function. For this more realistic risk model, Lundberg type limiting results for the finite time ruin probabilities are derived. Asymptotic behavior of the tail probabilities of the claim surplus process is also investigated.

  5. The beta-binomial convolution model for 2 × 2 tables with missing cell counts

    NARCIS (Netherlands)

    Eisinga, Rob

    2009-01-01

    This paper considers the beta-binomial convolution model for the analysis of 2×2 tables with missing cell counts. We discuss maximum likelihood (ML) parameter estimation using the expectation-maximization algorithm and study information loss relative to complete data estimators. We also examine bias o...

  6. Computational results on the compound binomial risk model with nonhomogeneous claim occurrences

    NARCIS (Netherlands)

    Tuncel, A.; Tank, F.

    2013-01-01

    The aim of this paper is to give a recursive formula for non-ruin (survival) probability when the claim occurrences are nonhomogeneous in the compound binomial risk model. We give recursive formulas for non-ruin (survival) probability and for distribution of the total number of claims under the cond

  7. A mixed-binomial model for Likert-type personality measures.

    Science.gov (United States)

    Allik, Jüri

    2014-01-01

    Personality measurement is based on the idea that values on an unobservable latent variable determine the distribution of answers on a manifest response scale. In Item Response Theory (IRT), it is typically assumed that latent variables are related to the observed responses through continuous normal or logistic functions, determining the probability with which one of the ordered response alternatives on a Likert-scale item is chosen. Based on an analysis of 1731 self- and other-rated responses on the 240 NEO PI-3 questionnaire items, it is proposed that a viable alternative is a finite number of latent events which are related to manifest responses through a binomial function with only one parameter: the probability with which a given statement is approved. For the majority of items, the best fit was obtained with a mixed-binomial distribution, which assumes two subpopulations who endorse items with two different probabilities. It was shown that the fit of the binomial IRT model can be improved by assuming that about 10% of random noise is contained in the answers and by taking into account response biases toward one of the response categories. It was concluded that the binomial response model for the measurement of personality traits may be a workable alternative to the more habitual normal and logistic IRT models.

  8. The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes

    Science.gov (United States)

    Tejedor, Alejandro; Gómez, Javier B.; Pacheco, Amalio F.

    2015-01-01

    The negative binomial distribution is presented as the waiting time distribution of a cyclic Markov model. This cycle simulates the seismic cycle in a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series in the San Andreas Fault. The performance of the model in forecasting is expressed in terms of error diagrams and compared with other recurrence models from the literature.

  9. Empirical Bayes Point Estimates of True Score Using a Compound Binomial Error Model. Research Memorandum 74-11.

    Science.gov (United States)

    Kearns, Jack

    Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test.…

  10. Modeling and Predistortion of Envelope Tracking Power Amplifiers using a Memory Binomial Model

    DEFF Research Database (Denmark)

    Tafuri, Felice Francesco; Sira, Daniel; Larsen, Torben

    2013-01-01

    ...behavioral model capable of an improved performance when used for the modeling and predistortion of RF PAs deployed in ET transceivers. The proposed solution consists in a 2D behavioral model having as a dual-input the PA complex baseband envelope and the modulated supply waveform, peculiar of the ET case. The model definition is based on binomial series, hence the name of memory binomial model (MBM). The MBM is here applied to measured data-sets acquired from an ET measurement set-up. When used as a PA model the MBM showed an NMSE (Normalized Mean Squared Error) as low as −40 dB and an ACEPR (Adjacent Channel Error Power Ratio) below −51 dB. The simulated predistortion results showed that the MBM can improve the compensation of distortion in the adjacent channel of 5.8 dB and 5.7 dB compared to a memory polynomial predistorter (MPPD). The predistortion performance in the time domain showed an NMSE...

  11. A Negative Binomial Regression Model for Accuracy Tests

    Science.gov (United States)

    Hung, Lai-Fa

    2012-01-01

    Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…

  12. Bethe states for the two-site Bose–Hubbard model: A binomial approach

    Directory of Open Access Journals (Sweden)

    Gilberto Santos

    2015-06-01

    Full Text Available We calculate explicitly the Bethe vector states by the algebraic Bethe ansatz method with the gl(2)-invariant R-matrix for the two-site Bose–Hubbard model. Using a binomial expansion of the n-th power of a sum of two operators, we obtain and solve a recursion equation. We calculate the scalar product and the norm of the Bethe vector states. The form factors of the imbalance current operator are also computed.

  13. TESTING FOR VARYING DISPERSION OF LONGITUDINAL BINOMIAL DATA IN NONLINEAR LOGISTIC MODELS WITH RANDOM EFFECTS

    Institute of Scientific and Technical Information of China (English)

    林金官; 韦博成

    2004-01-01

    This paper discusses two tests for varying dispersion of binomial data in the framework of nonlinear logistic models with random effects, which are widely used in analyzing longitudinal binomial data. The first is an individual test, with power calculation, for varying dispersion through testing the randomness of cluster effects, extending Dean (1992) and Commenges et al. (1994). The second is a composite test for varying dispersion through simultaneously testing the randomness of cluster effects and the equality of random-effect means. The score test statistics are constructed and expressed in simple, easy-to-use matrix formulas. The authors illustrate their test methods using the insecticide data of Giltinan, Capizzi and Malani (1988).

  14. Pointer Sentinel Mixture Models

    OpenAIRE

    Merity, Stephen; Xiong, Caiming; Bradbury, James; Socher, Richard

    2016-01-01

    Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. O...

  15. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-01-25

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  16. Predicting expressway crash frequency using a random effect negative binomial model: A case study in China.

    Science.gov (United States)

    Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao

    2017-01-01

    To investigate the relationship between crash frequency and potential influence factors, the accident data for events occurring on a 50 km long expressway in China, including 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and the mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade and curve radius (RGR), were found as significantly affecting crash frequency. The marginal effects of significant explanatory variables to the crash frequency were analyzed. The model performance was determined by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that the model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal grade segment method.

  17. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in 30-45% reduction in sample sizes for the PG trials and 25-60% reduction for the BVT trials.

  18. Using the negative binomial distribution to model overdispersion in ecological count data.

    Science.gov (United States)

    Lindén, Andreas; Mäntyniemi, Samu

    2011-07-01

    A Poisson process is a commonly used starting point for modeling stochastic variation of ecological count data around a theoretical expectation. However, data typically show more variation than implied by the Poisson distribution. Such overdispersion is often accounted for by using models with different assumptions about how the variance changes with the expectation. The choice of these assumptions can naturally have apparent consequences for statistical inference. We propose a parameterization of the negative binomial distribution, where two overdispersion parameters are introduced to allow for various quadratic mean-variance relationships, including the ones assumed in the most commonly used approaches. Using bird migration as an example, we present hypothetical scenarios on how overdispersion can arise due to sampling, flocking behavior or aggregation, environmental variability, or combinations of these factors. For all considered scenarios, mean-variance relationships can be appropriately described by the negative binomial distribution with two overdispersion parameters. To illustrate, we apply the model to empirical migration data with a high level of overdispersion, gaining clearly different model fits with different assumptions about mean-variance relationships. The proposed framework can be a useful approximation for modeling marginal distributions of independent count data in likelihood-based analyses.

  19. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    Science.gov (United States)

    Yu, Zhang; Xinmiao, Lu; Guangyi, Wang; Yongcai, Hu; Jiangtao, Xu

    2016-07-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, the physical and statistical model of the random telegraph signal noise in the pixel source follower based on the binomial distribution is set up. The number of electrons captured or released by the oxide traps in unit time is described by random variables which obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics to the existing models under the effect of the channel length and the density of the oxide trap. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. Project supported by the National Natural Science Foundation of China (Grant Nos. 61372156 and 61405053) and the Natural Science Foundation of Zhejiang Province of China (Grant No. LZ13F04001).

  20. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    Institute of Scientific and Technical Information of China (English)

    张钰; 逯鑫淼; 王光义; 胡永才; 徐江涛

    2016-01-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, the physical and statistical model of the random telegraph signal noise in the pixel source follower based on the binomial distribution is set up. The number of electrons captured or released by the oxide traps in unit time is described by random variables which obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics to the existing models under the effect of the channel length and the density of the oxide trap. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures.

  1. Modeling citrus huanglongbing data using a zero-inflated negative binomial distribution

    Directory of Open Access Journals (Sweden)

    Eudmar Paiva de Almeida

    2016-06-01

    Full Text Available Zero-inflated data from field experiments can be problematic, as these data require the use of specific statistical models during the analysis process. This study utilized the zero-inflated negative binomial (ZINB) model with the log- and logistic-link functions to describe the incidence of plants with Huanglongbing (HLB), caused by Candidatus Liberibacter spp., in commercial citrus orchards in northwestern Paraná State, Brazil. Each orchard was evaluated at different times. The ZINB model with random effects in both link functions provided the best fit, as the inclusion of these effects accounted for variations between orchards and in the numbers of diseased plants. The results of this model show that older plants exhibit a lower probability of acquiring HLB. The application of insecticides on a calendar basis or during new foliage flushes resulted in a three times larger probability of developing HLB compared with applying insecticides only when the vector was detected.
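
    A minimal sketch of the zero-inflated negative binomial mechanism (without the covariates and random effects used in the study): a count is a structural zero with probability ψ and otherwise NB(r, p). Parameters are invented.

      import numpy as np
      from scipy.stats import nbinom

      rng = np.random.default_rng(11)
      psi, r, p, N = 0.35, 2.0, 0.4, 100_000
      structural_zero = rng.random(N) < psi
      y = np.where(structural_zero, 0, rng.negative_binomial(r, p, N))

      def zinb_pmf(k, psi, r, p):
          # zeros come from both the structural-zero and the NB components
          return np.where(k == 0, psi + (1 - psi) * nbinom.pmf(0, r, p),
                          (1 - psi) * nbinom.pmf(k, r, p))

      k = np.arange(6)
      empirical = np.bincount(y, minlength=6)[:6] / N
      print(np.c_[k, empirical.round(4), zinb_pmf(k, psi, r, p).round(4)])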

  2. The destructive negative binomial cure rate model with a latent activation scheme.

    Science.gov (United States)

    Cancho, Vicente G; Bandyopadhyay, Dipankar; Louzada, Francisco; Yiqi, Bao

    2013-07-01

    A new flexible cure rate survival model is developed where the initial number of competing causes of the event of interest (say lesions or altered cells) follows a compound negative binomial (NB) distribution. This model provides a realistic interpretation of the biological mechanism of the event of interest, as it models a destructive process of the initial competing risk factors and records only the damaged portion of the original number of risk factors. Besides, it also accounts for the underlying mechanisms that lead to cure through various latent activation schemes. Our method of estimation exploits maximum likelihood (ML) tools. The methodology is illustrated on a real data set on malignant melanoma, and the finite sample behavior of parameter estimates is explored through simulation studies.

  3. The Compound Binomial Risk Model with Randomly Charging Premiums and Paying Dividends to Shareholders

    Directory of Open Access Journals (Sweden)

    Xiong Wang

    2013-01-01

    Full Text Available Based on characteristics of a nonlife joint-stock insurance company, this paper presents a compound binomial risk model that randomizes the premium income per unit time and sets a threshold for paying dividends to shareholders. In this model, the insurance company obtains an insurance policy in unit time with a given probability and pays dividends to shareholders with a given probability when the surplus is no less than the threshold. We then derive recursive formulas for the expected discounted penalty function and an asymptotic estimate for it, as well as recursive formulas and asymptotic estimates for the ruin probability and the distribution function of the deficit at ruin. Numerical examples illustrate the accuracy of the asymptotic estimates.

  4. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
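
    The core binomial idea can be illustrated independently of the full scoring function: if a theoretical spectrum has n fragment peaks and each has probability p of matching an experimental peak by chance (set by the m/z tolerance and spectrum density), then the chance of seeing at least k matches is a binomial tail. All numbers below are invented; ProVerB's actual score also weights peak intensities.

      from scipy.stats import binom

      n_peaks, k_matched, p_match = 24, 11, 0.05
      p_value = binom.sf(k_matched - 1, n_peaks, p_match)   # P(X >= k_matched)
      print(f"P(>= {k_matched} chance matches out of {n_peaks}) = {p_value:.3g}")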

  5. Forecasting asthma-related hospital admissions in London using negative binomial models.

    Science.gov (United States)

    Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe

    2013-05-01

    Health forecasting can improve health service provision and individual patient outcomes. Environmental factors are known to impact chronic respiratory conditions such as asthma, but little is known about the extent to which these factors can be used for forecasting. Using weather, air quality and hospital asthma admission data for London (2005-2006), two related negative binomial models were developed and compared with a naive seasonal model. In the first approach, predictive forecasting models were fitted with 7-day averages of each potential predictor, and a subsequent multivariable model was constructed. In the second strategy, an exhaustive search was conducted for the best fitting models among possible combinations of lags (0-14 days) of all the environmental effects on asthma admissions. Three models were considered: a base model (seasonal effects), contrasted with a 7-day average model and a selected lags model (weather and air quality effects). Season is the best predictor of asthma admissions. The 7-day average and seasonal models were trivial to implement. The selected lags model was computationally intensive, but of no real value over much more easily implemented models. Seasonal factors can predict daily hospital asthma admissions in London, and there is little evidence that additional weather and air quality information would add to forecast accuracy.
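
    As a stand-in for the forecasting models described (which used 7-day averages or selected lags of weather and air-quality predictors), here is a minimal NB2 regression fitted by maximum likelihood, with log E[y] = Xβ and Var[y] = μ + αμ²; the data are simulated.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      rng = np.random.default_rng(5)
      N = 800
      X = np.c_[np.ones(N), rng.normal(size=N)]       # intercept + one predictor
      beta_true, alpha_true = np.array([1.0, 0.5]), 0.6
      mu = np.exp(X @ beta_true)
      y = rng.negative_binomial(1 / alpha_true, 1 / (1 + alpha_true * mu))

      def negloglik(theta):
          beta, alpha = theta[:-1], np.exp(theta[-1]) # log link keeps alpha > 0
          mu = np.exp(X @ beta)
          r = 1 / alpha
          return -np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                         + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

      fit = minimize(negloglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
      print("beta_hat:", fit.x[:-1].round(2), "alpha_hat:", np.exp(fit.x[-1]).round(2))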

  6. Assessing the Option to Abandon an Investment Project by the Binomial Options Pricing Model

    Directory of Open Access Journals (Sweden)

    Salvador Cruz Rambaud

    2016-01-01

    Full Text Available Usually, traditional methods for investment project appraisal, such as the net present value (hereinafter NPV), do not incorporate in their values the operational flexibility offered by a real option included in the project. In this paper, real options, and more specifically the option to abandon, are analysed as a complement to the cash flow sequence which quantifies the project. In this way, by considering the existing analogy with financial options, a mathematical expression is derived using the binomial options pricing model. This methodology provides the value of the option to abandon the project within one, two, and in general n periods. Therefore, this paper aims to be a useful tool in determining the value of the option to abandon according to its residual value, thus making the control of the uncertainty element within the project easier.
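
    A minimal sketch of the valuation the paper derives: on a CRR binomial tree for the project value, the firm can abandon at any node for a residual (salvage) value A, so each node's value is the maximum of A and the discounted risk-neutral continuation value. All inputs are invented.

      import numpy as np

      V0, A, r, sigma, T, steps = 100.0, 80.0, 0.04, 0.35, 3.0, 36
      dt = T / steps
      u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
      q = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
      disc = np.exp(-r * dt)

      j = np.arange(steps + 1)
      W = np.maximum(V0 * u**j * d**(steps - j), A)    # terminal: keep or abandon
      for step in range(steps - 1, -1, -1):
          cont = disc * (q * W[1:step + 2] + (1 - q) * W[:step + 1])
          W = np.maximum(cont, A)                      # abandon whenever salvage is worth more
      print(f"project value with abandonment: {W[0]:.2f}")
      print(f"value of the option to abandon: {W[0] - V0:.2f}")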

  7. Binomial Markov-Switching Multifractal model with Skewed t innovations and applications to Chinese SSEC Index

    Science.gov (United States)

    Liu, Yufang; Zhang, Weiguo; Fu, Junhui

    2016-11-01

    This paper presents the Binomial Markov-switching Multifractal (BMSM) model of asset returns with Skewed t innovations (BMSM-Skewed t for short), which considers the fat tails, skewness and multifractality in asset returns simultaneously. The parameters of the BMSM-Skewed t model can be estimated by Maximum Likelihood (ML) methods, and volatility forecasting can be accomplished via Bayesian updating. In order to evaluate the performance of the BMSM-Skewed t model, the BMSM model with Normal innovations (BMSM-N), the BMSM model with Student-t innovations (BMSM-t) and GARCH(1,1) models (GARCH-N, GARCH-t and GARCH-Skewed t) are chosen for comparison. Through empirical studies on the Shanghai Stock Exchange Composite Index (SSEC), we find that for in-sample estimation, BMSM models outperform the GARCH(1,1) models under BIC and AIC rules, and BMSM-Skewed t performs the best among all the models due to its fat tails, skewness and multifractality. In addition, the BMSM-Skewed t model dominates the other models at most forecasting horizons for out-of-sample volatility forecasts in terms of MSE, MAE and the SPA test.

  8. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Directory of Open Access Journals (Sweden)

    Gu Mi

    Full Text Available This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  9. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  10. A general binomial regression model to estimate standardized risk differences from binary response data.

    Science.gov (United States)

    Kovalchik, Stephanie A; Varadhan, Ravi; Fetterman, Barbara; Poitras, Nancy E; Wacholder, Sholom; Katki, Hormuzd A

    2013-02-28

    Estimates of absolute risks and risk differences are necessary for evaluating the clinical and population impact of biomedical research findings. We have developed a linear-expit regression model (LEXPIT) to incorporate linear and nonlinear risk effects to estimate absolute risk from studies of a binary outcome. The LEXPIT is a generalization of both the binomial linear and logistic regression models. The coefficients of the LEXPIT linear terms estimate adjusted risk differences, whereas the exponentiated nonlinear terms estimate residual odds ratios. The LEXPIT could be particularly useful for epidemiological studies of risk association, where adjustment for multiple confounding variables is common. We present a constrained maximum likelihood estimation algorithm that ensures the feasibility of risk estimates of the LEXPIT model and describe procedures for defining the feasible region of the parameter space, judging convergence, and evaluating boundary cases. Simulations demonstrate that the methodology is computationally robust and yields feasible, consistent estimators. We applied the LEXPIT model to estimate the absolute 5-year risk of cervical precancer or cancer associated with different Pap and human papillomavirus test results in 167,171 women undergoing screening at Kaiser Permanente Northern California. The LEXPIT model found an increased risk due to an abnormal Pap test in human papillomavirus-negative women that was not detected with logistic regression. Our R package blm provides free and easy-to-use software for fitting the LEXPIT model.
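
    A toy version of the linear-expit idea with one additive exposure and one logistic covariate, fitted by (unconstrained) maximum likelihood; the real LEXPIT fit adds the feasibility constraints described above, and all data and coefficients here are invented.

      import numpy as np
      from scipy.optimize import minimize

      expit = lambda t: 1 / (1 + np.exp(-t))
      rng = np.random.default_rng(9)
      N = 5_000
      x = rng.binomial(1, 0.3, N).astype(float)    # exposure with additive risk effect
      z = rng.normal(size=N)                       # covariate acting on the odds scale
      risk = 0.05 * x + expit(-2.0 + 0.8 * z)      # P(Y=1) = x*beta + expit(g0 + g1*z)
      y = rng.random(N) < risk

      def negloglik(theta):
          b, g0, g1 = theta
          pr = np.clip(b * x + expit(g0 + g1 * z), 1e-9, 1 - 1e-9)
          return -np.sum(y * np.log(pr) + (~y) * np.log(1 - pr))

      fit = minimize(negloglik, x0=[0.0, -1.0, 0.0], method="Nelder-Mead")
      print("estimated risk difference for the exposure:", round(fit.x[0], 3))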

  11. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among…

  12. Predicting Cumulative Incidence Probability by Direct Binomial Regression

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard

  13. CUSUM chart to monitor autocorrelated counts using Negative Binomial GARMA model.

    Science.gov (United States)

    Albarracin, Orlando Yesid Esparza; Alencar, Airlane Pereira; Lee Ho, Linda

    2017-01-01

    Cumulative sum control charts have been used for health surveillance due to their efficiency in quickly detecting small shifts in the monitored series. However, these charts may fail when data are autocorrelated. An alternative procedure is to build a control chart based on the residuals after fitting autoregressive moving average models, but these models usually assume a Gaussian distribution for the residuals. In practical health surveillance, count series can be modeled by Poisson or Negative Binomial regression, the latter to control overdispersion. To include serial correlations, generalized autoregressive moving average models are proposed. The main contribution of the current article is to measure the impact, in terms of average run length, on the performance of cumulative sum charts when the serial correlation is neglected in the regression model. Different statistics based on transformations, the deviance residual, and the likelihood ratio are used to build cumulative sum control charts to monitor counts with time-varying means, including trend and seasonal effects. The monitoring of the weekly number of hospital admissions due to respiratory diseases for people aged over 65 years in the city of São Paulo, Brazil, is considered as an illustration of the current method.
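The chart itself reduces to a simple recursion once a monitoring statistic (for example, a standardized deviance residual from the fitted regression) is in hand. A bare-bones upper CUSUM, with illustrative reference value k and decision limit h rather than the paper's tuned choices:

```python
import numpy as np

def upper_cusum(z, k=0.5, h=5.0):
    """Return the CUSUM path C_t = max(0, C_{t-1} + z_t - k) and the first alarm index."""
    c, path, alarm = 0.0, [], None
    for t, zt in enumerate(z):
        c = max(0.0, c + zt - k)
        path.append(c)
        if alarm is None and c > h:
            alarm = t
    return np.array(path), alarm

rng = np.random.default_rng(2)
z = rng.normal(size=200)   # stand-in for standardized residuals under control
z[120:] += 1.0             # a sustained one-sigma upward shift from t = 120
path, alarm = upper_cusum(z)
print("first alarm at t =", alarm)
```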

  14. Vascular structure and binomial statistics for response modeling in radiosurgery of cerebral arteriovenous malformations

    Energy Technology Data Exchange (ETDEWEB)

    Andisheh, Bahram; Mavroidis, Panayiotis; Brahme, Anders; Lind, Bengt K [Department of Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Bitaraf, Mohammad A [Department of Neurosurgery, Tehran University of Medical Sciences and Iran Gamma Knife Center, Tehran (Iran, Islamic Republic of)

    2010-04-07

    Radiation treatment of arteriovenous malformations (AVMs) has a slow and progressive vaso-occlusive effect. Some studies suggested the possible role of vascular structure in this process. A detailed biomathematical model has been used, where the morphological, biophysical and hemodynamic characteristics of intracranial AVM vessels are faithfully reproduced. The effect of radiation on plexiform and fistulous AVM nidus vessels was simulated using this theoretical model. The similarities between vascular and electrical networks were used to construct this biomathematical AVM model and provide an accurate rendering of transnidal and intranidal hemodynamics. The response of different vessels to radiation and their obliteration probability as a function of different angiostructures were simulated and total obliteration was defined as the probability of obliteration of all possible vascular pathways. The dose response of the whole AVM is observed to depend on the vascular structure of the intra-nidus AVM. Furthermore, a plexiform AVM appears to be more prone to obliteration compared with an AVM of the same size but having more arteriovenous fistulas. Finally, a binomial model was introduced, which considers the number of crucial vessels and is able to predict the dose response behavior of AVMs with a complex vascular structure.

  15. On identifiability of certain latent class models.

    NARCIS (Netherlands)

    van Wieringen, W.N.

    2005-01-01

    Blischke [1962. Moment estimators for the parameters of a mixture of two binomial distributions. Ann. Math. Statist. 33, 444-454] studies a mixture of two binomials, a latent class model. In this article we generalize this model to a mixture of two products of binomials. We show when this generalized…

  16. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
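Both estimators are routine to fit with standard GLM software. A sketch using statsmodels on simulated data (note that log-binomial fits can fail to converge when fitted risks approach 1, which is one practical reason the robust Poisson approach is popular); exp of the coefficient on x estimates the relative risk in both fits:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
x = rng.binomial(1, 0.5, n)
risk = 0.1 * np.exp(np.log(2.0) * x)   # true relative risk = 2 for a common outcome
y = rng.binomial(1, risk)
X = sm.add_constant(x.astype(float))

# Robust (modified) Poisson: Poisson GLM with sandwich standard errors.
robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Log-binomial: binomial GLM with a log link.
log_binomial = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print("RR (robust Poisson):", np.exp(robust_poisson.params[1]))
print("RR (log-binomial):  ", np.exp(log_binomial.params[1]))
```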

  17. EXPECTED PRESENT VALUE OF TOTAL DIVIDENDS IN THE COMPOUND BINOMIAL MODEL WITH DELAYED CLAIMS AND RANDOM INCOME

    Institute of Scientific and Technical Information of China (English)

    周杰明; 莫晓云; 欧辉; 杨向群

    2013-01-01

    In this paper, a compound binomial model with a constant dividend barrier and random income is considered. Two types of individual claims, main claims and by-claims, are defined, where every by-claim is induced by the main claim and may be delayed for one time period with a certain probability. The premium income is assumed to follow another binomial process to capture the uncertainty of the customers' arrivals and payments. A system of difference equations with certain boundary conditions for the expected present value of total dividend payments prior to ruin is derived and solved. Explicit results are obtained when the claim sizes are Kn distributed or the claim size distributions have finite support. Numerical results are also provided to illustrate the impact of the delay of by-claims on the expected present value of dividends.

  18. Multilevel Mixture Factor Models

    Science.gov (United States)

    Varriale, Roberta; Vermunt, Jeroen K.

    2012-01-01

    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  19. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    Science.gov (United States)

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion.

  20. Modeling and analysis of self-similar traffic source based on fractal-binomial-noise-driven Poisson process

    Institute of Scientific and Technical Information of China (English)

    ZHANG Di; ZHANG Min; YE Pei-da

    2006-01-01

    This article explores the short-range dependence (SRD) and the long-range dependence (LRD) of self-similar traffic generated by the fractal-binomial-noise-driven Poisson process (FBNDP) model and lays emphasis on the former. By simulation, the SRD decaying trends with the increase of Hurst value and peak rate are obtained, respectively. After a comprehensive analysis of the accuracy of self-similarity intensity, the optimal range of peak rate is determined by taking into account the time cost, the accuracy of self-similarity intensity, and the effect of SRD.

  1. Comparison of linear and zero-inflated negative binomial regression models for appraisal of risk factors associated with dental caries

    Directory of Open Access Journals (Sweden)

    Manu Batra

    2016-01-01

    Full Text Available Context: Dental caries among children has been described as a pandemic disease with a multifactorial nature. Various sociodemographic factors and oral hygiene practices are commonly tested for their influence on dental caries. In recent years, a statistical model that allows for covariate adjustment has been developed, commonly referred to as the zero-inflated negative binomial (ZINB) model. Aim: To compare the fit of two models, the conventional linear regression (LR) model and the ZINB model, in assessing the risk factors associated with dental caries. Materials and Methods: A cross-sectional survey was conducted on 1138 12-year-old school children in Moradabad Town, Uttar Pradesh, during February-August 2014. Selected participants were interviewed using a questionnaire. Dental caries was assessed by recording the decayed, missing, or filled teeth (DMFT) index. Statistical Analysis Used: To assess the risk factors associated with dental caries in children, two approaches were applied - the LR model and the ZINB model. Results: The prevalence of caries-free subjects was 24.1%, and the mean DMFT was 3.4 ± 1.8. In the LR model, all the variables were statistically significant, whereas in the ZINB model, the negative binomial part showed place of residence, father's education level, tooth brushing frequency, and dental visits to be statistically significant, implying that the probability of being caries-free (DMFT = 0) increases for children who live in urban areas, whose fathers are university graduates, who brush twice a day, and who have ever visited a dentist. Conclusion: The current study reports that the LR model is a poorly fitted model and may lead to spurious conclusions, whereas the ZINB model shows better goodness of fit (Akaike information criterion values - LR: 3.94; ZINB: 2.39) and can be preferred when high variance and an excess of zeros are present.
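For readers who want to reproduce this kind of comparison, a hedged sketch with statsmodels on simulated DMFT-like counts (the covariate and parameter values are hypothetical, not the study's data):

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 1000
urban = rng.binomial(1, 0.5, n).astype(float)   # hypothetical place-of-residence indicator
X = sm.add_constant(urban)

# Simulate: structural zeros (caries-free) with probability expit(-1 + urban),
# negative binomial DMFT counts otherwise.
pi0 = expit(-1.0 + urban)
mu = np.exp(1.2 - 0.3 * urban)
size = 1.5
nb_counts = rng.negative_binomial(size, size / (size + mu))
y = np.where(rng.random(n) < pi0, 0, nb_counts)

zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, inflation="logit")
res = zinb.fit(maxiter=500, disp=0)
print(res.summary())
print("AIC:", res.aic)   # compare against a competing model's AIC, as in the study
```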

  2. The Ongoing Binomial Revolution

    CERN Document Server

    Goss, David

    2011-01-01

    The Binomial Theorem has long been essential in mathematics. In one form or another it was known to the ancients and, in the hands of Leibniz, Newton, Euler, Galois, and others, it became an essential tool in both algebra and analysis. Indeed, Newton early on developed certain binomial series (see the section on Newton) which played a role in his subsequent work on the calculus. From the work of Leibniz, Galois, Frobenius, and many others, we know of its essential role in algebra. In this paper we rapidly trace the history of the Binomial Theorem, binomial series, and binomial coefficients, with emphasis on their decisive role in function field arithmetic. We also explain conversely how function field arithmetic is now leading to new results in the binomial theory via insights into characteristic $p$ $L$-series.

  3. Essays on Finite Mixture Models

    NARCIS (Netherlands)

    A. van Dijk (Bram)

    2009-01-01

    Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the…

  5. Single-gene negative binomial regression models for RNA-Seq data with higher-order asymptotic inference.

    Science.gov (United States)

    Di, Yanming

    2015-01-01

    We consider negative binomial (NB) regression models for RNA-Seq read counts and investigate an approach where such NB regression models are fitted to individual genes separately and, in particular, the NB dispersion parameter is estimated from each gene separately without assuming commonalities between genes. This single-gene approach contrasts with the more widely-used dispersion-modeling approach where the NB dispersion is modeled as a simple function of the mean or other measures of read abundance, and then estimated from a large number of genes combined. We show that through the use of higher-order asymptotic techniques, inferences with correct type I errors can be made about the regression coefficients in a single-gene NB regression model even when the dispersion is unknown and the sample size is small. The motivations for studying single-gene models include: 1) they provide a basis of reference for understanding and quantifying the power-robustness trade-offs of the dispersion-modeling approach; 2) they can also be potentially useful in practice if moderate sample sizes become available and diagnostic tools indicate potential problems with simple models of dispersion.
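A minimal sketch of the single-gene approach on simulated counts, using statsmodels' NB2 model in which each gene's dispersion alpha (Var = mu + alpha*mu^2) is estimated by maximum likelihood from that gene alone; estimates can be unstable at such small sample sizes, which is precisely the trade-off the paper studies:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
group = np.repeat([0.0, 1.0], 3)      # 3 vs 3 samples, a typical RNA-Seq design size
X = sm.add_constant(group)

def fit_one_gene(counts):
    """Fit an NB2 regression to one gene; the dispersion alpha is a model parameter."""
    return sm.NegativeBinomial(counts, X).fit(disp=0)

# Simulate one gene: NB2 with Var = mu + alpha * mu^2 maps to
# numpy's (n = 1/alpha, p = 1 / (1 + alpha * mu)) parametrization.
mu = np.exp(3.0 + 0.8 * group)
alpha = 0.2
counts = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

res = fit_one_gene(counts)
print("coefficients:", res.params[:-1], "estimated alpha:", res.params[-1])
```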

  6. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    Science.gov (United States)

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study, which compares three copula models and two implementations of the standard model, the Plackett and the Gauss copulas rarely perform worse and frequently perform better than the standard model. We use an example from a meta-analysis to judge the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer for illustration.

  7. The overlooked potential of generalized linear models in astronomy - III. Bayesian negative binomial regression and globular cluster populations

    Science.gov (United States)

    de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.

    2015-10-01

    In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, errors in measurements in both axes (either discrete or continuous) and allows modelling the population of GCs on their natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information of each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different productions of GCs, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.

  8. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal distribution…

  9. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  10. Factors related to the use of antenatal care services in Ethiopia: Application of the zero-inflated negative binomial model.

    Science.gov (United States)

    Assefa, Enyew; Tadesse, Mekonnen

    2016-08-11

    The major causes for poor health in developing countries are inadequate access and under-use of modern health care services. The objective of this study was to identify and examine factors related to the use of antenatal care services using the 2011 Ethiopia Demographic and Health Survey data. The number of antenatal care visits during the last pregnancy by mothers aged 15 to 49 years (n = 7,737) was analyzed. More than 55% of the mothers did not use antenatal care (ANC) services, while more than 22% of the women used antenatal care services less than four times. More than half of the women (52%) who had access to health services had at least four antenatal care visits. The zero-inflated negative binomial model was found to be more appropriate for analyzing the data. Place of residence, age of mothers, woman's educational level, employment status, mass media exposure, religion, and access to health services were significantly associated with the use of antenatal care services. Accordingly, there should be progress toward a health-education program that enables more women to utilize ANC services, with the program targeting women in rural areas, uneducated women, and mothers with higher birth orders through appropriate media.

  11. On the multiplicity distribution in statistical model: (I) negative binomial distribution

    CERN Document Server

    Xu, Hao-jie

    2016-01-01

    With the distribution of principal thermodynamic variables (e.g., volume) and the probability condition from reference multiplicity, we develop an improved baseline measure for the multiplicity distribution in the statistical model to replace the traditional Poisson expectations. We demonstrate the mismatches between experimental measurements and previous theoretical calculations of multiplicity distributions. We derive a general expression for the multiplicity distribution, i.e. a conditional probability distribution, in the statistical model and calculate its cumulants under the Poisson approximation in connection with recent data for multiplicity fluctuations. We find that the probability condition from reference multiplicity is crucial to explain the centrality resolution effect in experiments. With the improved baseline measure for the multiplicity distribution, we can quantitatively reproduce the cumulants (cumulant products) of the multiplicity distribution of total (net) charges measured in experiments.
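The mechanism of mixing a Poisson over a fluctuating thermodynamic variable can be checked numerically: a Poisson whose intensity is Gamma-distributed has a negative binomial marginal. A small demonstration with arbitrary parameter values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
r, mean_lam = 4.0, 10.0   # shape of the intensity fluctuations, mean intensity

# Draw intensities from Gamma(shape=r, scale=mean_lam/r), then Poisson counts.
lam = rng.gamma(shape=r, scale=mean_lam / r, size=200_000)
counts = rng.poisson(lam)

# The marginal should be negative binomial with n = r, p = r / (r + mean_lam).
p = r / (r + mean_lam)
k = np.arange(30)
empirical = np.bincount(counts, minlength=60)[:30] / counts.size
exact = stats.nbinom.pmf(k, r, p)
print("max pmf discrepancy:", np.max(np.abs(empirical - exact)))  # small: they agree
```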

  12. Mathematical finance theory review and exercises from binomial model to risk measures

    CERN Document Server

    Gianin, Emanuela Rosazza

    2013-01-01

    The book collects over 120 exercises on different subjects of Mathematical Finance, including Option Pricing, Risk Theory, and Interest Rate Models. Many of the exercises are solved, while others are only proposed. Every chapter contains an introductory section illustrating the main theoretical results necessary to solve the exercises. The book is intended as an exercise textbook to accompany graduate courses in mathematical finance offered at many universities as part of degree programs in Applied and Industrial Mathematics, Mathematical Engineering, and Quantitative Finance.

  13. The Survival Probability in the Generalized Compound Binomial Risk Model

    Institute of Scientific and Technical Information of China (English)

    龚日朝; 刘永清

    2001-01-01

    In this paper, the premium income process of the compound binomial risk model is generalized from a constant amount collected per unit time to a Poisson process; that is, the number of policies collected per unit time follows a Poisson distribution with a given intensity, and the premium for each policy is assumed to be a constant. We then study the survival probability within a finite time period, for an insurance company having initial capital in the generalized compound binomial risk model, when the individual claim size distribution is exponential with a parameter.

  14. Breeding biology of muscovy duck (Cairina moschata) under natural incubation: the use of the weibull function and a beta-binomial model to predict nest hatchability

    Science.gov (United States)

    Harun; Draisma; Frankena; Veeneklaas; Van Kampen M

    1999-05-07

    In this paper we tested the Weibull function and beta-binomial distribution to analyse and predict nest hatchability, using empirical data on hatchability in Muscovy duck (Cairina moschata) eggs under natural incubation (932 successfully incubated nests and 11 822 eggs). The estimated parameters of the Weibull function and beta-binomial model were compared with the logistic regression analysis. The maximum likelihood estimation of the parameters was used to quantify simultaneously the influence of the nesting behaviour and the duration of the reproduction cycle on hatchability. The estimated parameters showed that the hatchability was not affected in natural dump nests, but in artificial dump nests and in nests with non-term eggs the hatchability was reduced by 10 and 25%, respectively. Similar results were obtained using logistic regression. Both models provided a satisfactory description of the observed data set, but the beta-binomial model proved to have more parameters with practical and biological meaningful interpretations, because this model is able to quantify and incorporate the unexplained variation in a single parameter theta (which is a variance measure). Copyright 1999 Academic Press.
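The extra-binomial variation that the beta-binomial captures (egg-level hatching probabilities varying between nests) is easy to exhibit with scipy (requires scipy >= 1.4 for stats.betabinom; the clutch size and Beta parameters below are illustrative, not the study's estimates):

```python
import numpy as np
from scipy import stats

n_eggs, a, b = 12, 6.0, 2.0        # clutch size and Beta(a, b) nest-to-nest variation
bb = stats.betabinom(n_eggs, a, b)

mean = bb.mean()                    # n * a / (a + b)
var = bb.var()                      # inflated relative to a plain binomial
p_bar = a / (a + b)
binom_var = n_eggs * p_bar * (1 - p_bar)
print(f"mean {mean:.2f}, beta-binomial var {var:.2f} vs binomial var {binom_var:.2f}")
```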

  15. Modelling alcohol consumption during adolescence using zero inflated negative binomial and decision trees

    Directory of Open Access Journals (Sweden)

    Alfonso Palmer

    2010-07-01

    Full Text Available Alcohol is currently the most consumed substance among the Spanish adolescent population. Some of the variables that bear an influence on this consumption include ease of access, use of alcohol by friends and some personality factors. The aim of this study was to analyze and quantify the predictive value of these variables specifically on alcohol consumption in the adolescent population. The usable sample was made up of 6,145 adolescents (49.8% boys and 50.2% girls) with a mean age of 15.4 years (SE = 1.2). The data were analyzed using the statistical model for a count variable and Data Mining techniques. The results show the influence of ease of access, alcohol consumption by the group of friends, and certain personality factors on alcohol intake, allowing us to quantify the intensity of this influence according to age and gender. Knowing these factors is the starting point in elaborating specific preventive actions against alcohol consumption.

  16. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  18. On the mixture model for multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Manninen, M.; Taivassalo, V. [VTT Energy, Espoo (Finland). Nuclear Energy; Kallio, S. [Aabo Akademi, Turku (Finland)

    1996-12-31

    Numerical flow simulation utilising a full multiphase model is impractical for a suspension possessing wide distributions in the particle size or density. Various approximations are usually made to simplify the computational task. In the simplest approach, the suspension is represented by a homogeneous single-phase system and the influence of the particles is taken into account in the values of the physical properties. This study concentrates on the derivation and closing of the model equations. The validity of the mixture model is also carefully analysed. Starting from the continuity and momentum equations written for each phase in a multiphase system, the field equations for the mixture are derived. The mixture equations largely resemble those for a single-phase flow but are represented in terms of the mixture density and velocity. The volume fraction for each dispersed phase is solved from a phase continuity equation. Various approaches applied in closing the mixture model equations are reviewed. An algebraic equation is derived for the velocity of a dispersed phase relative to the continuous phase. Simplifications made in calculating the relative velocity restrict the applicability of the mixture model to cases in which the particles reach the terminal velocity in a short time period compared to the characteristic time scale of the flow of the mixture. (75 refs.)

  19. Feasibility analysis in the expansion proposal of the nuclear power plant Laguna Verde: application of real options, binomial model; Analisis de viabilidad en la propuesta de expansion de la central nucleoelectrica Laguna Verde: aplicacion de opciones reales, modelo binomial

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez I, S.; Ortiz C, E.; Chavez M, C., E-mail: lunitza@gmail.com [UNAM, Facultad de Ingenieria, Circuito Interior, Ciudad Universitaria, 04510 Mexico D. F. (Mexico)

    2011-11-15

    At the present time, it is an unquestionable fact that nuclear electric energy is a topic of vital importance, not only because it eliminates dependence on hydrocarbons and is friendly to the environment, but also because it is a safe and reliable energy source and represents a viable alternative to meet the growing demand for electricity in Mexico. Against this panorama, several scenarios were proposed to raise the capacity of electric generation of nuclear origin with a variable participation. One of the contemplated scenarios is the expansion project of the nuclear power plant Laguna Verde through the addition of a third reactor that serves as the detonator of an integral program proposing the installation of more nuclear reactors in the country. Given this possible scenario, the Federal Commission of Electricity, as the organism responsible for supplying energy to the population, should have tools that offer the flexibility to adapt to the possible changes that will be presented along the project and that also assign a value to future risk. The methodology denominated Real Options, binomial model, is proposed as an evaluation tool that allows quantifying the value of the expansion proposal, demonstrating the feasibility of the project through a periodic visualization of its evolution, all with the objective of supplying a financial analysis that serves as a basis and justification before the evident apogee of nuclear energy that will be presented in future years. (Author)

  20. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and antioxidant response were determined. Metal mixture treatment induces an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Cotreatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while in the promotion phase a total block of the transforming capacity was observed. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  1. Distinguishing between Rural and Urban Road Segment Traffic Safety Based on Zero-Inflated Negative Binomial Regression Models

    OpenAIRE

    Xuedong Yan; Bin Wang; Meiwu An; Cuiping Zhang

    2012-01-01

    In this study, the traffic crash rate, total crash frequency, and injury and fatal crash frequency were taken into consideration for distinguishing between rural and urban road segment safety. The GIS-based crash data during four and a half years in the Pikes Peak Area, US were applied for the analyses. The comparative statistical results show that the crash rates in rural segments are consistently lower than in urban segments. Further, the regression results based on Zero-Inflated Negative Binomial (ZINB…

  2. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a…
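The core computation is conjugate and fits in a few lines. A sketch under a uniform Beta(1, 1) prior: the posterior for θ after s successes ("new code predicted better") in n trials is Beta(1 + s, 1 + n − s), and the confidence in an improvement is the posterior mass above 1/2:

```python
from scipy import stats

def posterior_prob_improvement(s, n, a=1.0, b=1.0):
    """P(theta > 1/2 | data) under a Beta(a, b) prior and a binomial likelihood."""
    posterior = stats.beta(a + s, b + n - s)
    return posterior.sf(0.5)

# Same 70% success rate at growing sample sizes: confidence sharpens with n.
for n, s in [(10, 7), (40, 28), (160, 112)]:
    print(f"n={n:4d}, s={s:4d}: P(theta > 1/2) = {posterior_prob_improvement(s, n):.4f}")
```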

  3. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss…

  4. Identifiability of large phylogenetic mixture models.

    Science.gov (United States)

    Rhodes, John A; Sullivant, Seth

    2012-01-01

    Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  5. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Directory of Open Access Journals (Sweden)

    James A Wiley

    Full Text Available We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.

  6. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Science.gov (United States)

    Wiley, James A; Martin, John Levi; Herschkorn, Stephen J; Bond, Jason

    2015-01-01

    We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.

  7. Flexible Rasch Mixture Models with Package psychomix

    Directory of Open Access Journals (Sweden)

    Hannah Frick

    2012-05-01

    Full Text Available Measurement invariance is an important assumption in the Rasch model and mixture models constitute a flexible way of checking for a violation of this assumption by detecting unobserved heterogeneity in item response data. Here, a general class of Rasch mixture models is established and implemented in R, using conditional maximum likelihood estimation of the item parameters (given the raw scores) along with flexible specification of two model building blocks: (1) Mixture weights for the unobserved classes can be treated as model parameters or based on covariates in a concomitant variable model. (2) The distribution of raw score probabilities can be parametrized in two possible ways, either using a saturated model or a specification through mean and variance. The function raschmix() in the R package psychomix provides these models, leveraging the general infrastructure for fitting mixture models in the flexmix package. Usage of the function and its associated methods is illustrated on artificial data as well as empirical data from a study of verbally aggressive behavior.

  8. Lattice Model for water-solute mixtures

    OpenAIRE

    Furlan, A. P.; Almarza, N. G.; M. C. Barbosa

    2016-01-01

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert and hydrophobic interactions. Extensive Monte Carlo simulations were carried out and the behavior of pure components and the excess properties…

  9. Marginalized zero-inflated negative binomial regression with application to dental caries.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon

    2016-05-10

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children.

  10. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  11. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM

    1998-01-01

    We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample design…

  12. Fused lasso algorithm for Cox' proportional hazards and binomial logit models with application to copy number profiles.

    Science.gov (United States)

    Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J

    2014-05-01

    This paper presents an efficient algorithm, based on the combination of Newton-Raphson and gradient ascent, for using the fused lasso regression method to construct a genome-based classifier. The characteristic structure of copy number data suggests that feature selection should take genomic location into account to produce more interpretable results for genome-based classifiers. The fused lasso penalty, an extension of the lasso penalty, encourages sparsity of the coefficients and of their differences by penalizing the L1-norm of both at the same time, thus using genomic location. The major advantage of the algorithm over other existing fused lasso optimization techniques is its ability to predict binomial as well as survival responses efficiently. We apply our algorithm to two publicly available datasets in order to predict survival and binary outcomes.
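In standard notation, the objective being maximized is the model log-likelihood (the Cox partial likelihood or the binomial logit likelihood) minus the fused lasso penalty. A reconstruction of the usual form, with λ₁ and λ₂ the sparsity and fusion tuning parameters and the coefficients ordered by genomic location:

```latex
\hat{\beta} \;=\; \arg\max_{\beta}\;
  \ell(\beta)
  \;-\; \lambda_1 \sum_{j=1}^{p} \lvert \beta_j \rvert
  \;-\; \lambda_2 \sum_{j=2}^{p} \lvert \beta_j - \beta_{j-1} \rvert
```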

  13. The Supervised Learning Gaussian Mixture Model

    Institute of Scientific and Technical Information of China (English)

    马继涌; 高文

    1998-01-01

    The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters in the model are derived only from the training samples of one class without taking into account the sample distributions of other classes; hence, its recognition accuracy is sometimes not ideal. This paper introduces an approach for estimating the parameters of a GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM. An experimental example has shown its effectiveness. The experimental results show that the recognition accuracy derived by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of the Multilayer Perceptron (MLP).
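The contrast drawn here is easy to demonstrate: a plain GMM ignores class labels, while a simple supervised baseline fits one mixture per class and classifies by the larger class-conditional likelihood. The sketch below shows that baseline with scikit-learn (it is not the SLGMM algorithm itself, whose parameter updates are discriminative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[2.5, 1.5], scale=1.0, size=(200, 2))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 200)

# One mixture per class; classify by the largest class-conditional log-likelihood.
models = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in (0, 1)]
scores = np.column_stack([m.score_samples(X) for m in models])
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```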

  14. Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions

    Science.gov (United States)

    Wroughton, Jacqueline; Cole, Tarah

    2013-01-01

    Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…

  15. Population mixture model for nonlinear telomere dynamics

    Science.gov (United States)

    Itzkovitz, Shalev; Shlush, Liran I.; Gluck, Dan; Skorecki, Karl

    2008-12-01

    Telomeres are DNA repeats protecting chromosomal ends which shorten with each cell division, eventually leading to cessation of cell growth. We present a population mixture model that predicts an exponential decrease in telomere length with time. We analytically solve the dynamics of the telomere length distribution. The model provides an excellent fit to available telomere data and accounts for the previously unexplained observation of telomere elongation following stress and bone marrow transplantation, thereby providing insight into the nature of the telomere clock.

  16. Self-assembly models for lipid mixtures

    Science.gov (United States)

    Singh, Divya; Porcar, Lionel; Butler, Paul; Perez-Salas, Ursula

    2006-03-01

    Solutions of mixed long and short (detergent-like) phospholipids, referred to as ``bicelle'' mixtures in the literature, are known to form a variety of different morphologies based on their total lipid composition and temperature in a complex phase diagram. Some of these morphologies have been found to orient in a magnetic field, and consequently bicelle mixtures are widely used to study the structure of soluble as well as membrane-embedded proteins using NMR. In this work, we report on the low temperature phase of the DMPC and DHPC bicelle mixture, where there is agreement on the discoid structures but where molecular packing models are still being contested. The most widely accepted packing arrangement, first proposed by Vold and Prosser, had the lipids completely segregated in the disk: DHPC in the rim and DMPC in the disk. Using data from small angle neutron scattering (SANS) experiments, we show how the radius of the planar domain of the disks is governed by the effective molar ratio qeff of lipids in the aggregate and not the molar ratio q (q = [DMPC]/[DHPC]) as has been understood previously. We propose a new quantitative (packing) model and show that in this self-assembly scheme, qeff is the real determinant of disk sizes. Based on qeff, a master equation can then scale the radii of disks from mixtures with varying q and total lipid concentration.

  17. Hierarchical mixture models for assessing fingerprint individuality

    OpenAIRE

    Dass, Sarat C.; Li, Mingfei

    2009-01-01

    The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individuality…

  18. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters.

  19. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose;

    2013-01-01

    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection … for some individuals, in order to minimize this loss in the discriminatory power. The distribution of the continuous antibody response against MAP has been obtained for healthy, MAP-infected and MAP-infectious cows of different age groups. The overall power of the milk-ELISA to discriminate between healthy…

  20. Video compressive sensing using Gaussian mixture models.

    Science.gov (United States)

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  1. Negative Binomial-Lindley Distribution and Its Application

    Directory of Open Access Journals (Sweden)

    Hossein Zamani

    2010-01-01

    Full Text Available Problem statement: The modeling of claims count is one of the most important topics in actuarial theory and practice. Many attempts were implemented in expanding the classes of mixed and compound distributions, especially in the distribution of exponential family, resulting in a better fit on count data. In some cases, it is proven that mixed distributions, in particular mixed Poisson and mixed negative binomial, provided better fit compared to other distributions. Approach: In this study, we introduce a new mixed negative binomial distribution by mixing the distributions of negative binomial (r, p) and Lindley (θ), where the reparameterization p = exp(-λ) is considered. Results: The closed form and the factorial moment of the new distribution, i.e., the negative binomial-Lindley distribution, are derived. In addition, the parameter estimation for the negative binomial-Lindley via the method of moments (MME) and the Maximum Likelihood Estimation (MLE) is provided. Conclusion: The application of the negative binomial-Lindley distribution is carried out on two samples of insurance data. Based on the results, it is shown that the negative binomial-Lindley provides a better fit compared to the Poisson and the negative binomial for count data where the probability at zero has a large value.
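A Monte Carlo sketch of the NB-Lindley construction described above, using the standard fact that Lindley(θ) is a two-component gamma mixture (the parameter values are arbitrary, not fitted to the paper's insurance data):

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_lindley(theta, size):
    """Lindley(theta): Exp(theta) w.p. theta/(theta+1), else Gamma(2, scale=1/theta)."""
    shape = np.where(rng.random(size) < theta / (theta + 1.0), 1.0, 2.0)
    return rng.gamma(shape, 1.0 / theta)

def sample_nb_lindley(r, theta, size):
    """Mix NB(r, p) over p = exp(-lambda) with lambda ~ Lindley(theta)."""
    lam = sample_lindley(theta, size)
    return rng.negative_binomial(r, np.exp(-lam))

claims = sample_nb_lindley(r=2.0, theta=1.5, size=100_000)
print("P(N = 0) ≈", np.mean(claims == 0))   # heavier zero mass than a plain NB
```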

  2. [Comparison of two spectral mixture analysis models].

    Science.gov (United States)

    Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wang, Li-Ming

    2009-10-01

    A spectral mixture analysis experiment was designed to compare the spectral unmixing effects of linear spectral mixture analysis (LSMA) and constrained linear spectral mixture analysis (CLSMA). In the experiment, red, green, blue and yellow colors were printed on a coarse album as four end members. Thirty-nine mixed samples were made according to each end member's different percent in one pixel. Then, a field spectrometer was located on top of the mixed samples' center to measure the spectrum one by one. The inversion percent of each end member in the pixel was extracted using the LSMA and CLSMA models. Finally, the normalized mean squared error between the inversion and real percents was calculated to compare the two models' effects on spectral unmixing. Results from the experiment showed that the total error of LSMA was 0.30087 and that of CLSMA was 0.37552 when using all bands in the spectrum; therefore, the error of LSMA was 0.075 less than that of CLSMA when the whole bands of the four end members' spectra were used. On the other hand, the total error of LSMA was 0.28095 and that of CLSMA was 0.29805 after band selection, so the error of LSMA was 0.017 less than that of CLSMA when band selection was performed. Therefore, whether all or selected bands were used, the accuracy of LSMA was better than that of CLSMA, because during the process of spectrum measurement, errors caused by instrument or human were introduced into the model, so that the measured data could not meet the strict requirements of CLSMA, reducing its accuracy. Furthermore, the total error of LSMA using selected bands was 0.02 less than that using the whole bands, and the total error of CLSMA using selected bands was 0.077 less than that using the whole bands. So, in the same model, spectral unmixing using selected bands to reduce the correlation of end members' spectra was superior to that using the whole bands.
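The two unmixing schemes can be sketched in a few lines of numpy/scipy: ordinary least squares for LSMA versus abundances constrained to be non-negative and sum to one for CLSMA, the latter via the common device of appending a heavily weighted sum-to-one row before a non-negative least-squares solve (synthetic spectra below, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
E = rng.random((50, 4))                       # 50 bands x 4 end-member spectra
true_a = np.array([0.4, 0.3, 0.2, 0.1])       # true abundances, summing to one
pixel = E @ true_a + rng.normal(scale=0.01, size=50)

# LSMA: unconstrained least squares.
a_lsma, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# CLSMA: non-negativity via NNLS, sum-to-one via a heavily weighted extra row.
delta = 100.0
E_aug = np.vstack([E, delta * np.ones((1, 4))])
pixel_aug = np.append(pixel, delta)
a_clsma, _ = nnls(E_aug, pixel_aug)

print("LSMA: ", np.round(a_lsma, 3))
print("CLSMA:", np.round(a_clsma, 3))
```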

  3. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

    The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.

  4. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    Science.gov (United States)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation, designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered

  5. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    Accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be because CO2 has a large quadrupole moment which the classical models do not explicitly account for. In this thesis, in an attempt to obtain a physically more consistent model, the cubic-plus-association (CPA) EoS is extended to include quadrupolar interactions. The new quadrupolar CPA (qCPA) can be used with the experimental value of the quadrupole moment. Both models performed satisfactorily and predicted the general behavior of the systems, but qCPA used fewer adjustable parameters to achieve similar predictions. It has been demonstrated that qCPA is a promising model which, compared to CPA, systematically improves the predictions of the experimentally determined phase equilibria.

  6. The Binomial Distribution in Shooting

    Science.gov (United States)

    Chalikias, Miltiadis S.

    2009-01-01

    The binomial distribution is used to predict the winner of the 49th International Shooting Sport Federation World Championship in double trap shooting held in 2006 in Zagreb, Croatia. The outcome of the competition was definitely unexpected.
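
    As a toy illustration of this kind of prediction (hypothetical numbers, not the championship data), the binomial survival function gives the probability that a shooter with per-target hit probability p scores at least k hits out of n targets:

      from scipy.stats import binom

      p, n, k = 0.92, 150, 140          # assumed hit rate, number of targets, threshold
      print(binom.sf(k - 1, n, p))      # P(at least k hits)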

  7. A new Markov Binomial distribution

    OpenAIRE

    Leda D. Minkova; Omey, Edward

    2011-01-01

    In this paper, we introduce a two-state homogeneous Markov chain and define a geometric distribution related to this Markov chain. We also define a negative binomial distribution analogous to the classical case, which we call the NB distribution related to the interrupted Markov chain. Some characterization properties of the geometric distributions are given. Recursion formulas and probability mass functions for the NB distribution and the new...

  8. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah

    2015-05-01

    Full Text Available We present the properties of a bathtub-curve reliability model having both sufficient adaptability and a minimal number of parameters, introduced by Idée and Pierrat (2010). It is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model under a squared-error loss function and a non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  9. Compound negative binomial distribution with negative multinomial summands

    Science.gov (United States)

    Jordanova, Pavlina K.; Petkova, Monika P.; Stehlík, Milan

    2016-12-01

    The class of negative binomial distributions was introduced by Greenwood and Yule in 1920. Given its widespread application, investigations of closely related distributions remain of continuing interest. Bates, Neyman and Wishart introduced the negative multinomial distribution, obtaining it by considering a mixture of independent Poisson-distributed random variables with one and the same Gamma mixing variable. This paper investigates a particular case of a multivariate compound distribution with one and the same compounding variable, which in our case is negative binomial or shifted negative binomial. The summands with equal indexes in different coordinates are negative multinomially distributed. In the case without shifting, considered as a mixture, the resulting distribution coincides with the mixed negative multinomial distribution with a scale-changed, negative binomially distributed first parameter. We prove that it is multivariate power series distributed and find the explicit form of its parameters. When the summands are geometrically distributed, this distribution is stochastically equivalent to a product of an independent Bernoulli random variable and an appropriate multivariate geometrically distributed random vector. We show that the compound shifted negative binomial distribution with geometric summands is a particular case of the negative multinomial distribution with new parameters.
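
    A minimal simulation sketch of one coordinate of this construction (parameter names are illustrative): the number of summands N is negative binomial and the summands are geometric, and since a sum of n independent Geometric(p) failure counts is itself NB(n, p), the inner sum needs no explicit loop.

      import numpy as np

      rng = np.random.default_rng(2)

      def compound_nb_geometric(r, p_count, p_sum, size):
          # N ~ NB(r, p_count) summands; sum of n Geometric(p_sum) variables is NB(n, p_sum).
          n = rng.negative_binomial(r, p_count, size)
          draws = rng.negative_binomial(np.maximum(n, 1), p_sum)
          return np.where(n > 0, draws, 0)

      s = compound_nb_geometric(r=3, p_count=0.5, p_sum=0.4, size=100000)
      print(s.mean(), s.var())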

  10. Mixture latent autoregressive models for longitudinal data

    CERN Document Server

    Bartolucci, Francesco; Pennoni, Fulvia

    2011-01-01

    Many relevant statistical and econometric models for the analysis of longitudinal data include a latent process to account for the unobserved heterogeneity between subjects in a dynamic fashion. Such a process may be continuous (typically an AR(1)) or discrete (typically a Markov chain). In this paper, we propose a model for longitudinal data which is based on a mixture of AR(1) processes with different means and correlation coefficients, but with equal variances. This model belongs to the class of models based on a continuous latent process, and thus has a natural interpretation in many contexts of application, but it is more flexible than other models in this class, reaching a goodness-of-fit similar to that of a discrete latent process model with a reduced number of parameters. We show how to perform maximum likelihood estimation of the proposed model by the joint use of an Expectation-Maximisation algorithm and a Newton-Raphson algorithm, implemented by means of recursions developed in the hidden Markov...

  11. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    …i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and numbers of dynamic components and includes the static one-component model as a special case. We exploit that the WMM…

  12. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components, providing a natural representation of heterogeneity via a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In this paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
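
    A sketch of the same idea, with simulated data standing in for the stock and rubber price series: fitting a two-component normal mixture by maximum likelihood (EM) with scikit-learn.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      x = np.concatenate([rng.normal(-1.0, 0.5, 600),
                          rng.normal(2.0, 1.0, 400)]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
      print(gmm.weights_)                          # mixing proportions
      print(gmm.means_.ravel())                    # component means
      print(np.sqrt(gmm.covariances_.ravel()))     # component standard deviations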

  13. Mixture Model and MDSDCA for Textual Data

    Science.gov (United States)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  14. Mixtures of multiplicative cascade models in geochemistry

    Directory of Open Access Journals (Sweden)

    F. P. Agterberg

    2007-05-01

    Full Text Available Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, smaller blocks resulting from dividing larger blocks into parts have concentration values with constant, scale-independent ratios. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data, and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of the background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or the lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.
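
    A minimal sketch of the de Wijs cascade on a one-dimensional support (d is the dispersion parameter; values are illustrative): each cell splits into two children receiving proportions (1+d)/2 and (1-d)/2 of its mass, assigned at random, so after many subdivisions the cell values follow the logbinomial distribution described above.

      import numpy as np

      def de_wijs(d=0.4, levels=12, seed=4):
          rng = np.random.default_rng(seed)
          values = np.array([1.0])
          for _ in range(levels):
              hi, lo = values * (1 + d) / 2, values * (1 - d) / 2
              swap = rng.random(values.size) < 0.5        # randomize which child is enriched
              values = np.ravel(np.column_stack([np.where(swap, lo, hi),
                                                 np.where(swap, hi, lo)]))
          return values

      v = de_wijs()
      print(v.size, v.sum(), np.log(v).std())   # 4096 cells; total mass conserved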

  15. Application of a random effects negative binomial model to examine tram-involved crash frequency on route sections in Melbourne, Australia.

    Science.gov (United States)

    Naznin, Farhana; Currie, Graham; Logan, David; Sarvi, Majid

    2016-07-01

    Safety is a key concern in the design, operation and development of light rail systems, including trams or streetcars, as they impose crash risks on road users in terms of crash frequency and severity. The aim of this study is to identify key traffic, transit and route factors that influence tram-involved crash frequencies along tram route sections in Melbourne. A random effects negative binomial (RENB) regression model was developed to analyze crash frequency data obtained from Yarra Trams, the tram operator in Melbourne. The RENB modelling approach can account for spatial and temporal variations within observation groups in panel count data structures by assuming that group-specific effects are randomly distributed across locations. The results identify many significant factors affecting tram-involved crash frequency, including tram service frequency (2.71), tram stop spacing (-0.42), tram route section length (0.31), tram signal priority (-0.25), general traffic volume (0.18), tram lane priority (-0.15) and ratio of platform tram stops (-0.09). The findings provide useful insights into route-section-level tram-involved crashes in an urban tram or streetcar operating environment. The method described represents a useful planning tool for transit agencies hoping to improve safety performance.
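
    For orientation, a plain (fixed-effects) negative binomial count regression of this kind can be sketched in statsmodels with made-up data standing in for the route sections; the RENB model in the paper additionally includes randomly distributed group-specific effects, which this sketch omits.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 200
      X = sm.add_constant(np.column_stack([rng.normal(size=n),    # e.g. service frequency
                                           rng.normal(size=n)]))  # e.g. section length
      mu = np.exp(X @ np.array([0.5, 0.3, 0.2]))
      y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # gamma mixing -> overdispersed counts

      nb_fit = sm.NegativeBinomial(y, X).fit(disp=False)
      print(nb_fit.summary())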

  16. Empirical profile mixture models for phylogenetic reconstruction

    National Research Council Canada - National Science Library

    Si Quang, Le; Gascuel, Olivier; Lartillot, Nicolas

    2008-01-01

    Motivation: Previous studies have shown that accounting for site-specific amino acid replacement patterns using mixtures of stationary probability profiles offers a promising approach for improving...

  17. Detecting non-binomial sex allocation when developmental mortality operates.

    Science.gov (United States)

    Wilkinson, Richard D; Kapranas, Apostolos; Hardy, Ian C W

    2016-11-01

    Optimal sex allocation theory is one of the most intricately developed areas of evolutionary ecology. Under a range of conditions, particularly under population sub-division, selection favours sex being allocated to offspring non-randomly, generating non-binomial variances of offspring group sex ratios. Detecting non-binomial sex allocation is complicated by stochastic developmental mortality, as offspring sex can often only be identified on maturity with the sex of non-maturing offspring remaining unknown. We show that current approaches for detecting non-binomiality have limited ability to detect non-binomial sex allocation when developmental mortality has occurred. We present a new procedure using an explicit model of sex allocation and mortality and develop a Bayesian model selection approach (available as an R package). We use the double and multiplicative binomial distributions to model over- and under-dispersed sex allocation and show how to calculate Bayes factors for comparing these alternative models to the null hypothesis of binomial sex allocation. The ability to detect non-binomial sex allocation is greatly increased, particularly in cases where mortality is common. The use of Bayesian methods allows for the quantification of the evidence in favour of each hypothesis, and our modelling approach provides an improved descriptive capability over existing approaches. We use a simulation study to demonstrate substantial improvements in power for detecting non-binomial sex allocation in situations where current methods fail, and we illustrate the approach in real scenarios using empirically obtained datasets on the sexual composition of groups of gregarious parasitoid wasps.

  18. Big Data Classification Modeling Based on Binomial-Poisson Gaussian Random Clustering: Stability Verification

    Institute of Scientific and Technical Information of China (English)

    王志同

    2016-01-01

    The clustering of big data is a Gaussian random process, so in large-scale data classification it is essential to build a sound classification model that improves mathematical-statistical capability. The binomial-Poisson model has globally convergent convex-optimization properties for random clustering; exploiting its advantages in processing Gaussian random data, data clustering analysis is carried out in a finite-dimensional space. The KKT conditions of the binomial-Poisson model are constructed, the polynomial kernel of its boundary-value periodic solutions is obtained, Gaussian clustering features are decomposed, and a Schur complement functional criterion is derived, yielding a binomial-Poisson mathematical-statistical system for large-scale data classification and ultimately improving the accuracy of big data clustering. The results show that the binomial-Poisson model converges stably in Gaussian random big data classification and effectively improves the statistical analysis capability for big data.

  19. Modeling methods for mixture-of-mixtures experiments applied to a tablet formulation problem.

    Science.gov (United States)

    Piepel, G F

    1999-01-01

    During the past few years, statistical methods for the experimental design, modeling, and optimization of mixture experiments have been widely applied to drug formulation problems. Different methods are required for mixture-of-mixtures (MoM) experiments in which a formulation is a mixture of two or more "major" components, each of which is a mixture of one or more "minor" components. Two types of MoM experiments are briefly described. A tablet formulation optimization example from a 1997 article in this journal is used to illustrate one type of MoM experiment and corresponding empirical modeling methods. Literature references that discuss other methods for MoM experiments are also provided.

  20. Some Alternating Double Binomial Sums

    Institute of Scientific and Technical Information of China (English)

    ZHENG De-yin; TANG Pei-pei

    2013-01-01

    We consider some new alternating double binomial sums. By using the Lagrange inversion formula, we obtain explicit expressions of the desired results which are related to a third-order linear recursive sequence. Furthermore, their recursive relation and generating functions are obtained.

  1. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  2. An empirical tool to evaluate the safety of cyclists: Community based, macro-level collision prediction models using negative binomial regression.

    Science.gov (United States)

    Wei, Feng; Lovegrove, Gordon

    2013-12-01

    Today, North American governments are more willing to consider compact neighborhoods with increased use of sustainable transportation modes. Bicycling, one of the most effective modes for short trips of less than 5 km, is being encouraged. However, as vulnerable road users (VRUs), cyclists are more likely to be injured when involved in collisions. In order to create a safe road environment for them, it is necessary to evaluate cyclists' road safety proactively at a macro level. In this paper, different generalized linear regression methods for collision prediction model (CPM) development are reviewed, and previous studies on micro-level and macro-level bicycle-related CPMs are summarized. On the basis of insights gained in the exploration stage, this paper also reports on efforts to develop negative binomial models for bicycle-auto collisions at a community-based, macro level. Data came from the Central Okanagan Regional District (CORD) of British Columbia, Canada. The model results revealed two types of statistical associations between collisions and the explanatory variables: (1) an increase in bicycle-auto collisions is associated with an increase in total lane kilometers (TLKM), bicycle lane kilometers (BLKM), bus stops (BS), traffic signals (SIG), intersection density (INTD), and arterial-local intersection percentage (IALP); and (2) a decrease in bicycle collisions is associated with an increase in the number of drive commuters (DRIVE) and in the percentage of drive commuters (DRP). These results support our hypothesis that in North America, with its current low levels of bicycle use (…) macro-level CPMs…

  3. Species Tree Inference Using a Mixture Model.

    Science.gov (United States)

    Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens

    2015-09-01

    Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic

  4. Certain Binomial Sums with recursive coefficients

    CERN Document Server

    Kilic, Emrah

    2010-01-01

    In this short note, we establish some identities containing sums of binomials with coefficients satisfying third order linear recursive relations. As a result and in particular, we obtain general forms of earlier identities involving binomial coefficients and Fibonacci type sequences.

  5. Smoothness in Binomial Edge Ideals

    Directory of Open Access Journals (Sweden)

    Hamid Damadi

    2016-06-01

    Full Text Available In this paper we study some geometric properties, in particular the singularity and smoothness, of the algebraic set associated to the binomial edge ideal of a graph. Some of these algebraic sets are irreducible and some are reducible. If every irreducible component of the algebraic set is smooth, we call the graph edge smooth; otherwise it is called edge singular. We show that complete graphs are edge smooth and introduce two conditions such that a graph G is edge singular if and only if it satisfies them. It is then shown that cycles and most trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.

  6. Learning High-Dimensional Mixtures of Graphical Models

    CERN Document Server

    Anandkumar, A; Kakade, S M

    2012-01-01

    We consider the problem of learning mixtures of discrete graphical models in high dimensions and propose a novel method for estimating the mixture components with provable guarantees. The method proceeds mainly in three stages. In the first stage, it estimates the union of the Markov graphs of the mixture components (referred to as the union graph) via a series of rank tests. It then uses this estimated union graph to compute the mixture components via a spectral decomposition method. The spectral decomposition method was originally proposed for latent class models, and we adapt this method for learning the more general class of graphical model mixtures. In the end, the method produces tree approximations of the mixture components via the Chow-Liu algorithm. Our output is thus a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. When the union graph has sparse node separators, we prove that our method has sample and computational complexities scaling as poly(p, ...

  7. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation, for which simulation methods have frequently been used in practice. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations.
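
    One common textbook-style approximation for the per-arm sample size when comparing two negative binomial rates on the log scale is sketched below (κ is the dispersion parameter, t the common exposure time); it is shown for illustration only and is not necessarily identical to the formula derived in the paper.

      import numpy as np
      from scipy.stats import norm

      def nb_sample_size(rate0, rate1, kappa, t, alpha=0.05, power=0.8):
          # Approximate variance of the log rate-ratio estimate, one subject per arm.
          var = (1.0 / (t * rate0) + kappa) + (1.0 / (t * rate1) + kappa)
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return int(np.ceil(z**2 * var / np.log(rate1 / rate0)**2))

      print(nb_sample_size(rate0=1.0, rate1=0.7, kappa=0.5, t=1.0))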

  8. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, a number which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate the ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
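
    For concreteness, a full Scheffé quadratic mixture model can be fitted by ordinary least squares as sketched below with simulated 3-component data; the glass study has q = 10 and hence 55 terms, which is where ill-conditioning becomes a concern.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(6)
      q, n = 3, 30
      x = rng.dirichlet(np.ones(q), size=n)     # mixture compositions summing to one

      def scheffe_quadratic(x):
          # Linear terms plus all pairwise cross-products; no intercept.
          cross = [x[:, i] * x[:, j] for i, j in combinations(range(x.shape[1]), 2)]
          return np.hstack([x, np.column_stack(cross)])

      Z = scheffe_quadratic(x)                  # q(q+1)/2 = 6 columns for q = 3
      beta = np.array([1.0, 2.0, 0.5, 4.0, -3.0, 1.5])
      y = Z @ beta + 0.05 * rng.standard_normal(n)
      beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
      print(beta_hat.round(2))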

  9. A stochastic evolutionary model generating a mixture of exponential distributions

    CERN Document Server

    Fenner, Trevor; Loizou, George

    2015-01-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [FENN15] so that it can generate mixture models, in particular a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.

  10. Detection of unobserved heterogeneity with growth mixture models

    OpenAIRE

    Jost Reinecke; Luca Mariotti

    2009-01-01

    Latent growth curve models as structural equation models are extensively discussed in various research fields (Duncan et al., 2006). Recent methodological and statistical extensions have focused on the consideration of unobserved heterogeneity in empirical data. Muthén extended the classical structural equation approach by mixture components, i.e. categorical latent classes (Muthén 2002, 2004, 2007). The paper will discuss applications of growth mixture models with data from one of the first panel...

  11. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for aspartame/acesulfame-K mixtures.

  12. Modeling and interpreting biological effects of mixtures in the environment: introduction to the metal mixture modeling evaluation project.

    Science.gov (United States)

    Van Genderen, Eric; Adams, William; Dwyer, Robert; Garman, Emily; Gorsuch, Joseph

    2015-04-01

    The fate and biological effects of chemical mixtures in the environment are receiving increased attention from the scientific and regulatory communities. Understanding the behavior and toxicity of metal mixtures poses unique challenges for incorporating metal-specific concepts and approaches, such as bioavailability and metal speciation, in multiple-metal exposures. To avoid the use of oversimplified approaches to assess the toxicity of metal mixtures, a collaborative 2-yr research project and multistakeholder group workshop were conducted to examine and evaluate available higher-tiered chemical speciation-based metal mixtures modeling approaches. The Metal Mixture Modeling Evaluation project and workshop achieved 3 important objectives related to modeling and interpretation of biological effects of metal mixtures: 1) bioavailability models calibrated for single-metal exposures can be integrated to assess mixture scenarios; 2) the available modeling approaches perform consistently well for various metal combinations, organisms, and endpoints; and 3) several technical advancements have been identified that should be incorporated into speciation models and environmental risk assessments for metals.

  13. Simulation of rheological behavior of asphalt mixture with lattice model

    Institute of Scientific and Technical Information of China (English)

    杨圣枫; 杨新华; 陈传尧

    2008-01-01

    A three-dimensional (3D) lattice model for predicting the rheological behavior of asphalt mixtures was presented. In this model asphalt mixtures were described as a two-phase composite material consisting of asphalt sand and coarse aggregates distributed randomly. Asphalt sand was regarded as a viscoelastic material and aggregates as an elastic material. The rheological response of asphalt mixture subjected to different constant stresses was simulated. The calibrated overall creep strain shows a good approximation to experimental results.

  14. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    …2011, and compute dollar losses and implied standard deviation losses. We compare our results to those of existing mixture models and other benchmarks like component models and jump models. Using the model confidence set test, the overall dollar root mean squared error of the best performing benchmark...

  15. Ruin Probability and Asymptotic Estimate of a Compound Binomial Distribution Model

    Institute of Scientific and Technical Information of China (English)

    许璐; 赵闻达

    2012-01-01

    Using classical probability theory and an appropriate mathematical model, an explicit solution for the ultimate ruin probability in a compound binomial distribution model is derived, and its asymptotic estimate is obtained. The conclusion improves results in the related literature.

  16. Proper Versus Improper Mixtures in the ESR Model

    CERN Document Server

    Garola, Claudio

    2011-01-01

    The interpretation of mixtures is problematic in quantum mechanics (QM) because of the nonobjectivity of properties. The ESR model restores objectivity by reinterpreting quantum probabilities as conditional on detection and embedding the mathematical formalism of QM into a broader noncontextual (hence local) framework. We have recently provided a Hilbert space representation of the generalized observables that appear in the ESR model. We show here that each proper mixture is represented by a family of density operators parametrized by the macroscopic properties characterizing the physical system $\Omega$ under consideration, and that each improper mixture is represented by a single density operator which coincides with the operator that represents it in QM. The new representations avoid the problems mentioned above and entail some predictions that differ from those of QM. One can thus contrive experiments for distinguishing proper from improper mixtures empirically, hence for confirming or disproving the ESR...

  17. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.

  18. Stochastic downscaling of precipitation with neural network conditional mixture models

    Science.gov (United States)

    Carreau, Julie; Vrac, Mathieu

    2011-10-01

    We present a new class of stochastic downscaling models, the conditional mixture models (CMMs), which builds on neural network models. CMMs are mixture models whose parameters are functions of predictor variables. These functions are implemented with a one-layer feed-forward neural network. By combining the approximation capabilities of mixtures and neural networks, CMMs can, in principle, represent arbitrary conditional distributions. We evaluate the CMMs at downscaling precipitation data at three stations in the French Mediterranean region. A discrete (Dirac) component is included in the mixture to handle the "no-rain" events. Positive rainfall is modeled with a mixture of continuous densities, which can be either Gaussian, log-normal, or hybrid Pareto (an extension of the generalized Pareto). CMMs are stochastic weather generators in the sense that they provide a model for the conditional density of local variables given large-scale information. In this study, we did not look for the most appropriate set of predictors, and we settled for a decent set as the basis to compare the downscaling models. The set of predictors includes the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses sea level pressure fields on a 6 × 6 grid cell region surrounding the stations plus three date variables. We compare the three distribution families of CMMs with a simpler benchmark model, which is more common in the downscaling community. The difference between the benchmark model and CMMs is that positive rainfall is modeled with a single Gamma distribution. The results show that CMM with hybrid Pareto components outperforms both the CMM with Gaussian components and the benchmark model in terms of log-likelihood. However, there is no significant difference with the log-normal CMM. In general, the additional flexibility of mixture models, as opposed to using a single distribution, allows us to better represent the

  19. Binomial lattice for pricing Asian options on yields

    Institute of Scientific and Technical Information of China (English)

    杨德生

    2003-01-01

    An efficient binomial lattice for pricing Asian options on yields is established under the affine term structure model. In order to obtain a recombining lattice, the technique of D. Nelson and K. Ramaswamy is used to transform the stochastic interest rate process into a stochastic diffusion with unit volatility. Using the binomial lattice and linear interpolation, the prices of Asian options on yields can be obtained. As the number of nodes in the tree grows only linearly with the number of time steps, computational speed is improved. Numerical experiments verifying the validity of the lattice are also provided.
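
    For readers unfamiliar with binomial lattices, the sketch below prices a plain European call on a recombining Cox-Ross-Rubinstein tree; the paper's lattice instead models a transformed yield process under an affine term structure and adds linear interpolation over the path-dependent average, which this toy example omits.

      import numpy as np

      def crr_european_call(S0, K, r, sigma, T, steps):
          dt = T / steps
          u = np.exp(sigma * np.sqrt(dt))
          d = 1.0 / u
          p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
          j = np.arange(steps + 1)
          V = np.maximum(S0 * u**j * d**(steps - j) - K, 0.0)  # terminal payoffs
          for _ in range(steps):                   # backward induction to the root
              V = np.exp(-r * dt) * (p * V[1:] + (1 - p) * V[:-1])
          return V[0]

      print(crr_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))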

  20. Count data modeling and classification using finite mixtures of distributions.

    Science.gov (United States)

    Bouguila, Nizar

    2011-02-01

    In this paper, we consider the problem of constructing accurate and flexible statistical representations for count data, which we often confront in many areas such as data mining, computer vision, and information retrieval. In particular, we analyze and compare several generative approaches widely used for count data clustering, namely multinomial, multinomial Dirichlet, and multinomial generalized Dirichlet mixture models. Moreover, we propose a clustering approach via a mixture model based on a composition of the Liouville family of distributions, from which we select the Beta-Liouville distribution, and the multinomial. The novel proposed model, which we call multinomial Beta-Liouville mixture, is optimized by deterministic annealing expectation-maximization and minimum description length, and strives to achieve a high accuracy of count data clustering and model selection. An important feature of the multinomial Beta-Liouville mixture is that it has fewer parameters than the recently proposed multinomial generalized Dirichlet mixture. The performance evaluation is conducted through a set of extensive empirical experiments, which concern text and image texture modeling and classification and shape modeling, and highlights the merits of the proposed models and approaches.

  1. Generalization of zero-inflated negative binomial regression model and ratemaking

    Institute of Scientific and Technical Information of China (English)

    徐昕; 袁卫; 孟生旺

    2012-01-01

    When claim numbers are over-dispersed in ratemaking, a negative binomial regression model is usually applied. However, claim numbers may also be zero-inflated, in which case negative binomial regression is no longer suitable. This paper generalizes the zero-inflated negative binomial distribution to a more general form, based on the traditional one, in order to handle over-dispersed and zero-inflated data simultaneously. The extended model is then applied to a set of automobile insurance loss data, and the results show that the goodness-of-fit is effectively improved.
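
    A hedged sketch of fitting a standard zero-inflated negative binomial regression with statsmodels, using simulated data in place of the automobile insurance claims (the paper's extension generalizes this baseline model).

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

      rng = np.random.default_rng(7)
      n = 1000
      X = sm.add_constant(rng.normal(size=n))
      mu = np.exp(X @ np.array([-0.5, 0.4]))
      counts = rng.poisson(mu * rng.gamma(1.5, 1 / 1.5, n))   # NB counts
      y = np.where(rng.random(n) < 0.3, 0, counts)            # add structural zeros

      fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(maxiter=200, disp=False)
      print(fit.summary())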

  2. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    …framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index...

  3. Using the beta-binomial distribution to characterize forest health

    Energy Technology Data Exchange (ETDEWEB)

    Zarnoch, S. J. [USDA Forest Service, Southern Research Station, Athens, GA (United States); Anderson, R.L.; Sheffield, R. M. [USDA Forest Service, Southern Research Station, Asheville, NC (United States)

    1995-03-01

    Forest health monitoring programs often use base variables which are dichotomous (i.e. alive/dead, damaged/undamaged) to describe the health of trees. Typical sampling designs usually consist of randomly or systematically chosen clusters of trees for observation. It was claimed that the contagiousness of diseases, for example, may result in non-uniformity of affected trees, so that the distribution of the proportions, rather than simply the mean proportion, becomes important; the beta-binomial model was suggested for such cases. The use of the beta-binomial distribution model in forest health analyses was described. Data on dogwood anthracnose (caused by Discula destructiva), a disease of flowering dogwood (Cornus florida L.), were used to illustrate the utility of the model. The beta-binomial model allowed the detection of different distributional patterns of dogwood anthracnose over time and space, and the results led to further speculation regarding the cause of the patterns. Traditional proportion analyses such as ANOVA would not have detected the trends found using the beta-binomial model until more distinct patterns had evolved at a later date. The model was said to be flexible, to require no special weighting or transformations of data, and, as a further advantage, to handle unequal sample sizes.
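
    A quick numerical illustration with SciPy's beta-binomial distribution (synthetic clusters standing in for the anthracnose plots): cluster-level variation in infection probability makes the beta-binomial fit overdispersed counts better than a plain binomial.

      import numpy as np
      from scipy.stats import betabinom, binom

      rng = np.random.default_rng(8)
      n_trees = 10                                  # trees per cluster
      p_cluster = rng.beta(2.0, 6.0, size=500)      # cluster-level infection rates
      y = rng.binomial(n_trees, p_cluster)          # affected trees per cluster

      p_hat = y.mean() / n_trees
      print(binom.logpmf(y, n_trees, p_hat).sum())         # plain binomial log-likelihood
      print(betabinom.logpmf(y, n_trees, 2.0, 6.0).sum())  # beta-binomial fits better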

  4. Application of two types of zero-inflated negative binomial regression models in automobile insurance ratemaking

    Institute of Scientific and Technical Information of China (English)

    徐昕; 郭念国

    2011-01-01

    The paper discusses two distributional forms of zero-inflated negative binomial regression models and applies both to a set of actual automobile insurance loss data. The results show that, for loss data with zero-inflation, the zero-inflated negative binomial regression models fit better than common claim frequency regression models.

  5. Novel mixture model for the representation of potential energy surfaces

    Science.gov (United States)

    Pham, Tien Lam; Kino, Hiori; Terakura, Kiyoyuki; Miyake, Takashi; Dam, Hieu Chi

    2016-10-01

    We demonstrate that knowledge of chemical physics on a materials system can be automatically extracted from first-principles calculations using a data mining technique; this information can then be utilized to construct a simple empirical atomic potential model. By using unsupervised learning of the generative Gaussian mixture model, physically meaningful patterns of atomic local chemical environments can be detected automatically. Based on the obtained information regarding these atomic patterns, we propose a chemical-structure-dependent linear mixture model for estimating the atomic potential energy. Our experiments show that the proposed mixture model significantly improves the accuracy of the prediction of the potential energy surface for complex systems that possess a large diversity in their local structures.

  6. Finite mixture varying coefficient models for analyzing longitudinal heterogenous data.

    Science.gov (United States)

    Lu, Zhaohua; Song, Xinyuan

    2012-03-15

    This paper aims to develop a mixture model to study heterogeneous longitudinal data on the treatment effect of heroin use from a California Civil Addict Program. Each component of the mixture is characterized by a varying coefficient mixed effect model. We use the Bayesian P-splines approach to approximate the varying coefficient functions. We develop Markov chain Monte Carlo algorithms to estimate the smooth functions, unknown parameters, and latent variables in the model. We use modified deviance information criterion to determine the number of components in the mixture. A simulation study demonstrates that the modified deviance information criterion selects the correct number of components and the estimation of unknown quantities is accurate. We apply the proposed model to the heroin treatment study. Furthermore, we identify heterogeneous longitudinal patterns.

  7. Phylogenetic mixture models can reduce node-density artifacts.

    Science.gov (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark

    2008-04-01

    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the

  8. Community Detection Using Multilayer Edge Mixture Model

    CERN Document Server

    Zhang, Han; Lai, Jian-Huang; Yu, Philip S

    2016-01-01

    A wide range of complex systems can be modeled as networks with corresponding constraints on the edges and nodes, which have been extensively studied in recent years. Nowadays, with the progress of information technology, systems that contain the information collected from multiple perspectives have been generated. The conventional models designed for single perspective networks fail to depict the diverse topological properties of such systems, so multilayer network models aiming at describing the structure of these networks emerge. As a major concern in network science, decomposing the networks into communities, which usually refers to closely interconnected node groups, extracts valuable information about the structure and interactions of the network. Unlike the contention of dozens of models and methods in conventional single-layer networks, methods aiming at discovering the communities in the multilayer networks are still limited. In order to help explore the community structure in multilayer networks, we...

  9. Problems on Divisibility of Binomial Coefficients

    Science.gov (United States)

    Osler, Thomas J.; Smoak, James

    2004-01-01

    Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section, and all twelve have short solutions, listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
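
    One classical tool for problems of this kind is Lucas' theorem, which reduces C(n, k) mod p, for p prime, to a product of binomial coefficients of the base-p digits of n and k; a short implementation:

      from math import comb

      def binom_mod_p(n, k, p):
          # Lucas' theorem: C(n, k) mod p from the base-p digits of n and k.
          result = 1
          while n or k:
              n, nd = divmod(n, p)
              k, kd = divmod(k, p)
              if kd > nd:
                  return 0          # some digit of k exceeds that of n
              result = result * comb(nd, kd) % p
          return result

      print(binom_mod_p(1000, 500, 7), comb(1000, 500) % 7)   # the two should agree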

  10. Currency lookback options and observation frequency: A binomial approach

    NARCIS (Netherlands)

    T.H.F. Cheuk; A.C.F. Vorst (Ton)

    1997-01-01

    In the last decade, interest in exotic options has been growing, especially in the over-the-counter currency market. In this paper we consider lookback currency options, which are path-dependent. We show that a one-state-variable binomial model for currency lookback options can be constructed

  11. Modeling Biodegradation Kinetics on Benzene and Toluene and Their Mixture

    Directory of Open Access Journals (Sweden)

    Aparecido N. Módenes

    2007-10-01

    Full Text Available The objective of this work was to model the biodegradation kinetics of the toxic compounds toluene and benzene as pure substrates and in a mixture. As a control, the Monod and Andrews models were used. To predict substrate interactions, more sophisticated inhibition and competition models and the SKIP (sum kinetics with interaction parameters) model were applied. The models were evaluated using experimental data on Pseudomonas putida F1 activity published in the literature. For the parameter identification procedure, the global method of particle swarm optimization (PSO) was applied. The simulation results show that the biodegradation of a pure toxic substrate is best described by Andrews' model, while the biodegradation of a mixture of toxic substrates is modeled best when the modified competitive inhibition and SKIP models are used. The developed software can be used as a toolbox of a kinetics model catalogue for industrial wastewater treatment process design and optimization.
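
    The two baseline models named above are easy to write out explicitly (parameter values below are illustrative, not the fitted P. putida F1 estimates): Monod growth kinetics, and the Andrews (Haldane-type) extension with a substrate-inhibition term.

      import numpy as np

      def monod(S, mu_max, Ks):
          # Specific growth rate, monotone in substrate concentration S.
          return mu_max * S / (Ks + S)

      def andrews(S, mu_max, Ks, Ki):
          # Adds a substrate-inhibition term, so growth declines at high S.
          return mu_max * S / (Ks + S + S**2 / Ki)

      S = np.linspace(0.0, 100.0, 5)
      print(monod(S, mu_max=0.8, Ks=5.0))
      print(andrews(S, mu_max=0.8, Ks=5.0, Ki=50.0))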

  12. Statistical Compressed Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2011-01-01

    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution, and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...

  13. A non-linear beta-binomial regression model for mapping EORTC QLQ- C30 to the EQ-5D-3L in lung cancer patients: a comparison with existing approaches.

    Science.gov (United States)

    Khan, Iftekhar; Morris, Stephen

    2014-11-12

    The performance of the Beta-Binomial (BB) model is compared with several existing models for mapping the EORTC QLQ-C30 (QLQ-C30) onto the EQ-5D-3L using data from lung cancer trials. Data from two separate non-small-cell lung cancer clinical trials (TOPICAL and SOCCAR) are used to develop and validate the BB model. Comparisons with linear, Tobit, quantile, quadratic and CLAD models are carried out. The mean prediction error, R(2), proportion predicted outside the valid range, clinical interpretation of coefficients, model fit and estimation of Quality Adjusted Life Years (QALYs) are reported and compared, and Monte Carlo simulation is also used. The Beta-Binomial regression model performed best among all models. For the TOPICAL and SOCCAR trials, respectively, the residual mean square error (RMSE) was 0.09 and 0.11; R(2) was 0.75 and 0.71; observed vs. predicted means were 0.612 vs. 0.608 and 0.750 vs. 0.749; and mean differences in QALYs (observed vs. predicted) were 0.051 vs. 0.053 and 0.164 vs. 0.162. When tested on independent data, the simulated 95% confidence intervals from the BB model contained the observed mean more often (77% and 59% for TOPICAL and SOCCAR, respectively) than those of the other models. All algorithms over-predict at poorer health states, but the BB model was relatively better, particularly for the SOCCAR data. The BB model may offer superior predictive properties among the mapping algorithms considered and may be more useful when predicting EQ-5D-3L at poorer health states. We recommend the algorithm derived from the TOPICAL data due to its better predictive properties and lower uncertainty.

  14. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing their results with those obtained from an interaction term in linear regression. The research questions which each model answers, their…

  15. Multi-resolution image segmentation based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    Tang Yinggan; Liu Dong; Guan Xinping

    2006-01-01

    A mixture model based image segmentation method, which assumes that image pixels are independent and does not consider the positional relationship between pixels, is not robust to noise and usually leads to misclassification. A new segmentation method, called the multi-resolution Gaussian mixture model method, is proposed. First, an image pyramid is constructed and a son-father link relationship is built between each level of the pyramid. Then the mixture model segmentation method is applied to the top level. The segmentation result on the top level is passed top-down to the bottom level according to the son-father link relationship between levels. The proposed method considers not only local but also global information of the image; it overcomes the effect of noise and can obtain better segmentation results. Experimental results demonstrate its effectiveness.

  16. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent the actual mixing proportions can be rediscovered in the amplifier output, thereby addressing the question of confidence in separate DNA profiles suggested by an output.

  17. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    Science.gov (United States)

    2008-10-01

    diffusion, including metabolic loss via the cytochrome P-450 system, described by non-linear Michaelis-Menten kinetics as shown in the following... point. Inhalation and iv were the dose routes for the rat study. The modelers used saturable (Michaelis-Menten) kinetics as well as a second... Michaelis-Menten liver metabolic constants for n-decane have been measured (Km = 1.5 mg/L and Vmax = 0.4 mg/hour) using rat liver slices in a vial

  18. Spatial mixture multiscale modeling for aggregated health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-09-01

    One of the main goals in spatial epidemiology is to study the geographical pattern of disease risks. For this purpose, the convolution model composed of correlated and uncorrelated components is often used. However, one of the two components could be predominant in some regions. To investigate the predominance of the correlated or uncorrelated component for multiple-scale data, we propose four different spatial mixture multiscale models by mixing spatially varying probability weights of correlated (CH) and uncorrelated heterogeneities (UH). The first model assumes that there is no linkage between the different scales and, hence, we consider independent mixture convolution models at each scale. The second model introduces linkage between finer and coarser scales via a shared uncorrelated component of the mixture convolution model. The third model is similar to the second model, but the linkage between the scales is introduced through the correlated component. Finally, the fourth model accommodates a scale effect by sharing both CH and UH simultaneously. We applied these models to real and simulated data, and found that the fourth model performed best, followed by the second model.

  19. Hidden Markov Models with Factored Gaussian Mixture Densities

    Institute of Scientific and Technical Information of China (English)

    LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua

    2004-01-01

    We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models (HMMs), which uses factorial learning in the HMM framework. We derive the reestimation formulas for estimating the factorized parameters by the Expectation Maximization (EM) algorithm. We conduct several experiments to compare the performance of this model structure with Factorial Hidden Markov Models (FHMMs) and HMMs; some conclusions and promising empirical results are presented.

  20. A stochastic evolutionary model generating a mixture of exponential distributions

    Science.gov (United States)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.

  1. On Simon's two-stage design for single-arm phase IIA cancer clinical trials under beta-binomial distribution.

    Science.gov (United States)

    Liu, Junfeng; Lin, Yong; Shih, Weichung Joe

    2010-05-10

    Simon's two-stage design (Control. Clin. Trials 1989; 10:1-10) has been broadly applied to single-arm phase IIA cancer clinical trials in order to minimize either the expected or the maximum sample size under the null hypothesis of drug inefficacy, i.e. when the pre-specified amount of improvement in response rate (RR) is not expected to be observed. This paper studies a realistic scenario where the standard and experimental treatment RRs follow two continuous distributions (e.g. beta distributions) rather than two single values. The binomial probabilities in Simon's design are replaced by prior predictive beta-binomial probabilities, which are ratios of two beta functions; domain-restricted RRs involve incomplete beta functions to induce the null hypothesis acceptance probability. We illustrate that the beta-binomial mixture model based two-stage design retains certain desirable properties for hypothesis testing purposes. However, numerical results show that such designs may not exist under certain hypothesis and error rate (type I and II) setups within a maximal sample size of approximately 130. Furthermore, we give theoretical conditions for asymptotic two-stage design non-existence (as the sample size goes to infinity) in order to improve the efficiency of the design search and to avoid needless searching.
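
    The beta-binomial replacement for the binomial terms is straightforward to compute through log-beta functions; a small sketch follows, with a hypothetical stage size, futility cutoff and Beta prior:

```python
# Sketch of the prior-predictive beta-binomial probability (a ratio of beta
# functions) replacing plain binomial terms. All parameter values are
# illustrative, not from the paper.
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, x):
    # log of the binomial coefficient C(n, x)
    return gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)

def beta_binom_pmf(x, n, a, b):
    """Beta-binomial pmf: C(n,x) * B(x+a, n-x+b) / B(a, b)."""
    return np.exp(log_binom(n, x) + betaln(x + a, n - x + b) - betaln(a, b))

n1, r1 = 13, 3          # stage-1 size and futility cutoff (hypothetical)
a, b = 2.0, 8.0         # Beta prior on the response rate (hypothetical)
x = np.arange(0, r1 + 1)
early_stop = beta_binom_pmf(x, n1, a, b).sum()
print(early_stop)       # predictive probability of stopping after stage 1
```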

  2. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  3. Estimating negative binomial parameters from occurrence data with detection times.

    Science.gov (United States)

    Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub

    2016-11-01

    The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples.

  4. Hard-sphere kinetic models for inert and reactive mixtures

    Science.gov (United States)

    Polewczak, Jacek

    2016-10-01

    I consider stochastic variants of a simple reacting sphere (SRS) kinetic model (Xystris and Dahler 1978 J. Chem. Phys. 68 387-401, Qin and Dahler 1995 J. Chem. Phys. 103 725-50, Dahler and Qin 2003 J. Chem. Phys. 118 8396-404) for dense reacting mixtures. In contrast to line-of-center models of chemically reactive collisions, in the SRS kinetic model the microscopic reversibility (detailed balance) can easily be shown to be satisfied, and thus all mathematical aspects of the model can be fully justified. In the SRS model, the molecules behave as if they were single mass points with two internal states. Collisions may alter the internal states of the molecules, and this occurs when the kinetic energy associated with the reactive motion exceeds the activation energy. Reactive and non-reactive collision events are considered to be hard-sphere-like. I consider a four-component mixture A, B, A*, B*, in which the chemical reactions are of the type $A+B \rightleftharpoons A^{\ast}+B^{\ast}$, with A* and B* being distinct species from A and B. This work extends the joint works with George Stell to kinetic models of dense inert and reactive mixtures. The idea of introducing a smearing-type effect in the collisional process results in a new class of stochastic kinetic models for both inert and reactive mixtures. In this paper important new mathematical properties of such systems of kinetic equations are proven. New results for the stochastic revised Enskog system for inert mixtures are also provided.

  5. Robust estimation of unbalanced mixture models on samples with outliers.

    Science.gov (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-01

    Mixture models are often used to compactly represent samples from heterogeneous sources. In the real world, however, samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be produced by high-density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator incorporating trimming of the outliers based on component-wise confidence-level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, the other containing structural magnetic resonance images of the brain with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method is capable of robustly estimating unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples, in which the outlier fraction cannot be estimated in advance.

  6. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.

  7. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemysław Biecek

    2012-04-01

    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we also equipped the package with the functionality of unsupervised, semi-supervised and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model fitting in all modeling variants. The package can also be applied to select the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  8. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a non-parametric family. The main results are consistency of the non-parametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  9. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference of AERA (American Educational Research Association) in April 2009. The large variability of the SAT-taker population across states makes state-by-state comparisons of SAT scores challenging. Using a mixture modeling approach, the current study therefore presents a method of identifying subpopulations in terms…

  10. Detecting Social Desirability Bias Using Factor Mixture Models

    Science.gov (United States)

    Leite, Walter L.; Cooper, Lou Ann

    2010-01-01

    Based on the conceptualization that social desirability bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  11. An integral equation model for warm and hot dense mixtures

    CERN Document Server

    Starrett, C E; Daligault, J; Hamel, S

    2014-01-01

    In Starrett and Saumon [Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one "atom" in a plasma is determined using a density functional theory based average-atom (AA) model, and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e. mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  12. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict...

  13. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim at gaining computational speed in the prediction, but preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion, without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.

  14. Adaptive mixture observation models for multiple object tracking

    Institute of Scientific and Technical Information of China (English)

    CUI Peng; SUN LiFeng; YANG ShiQiang

    2009-01-01

    Multiple object tracking (MOT) poses many difficulties to conventional, well-studied single object tracking (SOT) algorithms, such as severe expansion of the configuration space, high complexity of motion conditions, and visual ambiguities among nearby targets, among which the visual ambiguity problem is the central challenge. In this paper, we address this problem by embedding adaptive mixture observation models (AMOM) into a mixture tracker implemented in the particle filter framework. In AMOM, the extracted multiple features for appearance description are combined according to their discriminative power between ambiguity-prone objects, where the discriminability of features is evaluated by online entropy-based feature selection techniques. The introduction of AMOM can help to surmount the incapability of the conventional mixture tracker in handling object occlusions, while retaining its merits of flexibility and high efficiency. The final experiments show significant improvement in MOT scenarios compared with other methods.

  15. Research on the Binomial Tree Option Valuation Model in Private Equity Investment

    Institute of Scientific and Technical Information of China (English)

    李爱民; 韩佳佳

    2016-01-01

    In this paper, real option theory is applied to private equity investment decisions. We analyze the shortcomings of traditional valuation methods and the option characteristics of private equity investment, construct a binomial tree option valuation model under the risk-neutral condition, and carry out an empirical analysis of the new investment decision model.
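
    For background, the risk-neutral binomial tree underlying such valuation models can be sketched as a standard Cox-Ross-Rubinstein lattice; the parameter values below are illustrative, not the paper's calibration:

```python
# Sketch of a Cox-Ross-Rubinstein binomial tree under risk-neutral pricing.
# Parameter values are illustrative only.
import numpy as np

def crr_option_price(S0, K, r, sigma, T, steps, call=True):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    # Terminal asset prices (indexed by number of up-moves) and payoffs
    j = np.arange(steps + 1)
    ST = S0 * u**j * d**(steps - j)
    payoff = np.maximum(ST - K, 0.0) if call else np.maximum(K - ST, 0.0)
    # Backward induction: discounted risk-neutral expectation at each step
    disc = np.exp(-r * dt)
    for _ in range(steps):
        payoff = disc * (p * payoff[1:] + (1 - p) * payoff[:-1])
    return payoff[0]

print(crr_option_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
```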

  16. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  17. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557) are used independently of one another. Reliabilities and availabilities of k-out-of-n systems are analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts for calculations of reliability and availability. Program written in C.
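
    CUMBIN itself is a C program; the quantity it computes for a k-out-of-n system can be reproduced with SciPy as a modern stand-in:

```python
# Modern stand-in for CUMBIN's core quantity: the probability that a
# k-out-of-n system works, P(X >= k) for X ~ Binomial(n, p).
from scipy.stats import binom

n, k, p = 8, 6, 0.95                  # 6-out-of-8 system, component reliability 0.95
reliability = binom.sf(k - 1, n, p)   # P(X >= k) = 1 - P(X <= k-1)
print(reliability)
```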

  18. The physical model for research of behavior of grouting mixtures

    Science.gov (United States)

    Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav

    2016-06-01

    The paper describes a physical model designed to verify the behavior of grouting mixtures applied below the groundwater level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extension of grouting in this environment is based on measurements of humidity and temperature, using combined sensors located within preinstalled measurement probes around the grouting needle. Humidity was measured by a combined capacitive sensor DTH-1010; temperature was gathered by an NTC thermistor. The humidity sensors measured the time at which the grouting mixture reached the sensor location, and the NTC thermistors measured temperature changes over time, starting from the initiation of injection. This helped to develop a 3D map showing the distribution of the grouting mixture through the environment. The measurement was accomplished by a purpose-designed primary measurement module capable of connecting four humidity and temperature sensors. This module also converts these physical signals into unified analogue signals, which are brought to the analogue input terminals of a programmable automation controller (PAC) WinPAC-8441. The controller handles the measurement itself, as well as archiving and visualization of all data. A detailed description of the complete measurement system and its evaluation, in the form of 3D animations and graphs, is given in the full paper.

  19. Landmine detection using mixture of discrete hidden Markov models

    Science.gov (United States)

    Frigui, Hichem; Hamdi, Anis; Missaoui, Oualid; Gader, Paul

    2009-05-01

    We propose a landmine detection algorithm that uses a mixture of discrete hidden Markov models. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification could be achieved through clustering in the parameter space or in the feature space. However, this approach is inappropriate, as it is not trivial to define a meaningful distance metric for model parameters or sequence comparison. Our proposed approach is based on clustering in the log-likelihood space and has two main steps. First, one HMM is fit to each of the R individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an R×R log-likelihood distance matrix that is partitioned into K groups using a hierarchical clustering algorithm. In the second step, we pool the sequences, according to the cluster they belong to, into K groups, and we fit one HMM to each group. The mixture of these K HMMs is used to build a descriptive model of the data. An artificial neural network is then used to fuse the outputs of the K models. Results on large and diverse ground penetrating radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mine and one model for the background.

  20. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.

  1. Necessary Conditions for Stable Operation of a Compound Negative Binomial Risk Model with Constant Interest Rate

    Institute of Scientific and Technical Information of China (English)

    乔克林; 高渊; 张宁

    2015-01-01

    Assume that an insurance company initially holds capital u and accumulates interest at a constant rate δ, that the total number of policies follows a negative binomial process, and that the total number of claims follows a Poisson process. We present the compound negative binomial risk model with constant interest rate and give necessary conditions for the insurance company to operate stably.

  2. The Kumaraswamy Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    李效虎; 黄彦彦; 赵雪艳

    2011-01-01

    When over-dispersion needs to be considered in a binomial model, the probability of success is often viewed as a continuous random variable. In this note, we study the mixed binomial model in which the probability of success follows the Kumaraswamy distribution. Stochastic orders and dependence in this model are discussed. Further, the new models are employed to fit some real data sets, and the numerical results reveal that KB models perform better than the beta-binomial model on some occasions.
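
    The KB construction is easy to sample: draw the success probability from a Kumaraswamy(a, b) distribution (whose CDF inverts in closed form) and then draw a binomial count. Parameter values below are illustrative:

```python
# Sampling sketch of the Kumaraswamy-binomial (KB) mixture: p ~ Kumaraswamy(a, b)
# via inverse CDF, then X ~ Binomial(n, p). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rkumaraswamy(size, a, b, rng):
    """Inverse-CDF sampler: F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

n, a, b = 20, 2.0, 3.0
p = rkumaraswamy(10000, a, b, rng)
x = rng.binomial(n, p)       # over-dispersed relative to Binomial(n, E[p])
print(x.mean(), x.var())
```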

  3. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Gonzalo Vegas-Sanchez-Ferrero

    2012-01-01

    Full Text Available Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and not attractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions, and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal is characterized by a different nature of tissues. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
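
    The numerical ML estimation the abstract refers to can be illustrated with SciPy's generalized gamma, which fits its parameters by generic numerical optimization (this is not the authors' robust methodology):

```python
# Generic numerical ML fit of a generalized gamma distribution with SciPy;
# true shape values are chosen arbitrarily for the illustration.
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(0)
true_a, true_c = 2.0, 1.5                     # shape parameters
data = gengamma.rvs(true_a, true_c, size=2000, random_state=rng)

# Numerical ML estimation; loc pinned at 0 since envelope data are positive.
a_hat, c_hat, loc_hat, scale_hat = gengamma.fit(data, floc=0)
print(a_hat, c_hat, scale_hat)
```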

  4. Modeling, clustering, and segmenting video with mixtures of dynamic textures.

    Science.gov (United States)

    Chan, Antoni B; Vasconcelos, Nuno

    2008-05-01

    A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g. fire, steam, water, vehicle and pedestrian traffic, etc.). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g. optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.

  5. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt;

    2015-01-01

    –10% of the total marker coverage. In comparison, our method resulted in three allelic drop-outs (true alleles below threshold), whereas the 10%-threshold induced 12 drop-outs. The non-filtered error reads (e.g. stutters, shoulders and reads with miscalled bases) will subsequently be modelled by different...

  6. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although the normalized L2 distance was slightly inferior to the Kullback-Leibler distance with respect to classification performance, it has the advantage of obeying the triangle inequality, which allows for efficient searching.

  7. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.

    Science.gov (United States)

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P

    2017-04-01

    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
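
    The core ingredient of GEMA, an EM-fitted Gaussian mixture giving probabilistic (soft) cluster assignments, can be sketched with scikit-learn on synthetic "atom positions"; this illustrates the idea only and is not the GEMA implementation:

```python
# Soft clustering of synthetic 3-D "atom positions" with an EM-fitted
# Gaussian mixture; the data and component count are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two dense clusters embedded in a diffuse matrix of points.
matrix = rng.uniform(0, 50, size=(2000, 3))
cluster1 = rng.normal([15, 15, 15], 1.0, size=(150, 3))
cluster2 = rng.normal([35, 35, 35], 1.5, size=(200, 3))
X = np.vstack([matrix, cluster1, cluster2])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(X)
resp = gmm.predict_proba(X)   # soft assignments: uncertainty near interfaces
print(resp[:3].round(3))
```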

  8. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    Science.gov (United States)

    2007-09-01

    Translated Poisson Mixture Model for Stratification Learning, Gloria Haro, Dept. Teoria ... Figure 1 shows, for each algorithm, the point cloud with each point colored and marked differently according to its classification (Figure 1: clustering of a spiral and a plane; results with different algorithms; color figure). Due to the statistical nature of the R-TPMM

  9. Analysis of Forest Foliage Using a Multivariate Mixture Model

    Science.gov (United States)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.

    1997-01-01

    Data comprising wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths that have been associated with nitrogen bonds.

  10. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  11. Convergence of Estimators in Mixture Models Based on Missing Data

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    Full Text Available A mixture model can estimate the proportion of cured patients and the survival function of uncured patients. In this study, a mixture model is developed for cure-rate analysis based on missing data. Several methods can be used to analyze missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values in four steps: (1) choose an initial set of parameters for the model, (2) determine the expectation values for the missing data, (3) induce new model parameters from the combined expectation values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after each iteration; hence, under the EM algorithm, the likelihood sequence converges provided the likelihood is bounded from below.

  12. Application of Negative Binomial Regression to Overcome Overdispersion in Poisson Regression

    Directory of Open Access Journals (Sweden)

    PUTU SUSAN PRADAWATI

    2013-09-01

    Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, in which the mean of the response variable equals its variance. However, deviations occur in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion is present and Poisson regression is used anyway, underestimated standard errors will be obtained. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. For simulated data exhibiting overdispersion under the Poisson regression model, the negative binomial regression model was found to perform better than the Poisson regression model.
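
    A minimal sketch of the comparison, with statsmodels fitted to simulated over-dispersed counts (a gamma-Poisson mixture standing in for the article's simulated data):

```python
# Fit Poisson and negative binomial regressions to over-dispersed counts and
# compare standard errors; the simulated data are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
# Gamma-Poisson mixture => negative-binomial-like, over-dispersed counts
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.NegativeBinomial(y, X).fit(disp=False)  # also estimates dispersion
print(poisson_fit.bse)   # too-small SEs under overdispersion
print(nb_fit.bse)        # larger, more honest SEs
```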

  14. Induced polarization of clay-sand mixtures. Experiments and modelling.

    Science.gov (United States)

    Okay, G.; Leroy, P.

    2012-04-01

    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-Montmorillonite) in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100 % in volume of the sand-clay mixture) and salinities (distilled water, 0.1 g/L, 1 g/L, and 10 g/L NaCl solution). Induced polarization measurements were performed with a cylindrical four-electrode sample-holder associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the CEC of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the dependence on frequency of the quadrature conductivity of sand-kaolinite mixtures is more important than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent on the pore fluid salinity except at very low clay contents. The experimental data show good agreement with predicted values given by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model includes also the effect of the grain size distribution upon the complex conductivity spectra.

  15. Hits per trial: Basic analysis of binomial data

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.

    1994-09-01

    This report presents simple statistical methods for analyzing binomial data, such as the number of failures in some number of demands. It gives point estimates, confidence intervals, and Bayesian intervals for the failure probability. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the failure probability varies randomly. Examples and SAS programs are given.
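
    The basic quantities the report covers can be sketched in a few lines; the failure and demand counts below are illustrative:

```python
# Point estimate, exact (Clopper-Pearson) confidence interval, and a
# Jeffreys-prior Bayesian interval for a binomial failure probability.
from scipy.stats import beta

x, n = 3, 127                 # failures, demands (illustrative)
p_hat = x / n                 # point estimate

# Exact 90% Clopper-Pearson confidence interval
lo = beta.ppf(0.05, x, n - x + 1) if x > 0 else 0.0
hi = beta.ppf(0.95, x + 1, n - x) if x < n else 1.0

# 90% Bayesian credible interval under the Jeffreys prior Beta(1/2, 1/2)
b_lo, b_hi = beta.ppf([0.05, 0.95], x + 0.5, n - x + 0.5)
print(p_hat, (lo, hi), (b_lo, b_hi))
```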

  16. Statistical Inference for a Class of Multivariate Negative Binomial Distributions

    DEFF Research Database (Denmark)

    Rubak, Ege H.; Møller, Jesper; McCullagh, Peter

    This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been studied in the literature, while this is the first statistical paper on α-permanental random fields. The focus is on maximum likelihood estimation, maximum quasi-likelihood estimation and on maximum composite likelihood estimation based on uni- and bivariate distributions. Furthermore, new results...

  17. Sand - rubber mixtures submitted to isotropic loading: a minimal model

    Science.gov (United States)

    Platzer, Auriane; Rouhanifar, Salman; Richard, Patrick; Cazacliu, Bogdan; Ibraim, Erdin

    2017-06-01

    The volume of scrap tyres, an undesired urban waste, is increasing rapidly in every country. Mixing sand and rubber particles as a lightweight backfill is one of the possible alternatives to avoid stockpiling them in the environment. This paper presents a minimal model aiming to capture the evolution of the void ratio of sand-rubber mixtures undergoing an isotropic compression loading. It is based on the idea that, submitted to a pressure, the rubber chips deform and partially fill the porous space of the system, leading to a decrease of the void ratio with increasing pressure. Our simple approach is capable of reproducing experimental data for two types of sand (a rounded one and a sub-angular one) and up to mixtures composed of 50% of rubber.

  18. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Science.gov (United States)

    Holmes, Ian; Harris, Keith; Quince, Christopher

    2012-01-01

    We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. These data can be represented as a frequency matrix giving the number of times each taxon is observed in each sample. The samples have different sizes, and the matrix is sparse, as communities are diverse and skewed to rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components, each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities' and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework' (http://code.google.com/p/microbedmm/). This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogenous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle (AKP)' applied to microbial communities: disturbed states having many more configurations than undisturbed. We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with a more variable
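
    The Dirichlet-multinomial log-probability at the core of DMM follows directly from its closed form; a full DMM adds mixture weights, EM fitting and the Laplace evidence approximation on top. Counts and hyperparameters below are toy values:

```python
# Dirichlet-multinomial log-pmf of a taxa-count vector under one mixture
# component with hyperparameters alpha; values are toy examples.
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(counts, alpha):
    n = counts.sum()
    A = alpha.sum()
    # log[ n!/prod(x_i!) * Gamma(A)/Gamma(n+A) * prod Gamma(x_i+a_i)/Gamma(a_i) ]
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(A) - gammaln(n + A)
            + gammaln(counts + alpha).sum() - gammaln(alpha).sum())

counts = np.array([12, 0, 3, 45, 1])          # one sample's taxa counts (toy)
alpha = np.array([1.0, 0.5, 0.5, 2.0, 0.5])   # component hyperparameters (toy)
print(dirichlet_multinomial_logpmf(counts, alpha))
```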

  20. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    Science.gov (United States)

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus, in the presence of structural zeros, inherently follow a binomial or zero-inflated binomial (ZIB) distribution rather than a Poisson or ZIP distribution. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
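
    A minimal pmf helper for the ZIB distribution discussed above (a structural zero with probability rho, otherwise a Binomial(m, p) count); the parameter values are illustrative:

```python
# Zero-inflated binomial pmf: P(Y = 0) gets an extra structural-zero mass rho,
# all other probabilities are scaled binomial terms. Values are illustrative.
import numpy as np
from scipy.stats import binom

def zib_pmf(y, m, p, rho):
    pmf = (1.0 - rho) * binom.pmf(y, m, p)
    return np.where(y == 0, rho + pmf, pmf)

m = 30                              # days in the observation window
y = np.arange(0, m + 1)
probs = zib_pmf(y, m=m, p=0.4, rho=0.25)
print(probs.sum())                  # sums to 1
```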

  1. The Spectral Mixture Models: A Minimum Information Divergence Approach

    Science.gov (United States)

    2010-04-01

    Bayesian Information Criterion. Developing a metric that measures the fitness of different models is beyond the scope of our discussion. 2.1 ... data, then the results are questionable or perhaps wrong. Various information criteria have been proposed, such as the Akaike and ...

  2. Determining of migraine prognosis using latent growth mixture models

    Institute of Scientific and Technical Information of China (English)

    Bahar Tasdelen; Aynur Ozge; Hakan Kaleagasi; Semra Erdogan; Tufan Mengi

    2011-01-01

    Background: This paper presents a retrospective study to classify patients into subtypes of treatment according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, as it considers population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. Methods: The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. Results: While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Conclusions: Nausea, vomiting and photophobia were the most significant factors in identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  3. Combinatorial Clustering and the Beta Negative Binomial Process.

    Science.gov (United States)

    Broderick, Tamara; Mackey, Lester; Paisley, John; Jordan, Michael I

    2015-02-01

    We develop a Bayesian nonparametric approach to a general family of latent class problems in which individuals can belong simultaneously to multiple classes and where each class can be exhibited multiple times by an individual. We introduce a combinatorial stochastic process known as the negative binomial process (NBP) as an infinite-dimensional prior appropriate for such problems. We show that the NBP is conjugate to the beta process, and we characterize the posterior distribution under the beta-negative binomial process (BNBP) and hierarchical models based on the BNBP (the HBNBP). We study the asymptotic properties of the BNBP and develop a three-parameter extension of the BNBP that exhibits power-law behavior. We derive MCMC algorithms for posterior inference under the HBNBP, and we present experiments using these algorithms in the domains of image segmentation, object recognition, and document analysis.

  4. Ecological Effects of the Invasive Giant Madagascar Day Gecko on Endemic Mauritian Geckos: Applications of Binomial-Mixture and Species Distribution Models

    NARCIS (Netherlands)

    Buckland, S.; Cole, N.C.; Aguirre-Gutiérrez, J.; Gallagher, L.E.; Henshaw, S.M.; Besnard, A.; Tucker, R.M.; Bachraz, V.; Ruhomaun, K.; Harris, S.

    2014-01-01

    The invasion of the giant Madagascar day gecko Phelsuma grandis has increased the threats to the four endemic Mauritian day geckos (Phelsuma spp.) that have survived on mainland Mauritius. We had two main aims: (i) to predict the spatial distribution and overlap of P. grandis and the endemic geckos

  5. On approximation of Markov binomial distributions

    CERN Document Server

    Xia, Aihua; 10.3150/09-BEJ194

    2010-01-01

    For a Markov chain $\mathbf{X}=\{X_i, i=1,2,\dots,n\}$ with the state space $\{0,1\}$, the random variable $S:=\sum_{i=1}^n X_i$ is said to follow a Markov binomial distribution. The exact distribution of $S$, denoted $\mathcal{L}S$, is very computationally intensive for large $n$ (see Gabriel [Biometrika 46 (1959) 454-460] and Bhat and Lal [Adv. in Appl. Probab. 20 (1988) 677-680]) and this paper concerns suitable approximate distributions for $\mathcal{L}S$ when $\mathbf{X}$ is stationary. We conclude that the negative binomial and binomial distributions are appropriate approximations for $\mathcal{L}S$ when $\operatorname{Var} S$ is greater than and less than $\mathbb{E}S$, respectively. Also, due to the unique structure of the distribution, we are able to derive explicit error estimates for these approximations.
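
    A toy illustration of the paper's variance-versus-mean rule (our code, with assumed transition probabilities; not from the paper): simulate a stationary two-state chain, then moment-match S with a negative binomial or binomial distribution depending on the sign of Var S - E S.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, p01, p11, n_rep = 50, 0.2, 0.7, 100_000   # transition probabilities are assumptions
      pi1 = p01 / (p01 + 1.0 - p11)                # stationary P(X_i = 1)

      x = rng.random(n_rep) < pi1                  # stationary initial states
      S = np.zeros(n_rep)
      for _ in range(n):                           # accumulate S = X_1 + ... + X_n
          S += x
          x = rng.random(n_rep) < np.where(x, p11, p01)

      m, v = S.mean(), S.var()
      if v > m:                                    # overdispersed: negative binomial
          approx = stats.nbinom(m * m / (v - m), m / v)
      else:                                        # underdispersed: binomial
          p = 1.0 - v / m
          approx = stats.binom(int(round(m / p)), p)
      print(f"E S = {m:.2f}, Var S = {v:.2f}")
      print(f"P(S >= 30): empirical {(S >= 30).mean():.4f}, approx {approx.sf(29):.4f}")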

  6. Background Subtraction with Dirichlet Process Mixture Models.

    Science.gov (United States)

    Haines, Tom S F; Tao Xiang

    2014-04-01

    Video analysis often begins with background subtraction. This problem is often approached in two steps: a background model followed by a regularisation scheme. A model of the background allows it to be distinguished on a per-pixel basis from the foreground, whilst the regularisation combines information from adjacent pixels. We present a new method based on Dirichlet process Gaussian mixture models, which are used to estimate per-pixel background distributions. It is followed by probabilistic regularisation. Using a non-parametric Bayesian method allows per-pixel mode counts to be automatically inferred, avoiding over-/under-fitting. We also develop novel model learning algorithms for continuous update of the model in a principled fashion as the scene changes. These key advantages enable us to outperform the state-of-the-art alternatives on four benchmarks.

  7. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system is subject to Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the

  8. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid......, alcohols and glycols) are considered, the importance of cross-association is investigated. The cross-association is accounted for either via combining rules or using a cross-solvation energy obtained from experimental spectroscopic or calorimetric data or from ab initio calculations. In both cases two...

  9. Modeling human mortality using mixtures of bathtub shaped failure distributions.

    Science.gov (United States)

    Bebbington, Mark; Lai, Chin-Diew; Zitikis, Ricardas

    2007-04-07

    Aging and mortality is usually modeled by the Gompertz-Makeham distribution, where the mortality rate accelerates with age in adult humans. The resulting parameters are interpreted as the frailty and decrease in vitality with age. This fits well to life data from 'westernized' societies, where the data are accurate, of high resolution, and show the effects of high quality post-natal care. We show, however, that when the data are of lower resolution, and contain considerable structure in the infant mortality, the fit can be poor. Moreover, the Gompertz-Makeham distribution is consistent with neither the force of natural selection, nor the recently identified 'late life mortality deceleration'. Although actuarial models such as the Heligman-Pollard distribution can, in theory, achieve an improved fit, the lack of a closed form for the survival function makes fitting extremely arduous, and the biological interpretation can be lacking. We show that a mixture that assigns mortality to exogenous or endogenous causes, using the reduced additive and flexible Weibull distributions, models human mortality well over the entire life span. The components of the mixture are asymptotically consistent with the reliability and biological theories of aging. The relative simplicity of the mixture distribution makes feasible a technique where the curvature functions of the corresponding survival and hazard rate functions are used to identify the beginning and the end of various life phases, such as infant mortality, the end of the force of natural selection, and late life mortality deceleration. We illustrate our results with a comparative analysis of Canadian and Indonesian mortality data.
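
    As a reading aid for the mixture construction (generic notation ours; the paper's reduced additive and flexible Weibull components have their own parameterizations), a two-component mortality mixture has survival function $S(t) = p\,S_1(t) + (1-p)\,S_2(t)$ and hazard rate $h(t) = [p\,f_1(t) + (1-p)\,f_2(t)] / S(t)$, so the mixture hazard is a survival-weighted compromise between the exogenous and endogenous components rather than a simple weighted sum of the component hazards.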

  10. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  11. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic...... elements to the baseline algorithm: The learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated...
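
    For orientation, a minimal single-pixel sketch of the Stauffer & Grimson style update that this paper enhances (grayscale, K components; the parameter values and the simplified learning factor rho are our assumptions, not the paper's):

      import numpy as np

      class PixelGMM:
          """Adaptive mixture for one grayscale pixel, Stauffer-Grimson style."""
          def __init__(self, K=3, alpha=0.01):
              self.alpha = alpha                         # fixed here; the paper varies it
              self.w = np.full(K, 1.0 / K)               # component weights
              self.mu = np.linspace(0.0, 255.0, K)       # component means
              self.var = np.full(K, 225.0)               # component variances

          def update(self, x):
              """Ingest intensity x; return True if x is classified background."""
              d2 = (x - self.mu) ** 2 / self.var
              k = int(np.argmin(d2))
              matched = d2[k] < 2.5 ** 2                 # 2.5-sigma match test
              self.w *= 1.0 - self.alpha                 # decay all weights
              if matched:
                  self.w[k] += self.alpha
                  rho = self.alpha / max(self.w[k], 1e-6)  # simplified update factor
                  self.mu[k] += rho * (x - self.mu[k])
                  self.var[k] = (1 - rho) * self.var[k] + rho * (x - self.mu[k]) ** 2
              else:                                      # recycle the weakest component
                  j = int(np.argmin(self.w))
                  self.mu[j], self.var[j], self.w[j] = x, 400.0, 0.05
              self.w /= self.w.sum()
              order = np.argsort(-(self.w / np.sqrt(self.var)))
              n_bg = int(np.searchsorted(np.cumsum(self.w[order]), 0.7)) + 1
              return matched and k in order[:n_bg]       # background components cover 70%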

  12. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Full Text Available Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  13. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters, and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five different methods presented, one, the Gaussian mixture reduction proposed by Runnalls, easily outperformed the others. This method iteratively reduces the size of a GMM by successively merging pairs of component densities, with pairs selected for merger using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
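
    A sketch of the reduction step described above (1-D components for brevity, whereas the paper's are multivariate; the formulas are the standard moment-preserving merge and Runnalls' KL-based upper-bound cost):

      import numpy as np

      def merge(w1, m1, v1, w2, m2, v2):
          """Moment-preserving merge of two weighted 1-D Gaussians."""
          w = w1 + w2
          m = (w1 * m1 + w2 * m2) / w
          v = (w1 * v1 + w2 * v2) / w + (w1 * w2 / w ** 2) * (m1 - m2) ** 2
          return w, m, v

      def merge_cost(c1, c2):
          """Runnalls' Kullback-Leibler based dissimilarity upper bound."""
          (w1, _, v1), (w2, _, v2) = c1, c2
          w, _, v = merge(*c1, *c2)
          return 0.5 * (w * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

      def reduce_gmm(comps, target):
          """Greedily merge the cheapest pair until `target` components remain."""
          comps = list(comps)                  # [(weight, mean, variance), ...]
          while len(comps) > target:
              _, i, j = min((merge_cost(comps[i], comps[j]), i, j)
                            for i in range(len(comps))
                            for j in range(i + 1, len(comps)))
              comps[i] = merge(*comps[i], *comps[j])
              del comps[j]
          return comps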

  14. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2017-02-01

    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.

  15. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation

    Science.gov (United States)

    Shi, X.; Zhao, Q. H.

    2017-09-01

    For image segmentation methods based on the Gaussian Mixture Model (GMM), there are two problems: 1) the number of components is usually fixed in advance, i.e., a fixed number of classes, and 2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, a GMM models the distribution of pixel intensities in the RS image, and the number of components is treated as a random variable. Prior distributions are built for each parameter; to improve noise resistance, a Gibbs function models the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation is obtained for the RS image. Experimental results show that the proposed algorithm converges to the optimal number of classes and yields an ideal segmentation result.

  16. Refining personality disorder subtypes and classification using finite mixture modeling.

    Science.gov (United States)

    Yun, Rebecca J; Stern, Barry L; Lenzenweger, Mark F; Tiersky, Lana A

    2013-04-01

    The current Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic system for Axis II disorders continues to be characterized by considerable heterogeneity and poor discriminant validity. Such problems impede accurate personality disorder (PD) diagnosis. As a result, alternative assessment tools are often used in conjunction with the DSM. One popular framework is the object relational model developed by Kernberg and his colleagues (J. F. Clarkin, M. F. Lenzenweger, F. Yeomans, K. N. Levy, & O. F. Kernberg, 2007, An object relations model of borderline pathology, Journal of Personality Disorders, Vol. 21, pp. 474-499; O. F. Kernberg, 1984, Severe Personality Disorders, New Haven, CT: Yale University Press; O. F. Kernberg & E. Caligor, 2005, A psychoanalytic theory of personality disorders, in M. F. Lenzenweger & J. F. Clarkin, Eds., Major Theories of Personality Disorder, New York, NY: Guilford Press). Drawing on this model and empirical studies thereof, the current study attempted to clarify Kernberg's (1984) PD taxonomy and identify subtypes within a sample with varying levels of personality pathology using finite mixture modeling. Subjects (N = 141) were recruited to represent a wide range of pathology. The finite mixture modeling results indicated that 3 components were harbored within the variables analyzed. Group 1 was characterized by low levels of antisocial, paranoid, and aggressive features, and Group 2 was characterized by elevated paranoid features. Group 3 revealed the highest levels across the 3 variables. The validity of the obtained solution was then evaluated by reference to a variety of external measures that supported the validity of the identified grouping structure. Findings generally appear congruent with previous research, which argued that a PD taxonomy based on paranoid, aggressive, and antisocial features is a viable supplement to current diagnostic systems. Our study suggests that Kernberg's object relational model offers a

  17. An Interesting Application of the Binomial Distribution.

    Science.gov (United States)

    Newell, G. J.; MacFarlane, J. D.

    1984-01-01

    Presents an application of the binomial distribution in which the distribution is used to detect differences between the sensory properties of food products. Included is a BASIC computer program listing used to generate triangle and duo-trio test results. (JN)

  18. Adaptive bayesian analysis for binomial proportions

    CSIR Research Space (South Africa)

    Das, Sonali

    2008-10-01

    Full Text Available The authors consider the problem of statistical inference of binomial proportions for non-matched, correlated samples, under the Bayesian framework. Such inference can arise when the same group is observed at a different number of times with the aim...

  19. A model for steady flows of magma-volatile mixtures

    CERN Document Server

    Belan, Marco

    2012-01-01

    A general one-dimensional model for the steady adiabatic motion of liquid-volatile mixtures in vertical ducts with varying cross-section is presented. The liquid contains a dissolved part of the volatile and is assumed to be incompressible and in thermomechanical equilibrium with a perfect gas phase, which is generated by the exsolution of the same volatile. An inverse problem approach is used -- the pressure along the duct is set as an input datum, and the other physical quantities are obtained as output. This fluid-dynamic model is intended as an approximate description of magma-volatile mixture flows of interest to geophysics and planetary sciences. It is implemented as a symbolic code, where each line stands for an analytic expression, whether algebraic or differential, which is managed by the software kernel independently of the numerical value of each variable. The code is versatile and user-friendly and permits to check the consequences of different hypotheses even through its early steps. Only the las...

  20. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Science.gov (United States)

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  1. The Distribution of Characteristics of a Shock Model with Time Intervals Obeying the Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    马明; 陆琬; 吉佩玉

    2015-01-01

    This study investigates a type of random shock model in which the time intervals between shock arrivals follow a binomial distribution. Three quantities are studied: the arrival time of a shock, the total number of shocks up to any given time, and the probability that a shock arrives at a given time. The probability distributions of the shock arrival times, of the number of shocks, and of whether a shock arrives at any given time are obtained.

  2. Statistical Inference for the Binomial-Generalized Pareto Compound Extreme Value Distribution Model

    Institute of Scientific and Technical Information of China (English)

    张香云; 程维虎

    2012-01-01

    Extreme value theory mainly studies extreme events of small probability and major impact. The compound extreme value distribution has been widely used in hydrology, meteorology, seismology, insurance, finance and other fields. In this paper, we establish a binomial-generalized Pareto compound extreme value distribution model based on the extreme value type theorem and the PBDH theorem, derive parameter estimates for the compound model by the method of probability-weighted moments, and obtain critical values of the Kolmogorov-Smirnov (KS) test statistic by computer simulation.

  3. Mixture of a seismicity model based on the rate-and-state friction and ETAS model

    Science.gov (United States)

    Iwata, T.

    2015-12-01

    Currently the ETAS model [Ogata, 1988, JASA] is considered a standard model of seismicity. However, because the ETAS model is purely statistical, the physics-based seismicity model derived from rate-and-state friction (hereafter the Dieterich model) [Dieterich, 1994, JGR] is frequently examined. The original version of the Dieterich model has several problems in its application to real earthquake sequences, and modifications have therefore been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved by including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks); still, the performance of the ETAS model remains superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models with weights k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan (1965-1970) is analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by maximum likelihood, and model performance is assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that the mixture of the two models significantly improves the seismicity model.
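
    A sketch of the mixture described above (notation ours): the combined conditional intensity is $\lambda(t \mid H_t) = k\,\lambda_{\mathrm{ETAS}}(t \mid H_t) + (1-k)\,\lambda_{\mathrm{D}}(t \mid H_t)$, and k is estimated jointly with the component parameters by maximizing the point-process log-likelihood $\log L = \sum_i \log \lambda(t_i \mid H_{t_i}) - \int_0^T \lambda(t \mid H_t)\,dt$, with $\mathrm{AIC} = -2\log\hat{L} + 2p$ (p counting all parameters, including k) used for the comparisons quoted.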

  4. MODELLING AND PARAMETER ESTIMATION IN REACTIVE CONTINUOUS MIXTURES: THE CATALYTIC CRACKING OF ALKANES. PART I

    Directory of Open Access Journals (Sweden)

    PEIXOTO F. C.

    1999-01-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  5. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-01-01

    Using a Gaussian Mixture Model (GMM) and the Expectation Maximization algorithm, we perform an analysis of time duration ($T_{90}$) for \textit{CGRO}/BATSE, \textit{Swift}/BAT and \textit{Fermi}/GBM Gamma-Ray Bursts. The $T_{90}$ distributions of 298 redshift-known \textit{Swift}/BAT GRBs have also been studied in both observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the \textit{CGRO}/BATSE and \textit{Fermi}/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the \textit{Swift}/BAT bursts in the rest frame, which is consistent with some previous results. However, \textit{Swift} GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of \textit{Swift}/BAT.

  6. Classifying gamma-ray bursts with Gaussian Mixture Model

    Science.gov (United States)

    Zhang, Zhi-Bin; Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-11-01

    Using Gaussian Mixture Model (GMM) and expectation-maximization algorithm, we perform an analysis of time duration (T90) for Compton Gamma Ray Observatory (CGRO)/BATSE, Swift/BAT and Fermi/GBM gamma-ray bursts (GRBs). The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. Bayesian information criterion has been used to compare between different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
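
    The model-selection step described in these two records can be reproduced in outline as follows (a hedged sketch, not the authors' code; `t90` stands for an array of burst durations in seconds from any of the catalogs):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def best_n_components(t90, max_k=4):
          """Fit 1-D GMMs to log10(T90) and pick the component count by BIC."""
          X = np.log10(np.asarray(t90, dtype=float)).reshape(-1, 1)
          bics = {}
          for k in range(1, max_k + 1):
              gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
              bics[k] = gmm.bic(X)
          return min(bics, key=bics.get), bics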

  7. Mixtures of Polya trees for flexible spatial frailty survival modelling.

    Science.gov (United States)

    Zhao, Luping; Hanson, Timothy E; Carlin, Bradley P

    2009-06-01

    Mixtures of Polya trees offer a very flexible nonparametric approach for modelling time-to-event data. Many such settings also feature spatial association that requires further sophistication, either at the point level or at the lattice level. In this paper, we combine these two aspects within three competing survival models, obtaining a data analytic approach that remains computationally feasible in a fully hierarchical Bayesian framework using Markov chain Monte Carlo methods. We illustrate our proposed methods with an analysis of spatially oriented breast cancer survival data from the Surveillance, Epidemiology and End Results program of the National Cancer Institute. Our results indicate appreciable advantages for our approach over competing methods that impose unrealistic parametric assumptions, ignore spatial association or both.

  8. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  9. Advances in Behavioral Genetics Modeling Using Mplus: Applications of Factor Mixture Modeling to Twin Data

    National Research Council Canada - National Science Library

    Muthen, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-01-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder...

  10. Tomography of binomial states of the radiation field

    NARCIS (Netherlands)

    Bazrafkan, MR; Man'ko, [No Value

    2004-01-01

    The symplectic, optical, and photon-number tomographic symbols of binomial states of the radiation field are studied. Explicit relations for all tomograms of the binomial states are obtained. Two measures for nonclassical properties of these states are discussed.

  12. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures, the exception being the mixture MMI (TPO inhibitor)-KClO{sub 4} (NIS inhibitor) dosed at a fixed ratio of EC{sub 10}, which yielded similar CA and RA predictions, making it difficult to draw a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  13. Minimum cross entropy formalism of the binomial tree model for option pricing

    Institute of Scientific and Technical Information of China (English)

    李英华; 李兴斯

    2011-01-01

    Using the minimum cross entropy formalism, a new model is constructed that treats the price states of the underlying asset (stock) as an information system. First, a prior probability density is obtained from the historical data of the stock price. Second, the probability density of the stock price distribution for the binomial tree model at time $n\Delta t$ that is closest to the prior is derived by the minimum cross entropy formalism under moment constraints on the stock price change, which yields the parameters p, u, d. The new model is easy to compute, since it can be solved by existing nonlinear programming algorithms or, through its dual problem, by unconstrained optimization, and it has a clear economic and physical meaning. It overcomes the drawbacks of the binomial tree model and its variants and is a unified model that is not restricted by the type of the stock price probability distribution. Finally, compared with the B-S, CRR, JR, TGR, Wil1 and Wil2 models, the numerical results show that the new method converges more rapidly in most cases, and is more
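
    For reference, a minimal Cox-Ross-Rubinstein (CRR) pricer, one of the baselines the abstract compares against (our sketch; the entropy-derived p, u, d would simply replace the CRR choices below):

      import numpy as np

      def crr_price(S0, K, r, sigma, T, n, call=True):
          """European option price on an n-step CRR binomial tree."""
          dt = T / n
          u = np.exp(sigma * np.sqrt(dt))        # up factor
          d = 1.0 / u                            # down factor
          p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
          ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
          V = np.maximum(ST - K, 0.0) if call else np.maximum(K - ST, 0.0)
          disc = np.exp(-r * dt)
          for _ in range(n):                     # backward induction to t = 0
              V = disc * (p * V[:-1] + (1 - p) * V[1:])
          return float(V[0])

      # e.g. crr_price(100, 100, 0.05, 0.2, 1.0, 500) approaches the B-S value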

  14. Using the beta-binomial distribution to characterize forest health

    Energy Technology Data Exchange (ETDEWEB)

    Zarnoch, S.J.; Anderson, R.L.; Sheffield, R.M.

    1995-12-31

    The beta-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused by Discula destructiva) in the southeastern United States. The parameter estimates have important biological interpretations, and tests of hypotheses are more meaningful than traditional statistical analyses. The value of a modeling approach to dichotomous data analysis is emphasized.
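
    A hedged sketch of the maximum likelihood fit described above (our code and variable names; `k` is the number of infected trees out of `n` examined per plot):

      import numpy as np
      from scipy import stats, optimize

      def fit_betabinom(k, n):
          """MLE of beta-binomial parameters (a, b) plus mean and correlation."""
          k, n = np.asarray(k), np.asarray(n)
          def nll(log_ab):
              a, b = np.exp(log_ab)              # keep a, b positive
              return -stats.betabinom.logpmf(k, n, a, b).sum()
          res = optimize.minimize(nll, np.log([1.0, 1.0]), method="Nelder-Mead")
          a, b = np.exp(res.x)
          return a, b, a / (a + b), 1.0 / (a + b + 1.0)  # mean p, intraclass rho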

  15. Library Book Circulation and the Beta-Binomial Distribution.

    Science.gov (United States)

    Gelman, E.; Sichel, H. S.

    1987-01-01

    Argues that library book circulation is a binomial rather than a Poisson process, and that individual book popularities are continuous beta distributions. Three examples demonstrate the superiority of beta over negative binomial distribution, and it is suggested that a bivariate-binomial process would be helpful in predicting future book…

  16. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females.

  17. Mixture models versus free energy of hydration models for waste glass durability

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T.; Masuga, P.

    1996-03-01

    Two approaches for modeling high-level waste glass durability as a function of glass composition are compared. The mixture approach utilizes first-order mixture (FOM) or second-order mixture (SOM) polynomials in composition, whereas the free energy of hydration (FEH) approach assumes durability is linearly related to the FEH of glass. Both approaches fit their models to data using least squares regression. The mixture and FEH approaches are used to model glass durability as a function of glass composition for several simulated waste glass data sets. The resulting FEH and FOM model coefficients and goodness-of-fit statistics are compared, both within and across data sets. The goodness-of-fit statistics show that the FOM model fits/predicts durability in each data set better (sometimes much better) than the FEH model. Considerable differences also exist between some FEH and FOM model component coefficients for each of the data sets. These differences are due to the mixture approach having a greater flexibility to account for the effect of a glass component depending on the level and range of the component and on the levels of other glass components. The mixture approach can also account for higher-order (e.g., curvilinear or interactive) effects of components, whereas the FEH approach cannot. SOM models were developed for three of the data sets, and are shown to improve on the corresponding FOM models. Thus, the mixture approach has much more flexibility than the FEH approach for approximating the relationship between glass composition and durability for various glass composition regions.
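
    To make the FOM form concrete, here is a minimal sketch (component names and numbers are illustrative only): a first-order mixture model, often called the Scheffe form, regresses durability on the component proportions with no intercept, so each coefficient is the predicted response of the pure component.

      import numpy as np

      X = np.array([[0.5, 0.3, 0.2],       # rows: glass compositions, proportions sum to 1
                    [0.4, 0.4, 0.2],
                    [0.6, 0.2, 0.2],
                    [0.3, 0.5, 0.2]])
      y = np.array([1.2, 1.5, 1.0, 1.8])   # e.g. log normalized elemental release
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # no intercept in the Scheffe form
      print(dict(zip(["SiO2", "B2O3", "Na2O"], coef)))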

  18. Improved model for mixtures of polymers and hard spheres

    Science.gov (United States)

    D'Adamo, Giuseppe; Pelissetto, Andrea

    2016-12-01

    Extensive Monte Carlo simulations are used to investigate how model systems of mixtures of polymers and hard spheres approach the scaling limit. We represent polymers as lattice random walks of length L with an energy penalty w for each intersection (Domb-Joyce model), interacting with hard spheres of radius $R_c$ via a hard-core pair potential of range $R_{\mathrm{mon}}+R_c$, where $R_{\mathrm{mon}}$ is identified as the monomer radius. We show that the mixed polymer-colloid interaction gives rise to new confluent corrections. The leading ones scale as $L^{-\nu}$, where $\nu \approx 0.588$ is the usual Flory exponent. Finally, we determine optimal values of the model parameters w and $R_{\mathrm{mon}}$ that guarantee the absence of the two leading confluent corrections. This improved model shows a significantly faster convergence to the asymptotic limit $L\to\infty$ and is amenable to extensive and accurate numerical simulations at finite density, with only a limited computational effort.

  19. Compressive sensing by learning a Gaussian mixture model from measurements.

    Science.gov (United States)

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
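
    The closed-form reconstruction that motivates GMM priors can be sketched as follows (our code; the paper's actual contribution, learning the GMM from the measurements via the MMLE, is not shown):

      import numpy as np
      from scipy.stats import multivariate_normal as mvn

      def gmm_mmse(y, A, sigma2, weights, means, covs):
          """MMSE estimate of x from y = A x + N(0, sigma2 I) when x ~ GMM."""
          post_w, post_m = [], []
          for w, mu, S in zip(weights, means, covs):
              C = A @ S @ A.T + sigma2 * np.eye(len(y))   # covariance of y given k
              gain = S @ A.T @ np.linalg.inv(C)
              post_w.append(w * mvn.pdf(y, mean=A @ mu, cov=C))
              post_m.append(mu + gain @ (y - A @ mu))     # E[x | y, component k]
          post_w = np.array(post_w) / np.sum(post_w)      # posterior component weights
          return sum(w * m for w, m in zip(post_w, post_m))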

  20. Generalized binomial distribution in photon statistics

    Science.gov (United States)

    Ilyin, Aleksey

    2015-01-01

    The photon-number distribution between two parts of a given volume is found for an arbitrary photon statistics. This problem is related to the interaction of a light beam with a macroscopic device, for example a diaphragm, that separates the photon flux into two parts with known probabilities. To solve this problem, a Generalized Binomial Distribution (GBD) is derived that is applicable to an arbitrary photon statistics satisfying probability convolution equations. It is shown that if photons obey Poisson statistics then the GBD is reduced to the ordinary binomial distribution, whereas in the case of Bose-Einstein statistics the GBD is reduced to the Polya distribution. In this case, the photon spatial distribution depends on the phase-space volume occupied by the photons. This result involves a photon bunching effect, or collective behavior of photons that sharply differs from the behavior of classical particles. It is shown that the photon bunching effect looks similar to the quantum interference effect.
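
    The Poisson and Bose-Einstein limiting cases can be checked empirically with a few lines (our toy code): binomially split each photon number and inspect the Fano factor var/mean of the transmitted arm.

      import numpy as np

      rng = np.random.default_rng(1)
      n_rep, nbar, p = 200_000, 8.0, 0.3        # p = transmission of the diaphragm
      sources = {
          "Poisson": rng.poisson(nbar, n_rep),
          "Bose-Einstein": rng.geometric(1.0 / (1.0 + nbar), n_rep) - 1,
      }
      for name, N in sources.items():
          k = rng.binomial(N, p)                # photons passing the diaphragm
          print(name, k.var() / k.mean())       # ~1 for Poisson; >1 (bunching) for BE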

  1. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture proportion

  3. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...... (water, methanol, and glycols) are modeled assuming the presence or absence of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary...

  4. Generalized binomial multiplicative cascade processes and asymmetrical multifractal distributions

    Science.gov (United States)

    Cheng, Q.

    2014-04-01

    The concepts and models of multifractals have been employed in various fields in the geosciences to characterize singular fields caused by nonlinear geoprocesses. Several indices involved in multifractal models, i.e., asymmetry, multifractality, and range of singularity, are commonly used to characterize nonlinear properties of multifractal fields. An understanding of how these indices are related to the processes involved in the generation of multifractal fields is essential for multifractal modeling. In this paper, a five-parameter binomial multiplicative cascade model is proposed based on anisotropic partition processes. Each partition divides the unit set (1-D length or 2-D area) into h equal subsets (segments or subareas), of which m1 receive a proportion d1 (> 0) and m2 a proportion d2 (> 0) of the mass in the previous subset, where m1 + m2 ≤ h. The model is demonstrated via several examples published in the literature with asymmetrical fractal dimension spectra. This model demonstrates the various properties of asymmetrical multifractal distributions and multifractal indices with explicit functions, thus providing insight into and an understanding of the properties of asymmetrical binomial multifractal distributions.
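
    The partition rule is simple enough to state in code (our sketch; we additionally assume a conservative cascade, m1*d1 + m2*d2 = 1, and randomize which subcells receive d1, neither of which is required by the general model):

      import numpy as np

      def binomial_cascade(h, m1, m2, d1, d2, steps, rng=None):
          """Anisotropic binomial multiplicative cascade on a 1-D grid."""
          assert m1 + m2 <= h
          rng = rng or np.random.default_rng(0)
          mass = np.array([1.0])
          for _ in range(steps):
              new = np.empty(len(mass) * h)
              for i, m in enumerate(mass):
                  shares = np.array([d1] * m1 + [d2] * m2 + [0.0] * (h - m1 - m2))
                  rng.shuffle(shares)                    # random placement of shares
                  new[i * h:(i + 1) * h] = m * shares
              mass = new
          return mass                                    # multifractal mass field

      # classical de Wijs case: h = 2, one cell gets d1, the other d2 = 1 - d1
      field = binomial_cascade(2, 1, 1, 0.7, 0.3, steps=12)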

  5. Fully Bayesian mixture model for differential gene expression: simulations and model checks.

    Science.gov (United States)

    Lewin, Alex; Bochkina, Natalia; Richardson, Sylvia

    2007-01-01

    We present a Bayesian hierarchical model for detecting differentially expressed genes using a mixture prior on the parameters representing differential effects. We formulate an easily interpretable 3-component mixture to classify genes as over-expressed, under-expressed and non-differentially expressed, and model gene variances as exchangeable to allow for variability between genes. We show how the proportion of differentially expressed genes, and the mixture parameters, can be estimated in a fully Bayesian way, extending previous approaches where this proportion was fixed and empirically estimated. Good estimates of the false discovery rates are also obtained. Different parametric families for the mixture components can lead to quite different classifications of genes for a given data set. Using Affymetrix data from a knock out and wildtype mice experiment, we show how predictive model checks can be used to guide the choice between possible mixture priors. These checks show that extending the mixture model to allow extra variability around zero instead of the usual point mass null fits the data better. A software package for R is available.

  6. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    Science.gov (United States)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable is zero, and the second part, a truncated negative binomial model, models the non-zero (positive integer) outcomes. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Maximum Likelihood Estimation (MLE) is used for the assessment of the parameter estimates. The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data containing zero values in some observations and varying values in others. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
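
    A sketch of the two-part likelihood in the uncensored case (our code; the paper's contribution is the extension to right-censored counts, which is not shown):

      import numpy as np
      from scipy import stats, optimize

      def hurdle_nb_nll(params, y, X):
          """Negative log-likelihood: logit hurdle + zero-truncated NB for y > 0."""
          kdim = X.shape[1]
          bz, bc, logalpha = params[:kdim], params[kdim:2 * kdim], params[-1]
          alpha = np.exp(logalpha)                   # NB dispersion, kept positive
          pi0 = 1.0 / (1.0 + np.exp(-(X @ bz)))      # P(y = 0), hurdle part
          mu = np.exp(X @ bc)                        # mean of the untruncated NB
          r, p = 1.0 / alpha, 1.0 / (1.0 + alpha * mu)
          ll = np.where(y == 0,
                        np.log(pi0),
                        np.log1p(-pi0)
                        + stats.nbinom.logpmf(y, r, p)
                        - np.log1p(-stats.nbinom.pmf(0, r, p)))  # zero truncation
          return -ll.sum()

      # fit: optimize.minimize(hurdle_nb_nll, x0, args=(y, X), method="BFGS")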

  7. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, environmental risk was assessed, addressing the need to study the relationship between the organism and xenobiotics. Five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained a risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed classification of the environmental risk of the industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  8. Statistical inference for a class of multivariate negative binomial distributions

    DEFF Research Database (Denmark)

    Rubak, Ege Holger; Møller, Jesper; McCullagh, Peter

    This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been studied in the literature, while this is the first statistical paper on α-permanental random fields. The focus is on maximum likelihood estimation, maximum quasi-likelihood estimation and on maximum composite likelihood estimation based on uni- and bivariate distributions. Furthermore, new results for α

  9. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  10. Describing Anopheles arabiensis aquatic habitats in two riceland agro-ecosystems in Mwea, Kenya using a negative binomial regression model with a non-homogenous mean.

    Science.gov (United States)

    Jacob, Benjamin G; Griffith, Daniel; Muturi, Ephantus; Caamano, Erick X; Shililu, Josephat; Githure, John I; Novak, Robert J

    2009-01-01

    This research illustrates a geostatistical approach for modeling the spatial distribution patterns of Anopheles arabiensis Patton aquatic habitats in two riceland environments. QuickBird 0.61 m data, encompassing the visible bands and the near-infrared (NIR) band, were selected to synthesize images of An. arabiensis aquatic habitats. These bands and field-sampled data were used to determine ecological parameters associated with riceland larval habitat development. SAS was used to calculate univariate statistics, correlations and Poisson regression models. Global autocorrelation statistics were generated in ArcGIS from georeferenced Anopheles aquatic habitats in the study sites. The geographic distribution of Anopheles gambiae s.l. aquatic habitats in the study sites exhibited weak positive autocorrelation; habitats with similar log-larval counts tended to cluster in space. Individual riceland habitat data were further evaluated in terms of their covariation with spatial autocorrelation by regressing them on candidate spatial filter eigenvectors. Each eigenvector generated from a geographically weighted matrix, for both study sites, revealed a distinctive spatial pattern. The spatial autocorrelation components suggest the presence of roughly 14-30% redundant information in the aquatic habitat larval count samples. Synthetic map pattern variables furnish a method of capturing spatial dependency effects in the mean response term in regression analyses of riceland An. arabiensis aquatic habitat data.
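
    The aspatial core of such a model is easy to sketch with statsmodels (our code on simulated data; in the paper's approach the selected spatial filter eigenvectors would enter as additional columns of X):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      X = sm.add_constant(rng.normal(size=(100, 2)))           # habitat covariates
      y = rng.poisson(np.exp(X @ np.array([0.5, 0.3, -0.2])))  # larval counts

      nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
      print(nb.fit().summary())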

  11. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical method used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties that provide remarkable results, and it also shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion; identifying the number of components is important because an incorrect choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative relationship between rubber prices and stock market prices for all selected countries.

  12. Sum of Bernoulli Mixtures: Beyond Conditional Independence

    Directory of Open Access Journals (Sweden)

    Taehan Bae

    2014-01-01

    Full Text Available We consider the distribution of the sum of Bernoulli mixtures under a general dependence structure. The level of dependence is measured in terms of a limiting conditional correlation between two of the Bernoulli random variables. The conditioning event is that the mixing random variable is larger than a threshold and the limit is with respect to the threshold tending to one. The large-sample distribution of the empirical frequency and its use in approximating the risk measures, value at risk and conditional tail expectation, are presented for a new class of models which we call double mixtures. Several illustrative examples with a Beta mixing distribution, are given. As well, some data from the area of credit risk are fit with the models, and comparisons are made between the new models and also the classical Beta-binomial model.

  13. Industrial Agglomeration and Innovation: An Application of the Negative Binomial Model

    Institute of Scientific and Technical Information of China (English)

    张萃

    2012-01-01

    Based on the localized character of knowledge spillovers, this paper examines the impact of Chinese industrial agglomeration on innovation from a spatial viewpoint, using a negative binomial regression model. The empirical evidence, obtained through maximum likelihood estimation, shows that the innovation effect of industrial agglomeration is significant; this conclusion is also supported by extended regressions on the high-innovation and high-agglomeration industries. In addition, FDI has no significant impact on the innovation of the high-innovation industries, and domestic firms even appear to promote the innovation of foreign firms through reverse technology diffusion.

  14. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  15. A person-fit index for polytomous Rasch models, latent class models, and their mixture generalizations

    NARCIS (Netherlands)

    von Davier, M; Molenaar, IW

    2003-01-01

    A normally distributed person-fit index is proposed for detecting aberrant response patterns in latent class models and mixture distribution IRT models for dichotomous and polytomous data. This article extends previous work on the null distribution of person-fit indices for the dichotomous Rasch mod

  16. Constrained and unconstrained multivariate normal finite mixture modeling of Piagetian data.

    NARCIS (Netherlands)

    Dolan, C.V.; Jansen, B.R.J.; van der Maas, H.L.J.

    2004-01-01

    We present the results of multivariate normal mixture modeling of Piagetian data. The sample consists of 101 children, who carried out a (pseudo-)conservation computer task on four occasions. We fitted both cross-sectional mixture models, and longitudinal models based on a Markovian transition

  17. Global cross-calibration of Landsat spectral mixture models

    CERN Document Server

    Sousa, Daniel

    2016-01-01

    Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from previous Landsat sensors. The objective of this analysis is to quantify the impact of these changes on linear spectral mixture models that use imagery collected by different Landsat sensors. The 2013 underflight of Landsat 7 and 8 provides an opportunity to cross calibrate the spectral mixing spaces of the ETM+ and OLI sensors using near-simultaneous acquisitions from a wide variety of land cover types worldwide. We use 80,910,343 pairs of OLI and ETM+ spectra to characterize the OLI spectral mixing space and perform a cross-...

  18. Fuzzy local Gaussian mixture model for brain MR image segmentation.

    Science.gov (United States)

    Ji, Zexuan; Xia, Yong; Sun, Quansen; Chen, Qiang; Xia, Deshen; Feng, David Dagan

    2012-05-01

    Accurate brain tissue segmentation from magnetic resonance (MR) images is an essential step in quantitative brain image analysis. However, due to the existence of noise and intensity inhomogeneity in brain MR images, many segmentation algorithms suffer from limited accuracy. In this paper, we assume that the local image data within each voxel's neighborhood satisfy the Gaussian mixture model (GMM), and thus propose the fuzzy local GMM (FLGMM) algorithm for automated brain MR image segmentation. This algorithm estimates the segmentation result that maximizes the posterior probability by minimizing an objective energy function, in which a truncated Gaussian kernel function is used to impose the spatial constraint and fuzzy memberships are employed to balance the contribution of each GMM. We compared our algorithm to state-of-the-art segmentation approaches in both synthetic and clinical data. Our results show that the proposed algorithm can largely overcome the difficulties raised by noise, low contrast, and bias field, and substantially improve the accuracy of brain MR image segmentation.

  19. Advances in behavioral genetics modeling using Mplus: applications of factor mixture modeling to twin data.

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-06-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder. In this model, heritability is simultaneously studied with respect to latent class membership and within-class severity dimensions. Different latent classes of individuals are allowed to have different heritability for the severity dimensions. The factor mixture approach appears to have great potential for the genetic analyses of heterogeneous populations. Generalizations for longitudinal data are also outlined.

  20. Binomial ARMA count series from renewal processes

    CERN Document Server

    Koshkin, Sergiy

    2011-01-01

    This paper describes a new method for generating stationary integer-valued time series from renewal processes. We prove that if the lifetime distribution of renewal processes is nonlattice and the probability generating function is rational, then the generated time series satisfy causal and invertible ARMA type stochastic difference equations. The result provides an easy method for generating integer-valued time series with ARMA type autocovariance functions. Examples of generating binomial ARMA(p,p-1) series from lifetime distributions with constant hazard rates after lag p are given as an illustration. An estimation method is developed for the AR(p) cases.

  1. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin HUAI

    2013-01-01

    Full Text Available Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (when the Stokes number St is relatively large, the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, the particle dispersion along the radial direction is considerable, and the relative velocity is very low due to the low dynamic response time. When St = 3.08, the dispersion of particles along the radial direction is very little, and most of the particles have high relative velocities along the streamwise direction.

  2. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  3. A Monte Carlo simulation study of a robust estimator used in the inference of a contaminated binomial model

    Directory of Open Access Journals (Sweden)

    Marcelo Angelo Cirillo

    2010-07-01

    Full Text Available Statistical inference in contaminated binomial populations is subject to gross estimation errors, since the samples are not identically distributed. Given this problem, this work aims to determine the affinity constant (c1) that provides the best performance for an estimator belonging to the class of E-estimators. For that purpose, the Monte Carlo simulation method was used, in which different configurations described by combinations of parametric values, levels of contamination, and sample sizes were evaluated. It was concluded that, for a high probability of contamination (γ = 0.40), c1 = 0.1 is recommended in situations with large samples (n = 50 and n = 80).

  4. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using Gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable, and the consequences of ignoring the underlying... Data on the spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and Gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to Gaussian-based models on data generated from both a mixture of vMFs and a mixture of Gaussians.

  5. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    Science.gov (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in environmental compartments at specific concentration levels, and their pairwise combinations, against the Microtox® and XenoScreen YES/YAS® bioassays. Since the quantitative assessment of the ecotoxicity of drug mixtures is a complex topic, two major approaches were used in the present study to gain specific information on the mutual impact of two drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure of assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculating a model deviation ratio (MDR) for each of the experiments carried out. The second approach was based on the assumption that the mutual impact in each two-drug mixture could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each participant in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure of the impact of drug A on drug B or, vice versa, of B on A. The results justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action, and only a few of the mixtures show synergistic or dependent action. Copyright © 2016. Published by Elsevier B.V.
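    For the first approach, a minimal sketch of the concentration addition prediction and the model deviation ratio is given below; the EC50 values are invented, not measured endpoints from the Microtox or YES/YAS assays.

    ```python
    # Hedged sketch: concentration addition (CA) and the model deviation ratio.
    def ca_mixture_ec50(ec50_a: float, ec50_b: float, frac_a: float) -> float:
        """EC50 of a binary mixture under concentration addition (Loewe)."""
        return 1.0 / (frac_a / ec50_a + (1.0 - frac_a) / ec50_b)

    predicted = ca_mixture_ec50(ec50_a=4.0, ec50_b=12.0, frac_a=0.5)
    observed = 3.1                       # hypothetical measured mixture EC50
    mdr = predicted / observed           # MDR >> 1 suggests synergism
    print(f"CA-predicted EC50 = {predicted:.2f}, MDR = {mdr:.2f}")
    ```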

  6. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2009-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  7. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  9. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Finite mixture models have been used to generate sub-pixel land cover classifications; traditionally, however, these make use of mixtures of normal distributions, which fail to represent many land cover classes accurately, as these are usually...

  10. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.

  11. Generalized binomial distribution in photon statistics

    Directory of Open Access Journals (Sweden)

    Ilyin Aleksey

    2015-01-01

    Full Text Available The photon-number distribution between two parts of a given volume is found for an arbitrary photon statistics. This problem is related to the interaction of a light beam with a macroscopic device, for example a diaphragm, that separates the photon flux into two parts with known probabilities. To solve this problem, a Generalized Binomial Distribution (GBD) is derived that is applicable to an arbitrary photon statistics satisfying probability convolution equations. It is shown that if photons obey Poisson statistics then the GBD reduces to the ordinary binomial distribution, whereas in the case of Bose-Einstein statistics the GBD reduces to the Polya distribution. In this case, the photon spatial distribution depends on the phase-space volume occupied by the photons. This result involves a photon bunching effect, or collective behavior of photons that sharply differs from the behavior of classical particles. It is shown that the photon bunching effect looks similar to the quantum interference effect.
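    The two limiting cases named in the record can be checked numerically: splitting each photon independently with transmission t is binomial thinning, under which Poisson input stays Poissonian (Fano factor 1) while Bose-Einstein (geometric) input retains Polya-type bunching (Fano factor 1 plus the transmitted mean). A Monte Carlo sketch under these assumptions:

    ```python
    # Hedged sketch: binomial thinning of Poisson vs Bose-Einstein photons.
    import numpy as np

    rng = np.random.default_rng(4)
    t, n_sims, mean_n = 0.3, 200_000, 5.0

    poisson_in = rng.poisson(mean_n, n_sims)
    bose_in = rng.geometric(1.0 / (1.0 + mean_n), n_sims) - 1  # support 0,1,...

    for name, n_in in [("Poisson", poisson_in), ("Bose-Einstein", bose_in)]:
        n_out = rng.binomial(n_in, t)          # photons passing the diaphragm
        fano = n_out.var() / n_out.mean()      # ~1.0 vs ~1 + t*mean_n = 2.5
        print(f"{name}: mean={n_out.mean():.3f}, Fano={fano:.3f}")
    ```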

  12. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    -alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... of the model of relevance to the petroleum and chemical industries: high pressure vapor-liquid and liquid-liquid equilibrium in alcohol-containing mixtures, mixtures with gas hydrate inhibitors and mixtures with polar and hydrogen bonding chemicals including organic acids. Some comparisons with conventional...

  13. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  14. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into dis

  15. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    Science.gov (United States)

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  16. Approximate order-up-to policies for inventory systems with binomial yield

    NARCIS (Netherlands)

    Ju, Wanrong; Gabor, Adriana F.; Ommeren, van Jan-Kees C.W.

    2013-01-01

    This paper studies an inventory policy for a retailer who orders his products from a supplier whose deliveries only partially satisfy the quality requirements. We model this situation by an infinite-horizon periodic-review model with binomial random yield and positive lead time. We propose an order-

  17. Approximate Order-up-to Policies for Inventory Systems with Binomial Yield

    NARCIS (Netherlands)

    W. Ju (Wanrong); A.F. Gabor (Adriana); J.C.W. van Ommeren (Jan-Kees)

    2013-01-01

    This paper studies an inventory policy for a retailer who orders his products from a supplier whose deliveries only partially satisfy the quality requirements. We model this situation by an infinite-horizon periodic-review model with binomial random yield and positive lead time. We pro

  18. Approximating Order-up-to Policies for Inventory Systems with Binomial Yield

    NARCIS (Netherlands)

    W. Ju (Wanrong); A.F. Gabor (Adriana); J.C.W. van Ommeren (Jan-Kees)

    2014-01-01

    This paper studies an inventory policy for a retailer who orders his products from a supplier whose deliveries only partially satisfy the quality requirements. We model this situation by an infinite-horizon periodic-review model with binomial random yield and positive lea

  2. Bayesian mixture modeling using a hybrid sampler with application to protein subfamily identification.

    Science.gov (United States)

    Fong, Youyi; Wakefield, Jon; Rice, Kenneth

    2010-01-01

    Predicting protein function is essential to advancing our knowledge of biological processes. This article is focused on discovering the functional diversification within a protein family. A Bayesian mixture approach is proposed to model a protein family as a mixture of profile hidden Markov models. For a given mixture size, a hybrid Markov chain Monte Carlo sampler comprising both Gibbs sampling steps and hierarchical clustering-based split/merge proposals is used to obtain posterior inference. Inference for mixture size concentrates on comparing the integrated likelihoods. The choice of priors is critical with respect to the performance of the procedure. Through simulation studies, we show that 2 priors that are based on independent data sets allow correct identification of the mixture size, both when the data are homogeneous and when the data are generated from a mixture. We illustrate our method using 2 sets of real protein sequences.

  3. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    Full Text Available We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.

  4. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  5. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures ... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  6. Mathematical model of the component mixture distribution in the molten cast iron during centrifugation (sedimentation)

    Science.gov (United States)

    Bikulov, R. A.; Kotlyar, L. M.

    2014-12-01

    For the development and management of manufacturing processes for axisymmetric articles with a compositional structure by the centrifugal casting method [1,2,3,4], it is necessary to create a generalized mathematical model of the dynamics of the component mixture in molten cast iron during centrifugation. In this article, based on an analysis of the dynamics of a two-component mixture under sedimentation, a method of successive approximations is developed to determine the distribution of a multicomponent mixture during centrifugation in a parabolic crucible.

  7. Adaptive Mixture Modelling Metropolis Methods for Bayesian Analysis of Non-linear State-Space Models.

    Science.gov (United States)

    Niemi, Jarad; West, Mike

    2010-06-01

    We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.

  8. Modelling viscosity and mass fraction of bitumen - diluent mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Miadonye, A.; Latour, N.; Puttagunta, V.R. [Lakehead Univ., Thunder Bay, ON (Canada)

    1999-07-01

    In recovery of bitumen in oil sands extraction, the reduction of the viscosity is important above and below ground. The addition of liquid diluent breaks down or weakens the intermolecular forces that create a high viscosity in bitumen. The addition of even 5% of diluent can cause a viscosity reduction in excess of 8%, thus facilitating the in situ recovery and pipeline transportation of bitumen. Knowledge of bitumen - diluent viscosity is highly important because without it, determination of upgrading processes, in situ recovery, well simulation, heat transfer, fluid flow and a variety of other engineering problems would be difficult or impossible to solve. The development of a simple correlation to predict the viscosity of binary mixtures of bitumen - diluent in any proportion is described. The developed correlation used to estimate the viscosities and mass fractions of bitumen - diluent mixtures was within acceptable limits of error. For the prediction of mixture viscosities, the developed correlation gave the best results with an overall average absolute deviation of 12% compared to those of Chironis (17%) and Cragoe (23%). Predictions of diluent mass fractions yielded a much better result with an overall average absolute deviation of 5%. The unique features of the correlation include its computational simplicity, its applicability to mixtures at temperatures other than 30 degrees C, and the fact that only the bitumen and diluent viscosities are needed to make predictions. It is the only correlation capable of predicting viscosities of mixtures, as well as diluent mass fractions required to reduce bitumen viscosity to pumping viscosities. The prediction of viscosities at 25, 60.3, and 82.6 degrees C produced excellent results, particularly at high temperatures with an average absolute deviation of below 10%. 11 refs., 3 figs., 8 tabs.

  9. Unsupervised Segmentation of Spectral Images with a Spatialized Gaussian Mixture Model and Model Selection

    Directory of Open Access Journals (Sweden)

    Cohen S.X.

    2014-03-01

    Full Text Available In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation from a real dataset.

  10. Mixture Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2007-12-01

    A mixture experiment involves combining two or more components in various proportions or amounts and then measuring one or more responses for the resulting end products. Other factors that affect the response(s), such as process variables and/or the total amount of the mixture, may also be studied in the experiment. A mixture experiment design specifies the combinations of mixture components and other experimental factors (if any) to be studied and the response variable(s) to be measured. Mixture experiment data analyses are then used to achieve the desired goals, which may include (i) understanding the effects of components and other factors on the response(s), (ii) identifying components and other factors with significant and nonsignificant effects on the response(s), (iii) developing models for predicting the response(s) as functions of the mixture components and any other factors, and (iv) developing end-products with desired values and uncertainties of the response(s). Given a mixture experiment problem, a practitioner must consider the possible approaches for designing the experiment and analyzing the data, and then select the approach best suited to the problem. Eight possible approaches include 1) component proportions, 2) mathematically independent variables, 3) slack variable, 4) mixture amount, 5) component amounts, 6) mixture process variable, 7) mixture of mixtures, and 8) multi-factor mixture. The article provides an overview of the mixture experiment designs, models, and data analyses for these approaches.
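    As a minimal sketch of the first approach (component proportions), the snippet below fits a first-order Scheffé mixture model, y = b1*x1 + b2*x2 + b3*x3 with proportions summing to one and hence no intercept, to an invented three-component blend design.

    ```python
    # Hedged sketch: first-order Scheffe mixture model on invented blend data.
    import numpy as np

    X = np.array([                 # proportions of three components per blend
        [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
        [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
        [1/3, 1/3, 1/3],
    ])
    y = np.array([4.1, 6.0, 2.2, 5.3, 3.0, 4.3, 4.2])  # hypothetical responses

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # no intercept column
    print("blending coefficients:", coef.round(2))
    ```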

  11. Numerical Simulation of Water Jet Flow Using Diffusion Flux Mixture Model

    Directory of Open Access Journals (Sweden)

    Zhi Shang

    2014-01-01

    Full Text Available A multidimensional diffusion flux mixture model was developed to simulate water jet two-phase flows. By modifying the gravity term using gradients of the mixture velocity, the centrifugal force on the water droplets could be taken into account. The slip velocities between the continuous phase (gas) and the dispersed phase (water droplets) were calculated through multidimensional diffusion flux velocities based on the modified multidimensional drift flux model. The model was validated through numerical simulations of a water mist spray, by comparison with experiments and with simulations using the traditional algebraic slip mixture model.

  12. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  13. Extending the Binomial Checkpointing Technique for Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    2016-10-10

    In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.
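    The combinatorial heart of the binomial approach can be stated in a few lines: with s checkpoints and at most r forward sweeps per step, adjoints of up to C(s + r, s) time steps can be reversed (the classical Griewank-Walther revolve bound; the paper's resilience extension is not reproduced here).

    ```python
    # Hedged sketch: the binomial checkpointing capacity bound.
    from math import comb

    def max_reversible_steps(s: int, r: int) -> int:
        """Maximal chain length for s checkpoints and r repetitions."""
        return comb(s + r, s)

    for s in (2, 5, 10):
        for r in (2, 3):
            print(f"s={s:2d}, r={r}: up to {max_reversible_steps(s, r)} steps")
    ```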

  14. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties for ...

  15. A Lattice Boltzmann Model of Binary Fluid Mixture

    CERN Document Server

    Orlandini, E; Yeomans, J M; Orlandini, Enzo; Swift, Michael R.

    1995-01-01

    We introduce a lattice Boltzmann model for simulating an immiscible binary fluid mixture. Our collision rules are derived from a macroscopic thermodynamic description of the fluid in a way motivated by the Cahn-Hilliard approach to non-equilibrium dynamics. This ensures that a thermodynamically consistent state is reached in equilibrium. The non-equilibrium dynamics is investigated numerically and found to agree with simple analytic predictions in both the one-phase and the two-phase region of the phase diagram.

  16. Convergence Properties of Kemp's q-Binomial Distribution

    OpenAIRE

    Gerhold, Stefan; Zeiner, Martin

    2008-01-01

    We consider Kemp's q-analogue of the binomial distribution. Several convergence results involving the classical binomial, the Heine, the discrete normal, and the Poisson distribution are established. Some of them are q-analogues of classical convergence properties. Besides elementary estimates, we apply Mellin transform asymptotics.

  17. Wigner Function of Density Operator for Negative Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    HE Min-Hua; XU Xing-Lei; ZHANG Duan-Ming; LI Hong-Qi; PAN Gui-Jun; YIN Yan-Ping; CHEN Zhi-Yuan

    2008-01-01

    By using the technique of integration within an ordered product (IWOP) of operators, we derive the Wigner function of the density operator for the negative binomial distribution of the radiation field in the mixed-state case; we then derive the Wigner function of the squeezed number state, which yields the negative binomial distribution, by virtue of the entangled state representation and the entangled Wigner operator.

  18. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  19. Evaluation of the generalized Goodwin Staton integral using binomial expansion theorem

    Science.gov (United States)

    Mamedov, B. A.

    2007-05-01

    A new analytical technique for evaluating the generalized Goodwin-Staton (GGS) integral is described. A closed-form evaluation is presented. The GGS integral is expressed in terms of linear combinations of binomial coefficients and the incomplete gamma function. A further comparison of analytical results with numerical models demonstrates the high accuracy of the developed analytical solution. The convergence of the results is shown.

  20. Pricing of perpetual American and Bermudan options by binomial tree method

    Institute of Scientific and Technical Information of China (English)

    LIN Jianwei; LIANG Jin

    2007-01-01

    In this paper, we consider the binomial tree method for pricing perpetual American and perpetual Bermudan options. The closed-form solutions of these discrete models are obtained. Explicit formulas for the optimal exercise boundary of the perpetual American option are derived. A nonlinear equation satisfied by the optimal exercise boundary of the perpetual Bermudan option is found.
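    As background for the record, the sketch below is a standard Cox-Ross-Rubinstein binomial tree for a finite-horizon American put with early exercise by backward induction; it is not the paper's perpetual closed-form derivation.

    ```python
    # Hedged sketch: CRR binomial tree pricer for an American put.
    import numpy as np

    def american_put_crr(S0, K, r, sigma, T, n):
        dt = T / n
        u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
        q = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
        disc = np.exp(-r * dt)
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = np.maximum(K - S, 0.0)           # payoff at maturity
        for _ in range(n):
            V = disc * (q * V[:-1] + (1 - q) * V[1:])  # roll back one step
            S = S[:-1] / u                             # prices one step earlier
            V = np.maximum(V, K - S)                   # early-exercise check
        return V[0]

    print(round(american_put_crr(100, 100, 0.05, 0.2, 1.0, 500), 4))
    ```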

  1. Use of a negative binomial distribution to describe the presence of Sphyrion laevigatum in Genypterus blacodes.

    Science.gov (United States)

    Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio; Castro, Raúl; Navarrete, Carolina

    2013-01-01

    This paper describes the frequency and number of Sphyrion laevigatum in the skin of Genypterus blacodes, an important economic resource in Chile. The analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host could be described by a negative binomial distribution. The maximum number of parasites observed per host was two.
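    A quick method-of-moments check of the kind of aggregation the record reports: for a negative binomial, variance = mean + mean²/k, so a small k signals clustering. The counts below are invented, not the paper's G. blacodes data.

    ```python
    # Hedged sketch: is the parasite count distribution aggregated?
    import numpy as np

    counts = np.array([0]*70 + [1]*10 + [2]*10 + [5]*5 + [10]*5)  # per host
    m, v = counts.mean(), counts.var(ddof=1)
    # Solve variance = mean + mean**2 / k for the clustering parameter k.
    k = m**2 / (v - m) if v > m else float("inf")  # inf -> Poisson (random)
    print(f"mean={m:.2f}, variance={v:.2f}, k={k:.2f}")
    ```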

  2. A Bayesian estimation on right censored survival data with mixture and non-mixture cured fraction model based on Beta-Weibull distribution

    Science.gov (United States)

    Yusuf, Madaki Umar; Bakar, Mohd. Rizam B. Abu

    2016-06-01

    Models for survival data that include the proportion of individuals who are not subject to the event under study are known as cure fraction models, or simply long-term survival models. The two most common models used to estimate the cure fraction are the mixture model and the non-mixture model. In this work, we present mixture and non-mixture cure fraction models for survival data based on the beta-Weibull distribution. This four-parameter distribution has been proposed as an alternative extension of the Weibull distribution in the analysis of lifetime data. The approach allows the inclusion of covariates in the models, and the parameters were estimated under a Bayesian approach using Gibbs sampling methods.

  3. Comparison of Fitting Results of Poisson Regression and Negative Binomial Regression Models for Data of Cytokinesis-block Micronucleus Test

    Institute of Scientific and Technical Information of China (English)

    郑辉烈; 王增珍; 俞慧强

    2011-01-01

    Objective: To compare the fitting results of the Poisson regression model and the negative binomial regression model for data from the cytokinesis-block micronucleus test (the number of micronucleated cells per 1,000 binucleated lymphocytes), and to provide a basis for the statistical analysis of such data. Methods: Using the log-likelihood function, the deviance, the Pearson χ2 statistic, and a cluster index, the fitting results of the Poisson and negative binomial regression models for micronucleus test data were evaluated. Results: The ratio of the log-likelihood function to the degrees of freedom for the negative binomial regression was greater than that for the Poisson regression, while the ratios of the deviance and the Pearson χ2 statistic to their degrees of freedom were smaller. A likelihood ratio test showed that the cluster index differed significantly from zero (χ2 = 1,160.42, P < 0.001). Conclusion: The negative binomial regression model is superior to the Poisson regression model for data from the cytokinesis-block micronucleus test.
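    A minimal sketch of the comparison the record performs, using the same kinds of criteria (log-likelihood, information criteria); the dose-response data are synthetic micronucleus-like counts, and statsmodels stands in for the paper's software.

    ```python
    # Hedged sketch: Poisson vs negative binomial fits to overdispersed counts.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 150
    dose = rng.uniform(0.0, 4.0, n)                   # hypothetical exposure
    mu = np.exp(0.2 + 0.5 * dose)
    y = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))  # overdispersed counts

    X = sm.add_constant(dose)
    pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    nb = sm.NegativeBinomial(y, X).fit(disp=0)        # also estimates dispersion

    print(f"Poisson: llf={pois.llf:.1f}, AIC={pois.aic:.1f}")
    print(f"NegBin : llf={nb.llf:.1f}, AIC={nb.aic:.1f}")  # NB should win here
    ```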

  4. A Bayesian Mixture Model for PoS Induction Using Multiple Features

    OpenAIRE

    Christodoulopoulos, Christos; Goldwater, Sharon; Steedman, Mark

    2011-01-01

    In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our sy...

  5. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.

  6. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high-power ultrasound on the rheological and foaming properties of model ice-cream mixtures. The model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe with a tip diameter of 12.7 mm was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient, and apparent viscosity. From the results it can be concluded that all model mixtures show non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower values were observed for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, the foam stability index, and minimal viscosity. The mixtures treated only with ultrasound showed the smallest increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Also, mixtures with a higher protein content showed higher foam stability. It was determined that the optimal treatment time is 10 minutes.

  7. Irreversible Processes in a Universe modelled as a mixture of a Chaplygin gas and radiation

    CERN Document Server

    Kremer, G M

    2003-01-01

    The evolution of a Universe modelled as a mixture of a Chaplygin gas and radiation is determined by taking into account irreversible processes. This mixture could interpolate periods of a radiation dominated, a matter dominated and a cosmological constant dominated Universe. The results of a Universe modelled by this mixture are compared with the results of a mixture whose constituents are radiation and quintessence. Among other results it is shown that: (a) for both models there exists a period of a past deceleration with a present acceleration; (b) the slope of the acceleration of the Universe modelled as a mixture of a Chaplygin gas with radiation is more pronounced than that modelled as a mixture of quintessence and radiation; (c) the energy density of the Chaplygin gas tends to a constant value at earlier times than the energy density of quintessence does; (d) the energy density of radiation for both mixtures coincide and decay more rapidly than the energy densities of the Chaplygin gas and of quintessen...

  8. Enumerative and binomial sampling plans for citrus mealybug (Homoptera: pseudococcidae) in citrus groves.

    Science.gov (United States)

    Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran

    2006-06-01

    The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling. Therefore, we recommend enumerative sampling plans for the use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds, in the three plant locations calyx, fruit, and trunk were 50, 110-330, and 30, respectively. Binomial sampling, especially the empirical model, required a higher sample size to achieve equivalent levels of precision.
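    The Wilson and Room binomial model named in the record links mean density m to the proportion of occupied organs through Taylor's power law parameters a and b; a sketch of that relationship follows, with invented parameter values rather than the paper's citrus-grove estimates.

    ```python
    # Hedged sketch: Wilson-Room mean-proportion binomial model.
    import numpy as np

    def wilson_room_proportion(m, a, b):
        """P(occupied) = 1 - exp(-m * ln(a*m**(b-1)) / (a*m**(b-1) - 1))."""
        vmr = a * m ** (b - 1.0)          # variance-to-mean ratio from the TPL
        return 1.0 - np.exp(-m * np.log(vmr) / (vmr - 1.0))

    for m in (0.5, 1.0, 2.0, 5.0):
        print(f"m={m}: p={float(wilson_room_proportion(m, a=2.0, b=1.4)):.3f}")
    ```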

  9. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2009-01-01

    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended to adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend on the distance from the solid surface (or position in the porous space). The theory describes the two kinds of interactions present in the adsorbed fluid, i.e. the fluid-fluid and fluid-solid interactions, by means of an equation of state and interaction potentials, respectively. The proposed extension

  10. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Ca...... forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications....

  11. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    Science.gov (United States)

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed.

  12. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...

  13. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model, a calculation method was obtained that is used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline when temperature, pressure, and hydraulic conditions change.

  14. Optimal Penalty Functions Based on MCMC for Testing Homogeneity of Mixture Models

    Directory of Open Access Journals (Sweden)

    Rahman Farnoosh

    2012-07-01

    Full Text Available This study is intended to provide an estimation of the penalty function for testing homogeneity of mixture models based on Markov chain Monte Carlo simulation. The penalty function is considered as a parametric function, and the parameter determining the shape of the penalty function is estimated, in conjunction with the parameters of the mixture models, by a Bayesian approach. Different mixtures of uniform distributions are used as priors. Some simulation examples are performed to confirm the efficiency of the present work in comparison with previous approaches.

  15. Scattering for mixtures of hard spheres: comparison of total scattering intensities with model.

    Science.gov (United States)

    Anderson, B J; Gopalakrishnan, V; Ramakrishnan, S; Zukoski, C F

    2006-03-01

    The angular dependence of the intensity of x-rays scattered from binary and ternary hard sphere mixtures is investigated and compared to the predictions of two scattering models. Mixture ratio and total volume fraction dependent effects are investigated for size ratios equal to 0.51 and 0.22. Comparisons of model predictions with experimental results indicate the significant role of particle size distributions in interpreting the angular dependence of the scattering at wave vectors probing density fluctuations intermediate between the sizes of the particles in the mixture.

  16. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    OpenAIRE

    Lei Wang; Satoshi Uchida

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classificat...
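
    A minimal sketch of the linear spectral mixture idea, with synthetic endmember spectra and an unconstrained least-squares inversion (a full LSMM for MODIS would add the sum-to-one and non-negativity constraints); all numbers are illustrative.

      import numpy as np

      # Synthetic endmember spectra (rows: bands; columns: rice, water, urban)
      E = np.array([[0.10, 0.05, 0.30],
                    [0.40, 0.04, 0.35],
                    [0.55, 0.03, 0.40],
                    [0.60, 0.02, 0.45]])
      true_f = np.array([0.6, 0.3, 0.1])   # true sub-pixel fractions
      pixel = E @ true_f + 0.005 * np.random.default_rng(0).standard_normal(4)

      # Least-squares estimate of the land-cover fractions within the pixel
      f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
      print("estimated fractions:", np.round(f_hat, 3))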

  17. MULTIPLE REFLECTION EFFECTS IN NONLINEAR MIXTURE MODEL FOR HYPERSPECTRAL IMAGE ANALYSIS

    OpenAIRE

    Liu, C. Y.; Ren, H.

    2016-01-01

    Hyperspectral spectrometers can record electromagnetic energy with hundreds or thousands of spectral channels. With such high spectral resolution, the spectral information has better capability for material identification. Because of the spatial resolution, one pixel in hyperspectral images usually covers several meters, and it may contain more than one material. Therefore, the mixture model must be considered. Linear mixture model (LMM) has been widely used for remote sensing target...

  18. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only...... components and to only one parameter for mixtures consisting of non-polar components. A possibility of complete prediction of the parameters is discussed....

  19. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...... categorizing only the most extreme SCS observations as mastitic, and such cases of subclinical infections may be the most closely related to clinical (treated) mastitis...

  20. NUMERICAL ANALYSIS ON BINOMIAL TREE METHODS FOR AMERICAN LOOKBACK OPTIONS

    Institute of Scientific and Technical Information of China (English)

    戴民

    2001-01-01

    Lookback options are path-dependent options. In general, the binomial tree methods, as the most popular approaches to pricing options, involve a path-dependent variable as well as the underlying asset price for lookback options. However, for floating strike lookback options, a single-state-variable binomial tree method can be constructed. This paper is devoted to the convergence analysis of the single-state binomial tree methods for both discretely and continuously monitored American floating strike lookback options. We also investigate some properties of such options, including the effects of expiration date, interest rate, and dividend yield on option prices, properties of optimal exercise boundaries, and so on.

  1. Time accelerated Monte Carlo simulations of biological networks using the binomial tau-leap method.

    Science.gov (United States)

    Chatterjee, Abhijit; Mayawala, Kapil; Edwards, Jeremy S; Vlachos, Dionisios G

    2005-05-01

    Developing a quantitative understanding of intracellular networks requires simulations and computational analyses. However, traditional differential equation modeling tools are often inadequate due to the stochasticity of intracellular reaction networks that can potentially influence the phenotypic characteristics. Unfortunately, stochastic simulations are computationally too intense for most biological systems. Herein, we have utilized the recently developed binomial tau-leap method to carry out stochastic simulations of the epidermal growth factor receptor induced mitogen activated protein kinase cascade. Results indicate that the binomial tau-leap method is computationally 100-1000 times more efficient than the exact stochastic simulation algorithm of Gillespie. Furthermore, the binomial tau-leap method avoids negative populations and accurately captures the species populations along with their fluctuations despite the large difference in their size. http://www.dion.che.udel.edu/multiscale/Introduction.html. Fortran 90 code available for academic use by email. Details about the binomial tau-leap algorithm, software and a manual are available at the above website.
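
    The core of the binomial tau-leap update can be shown on a single first-order decay reaction A -> 0 (not the EGFR/MAPK network of the paper); the rate constant and leap size are illustrative. Because the firing count is binomial with at most N trials, the population can never go negative, unlike a plain Poisson leap.

      import numpy as np

      rng = np.random.default_rng(0)
      c, tau = 0.1, 0.5          # rate constant and leap size (illustrative)
      N, t = 1000, 0.0           # initial copy number of A

      while t < 50.0 and N > 0:
          # Binomial tau-leap: each of the N molecules reacts in (t, t + tau]
          # with probability min(1, c * tau), so firings never exceed N.
          N -= rng.binomial(N, min(1.0, c * tau))
          t += tau
      print(f"t = {t:.1f}, remaining A = {N}")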

  2. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    Science.gov (United States)

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
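
    The gamma-mixed Poisson construction mentioned here can be checked numerically: mixing Poisson counts over a gamma-distributed rate reproduces the negative binomial mean and variance. A small sketch with illustrative parameters:

      import numpy as np
      from scipy.stats import nbinom

      rng = np.random.default_rng(1)
      r, scale = 3.0, 2.0                        # gamma shape and scale
      lam = rng.gamma(r, scale, size=100_000)    # random rate per subject
      counts = rng.poisson(lam)                  # gamma-mixed Poisson counts

      # The gamma mixture of Poissons is negative binomial with
      # n = r and p = 1 / (1 + scale).
      p = 1.0 / (1.0 + scale)
      print("simulated mean/var:", counts.mean(), counts.var())
      print("neg-binomial mean/var:", nbinom.mean(r, p), nbinom.var(r, p))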

  3. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of the artificial neural network. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  4. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... surface tensions of 34 binary mixtures with an overall average absolute deviation of 3.46%. The results show good agreement between the predicted and experimental surface tensions. Next, the SGT model was applied to correlate surface tensions of binary mixtures containing alcohols, water and/or glycerol...

  5. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.;

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even...... studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved....

  6. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by the minimization of the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, and PSNR gains reached 3 dB.

  7. The role of Poisson's binomial distribution in the analysis of TEM images.

    Science.gov (United States)

    Tejada, Arturo; den Dekker, Arnold J

    2011-11-01

    Frank's observation that a TEM bright-field image acquired under non-stationary conditions can be modeled by the time integral of the standard TEM image model [J. Frank, Nachweis von Objektbewegungen im lichtoptischen Diffraktogramm von elektronenmikroskopischen Aufnahmen, Optik 30 (2) (1969) 171-180] is re-derived here using counting statistics based on Poisson's binomial distribution. The approach yields a statistical image model that is suitable for image analysis and simulation.
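
    Poisson's binomial distribution (the number of successes among independent Bernoulli trials with unequal probabilities) has no simple closed-form pmf in general, but it can be built exactly by convolving one trial at a time; the probabilities below are arbitrary examples.

      import numpy as np

      def poisson_binomial_pmf(probs):
          # pmf[k] = P(exactly k successes); add one Bernoulli trial per step.
          pmf = np.array([1.0])
          for p in probs:
              pmf = np.convolve(pmf, [1.0 - p, p])
          return pmf

      pmf = poisson_binomial_pmf([0.2, 0.5, 0.9, 0.7])
      print(np.round(pmf, 4), pmf.sum())   # pmf sums to 1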

  8. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies.

    Science.gov (United States)

    Lin, Lin; Chan, Cliburn; West, Mike

    2016-01-01

    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets.

  9. Volumetric Properties of Chloroalkanes + Amines Mixtures: Theoretical Analysis Using the ERAS-Model

    Science.gov (United States)

    Tôrres, R. B.; Hoga, H. E.; Magalhães, J. G.; Volpe, P. L. O.

    2009-08-01

    In this study, experimental data of excess molar volumes of {dichloromethane (DCM), or trichloromethane (TCM) + n-butylamine (n-BA), or +s-butylamine (s-BA), or +t-butylamine (t-BA), or +diethylamine (DEA), or +triethylamine (TEA)} mixtures as a function of composition have been used to test the applicability of the extended real associated solution model (ERAS-Model). The values of the excess molar volume were negative for (DCM + t-BA, or +DEA, or +TEA and TCM + n-BA, or +s-BA, or +DEA, or +TEA) mixtures and present sigmoid curves for (DCM + n-BA, or +s-BA) mixtures over the complete mole-fraction range. The agreement between theoretical and experimental results is discussed in terms of cross-association between the components present in the mixtures.

  10. Kinetic Modeling of Gasoline Surrogate Components and Mixtures under Engine Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Pitz, W J; Westbrook, C K; Curran, H J

    2010-01-11

    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, an improved version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multicomponent gasoline surrogates are compared with recent experimental information collected in rapid compression machine, shock tube and jet stirred reactors covering a wide range of conditions pertinent to internal combustion engines (3-50 atm, 650-1200K, stoichiometric fuel/air mixtures). Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  11. Simulating asymmetric colloidal mixture with adhesive hard sphere model.

    Science.gov (United States)

    Jamnik, A

    2008-06-21

    Monte Carlo simulation and Percus-Yevick (PY) theory are used to investigate the structural properties of a two-component system of Baxter adhesive fluids, with the size asymmetry of the particles of both components mimicking an asymmetric binary colloidal mixture. The radial distribution functions for all possible species pairs, g11(r), g22(r), and g12(r), exhibit discontinuities at the interparticle distances corresponding to certain combinations of n and m values (n and m being integers) in the sum nσ1 + mσ2 (σ1 and σ2 being the hard-core diameters of the individual components), as a consequence of the impulse character of the 1-1, 2-2, and 1-2 attractive interactions. In contrast to the PY theory, which predicts the delta function peaks in the shape of gij(r) only at distances which are multiples of the molecular sizes, corresponding to different linear structures of successively connected particles, the simulation results reveal additional peaks at intermediate distances originating from the formation of rigid clusters of various geometries.

  12. Some covariance models based on normal scale mixtures

    CERN Document Server

    Schlather, Martin

    2011-01-01

    Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590-600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.

  13. Mixture Models for the Analysis of Repeated Count Data.

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Böckenholt, U

    1995-01-01

    Repeated count data showing overdispersion are commonly analysed by using a Poisson model with a varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An

  14. Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity

    Science.gov (United States)

    2011-01-01

    Keywords: phase equilibria; modified Sanchez-Lacombe equation of state.

  15. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.

  16. Some Large Deviation Results for Generalized Compound Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    孔繁超; 赵朋

    2009-01-01

    This paper is a further investigation of large deviations for partial and random sums of random variables, where {X_n, n ≥ 1} are non-negative, independent, identically distributed random variables with a common heavy-tailed distribution function F on the real line R and finite mean μ ∈ R; {N(n), n ≥ 0} is a binomial process with a parameter p ∈ (0, 1), independent of {X_n, n ≥ 1}; {M(n), n ≥ 0} is a Poisson process with intensity λ > 0; and S_n = ∑_{i=1}^{N(n)} X_i − cM(n). Supposing F ∈ C, we further extend and improve some large deviation results. These results can be applied to certain problems in insurance and finance.
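
    A Monte Carlo companion to the process S_n defined above, with Pareto-distributed (heavy-tailed) claim sizes standing in for F; every parameter value here is illustrative rather than taken from the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n, p, lam, c = 1000, 0.3, 0.5, 2.0     # illustrative parameters

      def sample_S():
          Nn = rng.binomial(n, p)              # binomial process N(n)
          X = rng.pareto(2.5, size=Nn) + 1.0   # heavy-tailed claims X_i >= 1
          Mn = rng.poisson(lam * n)            # Poisson process M(n)
          return X.sum() - c * Mn

      S = np.array([sample_S() for _ in range(10_000)])
      print("P(S_n > 500) ~", (S > 500.0).mean())   # crude tail estimate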

  17. Exploring Public Perception of Paratransit Service Using Binomial Logistic Regression

    Directory of Open Access Journals (Sweden)

    Hisashi Kubota

    2007-01-01

    Full Text Available Knowledge of the market is a requirement for the successful provision of public transportation. This study aims to explore public perception of paratransit service, as represented by users and non-users of paratransit. The analysis was conducted on the public's responses by creating several binomial logistic regression models using the public perception of the quality of service, quality of car, quality of driver, and fare. These models illustrate the characteristics and important variables that establish whether the public will use paratransit more in the future once improvements have been made. Moreover, several models are developed to explore public perception in order to find out whether respondents agree to the replacement of paratransit with other types of transportation modes. All models fit well. These models are able to explain the respondents' characteristics and to reveal their actual perception of the operation of paratransit. This study provides a useful tool for understanding the market in greater depth.
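
    A toy version of such a binomial logit, with synthetic 1-5 perception ratings and hypothetical variable names (service, car, driver, fare); the coefficients used to generate the data are arbitrary.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      X = rng.integers(1, 6, size=(500, 4)).astype(float)  # service, car, driver, fare
      logit = -4.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] - 0.2 * X[:, 3]
      y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))   # "will use more" flag

      model = LogisticRegression().fit(X, y)
      print("coefficients:", np.round(model.coef_, 2))
      print("P(use more | all ratings = 4):",
            round(model.predict_proba([[4.0, 4.0, 4.0, 4.0]])[0, 1], 3))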

  18. A mixture model for the joint analysis of latent developmental trajectories and survival

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Hout, A. van den

    2011-01-01

    A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous time survival data, and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in

  20. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  1. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  2. Structure-reactivity modeling using mixture-based representation of chemical reactions

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-07-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not need an explicit labeling of a reaction center. The rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.

  3. Quiver mutation sequences and $q$-binomial identities

    CERN Document Server

    Kato, Akishi; Terashima, Yuji

    2016-01-01

    In this paper, first we introduce a quantity called a partition function for a quiver mutation sequence. The partition function is a generating function whose weight is a $q$-binomial associated with each mutation. Then, we show that the partition function can be expressed as a ratio of products of quantum dilogarithms. This provides a systematic way of constructing various $q$-binomial multisum identities.
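
    The q-binomial weights that appear here can be computed as integer polynomials in q via the q-Pascal recurrence [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q; a small self-contained sketch (the concrete value checked is just a sanity test, not one of the paper's multisum identities).

      def qbinom(n, k):
          # Coefficient list of the Gaussian binomial [n choose k]_q.
          if k < 0 or k > n:
              return [0]
          if k == 0 or k == n:
              return [1]
          a, b = qbinom(n - 1, k - 1), qbinom(n - 1, k)   # q-Pascal rule
          out = [0] * max(len(a), len(b) + k)
          for i, c in enumerate(a):
              out[i] += c
          for i, c in enumerate(b):
              out[i + k] += c                              # shift b by q**k
          return out

      print(qbinom(4, 2))   # [1, 1, 2, 1, 1] = 1 + q + 2q^2 + q^3 + q^4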

  4. Recombining binomial tree for constant elasticity of variance process

    OpenAIRE

    Hi Jun Choe; Jeong Ho Chu; So Jeong Shin

    2014-01-01

    This paper develops a recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are determined from a finite difference scheme that emulates the CEV process, and the tree has linear complexity. The asymptotic envelope of the tree boundary is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
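
    For contrast with the CEV-adapted lattice of the paper, here is the standard Cox-Ross-Rubinstein recombining tree for an American put under geometric Brownian motion; the CEV node placement derived from the finite difference scheme is not reproduced in this sketch.

      import numpy as np

      def american_put_crr(S0, K, r, sigma, T, n):
          dt = T / n
          u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
          q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up-probability
          disc = np.exp(-r * dt)
          S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
          V = np.maximum(K - S, 0.0)             # payoffs at expiry
          for _ in range(n):
              S = S[:-1] * d                     # node prices one step earlier
              # Early-exercise check against the discounted continuation value
              V = np.maximum(K - S, disc * (q * V[:-1] + (1 - q) * V[1:]))
          return V[0]

      print(american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500))  # approx. 6.09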

  5. Microstructure modeling and virtual test of asphalt mixture based on three-dimensional discrete element method

    Institute of Scientific and Technical Information of China (English)

    马涛; 张德育; 张垚; 赵永利; 黄晓明

    2016-01-01

    The objective of this work is to model the microstructure of asphalt mixture and build virtual tests for asphalt mixture by using Particle Flow Code in three dimensions (PFC3D), based on the three-dimensional discrete element method. A random generation algorithm was proposed to capture the three-dimensional irregular shape of coarse aggregate. Then, a modeling algorithm and method for graded aggregates were built. Based on the combined modeling of coarse aggregates, asphalt mastic, and air voids, a three-dimensional virtual sample of asphalt mixture was modeled using PFC3D. Virtual tests for the penetration test of aggregate and the uniaxial creep test of asphalt mixture were built and conducted using PFC3D. By comparing the results of the virtual tests with those of actual laboratory tests, the validity of the microstructure modeling and virtual tests built in this study was verified. Additionally, compared with laboratory tests, the virtual test is easier to conduct and has less variability. This shows that microstructure modeling and virtual testing based on the three-dimensional discrete element method are a promising way to conduct research on asphalt mixture.

  6. Use of the binomial distribution to predict impairment: application in a nonclinical sample.

    Science.gov (United States)

    Axelrod, Bradley N; Wall, Jacqueline R; Estes, Bradley W

    2008-01-01

    A mathematical model based on the binomial theory was developed to illustrate when abnormal score variations occur by chance in a multitest battery (Ingraham & Aiken, 1996). It has been used successfully as a comparison for obtained test scores in clinical samples, but not in nonclinical samples. In the current study, this model was applied to demographically corrected scores on the Halstead-Reitan Neuropsychological Test Battery obtained from a sample of 94 nonclinical college students. Results showed that 15% of the sample had impairment suggested by the Halstead Impairment Index, using criteria established by Reitan and Wolfson (1993). In addition, one-half of the sample obtained impaired scores on one or two tests. These results were compared to those predicted by the binomial model and found to be consistent. The model therefore serves as a useful resource for clinicians considering the probability of impaired test performance.
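
    The chance-impairment calculation behind such a model reduces to binomial tail probabilities; the battery size and per-test base rate below are illustrative, not the values used in the study.

      from scipy.stats import binom

      N = 10     # number of tests in the battery (illustrative)
      p = 0.10   # chance a single score falls in the impaired range (illustrative)

      # P(at least k impaired scores by chance alone), assuming independent tests
      for k in range(1, 5):
          print(f"P(>= {k} impaired scores) = {binom.sf(k - 1, N, p):.3f}")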

  7. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of constitutive asphalt mixture model with viscoelastic properties...... in the commercial software PFC3D, including the slip model, linear stiffness-contact model, and contact bond model. A macro-scale Burger's model was first established and the input parameters of Burger's contact model were calibrated by adjusting them so that the model fitted the experimental data for the complex modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted......

  8. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  9. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  10. Treatment of nonignorable missing data when modeling unobserved heterogeneity with finite mixture models.

    Science.gov (United States)

    Lehmann, Thomas; Schlattmann, Peter

    2017-01-01

    Multiple imputation has become a widely accepted technique to deal with the problem of incomplete data. Typically, imputation of missing values and the statistical analysis are performed separately. Therefore, the imputation model has to be consistent with the analysis model. If the data are analyzed with a mixture model, the parameter estimates are usually obtained iteratively. Thus, if the data are missing not at random, parameter estimation and treatment of missingness should be combined. We solve both problems by simultaneously imputing values using the data augmentation method and estimating parameters using the EM algorithm. This iterative procedure ensures that the missing values are properly imputed given the current parameter estimates. Properties of the parameter estimates were investigated in a simulation study. The results are illustrated using data from the National Health and Nutrition Examination Survey.

  11. A class-adaptive spatially variant mixture model for image segmentation.

    Science.gov (United States)

    Nikou, Christophoros; Galatsanos, Nikolaos P; Likas, Aristidis C

    2007-04-01

    We propose a new approach for image segmentation based on a hierarchical and spatially variant mixture model. According to this model, the pixel labels are random variables and a smoothness prior is imposed on them. The main novelty of this work is a new family of smoothness priors for the label probabilities in spatially variant mixture models. These Gauss-Markov random field-based priors allow all their parameters to be estimated in closed form via the maximum a posteriori (MAP) estimation using the expectation-maximization methodology. Thus, it is possible to introduce priors with multiple parameters that adapt to different aspects of the data. Numerical experiments are presented where the proposed MAP algorithms were tested in various image segmentation scenarios. These experiments demonstrate that the proposed segmentation scheme compares favorably to both standard and previous spatially constrained mixture model-based segmentation.

  12. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  13. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng

    2004-12-01

    Full Text Available We propose a self-splitting Gaussian mixture learning (SGML) algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM) based on a self-splitting validity measure, the Bayesian information criterion (BIC). It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of the SGML algorithm to perform model-based clustering and to automatically determine the model complexity of the speaker GMMs for speaker identification.
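
    Not the SGML splitting schedule itself, but the BIC criterion it relies on can be illustrated with scikit-learn's GaussianMixture on synthetic three-cluster data; a plain search over component counts stands in for the adaptive splitting.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(-3.0, 1.0, (200, 2)),
                     rng.normal(3.0, 1.0, (200, 2)),
                     rng.normal((0.0, 6.0), 1.0, (200, 2))])  # three true clusters

      # Choose the component count minimizing BIC, the same validity
      # measure SGML evaluates after each split.
      bic = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in range(1, 7)}
      print("selected components:", min(bic, key=bic.get))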

  14. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we get the closed-form solutions of the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  15. Isothermal (vapour + liquid) equilibrium of (cyclic ethers + chlorohexane) mixtures: Experimental results and SAFT modelling

    Energy Technology Data Exchange (ETDEWEB)

    Bandres, I.; Giner, B.; Lopez, M.C.; Artigas, H. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain); Lafuente, C. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)], E-mail: celadi@unizar.es

    2008-08-15

    Experimental data for the isothermal (vapour + liquid) equilibrium of mixtures formed by several cyclic ethers (tetrahydrofuran, tetrahydropyran, 1,3-dioxolane, and 1,4-dioxane) and chlorohexane at temperatures of (298.15 and 328.15) K are presented. The experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. Furthermore, the influence of temperature on the (vapour + liquid) equilibrium of these mixtures has been explored and discussed. Transferable parameters of the SAFT-VR approach, together with standard combining rules, have been used to model the phase equilibrium of the mixtures, providing a description of their (vapour + liquid) equilibrium in excellent agreement with the experimental data.

  16. Modeling dependence based on mixture copulas and its application in risk management

    Institute of Scientific and Technical Information of China (English)

    OUYANG Zi-sheng; LIAO Hui; YANG Xiang-qun

    2009-01-01

    This paper is concerned with the statistical modeling of the dependence structure of multivariate financial data using copulas, and the application of copula functions in VaR valuation. After introducing the pure copula method and the maximum and minimum mixture copula method, the authors present a new algorithm based on more generalized mixture copula functions and the dependence measure, and apply the method to a portfolio of the Shanghai stock composite index and the Shenzhen stock component index. Comparing the results from the various methods, one finds that the mixture copula method is better than the pure Gaussian copula method and the maximum and minimum mixture copula method at different VaR levels.

  17. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb, and 50 ppb v/v) of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and for a limited number of basic mixture components.

  18. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed, and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.

  19. Motif Yggdrasil: sampling sequence motifs from a tree mixture model.

    Science.gov (United States)

    Andersson, Samuel A; Lagergren, Jens

    2007-06-01

    In phylogenetic footprinting, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but requires neither aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in nucleotide level correlation coefficient both for synthetic data and 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that using random trees the MY sampler still has a performance similar to the original version.

  20. Solvable model of a trapped mixture of Bose-Einstein condensates

    Science.gov (United States)

    Klaiman, Shachar; Streltsov, Alexej I.; Alon, Ofir E.

    2017-01-01

    A mixture of two kinds of identical bosons held in a harmonic potential and interacting by harmonic particle-particle interactions is discussed. This is an exactly-solvable model of a mixture of two trapped Bose-Einstein condensates which allows us to examine analytically various properties. Generalizing the treatments in Cohen and Lee (1985) and Osadchii and Muraktanov (1991), closed form expressions for the mixture's frequencies and ground-state energy and wave-function, and the lowest-order densities are obtained and analyzed for attractive and repulsive intra-species and inter-species particle-particle interactions. A particular mean-field solution of the corresponding Gross-Pitaevskii theory is also found analytically. This allows us to compare properties of the mixture at the exact, many-body and mean-field levels, both for finite systems and at the limit of an infinite number of particles. We discuss the renormalization of the mixture's frequencies at the mean-field level. Mainly, we hereby prove that the exact ground-state energy per particle and lowest-order intra-species and inter-species densities per particle converge at the infinite-particle limit (when the products of the number of particles times the intra-species and inter-species interaction strengths are held fixed) to the results of the Gross-Pitaevskii theory for the mixture. Finally and on the other end, we use the mixture's and each species' center-of-mass operators to show that the Gross-Pitaevskii theory for mixtures is unable to describe the variance of many-particle operators in the mixture, even in the infinite-particle limit. The variances are computed both in position and momentum space and the respective uncertainty products compared and discussed. The role of the center-of-mass separability and, for generically trapped mixtures, inseparability is elucidated when contrasting the variance at the many-body and mean-field levels in a mixture. Our analytical results show that many

  1. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity, and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography response model are coupled. The bed load transport rate and suspended load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing towards the wave propagation, indicating an onshore migration tendency. The computation results also show that

  2. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2015-03-01

    Full Text Available Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between the OI and the logarithm of the odor activity value (OAV), a ratio between the chemical concentration and the odor threshold of an individual odorant, the OI of each unmixed component was replaced with its corresponding logarithm of OAV. The interaction coefficient (cosα), which represented the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture could be directly predicted by employing the modified Vector Model after the usual quantitative analysis. Besides, the modified Vector Model was considered applicable for odor mixtures which consist of odorants with the same chemical functional groups and similar molecular structures.
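
    A sketch of the modified vector sum described above, with each component's OI replaced by the logarithm of its OAV; the linear OI-lnOAV calibration constants, the OAV inputs, and cosα are all illustrative placeholders.

      import numpy as np

      def mixture_oi(oav1, oav2, cos_alpha, a=1.0, b=0.0):
          # OI of each unmixed component from the assumed linear relation
          # OI = a * ln(OAV) + b, then combined by the vector sum.
          v1 = a * np.log(oav1) + b
          v2 = a * np.log(oav2) + b
          return np.sqrt(v1**2 + v2**2 + 2.0 * v1 * v2 * cos_alpha)

      print(round(mixture_oi(50.0, 20.0, cos_alpha=0.6), 2))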

  3. A homogenized constrained mixture (and mechanical analog) model for growth and remodeling of soft tissue.

    Science.gov (United States)

    Cyron, C J; Aydin, R C; Humphrey, J D

    2016-12-01

    Most mathematical models of the growth and remodeling of load-bearing soft tissues are based on one of two major approaches: a kinematic theory that specifies an evolution equation for the stress-free configuration of the tissue as a whole or a constrained mixture theory that specifies rates of mass production and removal of individual constituents within stressed configurations. The former is popular because of its conceptual simplicity, but relies largely on heuristic definitions of growth; the latter is based on biologically motivated micromechanical models, but suffers from higher computational costs due to the need to track all past configurations. In this paper, we present a temporally homogenized constrained mixture model that combines advantages of both classical approaches, namely a biologically motivated micromechanical foundation, a simple computational implementation, and low computational cost. As illustrative examples, we show that this approach describes well both cell-mediated remodeling of tissue equivalents in vitro and the growth and remodeling of aneurysms in vivo. We also show that this homogenized constrained mixture model suggests an intimate relationship between models of growth and remodeling and viscoelasticity. That is, important aspects of tissue adaptation can be understood in terms of a simple mechanical analog model, a Maxwell fluid (i.e., spring and dashpot in series) in parallel with a "motor element" that represents cell-mediated mechanoregulation of extracellular matrix. This analogy allows a simple implementation of homogenized constrained mixture models within commercially available simulation codes by exploiting available models of viscoelasticity.

  4. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  5. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    CERN Document Server

    Scalas, E

    2006-01-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.

  6. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    Science.gov (United States)

    Scalas, Enrico

    2007-10-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.

  7. Solvatochromic and Kinetic Response Models in (Ethyl Acetate + Chloroform or Methanol Solvent Mixtures

    Directory of Open Access Journals (Sweden)

    L. R. Vottero

    2000-03-01

    Full Text Available The present work analyzes the solvent effects upon the solvatochromic response models for a set of chemical probes and the kinetic response models for an aromatic nucleophilic substitution reaction, in binary mixtures in which both pure components are able to form intersolvent complexes by hydrogen bonding.

  8. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  9. Detecting Gustatory–Olfactory Flavor Mixtures: Models of Probability Summation

    Science.gov (United States)

    Veldhuizen, Maria G.; Shepard, Timothy G.; Shavit, Adam Y.

    2012-01-01

    Odorants and flavorants typically contain many components. It is generally easier to detect multicomponent stimuli than to detect a single component, through either neural integration or probability summation (PS) (or both). PS assumes that the sensory effects of 2 (or more) stimulus components (e.g., gustatory and olfactory components of a flavorant) are detected in statistically independent channels, that each channel makes a separate decision whether a component is detected, and that the behavioral response depends solely on the separate decisions. Models of PS traditionally assume high thresholds for detecting each component, noise being irrelevant. The core assumptions may be adapted, however, to signal-detection theory, where noise limits detection. The present article derives predictions of high-threshold and signal-detection models of independent-decision PS in detecting gustatory–olfactory flavorants, comparing predictions in yes/no and 2-alternative forced-choice tasks using blocked and intermixed stimulus designs. The models also extend to measures of response times to suprathreshold flavorants. Predictions derived from high-threshold and signal-detection models differ markedly. Available empirical evidence on gustatory–olfactory flavor detection suggests that neither the high-threshold nor the signal-detection versions of PS can readily account for the results, which likely reflect neural integration in the flavor system. PMID:22075720
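
    Both families of probability-summation predictions contrasted in this record reduce to short formulas for two independent channels. The sketch below computes the high-threshold prediction and a signal-detection (yes/no) prediction; all sensitivities and criteria are made-up numbers for illustration.

      from scipy.stats import norm

      # High-threshold PS: the mixture is detected if either component is.
      p_g, p_o = 0.4, 0.5                      # assumed detection probabilities
      print("high-threshold PS:", 1 - (1 - p_g) * (1 - p_o))

      # Signal-detection PS (yes/no): each channel compares Gaussian evidence
      # to its own criterion and the observer says "yes" if any channel does.
      d_g, d_o = 1.0, 1.2                      # channel sensitivities (d')
      c_g, c_o = 0.8, 0.8                      # decision criteria
      hit = 1 - norm.cdf(c_g - d_g) * norm.cdf(c_o - d_o)
      fa = 1 - norm.cdf(c_g) * norm.cdf(c_o)
      print(f"signal-detection PS: hit = {hit:.3f}, false alarm = {fa:.3f}")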

  10. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  11. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx.

    Science.gov (United States)

    Grimm, Kevin J; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use.

  12. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    Science.gov (United States)

    2014-06-27

    …for unsupervised modeling of structured data like text documents, time series, and images. They are especially promising for large datasets, as… non-convex unsupervised learning problems, frequently yielding poor solutions (see Fig. 2). While taking the best of multiple runs is possible, this is…

  13. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2017-02-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid-mass-conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. An in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. The validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ε turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with a staggered SIMPLE algorithm.

  14. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2016-11-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid-mass-conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. An in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. The validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ε turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with a staggered SIMPLE algorithm.

  15. A generalized physiologically-based toxicokinetic modeling system for chemical mixtures containing metals

    Directory of Open Access Journals (Sweden)

    Isukapalli Sastry S

    2010-06-01

    Abstract Background Humans are routinely and concurrently exposed to multiple toxic chemicals, including various metals and organics, often at levels that can cause adverse and potentially synergistic effects. However, toxicokinetic modeling studies of exposures to these chemicals are typically performed on a single chemical basis. Furthermore, the attributes of available models for individual chemicals are commonly estimated specifically for the compound studied. As a result, the available models usually have parameters and even structures that are not consistent or compatible across the range of chemicals of concern. This fact precludes the systematic consideration of synergistic effects, and may also lead to inconsistencies in calculations of co-occurring exposures and corresponding risks. There is a need, therefore, for a consistent modeling framework that would allow the systematic study of cumulative risks from complex mixtures of contaminants. Methods A Generalized Toxicokinetic Modeling system for Mixtures (GTMM) was developed and evaluated with case studies. The GTMM is physiologically-based and uses a consistent, chemical-independent physiological description for integrating widely varying toxicokinetic models. It is modular and can be directly "mapped" to individual toxicokinetic models, while maintaining physiological consistency across different chemicals. Interaction effects of complex mixtures can be directly incorporated into the GTMM. Conclusions The application of the GTMM to different individual metals and metal compounds showed that it explains available observational data as well as replicates the results from models that have been optimized for individual chemicals. The GTMM also made it feasible to model the toxicokinetics of complex, interacting mixtures of multiple metals and nonmetals in humans, based on available literature information. The GTMM provides a central component in the development of a "source…

  16. Theory of phase equilibria for model mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants

    Science.gov (United States)

    Dos Ramos, María Carolina; Blas, Felipe J.

    2007-05-01

    An extension of the SAFT-VR equation of state, the so-called hetero-SAFT approach [Y. Peng, H. Zhao, and C. McCabe, Molec. Phys. 104, 571 (2006)], is used to examine the phase equilibria exhibited by a number of model binary mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants. Despite the increasing recent interest in semifluorinated alkanes (or perfluoroalkylalkane diblock molecules), the phase behaviour of mixtures of these molecules with n-alkanes or perfluoroalkanes is practically unknown from the experimental point of view. In this work, we use simple molecular models for n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock molecules to predict, from a molecular perspective, the phase behaviour of selected model mixtures of perfluoroalkylalkanes with n-alkanes and perfluoroalkanes. In particular, we focus on understanding the microscopic conditions that control the liquid-liquid separation and the stabilization of these mixtures. n-Alkanes and perfluoroalkanes are modelled as tangentially bonded monomer segments with molecular parameters taken from the literature. The perfluoroalkylalkane diblock molecules are modelled as heterosegmented diblock chains, with parameters for the alkyl and perfluoroalkyl segments developed in earlier work. This simple approach, which was proposed in previous work [P. Morgado, H. Zhao, F. J. Blas, C. McCabe, L. P. N. Rebelo, and E. J. M. Filipe, J. Phys. Chem. B, 111, 2856], is now extended to describe model n-alkane (or perfluoroalkane) + perfluoroalkylalkane binary mixtures. We have obtained the phase behaviour of different mixtures and studied the effect of the molecular weight of the n-alkanes and perfluoroalkanes on the type of phase behaviour observed in these mixtures. We have also analysed the effect of the number of alkyl and perfluoroalkyl chemical groups in the surfactant molecule on the phase behaviour. In addition to the usual vapour-liquid phase…

  17. EXISTENCE AND REGULARITY OF SOLUTIONS TO MODEL FOR LIQUID MIXTURE OF 3HE-4HE

    Institute of Scientific and Technical Information of China (English)

    Luo Hong; Pu Zhilin

    2012-01-01

    Existence and regularity of solutions to a model for a liquid mixture of 3He-4He are considered in this paper. First, it is proved that this system possesses a unique global weak solution in H1(Ω, C × R) by using the Galerkin method. Secondly, by using an iteration procedure and regularity estimates for the linear semigroups, it is proved that the model for the liquid mixture of 3He-4He has a unique solution in Hk(Ω, C × R) for all k ≥ 1.

  18. Non-racemic mixture model: a computational approach.

    Science.gov (United States)

    Polanco, Carlos; Buhse, Thomas

    2017-01-01

    The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified couple of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small, ascending, but not conclusive tendency favoring l-amino acids over d-amino acids in the resulting mixed chiral hexamers over computer simulations of 100 peptide generations. The simulations were carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations, to observe any alteration that could signal a drastic change in behavior. So far, our simulations lead to the assumption that under very slight non-racemic conditions, a bias between l- and d-amino acids as significant as that present in our biosphere was unlikely to have been generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.

  19. A multiscale transport model for binary Lennard Jones mixtures in slit nanopores

    Science.gov (United States)

    Bhadauria, Ravi; Aluru, N. R.

    2016-11-01

    We present a quasi-continuum multiscale hydrodynamic transport model for a one-dimensional isothermal, non-reacting binary mixture confined in slit-shaped nanochannels. We focus on the species transport equation, which includes the viscous dissipation and interspecies diffusion terms of the Maxwell-Stefan form. Partial viscosity variation is modeled by the van der Waals one-fluid approximation and the Local Average Density Method. We use friction boundary conditions in which the wall-species friction parameter is computed using a novel species-specific Generalized Langevin Equation model. The accuracy of the transport model is tested by predicting the velocity profiles of Lennard-Jones (LJ) methane-hydrogen and LJ methane-argon mixtures in graphene slit channels of different widths. The resulting slip length from the continuum model is found to be invariant of channel width for a fixed mixture molar concentration. The mixtures considered are observed to behave as a single-species pseudo fluid, with the friction parameter displaying a linear dependence on the molar composition. The proposed model yields atomistic-level accuracy with continuum-scale efficiency.

  20. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data.

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz

    2016-09-01

    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers finite mixtures of NRCMs that combine features of NRCMs with the idea of finite mixture (or latent class) models. The strength of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. The model's effectiveness under realistic data-analytic conditions was examined with a Monte Carlo simulation study, carried out using an R routine specifically developed for the purpose of this study. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.

  1. Default Model and Economic Capital Allocation for Retail Loan Portfolios Based on the Beta-binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    郑玉华; 崔晓东

    2013-01-01

    Probability of default and default correlation are central issues in risk assessment for retail loan portfolios. To overcome the shortcomings of structural and single-factor models, we study the distribution of the number of defaults in a portfolio instead of modelling the probability of default and the default correlation directly. Based on the statistical characteristics of the default counts of retail loan portfolios, a default model built on the beta-binomial distribution is put forward, and the problem of economic capital allocation under this distributional assumption is further discussed. The results show that a beta-binomial distribution for the number of defaults, with reasonably chosen parameters, can reflect both the default information of each individual loan and the default correlation among retail loans, thereby improving the reliability and applicability of the economic capital calculation.
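
    A minimal numerical sketch of the idea in this record: let the portfolio default count follow a beta-binomial distribution and read economic capital off the resulting loss distribution. Portfolio size, Beta parameters, loss given default and exposure below are illustrative assumptions; scipy.stats.betabinom supplies the distribution.

      from scipy.stats import betabinom

      n = 1000                     # number of loans in the retail portfolio (assumed)
      a, b = 2.0, 98.0             # Beta parameters: mean PD = a/(a+b) = 2%;
                                   # overdispersion captures default correlation
      dist = betabinom(n, a, b)

      expected = dist.mean()       # expected number of defaults
      q999 = dist.ppf(0.999)       # 99.9% quantile of the default count
      lgd, ead = 0.45, 10_000.0    # loss given default, exposure per loan (assumed)

      # Economic capital = unexpected loss at the 99.9% confidence level.
      ec = (q999 - expected) * lgd * ead
      print(f"E[defaults] = {expected:.1f}, 99.9% quantile = {q999:.0f}, EC = {ec:,.0f}")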

  2. Use of a negative binomial distribution to describe the presence of Sphyrion laevigatum in Genypterus blacodes

    Directory of Open Access Journals (Sweden)

    Patricio Peña-Rehbein

    This paper describes the frequency and number of Sphyrion laevigatum in the skin of Genypterus blacodes, an important economic resource in Chile. The analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host could be described by a negative binomial distribution. The maximum number of parasites observed per host was two.
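
    For illustration, such overdispersed per-host counts can be fitted to a negative binomial by the method of moments, with a small aggregation parameter k indicating clustering of the kind reported here. The count vector below is invented (no host exceeds two parasites, mimicking the record), not the study's data.

      import numpy as np
      from scipy import stats

      counts = np.array([0]*15 + [1, 1, 2, 2, 2])   # hypothetical parasites per host

      m, v = counts.mean(), counts.var(ddof=1)
      k = m**2 / (v - m)          # aggregation parameter; v > m signals clustering
      p = k / (k + m)             # scipy's nbinom(n=k, p) then has mean m
      print(f"mean = {m:.2f}, variance = {v:.2f}, k = {k:.2f}")
      print(f"P(uninfected host) = {stats.nbinom(k, p).pmf(0):.2f}")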

  3. EEG Signal Classification With Super-Dirichlet Mixture Model

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Tan, Zheng-Hua; Prasad, Swati

    2012-01-01

    Classification of the Electroencephalogram (EEG) signal is a challenging task in brain-computer interface systems. The marginalized discrete wavelet transform (mDWT) coefficients extracted from EEG signals have been used frequently in research since they reveal features related to the underlying signal. In this work, the distribution of the mDWT coefficients from a single channel is modelled by the Dirichlet distribution, and the distribution of the mDWT coefficients from more than one channel is described by a super-Dirichlet mixture model (SDMM). The Fisher ratio and the generalization error estimation are applied to select relevant channels, respectively. Compared to the state-of-the-art support vector machine (SVM) based classifier, the SDMM based classifier performs more stably and shows a promising improvement, with both channel selection strategies.

  4. Land Cover Classification for Polarimetric SAR Images Based on Mixture Models

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2014-04-01

    In this paper, two mixture models are proposed for modeling heterogeneous regions in single-look and multi-look polarimetric SAR images, along with their corresponding maximum likelihood classifiers for land cover classification. The classical Gaussian and Wishart models are suitable for modeling scattering vectors and covariance matrices from homogeneous regions, while their performance deteriorates for regions that are heterogeneous. By comparison, the proposed mixture models reduce the modeling error by expressing the data distribution as a weighted sum of multiple component distributions. For single-look and multi-look polarimetric SAR data, complex Gaussian and complex Wishart components are adopted, respectively. Model parameters are determined by employing the expectation-maximization (EM) algorithm. Two maximum likelihood classifiers are then constructed based on the proposed mixture models. These classifiers are assessed using polarimetric SAR images from the RADARSAT-2 sensor of the Canadian Space Agency (CSA), the AIRSAR sensor of the Jet Propulsion Laboratory (JPL) and the EMISAR sensor of the Technical University of Denmark (DTU). Experimental results demonstrate that the new models fit heterogeneous regions better than the classical models and are especially appropriate for extremely heterogeneous regions, such as urban areas. The overall accuracy of land cover classification is also improved due to the more refined modeling.

  5. Kinetic model for astaxanthin aggregation in water-methanol mixtures

    Science.gov (United States)

    Giovannetti, Rita; Alibabaei, Leila; Pucciarelli, Filippo

    2009-07-01

    The aggregation of astaxanthin in hydrated methanol was kinetically studied in the temperature range from 10 °C to 50 °C, at different astaxanthin concentrations and solvent compositions. A kinetic model for the formation and transformation of astaxanthin aggregates has been proposed. Spectrophotometric studies showed that monomeric astaxanthin decayed to H-aggregates that afterwards formed J-aggregates when the water content was 50% and the temperature lower than 20 °C; at higher temperatures, very stable J-aggregates were formed directly. The monomer formed very stable H-aggregates when the water content was greater than 60%; in these conditions H-aggregates decayed into J-aggregates only when the temperature was at least 50 °C. Through these findings it was possible to establish that the aggregation proceeds through a two-step consecutive reaction with first-order kinetic constants whose values depend on the solvent composition and temperature.

  6. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.

  7. A Mechanistic Modeling Framework for Predicting Metabolic Interactions in Complex Mixtures

    Science.gov (United States)

    Cheng, Shu

    2011-01-01

    Background: Computational modeling of the absorption, distribution, metabolism, and excretion of chemicals is now theoretically able to describe metabolic interactions in realistic mixtures of tens to hundreds of substances. That framework awaits validation. Objectives: Our objectives were to a) evaluate the conditions of application of such a framework, b) confront the predictions of a physiologically integrated model of benzene, toluene, ethylbenzene, and m-xylene (BTEX) interactions with observed kinetics data on these substances in mixtures and, c) assess whether improving the mechanistic description has the potential to lead to better predictions of interactions. Methods: We developed three joint models of BTEX toxicokinetics and metabolism and calibrated them using Markov chain Monte Carlo simulations and single-substance exposure data. We then checked their predictive capabilities for metabolic interactions by comparison with mixture kinetic data. Results: The simplest joint model (BTEX interacting competitively for cytochrome P450 2E1 access) gives qualitatively correct and quantitatively acceptable predictions (with at most 50% deviations from the data). More complex models with two pathways or back-competition with metabolites have the potential to further improve predictions for BTEX mixtures. Conclusions: A systems biology approach to large-scale prediction of metabolic interactions is advantageous on several counts and technically feasible. However, ways to obtain the required parameters need to be further explored. PMID:21835728
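
    The simplest of the joint models named above, the BTEX substrates competing for CYP2E1, corresponds to the textbook competitive-inhibition form of the Michaelis-Menten rate. The sketch below uses placeholder Vmax/Km values rather than the calibrated posteriors from the paper.

      # Rate of metabolism of substrate i with competitive inhibition by the
      # co-occurring substrates j != i (all constants are assumed placeholders).
      def rate(i, conc, vmax, km):
          competition = sum(conc[j] / km[j] for j in range(len(conc)) if j != i)
          return vmax[i] * conc[i] / (km[i] * (1.0 + competition) + conc[i])

      conc = [5.0, 8.0, 1.0, 2.0]   # benzene, toluene, ethylbenzene, m-xylene
      vmax = [1.2, 1.5, 0.9, 1.1]
      km = [0.8, 1.0, 0.7, 0.9]
      for i, name in enumerate(["benzene", "toluene", "ethylbenzene", "m-xylene"]):
          solo = [c if j == i else 0.0 for j, c in enumerate(conc)]
          print(f"{name}: alone = {rate(i, solo, vmax, km):.3f}, "
                f"in mixture = {rate(i, conc, vmax, km):.3f}")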

  8. Low reheating temperatures in monomial and binomial inflationary potentials

    CERN Document Server

    Rehagen, Thomas

    2015-01-01

    We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied $\phi^2$ inflationary potential is no longer favored by current CMB data, as well as $\phi^p$ with $p>2$, a $\phi^1$ potential and canonical reheating ($w_{re}=0$) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 $68\%$ confidence limit upper bound on the spectral index, $n_s$, implies an upper bound on the reheating temperature of $T_{re}\lesssim 6\times 10^{10}\,{\rm GeV}$, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and mo...

  9. Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.

    Science.gov (United States)

    Zhao, Fuman; Rogers, William J; Mannan, M Sam

    2010-02-15

    Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when the apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the hydrocarbon mixture lower flammability limit is quantitatively correlated to its final flame temperature at non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicate that CFT modeling represents experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's Law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for Le Chatelier's Law.
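
    Le Chatelier's Law, the benchmark against which the CFT predictions are compared, is itself a one-line mixing rule for lower flammability limits. The sketch below applies it to a hypothetical methane-propane blend with rough textbook LFL values.

      # Le Chatelier's rule: LFL of a blend from the pure-component LFLs (vol%).
      def lfl_mix(fractions, lfls):
          return 1.0 / sum(y / l for y, l in zip(fractions, lfls))

      # 60/40 methane-propane fuel blend; pure LFLs of roughly 5.0 and 2.1 vol%.
      print(f"blend LFL = {lfl_mix([0.6, 0.4], [5.0, 2.1]):.2f} vol%")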

  10. A joint finite mixture model for clustering genes from independent Gaussian and beta distributed data

    Directory of Open Access Journals (Sweden)

    Yli-Harja Olli

    2009-05-01

    Abstract Background Cluster analysis has become a standard computational method for gene function discovery as well as for more general exploratory data analysis. A number of different approaches have been proposed for that purpose, out of which different mixture models provide a principled probabilistic framework. Cluster analysis is increasingly often supplemented with multiple data sources nowadays, and these heterogeneous information sources should be used as efficiently as possible. Results This paper presents a novel Beta-Gaussian mixture model (BGMM) for clustering genes based on Gaussian distributed and beta distributed data. The proposed BGMM can be viewed as a natural extension of the beta mixture model (BMM) and the Gaussian mixture model (GMM). The proposed BGMM method differs from other mixture model based methods in its integration of two different data types into a single and unified probabilistic modeling framework, which provides a more efficient use of multiple data sources than methods that analyze different data sources separately. Moreover, BGMM provides an exceedingly flexible modeling framework, since many data sources can be modeled as Gaussian or beta distributed random variables, and it can also be extended to integrate data that have other parametric distributions, which adds even more flexibility to this model-based clustering framework. We developed three types of estimation algorithms for BGMM, the standard expectation maximization (EM) algorithm, an approximated EM and a hybrid EM, and propose to tackle the model selection problem by well-known model selection criteria, for which we test the Akaike information criterion (AIC), a modified AIC (AIC3), the Bayesian information criterion (BIC), and the integrated classification likelihood-BIC (ICL-BIC). Conclusion Performance tests with simulated data show that combining two different data sources into a single mixture joint model greatly improves the clustering…

  11. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    Science.gov (United States)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, namely that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, the non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model…

  12. A hybrid finite mixture model for exploring heterogeneous ordering patterns of driver injury severity.

    Science.gov (United States)

    Ma, Lu; Wang, Guan; Yan, Xuedong; Weng, Jinxian

    2016-04-01

    Debates on the ordering patterns of crash injury severity are ongoing in the literature. Models without proper econometrical structures for accommodating the complex ordering patterns of injury severity could result in biased estimations and misinterpretations of factors. This study proposes a hybrid finite mixture (HFM) model aiming to capture heterogeneous ordering patterns of driver injury severity while enhancing modeling flexibility. It attempts to probabilistically partition samples into two groups in which one group represents an unordered/nominal data-generating process while the other represents an ordered data-generating process. Conceptually, the newly developed model offers flexible coefficient settings for mining additional information from crash data, and more importantly it allows the coexistence of multiple ordering patterns for the dependent variable. A thorough modeling performance comparison is conducted between the HFM model, and the multinomial logit (MNL), ordered logit (OL), finite mixture multinomial logit (FMMNL) and finite mixture ordered logit (FMOL) models. According to the empirical results, the HFM model presents a strong ability to extract information from the data, and more importantly to uncover heterogeneous ordering relationships between factors and driver injury severity. In addition, the estimated weight parameter associated with the MNL component in the HFM model is greater than the one associated with the OL component, which indicates a larger likelihood of the unordered pattern than the ordered pattern for driver injury severity.

  13. A computer graphical user interface for survival mixture modelling of recurrent infections.

    Science.gov (United States)

    Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K

    2009-03-01

    Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded in Microsoft Visual C++. Application to modelling recurrent urinary tract infections in elderly women is illustrated, where significant individual variations are evident at both the acute and stable phases. The survival mixture methodology enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions from these correlated and heterogeneous survival data.
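
    To make the two-component idea concrete, the sketch below simulates recurrence times from a Weibull survival mixture (a short acute phase plus a long stable phase) and checks the mixture survival function against the simulation. The mixing proportion and Weibull parameters are invented; the actual methodology additionally includes random effects and covariates.

      import numpy as np

      rng = np.random.default_rng(3)
      pi, n = 0.35, 5000                      # assumed acute-phase fraction
      acute = 5.0 * rng.weibull(1.5, n)       # acute phase: scale 5 days
      stable = 90.0 * rng.weibull(1.2, n)     # stable phase: scale 90 days
      times = np.where(rng.random(n) < pi, acute, stable)

      def mixture_survival(t):
          s1 = np.exp(-((t / 5.0) ** 1.5))
          s2 = np.exp(-((t / 90.0) ** 1.2))
          return pi * s1 + (1 - pi) * s2

      for t in (1, 10, 60, 180):
          print(f"S({t:>3}) model = {mixture_survival(t):.3f}, "
                f"simulated = {(times > t).mean():.3f}")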

  14. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments

    Science.gov (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd

    2015-01-01

    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  15. The Impact of Misspecifying Class-Specific Residual Variances in Growth Mixture Models

    Science.gov (United States)

    Enders, Craig K.; Tofighi, Davood

    2008-01-01

    The purpose of this study was to examine the impact of misspecifying a growth mixture model (GMM) by assuming that Level-1 residual variances are constant across classes, when they do, in fact, vary in each subpopulation. Misspecification produced bias in the within-class growth trajectories and variance components, and estimates were…

  16. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods…

  17. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    2002-01-01

    The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite mixture…

  18. Comparison of criteria for choosing the number of classes in Bayesian finite mixture models

    NARCIS (Netherlands)

    K. Nasserinejad (Kazem); J.M. van Rosmalen (Joost); W. de Kort (Wim); E.M.E.H. Lesaffre (Emmanuel)

    2017-01-01

    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that…

  19. Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data

    Science.gov (United States)

    Lu, Zhenqiu Laura; Zhang, Zhiyong; Lubke, Gitta

    2011-01-01

    "Growth mixture models" (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class…

  20. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on the group response to call-ins and on individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the direction of the corresponding effects was undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate the abundance of other social, herding, and grouping species.
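
    At its core, an N-mixture model marginalizes repeated binomial counts over a latent Poisson abundance, which ties this record back to the binomial mixture theme of this collection. The single-site log-likelihood below is a generic sketch with invented counts and parameters; the lion model adds a hierarchical group-response layer on top of it.

      import numpy as np
      from scipy.stats import binom, poisson

      def site_loglik(y, lam, p, n_max=200):
          """Log-likelihood of counts y at one site: N ~ Poisson(lam),
          each survey count y_t ~ Binomial(N, p), with N marginalized out."""
          n = np.arange(max(y), n_max + 1)                 # feasible abundances
          prior = poisson.pmf(n, lam)                      # P(N = n)
          like = np.prod([binom.pmf(t, n, p) for t in y], axis=0)
          return np.log(np.sum(prior * like))

      y = [3, 5, 2]    # hypothetical counts from three call-in surveys at a site
      print(site_loglik(y, lam=8.0, p=0.4))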

  1. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure…

  2. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models.

    Science.gov (United States)

    Wu, Jiao; Liu, Fang; Jiao, L C; Wang, Xiaodong; Hou, Biao

    2011-12-01

    Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption for the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Many multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to image estimation problems. In this paper, the statistical structures of the wavelet coefficients are considered for CS reconstruction of images that are sparse or compressible in the wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms.

  3. Growth of Saccharomyces cerevisiae CBS 426 on mixtures of glucose and succinic acid: a model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, J.A.B.A.F.; Koellmann, C.J.W.; Dekkers-de Kok, H.E.; Roels, J.A.

    1984-03-01

    Saccharomyces cerevisiae CBS 426 was grown in continuous culture in a defined medium with a mixture of glucose and succinic acid as the carbon source. Growth on succinic acid was possible after long adaptation periods. The flows of glucose, succinic acid, oxygen, carbon dioxide, and biomass to and from the system were measured. It proved necessary to expand our previous model to accommodate the active transport of succinic acid by the cell. The efficiency of oxidative phosphorylation (P/O) and the amount of ATP needed for the production of biomass from monomers took the same values as found for substrate mixtures taken up passively.

  4. Numerical Investigation of Nanofluid Thermocapillary Convection Based on Two-Phase Mixture Model

    Science.gov (United States)

    Jiang, Yanni; Xu, Zelin

    2017-08-01

    Numerical investigation of nanofluid thermocapillary convection in a two-dimensional rectangular cavity was carried out, in which the two-phase mixture model was used to simulate the nanoparticles-fluid mixture flow, and the influences of volume fraction of nanoparticles on the flow characteristics and heat transfer performance were discussed. The results show that, with the increase of nanoparticle volume fraction, thermocapillary convection intensity weakens gradually, and the heat conduction effect strengthens; meanwhile, the temperature gradient at free surface increases but the free surface velocity decreases gradually. The average Nusselt number of hot wall and the total entropy generation decrease with nanoparticle volume fraction increasing.

  5. Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling

    Science.gov (United States)

    Yeom, Seokwon

    2017-05-01

    Infrared (IR) imaging has the capability to detect the thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with the Expectation-Maximization (EM) method, assuming the image histogram follows a Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to an IR image capturing several humans at night.
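
    A minimal version of the segmentation step just described, sketched with scikit-learn's GaussianMixture (whose EM fit is initialized by k-means, mirroring the record's pipeline). The synthetic two-component "IR" frame and the component count are assumptions for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      image = rng.normal(60, 10, size=(120, 160))   # cool background
      image[40:70, 60:90] += 80                     # a warm object (e.g., a person)

      # Fit a 2-component Gaussian mixture to the intensity histogram with EM.
      gmm = GaussianMixture(n_components=2, init_params="kmeans", random_state=0)
      labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

      hot = np.argmax(gmm.means_.ravel())           # component with the higher mean
      roi = labels == hot                           # region-of-interest mask
      print(f"ROI covers {roi.mean():.1%} of the frame")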

  6. GIS disconnector model performance with SF₆/N₂ mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Gaillac, C. [Schneider Electric (France)

    1999-07-01

    The lightning impulse breakdown voltage of a model 145 kV GIS disconnector was studied using SF₆/N₂ mixtures. Mixtures with between 0% and 15% SF₆ were used. Sphere-sphere, point-plane and sphere-rod geometries were studied. In most cases, breakdown strength increased with both SF₆ content and pressure. In the case of surface flashover, a pressure of about 8 bar with 15% SF₆ gave roughly equivalent results to 4 bar of pure SF₆.

  7. Comparison of multinomial and binomial proportion methods for analysis of multinomial count data.

    Science.gov (United States)

    Galyean, M L; Wester, D B

    2010-10-01

    Simulation methods were used to generate 1,000 experiments, each with 3 treatments and 10 experimental units/treatment, in completely randomized (CRD) and randomized complete block designs. Data were counts in 3 ordered or 4 nominal categories from multinomial distributions. For the 3-category analyses, category probabilities were 0.6, 0.3, and 0.1, respectively, for 2 of the treatments, and 0.5, 0.35, and 0.15 for the third treatment. In the 4-category analysis (CRD only), probabilities were 0.3, 0.3, 0.2, and 0.2 for treatments 1 and 2 vs. 0.4, 0.4, 0.1, and 0.1 for treatment 3. The 3-category data were analyzed with generalized linear mixed models as an ordered multinomial distribution with a cumulative logit link or by regrouping the data (e.g., counts in 1 category/sum of counts in all categories), followed by analysis of single categories as binomial proportions. Similarly, the 4-category data were analyzed as a nominal multinomial distribution with a glogit link or by grouping data as binomial proportions. For the 3-category CRD analyses, empirically determined type I error rates based on pair-wise comparisons (F- and Wald χ² tests) did not differ between multinomial and individual binomial category analyses with 10 (P = 0.38 to 0.60) or 50 (P = 0.19 to 0.67) sampling units/experimental unit. When analyzed as binomial proportions, power estimates varied among categories, with analysis of the category with the greatest counts yielding power similar to the multinomial analysis. Agreement between methods (percentage of experiments with the same results for the overall test for treatment effects) varied considerably among categories analyzed and sampling unit scenarios for the 3-category CRD analyses. Power (F-test) was 24.3, 49.1, 66.9, 83.5, 86.8, and 99.7% for 10, 20, 30, 40, 50, and 100 sampling units/experimental unit for the 3-category multinomial CRD analyses. Results with randomized complete block design simulations were similar to those with the CRD…
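
    A deliberately simplified simulation in the spirit of this comparison: 3-category multinomial counts are generated for three treatments with the record's probabilities, the first category is regrouped as a binomial proportion, and a chi-square test of homogeneity is applied. Pooling the sampling units within each treatment and the choice of test are simplifications, so the power printed need not match the GLMM figures quoted above.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      probs = [(0.6, 0.3, 0.1), (0.6, 0.3, 0.1), (0.5, 0.35, 0.15)]
      units, m = 10, 50              # experimental units, sampling units per unit

      rejections = 0
      for _ in range(1000):
          counts = np.array([rng.multinomial(m, p, size=units).sum(axis=0)
                             for p in probs])          # 3 treatments x 3 categories
          # Regroup: category 1 vs everything else, one row per treatment.
          table = np.column_stack([counts[:, 0], counts.sum(axis=1) - counts[:, 0]])
          _, pval, *_ = stats.chi2_contingency(table)
          rejections += pval < 0.05
      print(f"empirical power for category 1: {rejections / 1000:.2f}")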

  8. Development and application of a multimetal multibiotic ligand model for assessing aquatic toxicity of metal mixtures.

    Science.gov (United States)

    Santore, Robert C; Ryan, Adam C

    2015-04-01

    A multimetal, multiple binding site version of the biotic ligand model (mBLM) has been developed for predicting and explaining the bioavailability and toxicity of mixtures of metals to aquatic organisms. The mBLM was constructed by combining information from single-metal BLMs to preserve compatibility between the single-metal and multiple-metal approaches. The toxicities from individual metals were predicted by assuming additivity of the individual responses. Mixture toxicity was predicted based on both dissolved metal and mBLM-normalized bioavailable metal. Comparison of the 2 prediction methods indicates that metal mixtures frequently appear to have greater toxicity than an additive estimation of individual effects on a dissolved metal basis. However, on an mBLM-normalized basis, mixtures of metals appear to be additive or less than additive. This difference results from interactions between metals and ligands in solutions including natural organic matter, processes that are accounted for in the mBLM. As part of the mBLM approach, a technique for considering variability was developed to calculate confidence bounds (called response envelopes) around the central concentration-response relationship. Predictions using the mBLM and response envelope were compared with observed toxicity for a number of invertebrate and fish species. The results show that the mBLM is a useful tool for considering bioavailability when assessing the toxicity of metal mixtures.

  9. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  10. A polynomial hyperelastic model for the mixture of fat and glandular tissue in female breast.

    Science.gov (United States)

    Calvo-Gallego, Jose L; Martínez-Reina, Javier; Domínguez, Jaime

    2015-09-01

    In the breast of adult women, glandular and fat tissues are intermingled and cannot be clearly distinguished. This work studies whether this mixture can be treated as a homogenized tissue. A mechanical model is proposed for the mixture of tissues as a function of the fat content. Different distributions of the individual tissues and geometries have been tried to verify the validity of the mixture model. A multiscale modelling approach was applied in a finite element model of a representative volume element (RVE) of tissue, formed by randomly assigning fat or glandular elements to the mesh. Both types of tissues have been assumed to be isotropic, quasi-incompressible hyperelastic materials, modelled with a polynomial strain energy function, like the homogenized model. The RVE was subjected to several load cases from which the constants of the polynomial function of the homogenized tissue were fitted in the least squares sense. The results confirm that the fat volume ratio is a key factor in determining the properties of the homogenized tissue, but the spatial distribution of fat is not so important. Finally, a simplified model of a breast was developed to check the validity of the homogenized model in a geometry similar to the actual one.

  11. Sleep-promoting effects of the GABA/5-HTP mixture in vertebrate models.

    Science.gov (United States)

    Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo

    2016-09-01

    The aim of this study was to investigate the sleep-promoting effect of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) on sleep quality and quantity in vertebrate models. A pentobarbital-induced sleep test and electroencephalogram (EEG) analysis were applied to investigate the sleep latency, duration, total sleeping time and sleep quality of the two amino acids and the GABA/5-HTP mixture. In addition, real-time PCR and HPLC analysis were applied to analyze the signaling pathway. The GABA/5-HTP mixture significantly regulated sleep latency and duration (p < 0.05) … the GABA/5-HTP mixture modulates both GABAergic and serotonergic signaling. Moreover, the sleep architecture can be controlled by the regulation of the GABAA receptor and GABA content with 5-HTP.

  12. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    Combination effects can occur even though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats where data about in vivo endocrine disrupting effects and information about human exposures were available, including phthalates, pesticides, UV-filters, bisphenol A, parabens and the drug paracetamol. The mixture ratio was chosen to reflect high end human intakes. To make decisions about the dose levels for the studies in the rat, we employed the point of departure index (PODI) approach, which sums up ratios between estimated exposure levels and no-observed-adverse-effect-level (NOAEL) values of individual substances. For high end human exposures to the 13 selected chemicals, we calculated a PODI of 0.016. As only a PODI…

  13. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties are observed in the production of edible oils and biodiesel, such as the availability of experimental data and reliable prediction. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in the open literature was observed, which makes it necessary to develop reliable predictive models from limited data. One of the first steps of this project was the creation of a database containing properties of mixtures involved in tasks related to process design, simulation, and optimization as well as the design of chemicals-based products. This database was combined with the existing lipids database of pure component properties. To contribute to the missing data, measurements of isobaric vapour-liquid equilibrium (VLE) data of two binary mixtures at two different pressures were performed using Differential Scanning Calorimetry…

  14. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst, so isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slopes as estimates of the isotope ratios. The finite mixture models are parameterised by: • the number of different ratios, • the number of points belonging to each ratio-group, • the ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control…
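
    The estimation step outlined above amounts to EM for a mixture of regression lines through the origin, with the fitted slopes read off as isotope ratios. The sketch below runs such an EM on synthetic two-ratio data; the ratios, noise level and starting values are all simplified assumptions, and the group-dropping rule is omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(1, 10, 400)                      # e.g. 238U signal
      true = rng.choice([0.0072, 0.20], size=400)      # two hidden ratios
      y = true * x + rng.normal(0, 0.02, 400)          # e.g. 235U signal

      slopes, w, sigma = np.array([0.0, 0.3]), np.array([0.5, 0.5]), 1.0
      for _ in range(200):
          # E-step: responsibility of each line for each point.
          dens = w * np.exp(-0.5 * ((y[:, None] - x[:, None] * slopes) / sigma) ** 2)
          dens = np.maximum(dens, 1e-300)
          r = dens / dens.sum(axis=1, keepdims=True)
          # M-step: weighted least-squares slopes, mixing weights, residual scale.
          slopes = (r * x[:, None] * y[:, None]).sum(0) / (r * x[:, None] ** 2).sum(0)
          w = r.mean(axis=0)
          resid = y[:, None] - x[:, None] * slopes
          sigma = np.sqrt((r * resid ** 2).sum() / len(x))
      print("estimated ratios:", np.round(slopes, 4), "weights:", np.round(w, 2))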

  15. Estimation of Log-Linear-Binomial Distribution with Applications

    Directory of Open Access Journals (Sweden)

    Elsayed Ali Habib

    2010-01-01

    Full Text Available Log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.

  16. A likelihood ratio test for discriminating among the Poisson, binomial, and negative binomial distributions.

    OpenAIRE

    López Martínez, Laura Elizabeth

    2010-01-01

    In this work, statistical inference is carried out for the generalized negative binomial (BNG) distribution and the models it nests, namely the binomial, negative binomial, and Poisson distributions. The problem of parameter estimation in the BNG distribution is addressed, and a generalized likelihood ratio test is proposed for discerning whether a data set fits the binomial, negative binomial, or Poisson model in particular. In addition, the power and size of the test are studied...

  17. M3B: A coarse grain model for the simulation of oligosaccharides and their water mixtures.

    Science.gov (United States)

    Goddard, William A.; Cagin, Tahir; Molinero, Valeria

    2003-03-01

    Water and sugar dynamics in concentrated carbohydrate solutions is of utmost importance in food and pharmaceutical technology. Water diffusion in concentrated sugar mixtures can be slowed down by many orders of magnitude with respect to bulk water [1], making atomistic-detail simulation of these systems over the required time scales extremely expensive. We present a coarse grain model (M3B) for malto-oligosaccharides and their water mixtures. M3B speeds up molecular dynamics simulations by a factor of about 500-1000 with respect to the atomistic model, while retaining enough detail to be mapped back to the atomistic structures with low uncertainty in the positions. The former characteristic allows the study of water and carbohydrate dynamics in supercooled and polydisperse mixtures with characteristic time scales above the nanosecond. The latter makes M3B well suited for combined atomistic-mesoscale simulations. We present the parameterization of the M3B force field for water and a family of technologically relevant glucose oligosaccharides, the alpha-(1->4) glucans. The coarse grain force field is parameterized entirely from atomistic simulations to reproduce the density, cohesive energy and structural parameters of amorphous sugars. We show that M3B is capable of describing the helical character of the higher oligosaccharides, and that the water structure in low-moisture mixtures shows the same features obtained with the atomistic and M3B models. [1] R Parker, SG Ring: Carbohydr. Res. 273 (1995) 147-55.

  18. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs is presented, covering topics such as continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and the multi-response model. Robust mixture designs and mixture designs in blocks are also reviewed. Moreover, some applications of mixture desig...

  19. Using a factor mixture modeling approach in alcohol dependence in a general population sample.

    Science.gov (United States)

    Kuo, Po-Hsiu; Aggen, Steven H; Prescott, Carol A; Kendler, Kenneth S; Neale, Michael C

    2008-11-01

    Alcohol dependence (AD) is a complex and heterogeneous disorder. The identification of more homogeneous subgroups of individuals with drinking problems and the refinement of the diagnostic criteria are inter-related research goals. They have the potential to improve our knowledge of etiology and treatment effects, and to assist in the identification of risk factors or specific genetic factors. Mixture modeling has advantages over traditional modeling that focuses on either the dimensional or the categorical latent structure. Mixture modeling combines both latent class and latent trait models, but has not been widely applied in substance use research. The goal of the present study is to assess whether the AD criteria in the population could be better characterized by a continuous dimension, a few discrete subgroups, or a combination of the two. More than seven thousand participants were recruited from the population-based Virginia Twin Registry and interviewed to obtain DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, version IV) symptoms and diagnosis of AD. We applied factor analysis, latent class analysis, and factor mixture models to symptom items based on the DSM-IV criteria. Our results showed that a mixture model with 1 factor and 3 classes for both genders fit well. The 3 classes were a non-problem drinking group and severe and moderate drinking problem groups. By contrast, models constrained to conform to DSM-IV diagnostic criteria were rejected by model fitting indices, providing empirical evidence for heterogeneity in the AD diagnosis. Classification analysis showed different characteristics across subgroups, including alcohol-caused behavioral problems, comorbid disorders, age at onset for alcohol-related milestones, and personality. Clinically, the expanded classification of AD may aid in identifying suitable treatments, interventions and additional sources of comorbidity based on these more homogeneous subgroups of alcohol use

  20. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    OpenAIRE

    Hiroki Yoshioka; Kenta Obata

    2011-01-01

    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...

  1. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Belmonte-Beitia, Juan [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Perez-Garcia, Victor M. [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Vekslerchik, Vadym [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain)

    2007-05-15

    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves.

  2. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM) (Van de Schoot, 2015b) to estimate PTSD trajectories across the 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see, Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  3. Binomial and enumerative sampling of Tetranychus urticae (Acari: Tetranychidae) on peppermint in California.

    Science.gov (United States)

    Tollerup, Kris E; Marcum, Daniel; Wilson, Rob; Godfrey, Larry

    2013-08-01

    The two-spotted spider mite, Tetranychus urticae Koch, is an economic pest on peppermint [Mentha x piperita (L.), 'Black Mitcham'] grown in California. A sampling plan for T. urticae was developed under Pacific Northwest conditions in the early 1980s and has been used by California growers since approximately 1998. This sampling plan, however, is cumbersome and a poor predictor of T. urticae densities in California. Between June and August, the numbers of immature and adult T. urticae were counted on leaves at three commercial peppermint fields (sites) in 2010 and a single field in 2011. In each of seven locations per site, 45 leaves were sampled, that is, 9 leaves on each of five stems. Leaf samples were stratified by collecting three leaves from the top, middle, and bottom strata per stem. The on-plant distribution of T. urticae did not differ significantly among the stem strata through the growing season. Binomial and enumerative sampling plans were developed using generic Taylor's power law coefficient values. The best fit of our data for binomial sampling occurred using a tally threshold of T = 0. The optimum number of leaves required for T. urticae at the critical density of five mites per leaf was 20 for the binomial and 23 for the enumerative sampling plan. Sampling models were validated using Resampling for Validation of Sampling Plan Software.
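
    As a hedged illustration of how Taylor's power law (s² = a·m^b) feeds both plan types, the sketch below uses placeholder coefficients (not the fitted values from this study): a Green-type formula for the enumerative sample size, and a negative binomial link between mean density and the proportion of mite-free leaves for the T = 0 binomial plan.

    ```python
    # Hedged sketch of fixed-precision sample sizes under Taylor's power law
    # (s^2 = a * m^b). Coefficients a, b and precision D are placeholders.
    a, b = 3.0, 1.4        # hypothetical Taylor's power law coefficients
    m = 5.0                # critical density: five mites per leaf
    D = 0.25               # desired relative precision of the mean
    z = 1.96               # ~95% confidence

    # Enumerative plan (Green-type formula): n = z^2 * a * m^(b-2) / D^2
    n_enum = z**2 * a * m**(b - 2) / D**2
    print(f"enumerative leaves needed: {n_enum:.0f}")

    # For a tally threshold T = 0, a negative binomial with Taylor-law variance
    # links mean density to the proportion of mite-free leaves:
    s2 = a * m**b
    k = m**2 / (s2 - m)                   # NB aggregation parameter
    p0 = (1 + m / k) ** (-k)              # expected share of leaves with no mites
    print(f"expected proportion of occupied leaves at m={m}: {1 - p0:.2f}")
    ```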

  4. Mixture models of geometric distributions in genomic analysis of inter-nucleotide distances

    Directory of Open Access Journals (Sweden)

    Adelaide Valente Freitas

    2013-11-01

    Full Text Available The mapping defined by inter-nucleotide distances (InD) provides a reversible numerical representation of the primary structure of DNA. If nucleotides were independently placed along the genome, a finite mixture model of four geometric distributions could be fitted to the InD, where the four marginal distributions would be the expected distributions of the four nucleotide types. We analyze a finite mixture model of geometric distributions (f_2), with marginals not explicitly tied to the nucleotide types, as an approximation to the InD. We use BIC in the composite likelihood framework for choosing the number of components of the mixture and the EM algorithm for estimating the model parameters. Based on divergence profiles, an experimental study was carried out on the complete genomes of 45 species to evaluate f_2. Although the proposed model is not suited to the InD, our analysis shows that divergence profiles involving the empirical distribution of the InD are also exhibited by profiles involving f_2. This suggests that statistical regularities of the InD can be described by the model f_2. Some characteristics of the DNA sequences captured by the model f_2 are illustrated. In particular, clusterings of subgroups of eukaryotes (primates, mammalians, animals and plants) are detected.
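
    A minimal sketch of fitting such a mixture of geometric components by EM (generic full-likelihood version; the paper's composite-likelihood and BIC machinery is not reproduced):

    ```python
    import numpy as np

    def em_geometric_mixture(d, K=2, n_iter=300, seed=0):
        """Toy EM for a K-component mixture of geometric distributions.
        d holds inter-nucleotide distances (positive integers); component k
        has success probability p[k] and weight w[k]; support is {1, 2, ...}."""
        rng = np.random.default_rng(seed)
        p = rng.uniform(0.05, 0.5, K)
        w = np.full(K, 1.0 / K)
        for _ in range(n_iter):
            # E-step: responsibilities (geometric pmf: p * (1-p)^(d-1))
            r = w * p * (1 - p) ** (d[:, None] - 1)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: closed form — p_k is 1 / (responsibility-weighted mean of d)
            w = r.mean(axis=0)
            p = r.sum(axis=0) / (r * d[:, None]).sum(axis=0)
        return w, p

    # Synthetic check: distances drawn from two geometric regimes
    rng = np.random.default_rng(1)
    d = np.concatenate([rng.geometric(0.3, 5000), rng.geometric(0.05, 5000)])
    print(em_geometric_mixture(d, K=2))
    ```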

  5. A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL

    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang

    2012-01-01

    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of the model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct current and alternating current coefficients. A parameter estimation method based on the expectation maximization algorithm is used to estimate the Laplace distribution center of the direct current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating current coefficients. Experimental results suggest that the proposed mixture correlation noise model can describe the heavy tail and sudden changes of the noise accurately at high rates and achieves a significant improvement in coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).

  6. Applicability of linearized Dusty Gas Model for multicomponent diffusion of gas mixtures in porous solids

    Directory of Open Access Journals (Sweden)

    Marković Jelena

    2007-01-01

    Full Text Available The transport of gaseous components through porous media can be described by the well-known Fick model and its modifications. It is also known that Fick’s law is not suitable for predicting the fluxes in multicomponent gas mixtures, except for binary mixtures. This model is still frequently used in chemical engineering because of its simplicity. Unfortunately, besides Fick’s model there is no generally accepted model for mass transport through porous media (membranes, catalysts, etc.). Numerous studies on transport through porous media reveal that the Dusty Gas Model (DGM) is superior in its ability to predict fluxes in multicomponent mixtures. Its wider application is limited by more complicated calculation procedures compared with Fick’s model. It should be noted that there have been efforts to simplify the DGM in order to obtain satisfactorily accurate results. In this paper the linearized DGM, as the simplest form of the DGM, is tested under conditions of zero system pressure drop, small pressure drop, and different temperatures. Published experimental data are used in testing the accuracy of the linearized procedure. It is shown that this simplified procedure is accurate enough compared with the standard, more complicated calculations.

  7. Comparison of activity coefficient models for atmospheric aerosols containing mixtures of electrolytes, organics, and water

    Science.gov (United States)

    Tong, Chinghang; Clegg, Simon L.; Seinfeld, John H.

    Atmospheric aerosols generally comprise a mixture of electrolytes, organic compounds, and water. Determining the gas-particle distribution of volatile compounds, including water, requires equilibrium or mass transfer calculations, at the heart of which are models for the activity coefficients of the particle-phase components. We evaluate here the performance of four recent activity coefficient models developed for electrolyte/organic/water mixtures typical of atmospheric aerosols. Two of the models, the CSB model [Clegg, S.L., Seinfeld, J.H., Brimblecombe, P., 2001. Thermodynamic modelling of aqueous aerosols containing electrolytes and dissolved organic compounds. Journal of Aerosol Science 32, 713-738] and the aerosol diameter dependent equilibrium model (ADDEM) [Topping, D.O., McFiggans, G.B., Coe, H., 2005. A curved multi-component aerosol hygroscopicity model framework: part 2—including organic compounds. Atmospheric Chemistry and Physics 5, 1223-1242] treat ion-water and organic-water interactions but do not include ion-organic interactions; these can be referred to as "decoupled" models. The other two models, the reparameterized Ming and Russell model 2005 [Raatikainen, T., Laaksonen, A., 2005. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest. Atmospheric Chemistry and Physics 5, 2475-2495] and X-UNIFAC.3 [Erdakos, G.B., Chang, E.I., Pankow, J.F., Seinfeld, J.H., 2006. Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water—Part 3: Organic compounds, water, and ionic constituents by consideration of short-, mid-, and long-range effects using X-UNIFAC.3. Atmospheric Environment 40, 6437-6452], include ion-organic interactions; these are referred to as "coupled" models. We address the question: does the inclusion of a treatment of ion-organic interactions substantially improve the performance of the coupled models over

  8. A Note Comparing Component-Slope, Scheffé, and Cox Parameterizations of the Linear Mixture Experiment Model

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2006-05-01

    A mixture experiment involves combining two or more components in various proportions and collecting data on one or more responses. A linear mixture model may adequately represent the relationship between a response and mixture component proportions and be useful in screening the mixture components. The Scheffé and Cox parameterizations of the linear mixture model are commonly used for analyzing mixture experiment data. With the Scheffé parameterization, the fitted coefficient for a component is the predicted response at that pure component (i.e., single-component mixture). With the Cox parameterization, the fitted coefficient for a mixture component is the predicted difference in response at that pure component and at a pre-specified reference composition. This paper presents a new component-slope parameterization, in which the fitted coefficient for a mixture component is the predicted slope of the linear response surface along the direction determined by that pure component and at a pre-specified reference composition. The component-slope, Scheffé, and Cox parameterizations of the linear mixture model are compared and their advantages and disadvantages are discussed.
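
    Written out, the three parameterizations describe the same q-component linear mixture surface and differ only in how the coefficients are anchored. The sketch below states the Scheffé and Cox forms in standard notation; the constraint shown for the Cox form is one common statement, assumed here rather than quoted from the paper:

    ```latex
    % Linear mixture model in q component proportions x_i with \sum_i x_i = 1.
    % Scheffe form: \beta_i is the predicted response at pure component i.
    E(y) = \sum_{i=1}^{q} \beta_i x_i
    % Cox form: coefficients are measured relative to a reference blend
    % r = (r_1, \dots, r_q); a common identifiability constraint is
    E(y) = \beta_0 + \sum_{i=1}^{q} \beta_i^{*} x_i ,
      \qquad \sum_{i=1}^{q} r_i \beta_i^{*} = 0 .
    % The component-slope coefficients described above are instead the slopes
    % of this surface along each pure-component direction, evaluated at r.
    ```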

  9. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;

    2013-01-01

    , antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...

  10. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    Science.gov (United States)

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  11. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)

    1994-12-31

    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.

  12. Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples

    CERN Document Server

    Melchior, Peter

    2016-01-01

    We extend the common mixtures-of-Gaussians density estimation approach to account for a known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing expectation-maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis

  13. Modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C

    Energy Technology Data Exchange (ETDEWEB)

    Necula, H.; Badea, A. [Universite Politecnica de Bucarest (Romania). Faculte d' Energetique; Lallemand, M. [INSA, Villeurbanne (France). Centre de Thermique de Lyon; Marvillet, C. [CEA-Grenoble (France)

    2001-11-01

    This study concerns the steady-state modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C. In this local-type model, the control volumes are a function of the geometric configuration of the evaporator, in which baffles are fitted. The model has been validated by comparing theoretical results with experimental data obtained from a refrigerating machine. For the test conditions, the flow pattern has been identified from a flow pattern map as being stratified. Theoretical results show the effect of different parameters, such as the saturation pressure and the inlet quality, on the local variables (temperature, slip ratio). The effect of leakage on the mixture composition has also been investigated. (author)

  14. A lattice traffic model with consideration of preceding mixture traffic information

    Institute of Scientific and Technical Information of China (English)

    Li Zhi-Peng; Liu Fu-Qiang; Sun Jian

    2011-01-01

    In this paper, a lattice model is presented, incorporating not only site information about preceding cars but also the relative currents in front. We derive the stability condition of the extended model by considering a small perturbation around the homogeneous flow solution and find that an improvement in the stability of traffic flow is obtained by taking preceding mixture traffic information into account. Direct simulations also confirm that traffic jams can be suppressed efficiently by considering the relative currents ahead, just as when incorporating site information in front. Moreover, from the nonlinear analysis of the extended models, the dependence of the propagating kink solutions for traffic jams on preceding mixture traffic information is obtained by deriving the modified KdV equation near the critical point using the reductive perturbation method.

  15. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  16. Comparison of Criteria for Choosing the Number of Classes in Bayesian Finite Mixture Models.

    Science.gov (United States)

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Lesaffre, Emmanuel

    2017-01-01

    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that in overfitted mixture models, the overfitted latent classes will asymptotically become empty under specific conditions for the prior of the class proportions. This result may be used to construct a criterion for finding the true number of latent classes, based on the removal of latent classes that have negligible proportions. Unlike some alternative criteria, this criterion can easily be implemented in complex statistical models such as latent class mixed-effects models and multivariate mixture models using standard Bayesian software. We performed an extensive simulation study to develop practical guidelines to determine the appropriate number of latent classes based on the posterior distribution of the class proportions, and to compare this criterion with alternative criteria. The performance of the proposed criterion is illustrated using a data set of repeatedly measured hemoglobin values of blood donors.
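
    The emptying-out behaviour described above can be reproduced with standard software. The sketch below uses scikit-learn's variational BayesianGaussianMixture as a stand-in (the Bayesian MCMC setting of the paper differs): with a small Dirichlet concentration prior, superfluous classes are driven toward negligible proportions.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Two well-separated Gaussian clusters, deliberately overfitted with 8.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-3, 1, (300, 2)), rng.normal(3, 1, (300, 2))])

    # A small weight_concentration_prior pushes superfluous classes toward
    # negligible proportions, mimicking the emptying-out behaviour above.
    bgm = BayesianGaussianMixture(
        n_components=8,
        weight_concentration_prior=1e-3,
        max_iter=500,
        random_state=0,
    ).fit(X)

    print(np.round(bgm.weights_, 3))          # most of the 8 weights collapse near zero
    print("estimated number of classes:", int((bgm.weights_ > 0.05).sum()))
    ```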

  17. A Bayesian threshold-normal mixture model for analysis of a continuous mastitis-related trait.

    Science.gov (United States)

    Ødegård, J; Madsen, P; Gianola, D; Klemetsdal, G; Jensen, J; Heringstad, B; Korsgaard, I R

    2005-07-01

    Mastitis is associated with elevated somatic cell count in milk, inducing a positive correlation between milk somatic cell score (SCS) and the absence or presence of the disease. In most countries, selection against mastitis has focused on selecting parents with genetic evaluations that have low SCS. Univariate or multivariate mixed linear models have been used for statistical description of SCS. However, an observation of SCS can be regarded as drawn from a 2- (or more) component mixture defined by the (usually) unknown health status of a cow at the test-day on which SCS is recorded. A hierarchical 2-component mixture model was developed, assuming that the health status affecting the recorded test-day SCS is completely specified by an underlying liability variable. Based on the observed SCS, inferences can be drawn about disease status and parameters of both SCS and liability to mastitis. The prior probability of putative mastitis was allowed to vary between subgroups (e.g., herds, families), by specifying fixed and random effects affecting both SCS and liability. Using simulation, it was found that a Bayesian model fitted to the data yielded parameter estimates close to their true values. The model provides selection criteria that are more appealing than selection for lower SCS. The proposed model can be extended to handle a wide range of problems related to genetic analyses of mixture traits.

  18. The Effects of Violating the Beta-Binomial Assumption on Huynh's Estimates of Decision Consistency for Mastery Tests.

    Science.gov (United States)

    Johnston, Shirley H.; And Others

    A computer simulation was undertaken to determine the effects of using Huynh's single-administration estimates of the decision consistency indices for agreement and coefficient kappa, under conditions that violated the beta-binomial assumption. Included in the investigation were two unimodal score distributions that fit the model and two bimodal…

  19. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers who consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is based in part on the sorbent's capacity for reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights:
    • A kinetic model including Cr(VI) reduction and Cr(VI) and Cr(III) sorption/desorption.
    • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model.
    • Model validation by checking it against independent sets of data.

  20. Improved binomial charts for monitoring high-quality processes

    NARCIS (Netherlands)

    Albers, Willem/Wim

    2009-01-01

    For processes concerning attribute data with (very) small failure rate p, often negative binomial control charts are used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires align

  1. Improved binomial charts for high-quality processes

    NARCIS (Netherlands)

    Albers, Willem/Wim

    2011-01-01

    For processes concerning attribute data with (very) small failure rate p, often negative binomial control charts are used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires align

  2. On the Mean Absolute Error in Inverse Binomial Sampling

    OpenAIRE

    Mendo, Luis

    2009-01-01

    A closed-form expression and an upper bound are obtained for the mean absolute error of the unbiased estimator of a probability in inverse binomial sampling. The results given permit the estimation of an arbitrary probability with a prescribed level of the normalized mean absolute error.
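
    Concretely, in inverse binomial sampling one draws Bernoulli trials until r successes have occurred; the classical unbiased estimator of the success probability in this setting is Haldane's (r-1)/(N-1), with N the total number of trials. A quick simulation (illustrative only) checks the unbiasedness:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p_true, r, reps = 0.2, 5, 100_000

    # Total trials N to reach r successes = r + (failures before the r-th success)
    N = r + rng.negative_binomial(r, p_true, size=reps)

    haldane = (r - 1) / (N - 1)   # unbiased estimator of p
    naive = r / N                 # biased upward
    print(haldane.mean(), naive.mean())  # ~0.200 versus a value noticeably above 0.2
    ```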

  3. A Neutrosophic Binomial Factorial Theorem with their Refrains

    OpenAIRE

    Khalid, Huda; Smarandache, Florentin; Essa, Ahmed

    2016-01-01

    The Neutrosophic Precalculus and the Neutrosophic Calculus can be developed in many ways, depending on the types of indeterminacy one has and on the method used to deal with such indeterminacy. This article is innovative since the form of neutrosophic binomial factorial theorem was constructed in addition to its refrains.

  4. Estimating the Parameters of the Beta-Binomial Distribution.

    Science.gov (United States)

    Wilcox, Rand R.

    1979-01-01

    For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…
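
    A modern stand-in for the estimation problem above (generic numerical likelihood maximization rather than the paper's Newton-Raphson scheme) fits the two beta-binomial parameters directly:

    ```python
    import numpy as np
    from scipy import stats, optimize

    # Hypothetical data: scores of 200 examinees on an n-item test.
    n_items = 20
    rng = np.random.default_rng(0)
    scores = stats.betabinom.rvs(n_items, 4.0, 6.0, size=200, random_state=rng)

    def neg_log_lik(params):
        a, b = np.exp(params)              # log-parameterization keeps a, b > 0
        return -stats.betabinom.logpmf(scores, n_items, a, b).sum()

    res = optimize.minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    a_hat, b_hat = np.exp(res.x)
    print(f"alpha = {a_hat:.2f}, beta = {b_hat:.2f}")
    ```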

  5. Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data

    Directory of Open Access Journals (Sweden)

    Rachel Carroll

    2017-05-01

    Full Text Available Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.

  6. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... a simplified gradient theory (SGT) model for computing surface tensions. With this model, it is not necessary to solve the time-consuming density profile equations of the gradient theory model. The SRK EOS was applied to calculate the properties of the homogeneous fluid. First, the SGT model was used to predict

  7. Analysis of Two-sample Censored Data Using a Semiparametric Mixture Model

    Institute of Scientific and Technical Information of China (English)

    Gang Li; Chien-tai Lin

    2009-01-01

    In this article we study a semiparametric mixture model for the two-sample problem with right censored data. The model implies that the densities for the continuous outcomes are related by a parametric tilt but otherwise unspecified. It provides a useful alternative to the Cox (1972) proportional hazards model for the comparison of treatments based on right censored survival data. We propose an iterative algorithm for the semiparametric maximum likelihood estimates of the parametric and nonparametric components of the model. The performance of the proposed method is studied using simulation. We illustrate our method in an application to melanoma.

  8. Using a Genetic mixture model to study Phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    Science.gov (United States)

    Bromaghin, J.F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length-adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large-mesh gill-net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size-selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock-specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length-adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted

  9. Stochastic analysis of complex reaction networks using binomial moment equations.

    Science.gov (United States)

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.

  10. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
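
    For reference, the Szyszkowski-Langmuir form referred to above is, for a single organic solute, commonly written as below; the weighted mixture version sums the depression terms of the individual solutes. The notation here is assumed rather than copied from the paper:

    ```latex
    % Single-solute Szyszkowski-Langmuir (S-L) equation: sigma_w is the surface
    % tension of water, T the temperature, C the solute concentration,
    % a and b fitted parameters.
    \sigma(C) = \sigma_w - a\, T \ln(1 + b\, C)
    % A weighted S-L mixture model sums the depression terms with weights w_i
    % (e.g. relative concentrations of the organics):
    \sigma_{mix} = \sigma_w - \sum_i w_i\, a_i\, T \ln(1 + b_i\, C_i)
    ```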

  11. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Directory of Open Access Journals (Sweden)

    A. N. Schwier

    2013-01-01

    Full Text Available Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2–6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski–Langmuir (S–L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.

  12. A dirichlet process covarion mixture model and its assessments using posterior predictive discrepancy tests.

    Science.gov (United States)

    Zhou, Yan; Brinkmann, Henner; Rodrigue, Nicolas; Lartillot, Nicolas; Philippe, Hervé

    2010-02-01

    Heterotachy, the variation of substitution rate at a site across time, is a prevalent phenomenon in nucleotide and amino acid alignments, which may mislead probabilistic-based phylogenetic inferences. The covarion model is a special case of heterotachy, in which sites change between the "ON" state (allowing substitutions according to any particular model of sequence evolution) and the "OFF" state (prohibiting substitutions). In current implementations, the switch rates between ON and OFF states are homogeneous across sites, a hypothesis that has never been tested. In this study, we developed an infinite mixture model, called the covarion mixture (CM) model, which allows the covarion parameters to vary across sites, controlled by a Dirichlet process prior. Moreover, we combine the CM model with other approaches. We use a second independent Dirichlet process that models the heterogeneities of amino acid equilibrium frequencies across sites, known as the CAT model, and general rate-across-site heterogeneity is modeled by a gamma distribution. The application of the CM model to several large alignments demonstrates that the covarion parameters are significantly heterogeneous across sites. We describe posterior predictive discrepancy tests and use these to demonstrate the importance of these different elements of the models.

  13. Cure fraction estimation from the mixture cure models for grouped survival data.

    Science.gov (United States)

    Yu, Binbing; Tiwari, Ram C; Cronin, Kathleen A; Feuer, Eric J

    2004-06-15

    Mixture cure models are usually used to model failure time data with long-term survivors. These models have been applied to grouped survival data. The models provide simultaneous estimates of the proportion of the patients cured from disease and the distribution of the survival times for uncured patients (latency distribution). However, a crucial issue with mixture cure models is the identifiability of the cure fraction and parameters of kernel distribution. Cure fraction estimates can be quite sensitive to the choice of latency distributions and length of follow-up time. In this paper, sensitivity of parameter estimates under semi-parametric model and several most commonly used parametric models, namely lognormal, loglogistic, Weibull and generalized Gamma distributions, is explored. The cure fraction estimates from the model with generalized Gamma distribution is found to be quite robust. A simulation study was carried out to examine the effect of follow-up time and latency distribution specification on cure fraction estimation. The cure models with generalized Gamma latency distribution are applied to the population-based survival data for several cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program. Several cautions on the general use of cure model are advised.
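
    The identifiability issue discussed here is easiest to see from the defining equation of the mixture cure model; in standard notation (assumed, not quoted from the paper), with cure fraction π and latency survival function S_u:

    ```latex
    % Mixture cure model: \pi is the cured proportion; S_u(t) is the survival
    % function (latency distribution) of the uncured patients.
    S_{pop}(t) = \pi + (1 - \pi)\, S_u(t), \qquad \lim_{t\to\infty} S_{pop}(t) = \pi .
    % With finite follow-up the plateau at \pi is never fully observed, which is
    % why estimates of \pi are sensitive to the assumed latency family.
    ```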

  14. Symmetrization of excess Gibbs free energy: A simple model for binary liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos-Suarez, Aly J., E-mail: acastell@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of); Garcia-Sucre, Maximo, E-mail: mgs@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of)

    2011-03-15

    A symmetric expression for the excess Gibbs free energy of liquid binary mixtures is obtained using an appropriate definition for the effective contact fraction. We have identified a mechanism of local segregation as the main cause of the variation of the contact fraction with concentration. Starting from this mechanism we develop a simple model for describing binary liquid mixtures. Two parameters appear in this model: one adjustable, and the other dependent on the first. Following this procedure we reproduce the experimental data of (liquid + vapor) equilibrium with a degree of accuracy comparable to that of well-known, more elaborate models. The way in which we take into account the effective contacts between molecules allows us to identify the compound that may be considered to induce one of the following processes: segregation, anti-segregation or dispersion of the components in the liquid mixture. Finally, the simplicity of the model yields a single resulting interaction energy parameter, which makes the physical interpretation of the results easier.

  15. Reconstruction of coronary artery centrelines from x-ray rotational angiography using a probabilistic mixture model

    Science.gov (United States)

    Ćimen, Serkan; Gooya, Ali; Frangi, Alejandro F.

    2016-03-01

    Three-dimensional reconstructions of coronary arterial trees from X-ray rotational angiography (RA) images have the potential to compensate the limitations of RA due to projective imaging. Most of the existing model-based reconstruction algorithms are either based on forward-projection of a 3D deformable model onto X-ray angiography images or back-projection of 2D information extracted from X-ray angiography images to 3D space for further processing. All of these methods have shortcomings, such as dependency on accurate 2D centerline segmentations. In this paper, the reconstruction is approached from a novel perspective, and is formulated as a probabilistic reconstruction method based on a mixture model (MM) representation of point sets describing the coronary arteries. Specifically, it is assumed that the coronary arteries could be represented by a set of 3D points, whose spatial locations denote the Gaussian components in the MM. Additionally, an extra uniform distribution is incorporated in the mixture model to accommodate outliers (noise, over-segmentation etc.) in the 2D centerline segmentations. Treating the given 2D centerline segmentations as data points generated from the MM, the 3D means, isotropic variance, and mixture weights of the Gaussian components are estimated by maximizing a likelihood function. Initial results from a phantom study show that the proposed method is able to handle outliers in 2D centerline segmentations, which indicates the potential of our formulation. Preliminary reconstruction results in the clinical data are also presented.

  16. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2008-06-01

    Full Text Available MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.
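
    The underlying computation treats each pixel spectrum as a non-negative combination of endmember spectra. A hedged sketch with made-up endmember reflectances (generic non-negative least squares plus renormalization, not the authors' exact procedure):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Endmember spectra (columns): hypothetical reflectances of rice, water and
    # other land cover in 4 MODIS-like bands; all values are illustrative only.
    E = np.array([
        [0.04, 0.06, 0.03],
        [0.08, 0.04, 0.10],
        [0.35, 0.02, 0.22],
        [0.30, 0.01, 0.28],
    ])
    pixel = np.array([0.05, 0.07, 0.25, 0.22])   # observed mixed-pixel spectrum

    # Non-negative least squares gives abundances >= 0; renormalizing so the
    # fractions sum to one is a simple stand-in for a full sum-to-one constraint.
    f, residual = nnls(E, pixel)
    f = f / f.sum()
    print(dict(zip(["rice", "water", "other"], np.round(f, 2))))
    ```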

  17. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power...... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome...... of variance explained by genotyped SNPs, CD and SZ have a broadly dissimilar genetic architecture, due to differing mean effect size and proportion of non-null loci....

  18. Automated sleep spindle detection using IIR filters and a Gaussian Mixture Model.

    Science.gov (United States)

    Patti, Chanakya Reddy; Penzel, Thomas; Cvetkovic, Dean

    2015-08-01

    Sleep spindle detection using modern signal processing techniques such as the Short-Time Fourier Transform and wavelet analysis is a common research approach. These methods are computationally intensive, especially when analysing data from overnight sleep recordings. The authors of this paper propose an alternative using pre-designed IIR filters and a multivariate Gaussian Mixture Model. Features extracted with the IIR filters are clustered using a Gaussian Mixture Model without the use of any subject-independent thresholds. The algorithm was tested on a database consisting of overnight sleep PSG recordings of 5 subjects and an online public spindle database consisting of six 30-minute sleep excerpts. An overall sensitivity of 57% and a specificity of 98.24% were achieved in the overnight database group, and a sensitivity of 65.19% at a 16.9% false positive proportion for the 6 sleep excerpts.
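
    A hedged sketch of the two-stage pipeline described above, with generic choices of sigma band, window length and features (the paper's exact filters and feature set are not reproduced):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt
    from sklearn.mixture import GaussianMixture

    fs = 200.0                                     # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 10, int(60 * fs))          # stand-in for one EEG channel

    # Stage 1: IIR band-pass around the spindle (sigma) band, ~11-16 Hz.
    sos = butter(4, [11, 16], btype="bandpass", fs=fs, output="sos")
    sigma = sosfiltfilt(sos, eeg)

    # Stage 2: per-window features (sigma-band RMS, broadband RMS), then a
    # 2-component GMM; the higher sigma-power cluster is taken as spindle-like.
    win = int(0.5 * fs)
    n = len(eeg) // win
    feats = np.column_stack([
        [np.sqrt(np.mean(sigma[i*win:(i+1)*win] ** 2)) for i in range(n)],
        [np.sqrt(np.mean(eeg[i*win:(i+1)*win] ** 2)) for i in range(n)],
    ])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    spindle_cluster = np.argmax(gmm.means_[:, 0])  # cluster with higher sigma power
    print("windows flagged as spindle-like:", int((labels == spindle_cluster).sum()))
    ```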

  19. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.

  20. A cross-association model for CO2-methanol and CO2-ethanol mixtures

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A cross-association model was proposed for CO2-alcohol mixtures based on the statistical associating fluid theory (SAFT). CO2 was treated as a pseudo-associating molecule, and both the self-association between alcohol hydroxyls and the cross-association between CO2 and alcohol hydroxyls were considered. The equilibrium properties from low temperature and pressure to high temperature and pressure were investigated using this model. The calculated p-x and p-ρ diagrams of CO2-methanol and CO2-ethanol mixtures agreed with the experimental data. The results showed that when the cross-association was taken into account in the Helmholtz free energy, the calculated equilibrium properties were significantly improved, and erroneous predictions of the three-phase equilibria and triple points in low-temperature regions could be avoided.

  1. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from the nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric one, is specified for the effect sizes and is estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  2. A generalized longitudinal mixture IRT model for measuring differential growth in learning environments.

    Science.gov (United States)

    Kadengye, Damazo T; Ceulemans, Eva; Van den Noortgate, Wim

    2014-09-01

    This article describes a generalized longitudinal mixture item response theory (IRT) model that allows for detecting latent group differences in item response data obtained from electronic learning (e-learning) environments or other learning environments that result in large numbers of items. The described model can be viewed as a combination of a longitudinal Rasch model, a mixture Rasch model, and a random-item IRT model, and it includes some features of the explanatory IRT modeling framework. The model assumes the possible presence of latent classes in item response patterns, due to initial person-level differences before learning takes place, to latent class-specific learning trajectories, or to a combination of both. Moreover, it allows for differential item functioning over the classes. A Bayesian model estimation procedure is described, and the results of a simulation study are presented that indicate that the parameters are recovered well, particularly for conditions with large item sample sizes. The model is also illustrated with an empirical sample data set from a Web-based e-learning environment.

  3. Phase Equilibria of Water/CO2 and Water/n-Alkane Mixtures from Polarizable Models.

    Science.gov (United States)

    Jiang, Hao; Economou, Ioannis G; Panagiotopoulos, Athanassios Z

    2017-02-16

    Phase equilibria of water/CO2 and water/n-alkane mixtures over a range of temperatures and pressures were obtained from Monte Carlo simulations in the Gibbs ensemble. Three sets of Drude-type polarizable models for water, namely the BK3, GCP, and HBP models, were combined with a polarizable Gaussian-charge CO2 (PGC) model to represent the water/CO2 mixture; the HBP water model describes hydrogen bonds between water and CO2 explicitly. All models underestimate CO2 solubility in water if standard combining rules are used for the dispersion interactions between water and CO2. With the dispersion parameters optimized to phase compositions, the BK3 and GCP models were able to represent the CO2 solubility in water; however, the water composition in the CO2-rich phase is systematically underestimated. Accurate representation of compositions for both the water- and CO2-rich phases could not be achieved even after optimizing the cross-interaction parameters. By contrast, accurate compositions for both phases were obtained with hydrogen-bonding parameters determined from the second virial coefficient for water/CO2. Phase equilibria of water/n-alkane mixtures were also studied using the HBP water model and an exponential-6 united-atom n-alkane model. The dispersion interactions between water and n-alkanes were optimized to the Henry's constants of methane and ethane in water. The HBP water and united-atom n-alkane models underestimate the water content in the n-alkane-rich phase; this underestimation is likely due to the neglect of electrostatic and induction energies in the united-atom model.

  4. The Precise Measurement of Vapor-Liquid Equilibrium Properties of the CO2/Isopentane Binary Mixture, and Fitted Parameters for a Helmholtz Energy Mixture Model

    Science.gov (United States)

    Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.

    2017-10-01

    Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which were published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.

  5. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U.; Benz, P.; Mantharas, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]

    1997-06-01

    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion, and in particular on the mechanisms of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  6. Condition monitoring of oil-impregnated paper bushings using extension neural network, Gaussian mixture and hidden Markov models

    CSIR Research Space (South Africa)

    Miya, WS

    2008-10-01

    Full Text Available In this paper, a comparison between the Extension Neural Network (ENN), the Gaussian Mixture Model (GMM) and the Hidden Markov Model (HMM) is conducted for bushing condition monitoring. The monitoring process is a two-stage implementation of a classification...

  7. Generalized Binomial Convolution of the mth Powers of the Consecutive Integers with the General Fibonacci Sequence

    Directory of Open Access Journals (Sweden)

    Kılıç Emrah

    2016-12-01

    Full Text Available In this paper, we consider Gauthier's generalized convolution and then define its binomial analogue as well as an alternating binomial analogue. We formulate these convolutions and give some applications of them.
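
    Judging from the title alone, the objects studied are presumably sums of the following shape; the notation below is ours and only indicative, not the paper's:

```latex
% Presumed shape of the convolutions, inferred from the record's title; G_k denotes
% a general (two-term recurrence) Fibonacci sequence. Notation is ours.
\[
  S_m(n) = \sum_{k=0}^{n} \binom{n}{k}\, k^{m}\, G_{k},
  \qquad
  \widetilde{S}_m(n) = \sum_{k=0}^{n} (-1)^{k} \binom{n}{k}\, k^{m}\, G_{k}.
\]
```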

  8. Multivariate Generalizations of the Multiplicative Binomial Distribution: Introducing the MM Package

    OpenAIRE

    Altham, Pat M. E.; Hankin, Robin K. S.

    2012-01-01

    We present two natural generalizations of the multinomial and multivariate binomial distributions, which arise from the multiplicative binomial distribution of Altham (1978). The resulting two distributions are discussed and we introduce an R package, MM, which includes associated functionality.
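
    The univariate building block behind these generalizations is Altham's two-parameter multiplicative binomial. A short sketch of its probability mass function follows, to our reading of Altham (1978); the MM package itself is written in R, so this Python rendering is purely illustrative.

```python
# Altham's (1978) multiplicative binomial: P(X = x) is proportional to
# C(n, x) p^x (1-p)^(n-x) theta^(x(n-x)); theta = 1 recovers the ordinary binomial.
import numpy as np
from scipy.special import comb

def multiplicative_binomial_pmf(n, p, theta):
    x = np.arange(n + 1)
    w = comb(n, x) * p**x * (1 - p) ** (n - x) * theta ** (x * (n - x))
    return w / w.sum()  # normalizing constant has no simple closed form in general

print(multiplicative_binomial_pmf(10, 0.4, 1.0))  # matches the binomial(10, 0.4) pmf
```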

  9. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  10. Novel pseudo-divergence of Gaussian mixture models based speaker clustering method

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Xu Yiqiong; Li Bicheng

    2006-01-01

    A serial structure is applied to speaker recognition to reduce algorithm delay and computational complexity: the speech is first classified into a speaker class, and the most likely speaker is then searched for inside that class. Differences between Gaussian Mixture Models (GMMs) are widely applied in speaker classification. The paper proposes a novel pseudo-divergence measure, the ratio of inter-model dispersion to intra-model dispersion, to represent the difference between GMMs and thus perform speaker clustering. The GMM components (weights, means, and variances) are involved in the dispersion. Experiments indicate that the measure represents the difference between GMMs well and improves the performance of speaker clustering.

  11. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Full Text Available Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0-10 mol/kg water) and glutaric acid solutions (0-5 mol/kg water) agreed to within 0.6% and 0.8%, respectively, of the predictions using Peng's modified UNIFAC model (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and the predictions using the ZSR model or Peng's modified UNIFAC model are higher than those for the single-component cases. Measurements of the overall water vapor pressure for 50:50 molar mixtures of malonic/glutaric acids closely followed those for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.

  12. Performance of growth mixture models in the presence of time-varying covariates.

    Science.gov (United States)

    Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong

    2016-10-31

    Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.
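
    For reference, the class-enumeration indices compared in this study are simple functions of the maximized log-likelihood logL, the number of free parameters k, and the sample size n. The sketch below uses the standard textbook definitions (SBIC with Sclove's (n+2)/24 adjustment); the authors' exact variants may differ in detail.

```python
# Standard definitions of several class-enumeration indices (textbook forms).
import math

def enumeration_indices(logL, k, n):
    return {
        "AIC":  -2.0 * logL + 2.0 * k,
        "BIC":  -2.0 * logL + k * math.log(n),
        "CAIC": -2.0 * logL + k * (math.log(n) + 1.0),
        "SBIC": -2.0 * logL + k * math.log((n + 2.0) / 24.0),  # Sclove's adjustment
    }

# Typical use: fit mixtures with 1, 2, 3, ... classes and keep the class count
# that minimizes the chosen index.
```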

  13. A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation.

    Science.gov (United States)

    Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui

    2017-01-01

    Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothed segmentations, and a lack of flexibility to fit the data. In order to address these issues, in this paper we propose a rough-set-bounded asymmetric Gaussian mixture model with a spatial constraint for image segmentation. First, based on our previous work, where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and the posterior probability that a pixel belongs to each sub-region are estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothing in the segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels; they are constructed from the prior and posterior probabilities of the within- and between-clusters and consider the spatial direction. We compare our algorithm to state-of-the-art segmentation approaches on both synthetic and real images to demonstrate its superior performance.

  14. Psychophysical model of chromatic perceptual transparency based on subtractive color mixture.

    Science.gov (United States)

    Faul, Franz; Ekroll, Vebjørn

    2002-06-01

    Variants of Metelli's episcotister model, which are based on additive color mixture, have been found to describe the luminance conditions for perceptual transparency very accurately. However, the findings in the chromatic domain are not that clear-cut, since there exist chromatic stimuli that conform to the additive model but do not appear transparent. We present evidence that such failures are of a systematic nature, and we propose an alternative psychophysical model based on subtractive color mixture. Results of a computer simulation revealed that this model approximately describes color changes that occur when a surface is covered by a filter. We present the results of two psychophysical experiments with chromatic stimuli, in which we directly compared the predictions of the additive model and the predictions of the new model. These results show that the color relations leading to the perception of a homogeneous transparent layer conform very closely to the predictions of the new model and deviate systematically from the predictions of the additive model.

  15. Modelling plant interspecific interactions from experiments of perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-07-28

    The contribution of plant species richness to productivity and ecosystem functioning is a long-standing issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modelling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modelled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e. a model in which the interspecific interactions are more general than purely competitive. We estimated model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating combinations of more than two species and comparing them with the polyculture experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown, and from all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modelling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular.
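
    A sketch of the kind of generalized Lotka-Volterra simulation described is shown below, with carrying capacities K_i taken from monocultures and interaction coefficients a_ij from bicultures; the numbers are made-up placeholders for three hypothetical species, not the paper's estimates.

```python
# Generalized Lotka-Volterra community sketch:
# dB_i/dt = r_i * B_i * (1 - (B_i + sum_{j != i} a_ij * B_j) / K_i).
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.8, 1.1, 0.9])           # intrinsic growth rates (placeholders)
K = np.array([500.0, 700.0, 600.0])     # monoculture carrying capacities (placeholders)
A = np.array([[0.0, 0.4, -0.2],         # a_ij > 0: competition; a_ij < 0: facilitation
              [0.6, 0.0,  0.3],
              [0.1, 0.5,  0.0]])

def glv(t, B):
    return r * B * (1.0 - (B + A @ B) / K)

sol = solve_ivp(glv, (0.0, 50.0), [10.0, 10.0, 10.0])
print(sol.y[:, -1])  # simulated end-point biomass of the three-species mixture
```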

  16. Development of a binomial sampling plan for the carob moth (Lepidoptera: Pyralidae), a pest of California dates.

    Science.gov (United States)

    Park, Jung-Joon; Perring, Thomas M

    2010-08-01

    The seasonal density fluctuations of the carob moth, Ectomyelois ceratoniae (Zeller) (Lepidoptera: Pyralidae), were determined in a commercial date (Phoenix dactylifera L.) garden. Four fruit categories (axil, ground, abscised green, and abscised brown) were sampled, and two carob moth life stages, eggs and immatures (larvae and pupae combined), were evaluated on these fruits. Based on the relative consistency of these eight sampling units (four fruit categories by two carob moth stages), four were used for the development of a binomial sampling plan. The average number of carob moth eggs and immatures on ground and abscised brown fruit was estimated from the proportion of infested fruit, and these binomial models were evaluated for model fitness and precision. These analyses suggested that the best sampling plan should consist of abscised brown dates and carob moth immatures, using a sample size of 100 dates. The performance of this binomial plan was evaluated further using a resampling protocol with 25 independent data sets at action thresholds of 7, 10, and 15% to represent light, medium, and severe infestations, respectively. Results from the resampling program suggested that increasing the sample size from 100 to 150 dates improved the precision of the binomial sampling plan. Use of this sampling plan will be the cornerstone of an integrated pest management program for carob moth in dates.
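
    Binomial sampling plans of this kind rest on an empirical link between the proportion of infested units p and the mean density m; a common choice in the sampling literature is the Kono-Sugino regression ln(m) = a + b ln(-ln(1-p)). The sketch below uses that form with hypothetical coefficients; the paper's fitted model may differ.

```python
# Hedged sketch of a binomial sampling decision rule via the Kono-Sugino link:
# ln(m) = a + b * ln(-ln(1 - p)). Coefficients a, b are hypothetical placeholders.
import math

def mean_from_proportion(p, a=0.5, b=1.1):
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie strictly between 0 and 1")
    return math.exp(a + b * math.log(-math.log(1.0 - p)))

p_hat = 12 / 100                      # 12 infested dates in a 100-date sample
print(mean_from_proportion(p_hat))    # implied mean density per date
print(p_hat >= 0.10)                  # exceeds a 10% action threshold?
```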

  17. Revelation of shrunken or stretched binomial dispersion and public perception of situations which might spread AIDS or HIV

    Directory of Open Access Journals (Sweden)

    Ramalingam Shanmugam

    2014-04-01

    Results: In the survey data about how AIDS/HIV might spread, according to fifty respondents in thirteen nations, the functional balance exists in only three cases: "needle", "blood" and "sex", justifying use of the usual binomial model (1). In all other seven cases: "glass", "eating", "object", "toilet", "hands", "kissing", and "care" of an AIDS or HIV patient, there is a significant imbalance between the dispersion and its functional equivalence in terms of the mean, suggesting that the new binomial, called the imbalanced binomial distribution (6) of this article, should be used. The statistical power of this methodology is indeed excellent and hence practitioners should make use of it. Conclusion: The new model, called the imbalanced binomial distribution (6) of this article, is versatile enough to be useful in other research topics in disciplines such as medicine, drug assessment, clinical trial outcomes, business, marketing, finance, economics, engineering and public health. [Int J Res Med Sci 2014; 2(2): 462-467]

  18. Accuracy assessment of linear spectral mixture model due to terrain undulation

    Science.gov (United States)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixed spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to extract the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. The LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of the LSMM is restricted by a variety of factors, but research on the LSMM has mostly focused on the assessment of nonlinear effects and on the techniques used to select endmembers; environmental conditions of the study area that can sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper examines the accuracy uncertainty of the LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain-correction algorithm was applied to it. Fractional abundances for different cover types were then extracted, using the LSMM, from both pre- and post-correction ASTER imagery. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of the LSMM in mountainous areas. Specifically, for vegetation abundances, improvements in unmixing accuracy of 17.6% (regression against NDVI) and 18.6% (regression against MVI) in R2 were achieved by removing terrain undulation. This study thus indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying the LSMM, and it provides a new instance of LSMM application in mountainous areas. In addition, the methods employed in this study could be

  19. A Neural Network Based Hybrid Mixture Model to Extract Information from Non-linear Mixed Pixels

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2012-09-01

    Full Text Available Signals acquired by sensors in the real world are non-linear combinations, requiring non-linear mixture models to describe the resultant mixture spectra in terms of the endmembers' (pure pixels') distributions. This communication discusses inferring class fractions through a novel hybrid mixture model (HMM). The HMM is a three-step process: the endmembers are first derived from the images themselves using the N-FINDR algorithm. These endmembers are used by the linear mixture model (LMM) in the second step, which provides an abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual ground proportions, are fed as input into a neural-network-based multi-layer perceptron (MLP) architecture to train the neurons. The neural output further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. The HMM is first implemented and validated on simulated hyperspectral data of 200 bands and subsequently on real MODIS data with a spatial resolution of 250 m. The results on computer-simulated data show that the method gives acceptable results for unmixing pixels, with an overall RMSE of 0.0089 ± 0.0022 with the LMM and 0.0030 ± 0.0001 with the HMM when compared to actual class proportions. The unmixed MODIS images showed an overall RMSE with the HMM of 0.0191 ± 0.022, compared to an overall RMSE of 0.2005 ± 0.41 for the LMM alone, indicating that individual class abundances obtained from the HMM are very close to the real observations.

  20. A proposed experimental platform for measuring the properties of warm dense mixtures: Testing the applicability of the linear mixing model

    Science.gov (United States)

    Hawreliak, James

    2017-06-01

    This paper presents a proposed experimental technique for investigating the impact of chemical interactions in warm dense liquid mixtures. It uses experimental equation of state (EOS) measurements of warm dense liquid mixtures with different compositions to determine the deviation from the linear mixing model. Statistical mechanics is used to derive the EOS of a mixture with a constant-pressure linear mixing term (Amagat's rule) and an interspecies interaction term. A ratio between the particle densities of two different mixture compositions, K(P, T)_{i:ii}, is defined; by comparing this ratio across a range of mixtures, the impact of interspecies interactions can be studied. Hydrodynamic simulations of mixtures with different carbon/hydrogen ratios are used to demonstrate the application of the proposed technique to multiple-shock and ramp-compression experiments. The smallest pressure correction due to interspecies interactions that can be measured with this methodology is set by the uncertainty in the density measurement.

  1. Granular mixtures modeled as elastic hard spheres subject to a drag force.

    Science.gov (United States)

    Vega Reyes, Francisco; Garzó, Vicente; Santos, Andrés

    2007-06-01

    Granular gaseous mixtures under rapid flow conditions are usually modeled as a multicomponent system of smooth inelastic hard disks (two dimensions) or spheres (three dimensions) with constant coefficients of normal restitution α_ij. In the low-density regime an adequate framework is provided by the set of coupled inelastic Boltzmann equations. Due to the intricacy of the inelastic Boltzmann collision operator, in this paper we propose a simpler model of elastic hard disks or spheres subject to the action of an effective drag force, which mimics the effect of dissipation present in the original granular gas. For each collision term ij, the model has two parameters: a dimensionless factor β_ij modifying the collision rate of the elastic hard spheres, and the drag coefficient ζ_ij. Both parameters are determined by requiring that the model reproduce the collisional transfers of momentum and energy of the true inelastic Boltzmann operator, yielding β_ij = (1 + α_ij)/2 and ζ_ij ∝ 1 − α_ij^2, where the proportionality constant is a function of the partial densities, velocities, and temperatures of species i and j. The Navier-Stokes transport coefficients for a binary mixture are obtained from the model by application of the Chapman-Enskog method. The three coefficients associated with the mass flux are the same as those obtained from the inelastic Boltzmann equation, while the remaining four transport coefficients show generally good agreement, especially in the case of the thermal conductivity. The discrepancies between the two descriptions are seen to be similar to those found for monocomponent gases. Finally, the approximate decomposition of the inelastic Boltzmann collision operator is exploited to construct a model kinetic equation for granular mixtures as a direct extension of a known kinetic model for elastic collisions.

  2. Binding of Solvent Molecules to a Protein Surface in Binary Mixtures Follows a Competitive Langmuir Model.

    Science.gov (United States)

    Kulschewski, Tobias; Pleiss, Jürgen

    2016-09-06

    The binding of solvent molecules to a protein surface was modeled by molecular dynamics simulations of Candida antarctica (C. antarctica) lipase B in binary mixtures of water, methanol, and toluene. Two models were analyzed: a competitive Langmuir model, which assumes identical solvent binding sites with different affinities toward water (KWat), methanol (KMet), and toluene (KTol), and a competitive Langmuir model with an additional interaction between free water and already-bound water (KWatWat). The numbers of protein-bound molecules of both components of a binary mixture were determined for different compositions as a function of their thermodynamic activities in the bulk phase, and the binding constants were simultaneously fitted to the six binding curves (two components for each of three different mixtures). For both Langmuir models, the values of KWat, KMet, and KTol were highly correlated. The highest binding affinity was found for methanol, which was almost 4-fold higher than the binding affinities of water and toluene (KMet ≫ KWat ≈ KTol). Binding of water was dominated by the water-water interaction (KWatWat). Even for the three protein surface patches of highest water affinity, the binding affinity of methanol was 2-fold higher than that of water and 8-fold higher than that of toluene (KMet > KWat > KTol). The Langmuir model provides insight into the protein-destabilizing mechanism of methanol, which has a high binding affinity toward the protein surface: destabilizing solvents compete with intraprotein interactions and disrupt the tertiary structure. In contrast, benign solvents such as water or toluene have a low affinity toward the protein surface. Water is a special solvent: only a few water molecules bind directly to the protein; most water molecules bind to already-bound water molecules, thus forming water patches. A quantitative mechanistic model of protein-solvent interactions that includes competition and miscibility of the components contributes a robust basis
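
    The simpler of the two models has a closed form worth stating: with identical sites and bulk activities a_i, the fractional occupancy is theta_i = K_i a_i / (1 + sum_j K_j a_j). The sketch below implements just that (the water-water cooperativity term KWatWat of the second model is omitted); the K values are illustrative, chosen only to respect the reported ordering KMet ≫ KWat ≈ KTol, not the fitted constants.

```python
# Competitive Langmuir occupancies: theta_i = K_i * a_i / (1 + sum_j K_j * a_j).
# K values are illustrative only (roughly 4-fold methanol affinity), not the fitted ones.
def occupancies(activities, K):
    denom = 1.0 + sum(K[s] * a for s, a in activities.items())
    return {s: K[s] * a / denom for s, a in activities.items()}

K = {"water": 1.0, "methanol": 4.0, "toluene": 1.0}
print(occupancies({"water": 0.5, "methanol": 0.5, "toluene": 0.0}, K))
```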

  3. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.;

    2005-01-01

    .15 and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting in order to study the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbons). These data (575 data points) have been...... discussed in the framework of recent representative models (hard-sphere scheme, friction theory, and free-volume model) and with mixing laws and two empirical models (particularly the LBC model, which is commonly used in petroleum engineering, and the self-referencing model). This comparative study shows

  4. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model, and very good agreement was observed between the experimental data and the simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297-0.595 mm (-30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.

  5. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using the discrete element method. The dynamic properties of the asphalt mixture were captured by implementing Burger's contact model. Different ways of taking into account the normal and shear material properties of asphalt mixtures ...

  6. Highlighting pitfalls in the Maxwell-Stefan modeling of water-alcohol mixture permeation across pervaporation membranes

    NARCIS (Netherlands)

    Krishna, R.; van Baten, J.M.

    2010-01-01

    The Maxwell-Stefan (M-S) equations are widely used for modeling permeation of water-alcohol mixtures across microporous membranes in pervaporation and dehydration process applications. For binary mixtures, for example, the following set of assumptions is commonly invoked, either explicitly or

  7. The Parameterized Complexity Analysis of Partition Sort for Negative Binomial Distribution Inputs

    CERN Document Server

    Singh, Niraj Kumar; Chakraborty, Soubhik

    2012-01-01

    The present paper studies the Partition sort algorithm for negative binomial inputs. Comparing the results with those for binomial inputs in our previous work, we find that this algorithm is sensitive to the parameters of both distributions, but the main effects as well as the interaction effects involving these parameters and the input size are more significant in the negative binomial case.

  8. Microstructural Analysis and Rheological Modeling of Asphalt Mixtures Containing Recycled Asphalt Materials

    Directory of Open Access Journals (Sweden)

    Augusto Cannone Falchetto

    2014-09-01

    Full Text Available The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP) and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits have been proposed on the amount of recycled material that can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low-temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired from digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of the aggregate, asphalt mastic, and air-void phases, while an opposite trend is observed when RAS is included. Analogical and semi-empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.

  9. Computational modeling of photoacoustic signals from mixtures of melanoma and red blood cells.

    Science.gov (United States)

    Saha, Ratan K

    2014-10-01

    A theoretical approach to modeling photoacoustic (PA) signals from mixtures of melanoma cells (MCs) and red blood cells (RBCs) is discussed. The PA signal from a cell, approximated as a fluid sphere, was evaluated using a frequency-domain method, and the tiny signals from individual cells were summed to obtain the resultant PA signal. The local signal-to-noise ratio for a MC was about 5.32 and 5.40 for 639 and 822 nm illuminations, respectively. The PA amplitude exhibited a monotonic rise with increasing number of MCs for each incident radiation. The power spectral lines also demonstrated similar variations over a large frequency range (5-200 MHz). For instance, the spectral intensity was 5.5 and 4.0 dB greater at 7.5 MHz for a diseased sample containing 1 MC and 22,952 RBCs than for a normal sample composed of 22,958 RBCs at those irradiations, respectively. The envelope histograms generated from PA signals for mixtures of small numbers of MCs and large numbers of RBCs seemed to obey pre-Rayleigh statistics. The generalized gamma distribution was found to provide better fits to the histograms than the Rayleigh and Nakagami distributions. The model provides a means to study PA signals from mixtures of different populations of absorbers.

  10. Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands.

    Science.gov (United States)

    Lenoir-Wijnkoop, I; van Aalderen, W M C; Boehm, G; Klaassen, D; Sprikkelman, A B; Nuijten, M J C

    2012-02-01

    The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision-analytical techniques and was developed to estimate the health-economic impact of prebiotic preventive disease management of atopic dermatitis. Data sources used include published literature, clinical trials, official price/tariff lists, and national population statistics. The comparator was no supplementation with prebiotics. The primary perspective for conducting the economic evaluation was based on the situation in The Netherlands in 2009. The results show that the use of a prebiotic infant formula (IMMUNOFORTIS®) leads to an additional cost of €51 and an increase of 0.108 in Quality Adjusted Life Years (QALYs) when compared with no prebiotics. Consequently, the use of infant formula with a specific mixture of prebiotics results in an incremental cost-effectiveness ratio (ICER) of €472 per QALY. The sensitivity analyses show that the ICER remains, in all analyses, far below the threshold of €20,000/QALY. This study shows that the favourable health benefit of the use of a specific mixture of prebiotics results in positive short- and long-term health-economic benefits, and demonstrates that the use of infant formula with a specific mixture of prebiotics is a highly cost-effective way of preventing atopic dermatitis in The Netherlands.
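
    The reported ratio follows directly from the two increments given in the abstract:

```latex
% ICER = incremental cost over incremental effect, from the figures in the abstract.
\[
  \mathrm{ICER}
  = \frac{\Delta C}{\Delta \mathrm{QALY}}
  = \frac{\mathrm{EUR}\ 51}{0.108\ \mathrm{QALY}}
  \approx \mathrm{EUR}\ 472 \text{ per QALY gained.}
\]
```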

  11. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    Energy Technology Data Exchange (ETDEWEB)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L. [NeuroSpin/CEA, F-91191 Gif-sur-Yvette (France); IFR 49, Institut d'Imagerie Neurofonctionnelle, Paris (France)]

    2009-07-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. The extracted hemodynamic features are then entered as input data to a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing work on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying the parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  12. Hyperspectral Small Target Detection by Combining Kernel PCA with Linear Mixture Model

    Institute of Scientific and Technical Information of China (English)

    GU Yanfeng; ZHANG Ye

    2005-01-01

    In this paper, a kernel-based invariant detection method is proposed for small target detection in hyperspectral images. The method combines kernel principal component analysis (KPCA) with the linear mixture model (LMM). The LMM is used to describe each pixel in the hyperspectral images as a mixture of target, background and noise, while the KPCA is used to build the background subspace. Finally, a generalized likelihood ratio test is used to detect whether each pixel in the hyperspectral image includes a target. Numerical experiments are performed on hyperspectral data with 126 bands collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The experimental results show the effectiveness of the proposed method and prove that it can commendably overcome spectral variability and the sparsity of targets in hyperspectral target detection, and that it has a great ability to separate targets from background.

  13. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop widely used for food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, the demand for palm oil has been growing rapidly over the years, a trend that encourages illegal logging of trees and destroys natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
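
    As a concrete illustration of the two-component idea, the sketch below fits a mixture of two normals to a return series by EM (the study itself uses Newton-Raphson maximum likelihood; sklearn's EM is used here only for brevity). The `returns` array is a random placeholder standing in for, say, log changes in the palm oil price or the exchange rate.

```python
# Two-component normal mixture fit (EM via sklearn; the paper uses Newton-Raphson MLE).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, size=500)  # placeholder data, not the paper's series

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("sigmas: ", np.sqrt(gm.covariances_).ravel())
```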

  14. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and produce physically reasonable results, including simple shear flow, used to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane computed by means of the depth-integrated model. The results give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within Project Number WA 2610/3-1.

  15. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Science.gov (United States)

    Theis, Lucas; Chagas, Andrè Maia; Arnstein, Daniel; Schwarz, Cornelius; Bethge, Matthias

    2013-01-01

    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach that can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret, as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and that the mixture-based model is able to outperform generalized linear and quadratic models.

  16. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach that can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret, as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and that the mixture-based model is able to outperform generalized linear and quadratic models.
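
    The generative recipe in this abstract can be stated compactly: fit one Gaussian mixture to spike-triggered stimuli and another to non-spike-triggered stimuli, then score new stimuli by their posterior log-odds of eliciting a spike. A minimal sketch, assuming stimulus arrays `X_spike` and `X_nospike` of shape (samples, dimensions); the component count is an arbitrary placeholder.

```python
# Generative mixture-model detector: separate GMMs for the spike-triggered and
# non-spike-triggered stimulus distributions, combined through Bayes' rule.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mixture_detector(X_spike, X_nospike, k=3):
    g1 = GaussianMixture(n_components=k, random_state=0).fit(X_spike)
    g0 = GaussianMixture(n_components=k, random_state=0).fit(X_nospike)
    prior_logodds = np.log(len(X_spike) / len(X_nospike))
    def log_odds(X):
        # log p(spike | x) - log p(no spike | x), up to the shared normalizer
        return g1.score_samples(X) - g0.score_samples(X) + prior_logodds
    return log_odds
```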

  17. Fourth-order strain-gradient phase mixture model for nanocrystalline fcc materials

    Science.gov (United States)

    Klusemann, Benjamin; Bargmann, Swantje; Estrin, Yuri

    2016-12-01

    The proposed modeling approach for nanocrystalline materials is an extension of the local phase mixture model introduced by Kim et al. (2000, Acta Mater. 48, 493-504). Local models cannot account for any non-uniformities or strain patterns; that is, such models describe the behavior correctly only as long as it is homogeneous. In order to capture heterogeneities, the phase mixture model is augmented with gradient terms of higher order, namely second and fourth order. Different deformation mechanisms are assumed to operate concurrently in the grain interior and in the grain boundaries. The deformation mechanism in the grain boundaries is associated with diffusional mass transport along the boundaries, while in the grain interior dislocation glide as well as diffusion-controlled mechanisms are considered. In particular, the mechanical response of nanostructured polycrystals is investigated. The model is capable of correctly predicting the transition of the flow stress from Hall-Petch behavior in the conventional grain-size range to an inverse Hall-Petch relation in the nanocrystalline grain-size range. The consideration of second- and fourth-order strain gradients allows non-uniformities within the strain field to represent strain patterns, in combination with a regularization effect. Details of the numerical implementation are provided.

  18. Inhalation pressure distributions for medical gas mixtures calculated in an infant airway morphology model.

    Science.gov (United States)

    Gouinaud, Laure; Katz, Ira; Martin, Andrew; Hazebroucq, Jean; Texereau, Joëlle; Caillibotte, Georges

    2015-01-01

    A numerical pressure-loss model previously used for adult human airways has been modified to simulate the inhalation pressure distribution in a healthy 9-month-old infant lung morphology model. Pressure distributions are calculated for air as well as for helium and xenon mixtures with oxygen, to investigate the effects of gas density and viscosity variations for this age group. The results indicate that there are significant pressure losses in infant extrathoracic airways due to inertial effects, leading to much higher pressures being needed to drive nominal flows in the infant airway model than in an adult airway model. For example, the pressure drop through the nasopharynx model of the infant is much greater than that for the nasopharynx model of the adult: the adult-versus-infant pressure differences are 0.08 cm H2O versus 0.4 cm H2O, 0.16 cm H2O versus 1.9 cm H2O, and 0.4 cm H2O versus 7.7 cm H2O when breathing helium-oxygen (78/22%), nitrogen-oxygen (78/22%), and xenon-oxygen (60/40%), respectively. Within the healthy lung, viscous losses are of the same order for the three gas mixtures, so the differences in pressure distribution are relatively small.

  19. The Beta-Binomial Distribution for Estimating the Number of False Rejections in Microarray Gene Expression Studies.

    Science.gov (United States)

    Hunt, Daniel L; Cheng, Cheng; Pounds, Stanley

    2009-03-15

    In differential expression analysis of microarray data, it is common to assume independence among null hypotheses (and thus gene expression levels). The independence assumption implies that the number of false rejections V follows a binomial distribution and leads to an estimator of the empirical false discovery rate (eFDR). Here, the number of false rejections V is instead modeled with the beta-binomial distribution, and an estimator of the beta-binomial false discovery rate (bbFDR) is derived. This approach accounts for how the correlation among non-differentially expressed genes influences the distribution of V. Permutations are used to generate the observed values of V under the null hypotheses, and a beta-binomial distribution is fit to these values. The bbFDR estimator is compared to the eFDR estimator in simulation studies of correlated non-differentially expressed genes and is found to outperform the eFDR in certain scenarios. As an example, the method is also used to perform an analysis comparing the gene expression of soft tissue sarcoma samples to that of normal tissue samples.
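
    A sketch of the estimation step follows, assuming `V` holds the false-rejection counts from the permutations (out of m tests): fit the beta-binomial by the usual method of moments, then read off its moments or tail probabilities. scipy's betabinom provides the distribution; the moment matching below is a standard derivation, not necessarily the authors' fitting procedure.

```python
# Method-of-moments fit of a beta-binomial to permutation counts of false rejections V.
import numpy as np
from scipy import stats

def fit_betabinom_mom(V, m):
    p = V.mean() / m                                          # mean rejection probability
    rho = (V.var(ddof=1) / (m * p * (1 - p)) - 1) / (m - 1)   # overdispersion estimate
    s = 1.0 / rho - 1.0                                       # a + b
    return m, p * s, (1 - p) * s                              # (n, a, b) for scipy

m = 1000                                                       # number of tests (placeholder)
V = stats.betabinom.rvs(m, 2, 198, size=2000, random_state=1)  # placeholder permutations
n, a, b = fit_betabinom_mom(V, m)
print(stats.betabinom.mean(n, a, b))                           # expected false rejections
```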

  20. Parallel Binomial American Option Pricing with (and without) Transaction Costs

    CERN Document Server

    Zhang, Nan; Zastawniak, Tomasz

    2011-01-01

    We present a parallel algorithm that computes the ask and bid prices of an American option when proportional transaction costs apply to the trading of the underlying asset. The algorithm computes the prices on recombining binomial trees, and is designed for modern multi-core processors. Although parallel option pricing has been well studied, none of the existing approaches takes transaction costs into consideration. The algorithm that we propose partitions a binomial tree into blocks. In any round of computation a block is further partitioned into regions which are assigned to distinct processors. To minimise load imbalance the assignment of nodes to processors is dynamically adjusted before each new round starts. Synchronisation is required both within a round and between two successive rounds. The parallel speedup of the algorithm is proportional to the number of processors used. The parallel algorithm was implemented in C/C++ via POSIX Threads, and was tested on a machine with 8 processors. In the pricing ...
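
    For orientation, a serial baseline of the recombining binomial tree (here Cox-Ross-Rubinstein, for an American put without transaction costs) is sketched below; the paper's contribution is to partition exactly this backward induction into blocks and regions across cores, and to add ask and bid prices under proportional transaction costs.

```python
# Serial CRR binomial-tree pricing of an American put (no transaction costs).
import math

def american_put_crr(S0, K, r, sigma, T, N):
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-move probability
    disc = math.exp(-r * dt)
    # Option values at maturity (level N of the recombining tree).
    V = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    for i in range(N - 1, -1, -1):         # backward induction, level by level
        for j in range(i + 1):
            cont = disc * (q * V[j + 1] + (1.0 - q) * V[j])   # continuation value
            exer = K - S0 * u**j * d**(i - j)                 # early-exercise value
            V[j] = max(cont, exer)
    return V[0]

print(american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500))
```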