WorldWideScience

Sample records for carlo mixture model

  1. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah; Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires a stage in which the most appropriate number of mixture components is identified, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and is used by researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under observation. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, which is not known with certainty, in Bayesian mixture modeling for microarray data in Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for Indonesian microarray data, where the number of components is not known with certainty.
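
    To make the fixed-dimension moves listed above concrete, here is a minimal, hypothetical sketch (not the authors' code) of three of them — w updating, z updating, and θ updating — for a univariate normal mixture with known variance and conjugate priors. The dimension-changing split-merge and birth/death moves that complete RJMCMC, and the hyperparameter β update, are omitted; all data and prior settings are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])  # toy data
    k, sigma2 = 2, 1.0            # fixed number of components, known variance
    m0, s0 = 0.0, 10.0            # prior: mu_j ~ N(m0, s0^2)
    alpha = np.ones(k)            # prior: w ~ Dirichlet(alpha)

    mu = rng.normal(m0, 1.0, k)
    w = np.full(k, 1.0 / k)

    for sweep in range(2000):
        # z updating: sample allocations from their full conditional
        logp = np.log(w) - 0.5 * (y[:, None] - mu) ** 2 / sigma2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (p.cumsum(axis=1) > rng.random((y.size, 1))).argmax(axis=1)

        # w updating: Dirichlet full conditional given component counts
        n = np.bincount(z, minlength=k)
        w = rng.dirichlet(alpha + n)

        # theta updating: conjugate normal full conditional for each mean
        for j in range(k):
            prec = 1 / s0**2 + n[j] / sigma2
            mean = (m0 / s0**2 + y[z == j].sum() / sigma2) / prec
            mu[j] = rng.normal(mean, np.sqrt(1 / prec))

    print("final posterior draw:", mu, w)
    ```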

  2. Phase-coexistence simulations of fluid mixtures by the Markov Chain Monte Carlo method using single-particle models

    KAUST Repository

    Li, Jun

    2013-09-01

    We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have accuracy in predicting gas-phase properties similar to that of more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has accuracy in the gas phase similar to that of traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze differences in their performance and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available. © 2013 Elsevier Inc.
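
    As background for the Gibbs-ensemble machinery used here, the following is a hedged sketch in standard textbook form (not the paper's implementation) of a Lennard-Jones pair energy and the Gibbs-NVT acceptance rule for transferring a particle between the two coexisting boxes; function and parameter names are placeholders.

    ```python
    import numpy as np

    def lj_energy(r, eps=1.0, sigma=1.0):
        """Lennard-Jones pair energy; eps and sigma are placeholder parameters."""
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6**2 - sr6)

    def accept_transfer(beta, n_src, v_src, n_dst, v_dst, dU_src, dU_dst, rng):
        """Gibbs-NVT acceptance for moving one particle from source to destination box.

        dU_src: energy change of the source box (particle removed),
        dU_dst: energy change of the destination box (particle inserted).
        """
        ratio = (n_src * v_dst) / ((n_dst + 1) * v_src) * np.exp(-beta * (dU_src + dU_dst))
        return rng.random() < min(1.0, ratio)
    ```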

  3. Critical behaviour and interfacial fluctuations in a phase-separating model colloid-polymer mixture: grand canonical Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Vink, R L C; Horbach, J [Institut fuer Physik, Johannes Gutenberg-Universitaet, D-55099 Mainz, Staudinger Weg 7 (Germany)]

    2004-09-29

    By using Monte Carlo simulations in the grand canonical ensemble, we investigate the bulk phase behaviour of a model colloid-polymer mixture, the so-called Asakura-Oosawa model. In this model the colloids and polymers are treated as spheres with hard-sphere colloid-colloid and colloid-polymer interactions and zero interaction between polymers. In order to circumvent the problem of low acceptance rates for colloid insertions, we introduce a cluster move in which a cluster of polymers is replaced by a colloid. We consider the transition from a colloid-poor to a colloid-rich phase, which is analogous to the gas-liquid transition in simple liquids. Successive umbrella sampling, recently introduced by Virnau and Mueller (2003 Preprint cond-mat/0306678), is used to access the phase-separated regime. We calculate the demixing binodal and the interfacial tension, also in the region close to the critical point. Finite size scaling techniques are used to accurately locate the critical point. Also investigated are the colloid density profiles in the phase-separated regime. We extract the interfacial thickness w from the latter profiles and demonstrate that the interfaces are subject to spatial fluctuations that can be understood by capillary wave theory. In particular, we find that, as predicted by capillary wave theory, w² diverges logarithmically with the size of the system parallel to the interface.
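
    For reference, the capillary-wave prediction invoked above can be written in its standard textbook form (ξ denotes a short-wavelength cutoff; this equation is background, not a result quoted from the paper):

    ```latex
    w^2 = w_0^2 + \frac{k_B T}{2\pi\gamma}\,\ln\!\left(\frac{L_\parallel}{\xi}\right)
    ```

    where γ is the interfacial tension, w_0 the intrinsic width, and L_∥ the lateral system size, so the squared interfacial width grows logarithmically with L_∥.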

  4. Monte Carlo mixture model of lifetime cancer incidence risk from radiation exposure on shuttle and international space station

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, L.E. [Chronic Disease Prevention and Control Research Center, Department of Medicine, Baylor College of Medicine, One Baylor Plaza, ST-924 Houston, TX (United States)]; Cucinotta, F.A. [Space and Life Sciences Directorate, Lyndon B. Johnson Space Center, National Aeronautics and Space Administration, Houston, TX (United States)]

    1999-12-06

    Estimating uncertainty in lifetime cancer risk for human exposure to space radiation is a unique challenge. Conventional risk assessment with low-linear-energy-transfer (LET)-based risk from Japanese atomic bomb survivor studies may be inappropriate for relativistic protons and nuclei in space due to track structure effects. This paper develops a Monte Carlo mixture model (MCMM) for transferring additive, National Institutes of Health multiplicative, and multiplicative excess cancer incidence risks based on Japanese atomic bomb survivor data to determine excess incidence risk for various US astronaut exposure profiles. The MCMM serves as an anchor point for future risk projection methods involving biophysical models of DNA damage from space radiation. Lifetime incidence risks of radiation-induced cancer for the MCMM based on low-LET Japanese data for nonleukemia (all cancers except leukemia) were 2.77 (90% confidence limit, 0.75-11.34) for males exposed to 1 Sv at age 45 and 2.20 (90% confidence limit, 0.59-10.12) for males exposed at age 55. For females, mixture model risks for nonleukemia exposed separately to 1 Sv at ages of 45 and 55 were 2.98 (90% confidence limit, 0.90-11.70) and 2.44 (90% confidence limit, 0.70-10.30), respectively. Risks for high-LET 200 MeV protons (LET=0.45 keV/µm), 1 MeV α-particles (LET=100 keV/µm), and 600 MeV iron particles (LET=180 keV/µm) were scored on a per particle basis by determining the particle fluence required for an average of one particle per cell nucleus of area 100 µm². Lifetime risk per proton was 2.68×10⁻²% (90% confidence limit, 0.79×10⁻³%-0.514×10⁻²%). For α-particles, lifetime risk was 14.2% (90% confidence limit, 2.5%-31.2%). Conversely, lifetime risk per iron particle was 23.7% (90% confidence limit, 4.5%-53.0%). Uncertainty in the DDREF for high-LET particles may be less than that for low-LET radiation because typically there is very little dose-rate dependence

  5. Monte Carlo mixture model of lifetime cancer incidence risk from radiation exposure on shuttle and international space station

    Science.gov (United States)

    Peterson, L. E.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)

    1999-01-01

    Estimating uncertainty in lifetime cancer risk for human exposure to space radiation is a unique challenge. Conventional risk assessment with low-linear-energy-transfer (LET)-based risk from Japanese atomic bomb survivor studies may be inappropriate for relativistic protons and nuclei in space due to track structure effects. This paper develops a Monte Carlo mixture model (MCMM) for transferring additive, National Institutes of Health multiplicative, and multiplicative excess cancer incidence risks based on Japanese atomic bomb survivor data to determine excess incidence risk for various US astronaut exposure profiles. The MCMM serves as an anchor point for future risk projection methods involving biophysical models of DNA damage from space radiation. Lifetime incidence risks of radiation-induced cancer for the MCMM based on low-LET Japanese data for nonleukemia (all cancers except leukemia) were 2.77 (90% confidence limit, 0.75-11.34) for males exposed to 1 Sv at age 45 and 2.20 (90% confidence limit, 0.59-10.12) for males exposed at age 55. For females, mixture model risks for nonleukemia exposed separately to 1 Sv at ages of 45 and 55 were 2.98 (90% confidence limit, 0.90-11.70) and 2.44 (90% confidence limit, 0.70-10.30), respectively. Risks for high-LET 200 MeV protons (LET=0.45 keV/micrometer), 1 MeV alpha-particles (LET=100 keV/micrometer), and 600 MeV iron particles (LET=180 keV/micrometer) were scored on a per particle basis by determining the particle fluence required for an average of one particle per cell nucleus of area 100 micrometer(2). Lifetime risk per proton was 2.68x10(-2)% (90% confidence limit, 0.79x10(-3)%-0.514x10(-2)%). For alpha-particles, lifetime risk was 14.2% (90% confidence limit, 2.5%-31.2%). Conversely, lifetime risk per iron particle was 23.7% (90% confidence limit, 4.5%-53.0%). Uncertainty in the DDREF for high-LET particles may be less than that for low-LET radiation because typically there is very little dose-rate dependence
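
    A small worked example of the per-particle scoring described above: the fluence that gives on average one particle per 100 µm² cell nucleus, combined with the standard fluence-to-dose conversion for a given LET (a textbook relation for unit-density tissue, not a formula quoted from the paper).

    ```python
    # Fluence for an average of one particle per cell nucleus of area 100 um^2
    area_cm2 = 100e-8          # 100 um^2 = 100 * 1e-8 cm^2
    fluence = 1.0 / area_cm2   # = 1e6 particles per cm^2

    # Standard conversion for unit-density tissue:
    # D[Gy] = 1.602e-9 * LET[keV/um] * fluence[1/cm^2]
    for name, let in [("200 MeV proton", 0.45),
                      ("1 MeV alpha", 100.0),
                      ("600 MeV iron", 180.0)]:
        dose = 1.602e-9 * let * fluence
        print(f"{name}: {dose:.2e} Gy at one particle per nucleus")
    ```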

  6. Coarse-grained models for fluids and their mixtures: Comparison of Monte Carlo studies of their phase behavior with perturbation theory and experiment

    OpenAIRE

    Mognetti, B. M.; Virnau, P.; Yelash, L.; Paul, W.; Binder, K.; Mueller, M.; MacDowell, L. G.

    2008-01-01

    The prediction of the equation of state and the phase behavior of simple fluids (noble gases, carbon dioxide, benzene, methane, short alkane chains) and their mixtures by Monte Carlo computer simulation and analytic approximations based on thermodynamic perturbation theory is discussed. Molecules are described by coarse grained (CG) models, where either the whole molecule (carbon dioxide, benzene, methane) or a group of a few successive CH_2 groups (in the case of alkanes) is lumped into an ...

  7. Gibbs ensemble Monte Carlo simulation using an optimized potential model: pure acetic acid and a mixture of it with ethylene.

    Science.gov (United States)

    Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing

    2016-07-01

    Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and their proportions were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84%) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field.
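
    The parameter-fitting step described above can be illustrated with a short, hypothetical least-squares fit of Lennard-Jones well depth and size to dimer interaction energies; the separations and energies below are synthetic numbers, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lj(r, eps, sigma):
        """Lennard-Jones potential with well depth eps and size sigma."""
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6**2 - sr6)

    # Hypothetical dimer separations (nm) and interaction energies (kJ/mol)
    r = np.array([0.30, 0.33, 0.36, 0.40, 0.45, 0.50])
    u = np.array([1.20, -0.90, -1.10, -0.80, -0.45, -0.25])

    (eps, sigma), _ = curve_fit(lj, r, u, p0=(1.0, 0.3))
    print(f"fitted eps = {eps:.3f} kJ/mol, sigma = {sigma:.3f} nm")
    ```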

  8. Density anomaly of charged hard spheres of different diameters in a mixture with core-softened model solvent. Monte Carlo simulation results

    Directory of Open Access Journals (Sweden)

    B. Hribar-Lee

    2013-01-01

    Very recently, the effect of equisized charged hard-sphere solutes in a mixture with a core-softened fluid model on the structural and thermodynamic anomalies of the system was explored in detail by using Monte Carlo simulations and integral equation theory (J. Chem. Phys., Vol. 137, 244502 (2012)). The objective of the present short work is to complement that study by considering univalent ions of unequal diameters in a mixture with the same soft-core fluid model. Specifically, we are interested in the analysis of changes of the temperature of maximum density (TMD) lines with ion concentration for three model salt solutes, namely sodium chloride, potassium chloride and rubidium chloride models. We resort to Monte Carlo simulations for this purpose. Our discussion also involves the dependences of the pair contribution to excess entropy and of the constant-volume heat capacity on the temperature of maximum density line. Some examples of the microscopic structure of the mixtures in question in terms of pair distribution functions are given in addition.

  9. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  10. Coarse-grained models for fluids and their mixtures: Comparison of Monte Carlo studies of their phase behavior with perturbation theory and experiment

    Science.gov (United States)

    Mognetti, B. M.; Virnau, P.; Yelash, L.; Paul, W.; Binder, K.; Müller, M.; MacDowell, L. G.

    2009-01-01

    The prediction of the equation of state and the phase behavior of simple fluids (noble gases, carbon dioxide, benzene, methane, and short alkane chains) and their mixtures by Monte Carlo computer simulation and analytic approximations based on thermodynamic perturbation theory is discussed. Molecules are described by coarse grained models, where either the whole molecule (carbon dioxide, benzene, and methane) or a group of a few successive CH2 groups (in the case of alkanes) is lumped into an effective point particle. Interactions among these point particles are fitted by Lennard-Jones (LJ) potentials such that the vapor-liquid critical point of the fluid is reproduced in agreement with experiment; in the case of quadrupolar molecules a quadrupole-quadrupole interaction is included. These models are shown to provide a satisfactory description of the liquid-vapor phase diagram of these pure fluids. Investigations of mixtures, using the Lorentz-Berthelot (LB) combining rule, also produce satisfactory results if compared with experiment, while in some previous attempts (in which polar solvents were modeled without explicitly taking into account quadrupolar interaction), strong violations of the LB rules were required. For this reason, the present investigation is a step towards predictive modeling of polar mixtures at low computational cost. In many cases Monte Carlo simulations of such models (employing the grand-canonical ensemble together with reweighting techniques, successive umbrella sampling, and finite size scaling) yield accurate results in very good agreement with experimental data. Simulation results are quantitatively compared to an analytical approximation for the equation of state of the same model, which is computationally much more efficient, and some systematic discrepancies are discussed. These very simple coarse-grained models of small molecules developed here should be useful, e.g., for simulations of polymer solutions with such molecules as

  11. Monte Carlo simulations of the XY vectorial Blume-Emery-Griffiths model in multilayer films for 3He-4He mixtures

    Science.gov (United States)

    Santos-Filho, J. B.; Plascak, J. A.

    2017-09-01

    The XY vectorial generalization of the Blume-Emery-Griffiths (XY-VBEG) model, which is suitable for application to the study of 3He-4He mixtures, is treated in a thin-film structure and its thermodynamical properties are analyzed as a function of the film thickness. We employ extensive and up-to-date Monte Carlo simulations consisting of hybrid algorithms combining lattice-gas moves, Metropolis, Wolff, and super-relaxation procedures to overcome the critical slowing down and correlations among different spin configurations of the system. We also make use of single histogram techniques to obtain the behavior of the thermodynamical quantities close to the corresponding transition temperatures. Thin films of the XY-VBEG model present a quite rich phase diagram with Berezinskii-Kosterlitz-Thouless (BKT) transitions, BKT endpoints, and isolated critical points. As one varies the impurity concentration along the layers, and in the limit of infinite film thickness, there is a coalescence of the BKT transition endpoint and the isolated critical point into a single, unique tricritical point. In addition, when mimicking the behavior of thin films of 3He-4He mixtures, one finds that the concentration of 3He atoms decreases from the outer layers to the inner layers of the film, meaning that the superfluid particles tend to be located in the bulk of the system.

  12. Mixture model modal clustering

    OpenAIRE

    Chacón, José E.

    2016-01-01

    The two most extended density-based approaches to clustering are surely mixture model clustering and modal clustering. In the mixture model approach, the density is represented as a mixture and clusters are associated to the different mixture components. In modal clustering, clusters are understood as regions of high density separated from each other by zones of lower density, so that they are closely related to certain regions around the density modes. If the true density is indeed in the as...

  13. Dynamical Dirichlet Mixture Model

    OpenAIRE

    Chen, Le; Barber, David; Odobez, Jean-Marc

    2007-01-01

    In this report, we propose a statistical model to deal with the discrete-distribution data varying over time. The proposed model -- HMM+DM -- extends the Dirichlet mixture model to the dynamic case: Hidden Markov Model with Dirichlet mixture output. Both the inference and parameter estimation procedures are proposed. Experiments on the generated data verify the proposed algorithms. Finally, we discuss the potential applications of the current model.

  14. Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation

    Science.gov (United States)

    Denton, Alan R.; Schmidt, Michael P.

    2013-03-01

    Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.

  15. Phase diagrams of hexadecane-CO2 mixtures from histogram-reweighting Monte Carlo

    Science.gov (United States)

    Virnau, P.; Müller, M.; González MacDowell, L.; Binder, K.

    2002-08-01

    We investigate the phase behaviour of a hexadecane-CO2 mixture with a coarse-grained off-lattice model. CO2 is described by a single Lennard-Jones sphere and hexadecane by a chain of five LJ monomers with additional FENE interactions. Interaction parameters are derived from the critical points of pure hexadecane and CO2 using a modified Lorentz-Berthelot mixing rule for the mixture. Simulations are based on grand-canonical histogram-reweighting Monte Carlo. A method to calculate interfacial tensions is described in detail. The analysis of the model includes simulated phase diagrams and interfacial tensions for pure hexadecane and CO2 as well as a general phase diagram with complete critical lines for their mixture. We find evidence that a small change of interaction parameters between different species leads to qualitatively different phase behavior.
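
    A minimal sketch of the single-histogram reweighting idea behind this approach, in its generic canonical form (the paper's grand-canonical version also reweights in chemical potential; the samples below are toy numbers, not simulation output):

    ```python
    import numpy as np

    def reweight_mean_energy(energies, beta_sim, beta_new):
        """Estimate <E> at beta_new from energies sampled at beta_sim."""
        # Subtracting the mean stabilizes the exponentials; the constant cancels.
        w = np.exp(-(beta_new - beta_sim) * (energies - energies.mean()))
        return np.sum(energies * w) / np.sum(w)

    E = np.random.default_rng(0).normal(-100.0, 5.0, 10_000)  # toy energy samples
    print(reweight_mean_energy(E, beta_sim=1.00, beta_new=1.05))
    ```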

  16. Shell model Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.]; Dean, D.J. [Oak Ridge National Lab., TN (United States)]

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  17. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and webtool fits Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time to event data that is commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling, and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  18. Adsorption equilibrium of light hydrocarbon mixtures by monte carlo simulation

    Directory of Open Access Journals (Sweden)

    V. F. Cabral

    2007-12-01

    The procedure presented by Cabral et al. (2003) was used to predict the adsorption of multicomponent mixtures of methane, ethane, propane, and n-butane adsorbed on Silicalite S-115 at 300 K. The methodology employed uses a molecular simulation algorithm for the grand canonical ensemble as an equation of state for the adsorbed phase. The adsorbent surface is modeled as a two-dimensional lattice in which solid heterogeneity is represented by two kinds of sites with different adsorption energies. In all cases presented, the simulations described well the adsorption characteristics of the systems.

  19. Monte Carlo simulation of model spin systems

    Indian Academy of Sciences (India)

    Three-dimensional Ising models and Heisenberg models are dealt with in some detail. Recent applications of the Monte Carlo method to spin glass systems and to the estimation of renormalisation group critical exponents are reviewed. Keywords: Monte Carlo simulation; critical phenomena; Ising models; Heisenberg models ...

  20. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among

  1. Unrestricted Mixture Models for Class Identification in Growth Mixture Modeling

    Science.gov (United States)

    Liu, Min; Hancock, Gregory R.

    2014-01-01

    Growth mixture modeling has gained much attention in applied and methodological social science research recently, but the selection of the number of latent classes for such models remains a challenging issue, especially when the assumption of proper model specification is violated. The current simulation study compared the performance of a linear…

  2. Lattice model for water-solute mixtures.

    Science.gov (United States)

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as functions of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  3. Dealing with Label Switching in Mixture Models Under Genuine Multimodality

    OpenAIRE

    Grün, Bettina; Leisch, Friedrich

    2008-01-01

    The fitting of finite mixture models is an ill-defined estimation problem as completely different parameterizations can induce similar mixture distributions. This leads to multiple modes in the likelihood which is a problem for frequentist maximum likelihood estimation, and complicates statistical inference of Markov chain Monte Carlo draws in Bayesian estimation. For the analysis of the posterior density of these draws a suitable separation into different modes is desirable. In addition, a u...

  4. Mixture Model Averaging for Clustering

    OpenAIRE

    Wei, Yuhong; McNicholas, Paul D.

    2012-01-01

    In mixture model-based clustering applications, it is common to fit several models from a family and report clustering results from only the `best' one. In such circumstances, selection of this best model is achieved using a model selection criterion, most often the Bayesian information criterion. Rather than throw away all but the best model, we average multiple models that are in some sense close to the best one, thereby producing a weighted average of clustering results. Two (weighted) ave...

  5. Multilevel Mixture Factor Models

    Science.gov (United States)

    Varriale, Roberta; Vermunt, Jeroen K.

    2012-01-01

    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  6. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  7. Bayesian Repulsive Gaussian Mixture Model

    OpenAIRE

    Xie, Fangzheng; Xu, Yanxun

    2017-01-01

    We develop a general class of Bayesian repulsive Gaussian mixture models that encourage well-separated clusters, aiming at reducing potentially redundant components produced by independent priors for locations (such as the Dirichlet process). The asymptotic results for the posterior distribution of the proposed models are derived, including posterior consistency and posterior contraction rate in the context of nonparametric density estimation. More importantly, we show that compared to the in...

  8. Accelerated Hazards Mixture Cure Model

    Science.gov (United States)

    Zhang, Jiajia; Peng, Yingwei

    2010-01-01

    We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model where the covariate effects on the proportion of cure and the distribution of the failure time of uncured patients are separately modeled. Unlike the existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful in some cancer treatments where the treatment effect increases gradually from zero; the existing models usually cannot handle this situation properly. We develop a rank-based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature. PMID:19697127

  9. Effect of bond-disorder on the phase-separation kinetics of binary mixtures: A Monte Carlo simulation study

    Science.gov (United States)

    Singh, Awaneesh; Singh, Amrita; Chakraborti, Anirban

    2017-09-01

    We present Monte Carlo (MC) simulation studies of phase separation in binary (AB) mixtures with bond-disorder that is introduced in two different ways: (i) at randomly selected lattice sites and (ii) at regularly selected sites. The Ising model with spin exchange (Kawasaki) dynamics represents the segregation kinetics in conserved binary mixtures. We find that the dynamical scaling changes significantly by varying the number of disordered sites in the case where bond-disorder is introduced at the randomly selected sites. On the other hand, when we introduce the bond-disorder in a regular fashion, the system follows the dynamical scaling for a modest number of disordered sites. For a higher number of disordered sites, the evolution morphology illustrates lamellar pattern formation. Our MC results are consistent with the Lifshitz-Slyozov power-law growth in all the cases.
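
    A minimal, hypothetical sketch of the simulation ingredient named above: Metropolis spin-exchange (Kawasaki) dynamics on a 2D Ising lattice, with bond disorder introduced at randomly selected bonds. Lattice size, temperature, and disorder parameters are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, beta = 32, 0.6
    spins = rng.choice([-1, 1], size=(L, L))   # A/B mixture encoded as +/-1 spins
    J = np.ones((L, L, 2))                     # bond strengths: right and down bonds
    weak = rng.random((L, L, 2)) < 0.1         # disorder at randomly selected bonds
    J[weak] = 0.5

    def site_energy(s, i, j):
        """Energy of spin (i,j) with its four neighbours (periodic boundaries)."""
        e = -J[i, j, 0] * s[i, j] * s[i, (j + 1) % L]            # right bond
        e -= J[i, j, 1] * s[i, j] * s[(i + 1) % L, j]            # down bond
        e -= J[i, (j - 1) % L, 0] * s[i, j] * s[i, (j - 1) % L]  # left bond
        e -= J[(i - 1) % L, j, 1] * s[i, j] * s[(i - 1) % L, j]  # up bond
        return e

    for step in range(100_000):
        i, j = rng.integers(L, size=2)
        di, dj = [(0, 1), (1, 0), (0, -1), (-1, 0)][rng.integers(4)]
        i2, j2 = (i + di) % L, (j + dj) % L
        if spins[i, j] == spins[i2, j2]:
            continue                           # exchanging equal spins changes nothing
        e_old = site_energy(spins, i, j) + site_energy(spins, i2, j2)
        spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]
        e_new = site_energy(spins, i, j) + site_energy(spins, i2, j2)
        if rng.random() >= np.exp(-beta * (e_new - e_old)):      # Metropolis test
            spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]  # reject: revert
    ```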

  10. Sparse Gaussian graphical mixture model

    OpenAIRE

    ANANI, Lotsi; WIT, Ernst

    2016-01-01

    This paper considers the problem of networks reconstruction from heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well known that parameter estimation in this context is challenging due to large numbers of variables coupled with the degenerate nature of the likelihood. We propose as a solution a penalized maximum likelihood technique by imposing an l1 penalty on the precision matrix. Our approach shrinks the parameters thereby resulting in better identifiability and v...

  11. MONTE CARLO SIMULATION OF PHASE EQUILIBRIA IN ISING FLUIDS AND THEIR MIXTURES

    Directory of Open Access Journals (Sweden)

    W.Fenz

    2003-01-01

    The mean field theory for the pure Ising fluid was recently extended to binary mixtures of an Ising and a van der Waals fluid. Depending on the relative interaction strengths, their three-dimensional phase diagrams show lines of tricritical consolute and plait points, lines of critical end points, and magnetic consolute point lines. Our current efforts are to compare these mean field results with different Monte Carlo simulation techniques, investigating both first-order (liquid-vapor and demixing) and second-order (paramagnetic-ferromagnetic) phase transitions. We show the resulting ρ,T phase diagrams of the pure Ising fluid for different magnetic interaction strengths R and constant-pressure cross-sections of the x,T,p phase diagrams of Ising mixtures for different relative interaction strengths. The methods we have used include Gibbs Ensemble MC, Multihistogram Reweighting, Hyper-parallel Tempering, the cumulant intersection method, and the newly developed Density of States MC technique.

  12. Grand canonical Monte Carlo simulations of phase equilibria of pure silicon tetrachloride and its binary mixture with carbon dioxide

    Science.gov (United States)

    Suleimenov, O. M.; Panagiotopoulos, A. Z.; Seward, T. M.

    Grand canonical histogram-reweighting Monte Carlo simulations were used to obtain the phase behaviour of pure silicon tetrachloride and its binary mixture with carbon dioxide. Two new potential models for pure silicon tetrachloride were developed and parametrized to the vapour-liquid coexistence properties. The first model, with one exponential-6 site and fixed electrostatic charges on atoms, does not adequately reproduce the experimental phase behaviour due to its inability to represent orientational anisotropy in the liquid phase. The second potential model, with five exponential-6 sites for the repulsive and dispersive interactions plus partial charges, accurately reproduces experimental saturated liquid and vapour densities as well as vapour pressures and the second virial coefficient for pure silicon tetrachloride. This model was used in simulations of the phase behaviour of the binary mixture carbon dioxide-silicon tetrachloride. Two sets of combining rules (Lorentz-Berthelot and Kong [1973, J. chem. Phys., 59, 2464]) were used to obtain unlike-pair potential parameters. For the binary system, the predicted phase diagram is in good agreement with experiment when the Kong combining rules are used. The Lorentz-Berthelot rules significantly overestimate the solubility of carbon dioxide in silicon tetrachloride.

  13. A generalized mixture model applied to diabetes incidence data.

    Science.gov (United States)

    Zuanetti, Daiane Aparecida; Milan, Luis Aparecido

    2017-07-01

    We present a generalization of the usual (independent) mixture model to accommodate a Markovian first-order mixing distribution. We propose the data-driven reversible jump, a Markov chain Monte Carlo (MCMC) procedure, for estimating the a posteriori probability of each model in a model selection procedure and estimating the corresponding parameters. Simulated datasets show excellent performance of the proposed method in convergence, model selection, and precision of parameter estimates. Finally, we apply the proposed method to analyze USA diabetes incidence datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  15. Mixture Modeling of Individual Learning Curves

    Science.gov (United States)

    Streeter, Matthew

    2015-01-01

    We show that student learning can be accurately modeled using a mixture of learning curves, each of which specifies error probability as a function of time. This approach generalizes Knowledge Tracing [7], which can be viewed as a mixture model in which the learning curves are step functions. We show that this generality yields order-of-magnitude…

  16. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  17. Improving Power System Risk Evaluation Method Using Monte Carlo Simulation and Gaussian Mixture Method

    Directory of Open Access Journals (Sweden)

    GHAREHPETIAN, G. B.

    2009-06-01

    The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation, and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) is used to analyze the risk, and the Gaussian Mixture Method (GMM) is used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index are used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis is studied as well. The improved risk assessment method has been applied to the IEEE 118-bus system and the network of Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity, and generation unavailability conditions on blackout risk has also been investigated.
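
    A hedged sketch of the estimation step described above: fitting a Gaussian mixture to Monte Carlo samples of load curtailment to obtain a smooth PDF. The samples are synthetic and scikit-learn is used for convenience; this is not the paper's implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Hypothetical MCS output: load curtailment (MW) from sampled contingencies
    curtailment = np.concatenate([rng.exponential(5.0, 800),
                                  rng.normal(60.0, 8.0, 200)])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(curtailment.reshape(-1, 1))
    grid = np.linspace(0.0, 100.0, 500).reshape(-1, 1)
    pdf = np.exp(gmm.score_samples(grid))  # score_samples returns the log-density

    # Crude risk index: estimated probability of curtailment exceeding 40 MW
    dx = (grid[1] - grid[0]).item()
    print("P(curtailment > 40 MW) ≈", pdf[grid.ravel() > 40].sum() * dx)
    ```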

  18. Recursive unsupervised learning of finite mixture models

    NARCIS (Netherlands)

    Zivkovic, Z.; van der Heijden, Ferdinand

    2004-01-01

    There are two open problems when finite mixture densities are used to model multivariate data: the selection of the number of components and the initialization. In this paper, we propose an online (recursive) algorithm that estimates the parameters of the mixture and that simultaneously selects the

  19. On the mixture model for multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Manninen, M.; Taivassalo, V. [VTT Energy, Espoo (Finland). Nuclear Energy]; Kallio, S. [Aabo Akademi, Turku (Finland)]

    1996-12-31

    Numerical flow simulation utilising a full multiphase model is impractical for a suspension possessing wide distributions in the particle size or density. Various approximations are usually made to simplify the computational task. In the simplest approach, the suspension is represented by a homogeneous single-phase system and the influence of the particles is taken into account in the values of the physical properties. This study concentrates on the derivation and closing of the model equations. The validity of the mixture model is also carefully analysed. Starting from the continuity and momentum equations written for each phase in a multiphase system, the field equations for the mixture are derived. The mixture equations largely resemble those for a single-phase flow but are represented in terms of the mixture density and velocity. The volume fraction for each dispersed phase is solved from a phase continuity equation. Various approaches applied in closing the mixture model equations are reviewed. An algebraic equation is derived for the velocity of a dispersed phase relative to the continuous phase. Simplifications made in calculating the relative velocity restrict the applicability of the mixture model to cases in which the particles reach the terminal velocity in a short time period compared to the characteristic time scale of the flow of the mixture. (75 refs.)
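
    For concreteness, the central mixture quantities referred to above are conventionally defined as follows (standard mixture-model definitions consistent with the report's description; the notation is illustrative):

    ```latex
    \rho_m = \sum_k \alpha_k \rho_k , \qquad
    \mathbf{u}_m = \frac{1}{\rho_m} \sum_k \alpha_k \rho_k \mathbf{u}_k , \qquad
    \mathbf{u}_{ck} = \mathbf{u}_k - \mathbf{u}_c ,
    ```

    where α_k, ρ_k, and u_k are the volume fraction, density, and velocity of phase k, u_c is the continuous-phase velocity, and the relative (slip) velocity u_ck is the quantity obtained from the algebraic closure mentioned above.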

  20. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic, cadmium, and lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic, cadmium, and lead. Oxidative damage and the antioxidant response were determined. Treatment with the metal mixture increases damage markers and the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Cotreatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while in the promotion phase a total block of the transforming capacity was observed. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  1. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with a nonlinear time series model using the Monte-Carlo Bootstrap method. This new method gives better results in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and the Monte-Carlo method of forecasting ...

  2. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with nonlinear time series model using Monte-Carlo Bootstrap method. This new method gives better result in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and Monte-Carlo method of forecasting using a ...

  3. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  4. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians...

  5. Phase Diagram of Hydrogen and a Hydrogen-Helium Mixture at Planetary Conditions by Quantum Monte Carlo Simulations.

    Science.gov (United States)

    Mazzola, Guglielmo; Helled, Ravit; Sorella, Sandro

    2018-01-12

    Understanding planetary interiors is directly linked to our ability of simulating exotic quantum mechanical systems such as hydrogen (H) and hydrogen-helium (H-He) mixtures at high pressures and temperatures. Equation of state (EOS) tables based on density functional theory are commonly used by planetary scientists, although this method allows only for a qualitative description of the phase diagram. Here we report quantum Monte Carlo (QMC) molecular dynamics simulations of pure H and H-He mixture. We calculate the first QMC EOS at 6000 K for a H-He mixture of a protosolar composition, and show the crucial influence of He on the H metallization pressure. Our results can be used to calibrate other EOS calculations and are very timely given the accurate determination of Jupiter's gravitational field from the NASA Juno mission and the effort to determine its structure.

  6. Modelling Placebo Response via Infinite Mixtures

    Science.gov (United States)

    Petkova, Eva

    2010-01-01

    Non-specific treatment response, also known as placebo response, is ubiquitous in the treatment of mental illness, particularly in treating depression. The study of placebo effect is complicated because the factors that constitute non-specific treatment effects are latent and not directly observed. A flexible infinite mixture model is introduced to model these nonspecific treatment effects. The infinite mixture model stipulates that the non-specific treatment effects are continuous and this is contrasted with a finite mixture model that is based on the assumption that the non-specific treatment effects are discrete. Data from a depression clinical trial is used to illustrate the model and to study the evolution of the placebo effect over the course of treatment. PMID:21804745

  7. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  8. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.

  9. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  10. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  11. Exact Fit of Simple Finite Mixture Models

    Directory of Open Access Journals (Sweden)

    Dirk Tasche

    2014-11-01

    How to forecast next year's portfolio-wide credit default rate based on last year's default observations and the current score distribution? A classical approach to this problem consists of fitting a mixture of the conditional score distributions observed last year to the current score distribution. This is a special (simple) case of a finite mixture model where the mixture components are fixed and only the weights of the components are estimated. The optimum weights provide a forecast of next year's portfolio-wide default rate. We point out that the maximum-likelihood (ML) approach to fitting the mixture distribution not only gives an optimum but even an exact fit if we allow the mixture components to vary but keep their density ratio fixed. From this observation we can conclude that the standard default rate forecast based on last year's conditional default rates will always be located between last year's portfolio-wide default rate and the ML forecast for next year. As an application example, cost quantification is then discussed. We also discuss how the mixture model based estimation methods can be used to forecast total loss. This involves the reinterpretation of an individual classification problem as a collective quantification problem.
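
    A minimal sketch of the weight-only fit described above: maximum-likelihood estimation of the mixing weight when the two component densities are fixed, via an EM iteration on the single weight parameter. The conditional score densities and the data below are synthetic, not the article's.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    # Fixed components: last year's conditional score densities
    f_d = norm(-1.0, 1.0).pdf   # hypothetical defaulter score density
    f_n = norm(1.0, 1.0).pdf    # hypothetical non-defaulter score density

    # Current scores: true default rate 10% (unknown to the estimator)
    scores = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 900)])

    w = 0.5                      # initial guess for the portfolio-wide default rate
    for _ in range(200):         # EM updates on the weight alone
        post = w * f_d(scores) / (w * f_d(scores) + (1 - w) * f_n(scores))
        w = post.mean()
    print(f"ML default-rate forecast: {w:.3f}")
    ```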

  12. Models of network reliability analysis, combinatorics, and Monte Carlo

    CERN Document Server

    Gertsbakh, Ilya B

    2009-01-01

    Unique in its approach, Models of Network Reliability: Analysis, Combinatorics, and Monte Carlo provides a brief introduction to Monte Carlo methods along with a concise exposition of reliability theory ideas. From there, the text investigates a collection of principal network reliability models, such as terminal connectivity for networks with unreliable edges and/or nodes, network lifetime distribution in the process of its destruction, network stationary behavior for renewable components, importance measures of network elements, reliability gradient, and network optimal reliability synthesis

  13. Multilevel Growth Mixture Models for Classifying Groups

    Science.gov (United States)

    Palardy, Gregory J.; Vermunt, Jeroen K.

    2010-01-01

    This article introduces a multilevel growth mixture model (MGMM) for classifying both the individuals and the groups they are nested in. Nine variations of the general model are described that differ in terms of categorical and continuous latent variable specification within and between groups. An application in the context of school effectiveness…

  14. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose

    2013-01-01

    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection...

  15. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious...

  17. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation

    Science.gov (United States)

    Shi, X.; Zhao, Q. H.

    2017-09-01

Image segmentation based on the Gaussian Mixture Model (GMM) suffers from two problems: 1) the number of components is usually fixed in advance, i.e., a fixed number of classes, and 2) the GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines the GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, a GMM models the distribution of pixel intensities in the RS image, the number of components is treated as a random variable, and a prior distribution is built for each parameter. To improve noise resistance, a Gibbs function models the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate it and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and yields an ideal segmentation.
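The RJMCMC sampler itself is too involved for a short sketch, but the intensity model it operates on is a plain GMM over pixel values. A minimal fixed-K stand-in fitted by EM is shown below (scikit-learn assumed; the paper instead treats K as random, samples it with RJMCMC, and smooths the weights with a Gibbs prior), with invented toy data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy "image": pixel intensities drawn from three classes plus noise.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 8.0, 2000) for m in (40, 120, 200)])

# Fixed-K GMM over pixel intensities fitted by EM; the paper's method would
# additionally let K vary and sample it via RJMCMC for noise-robust results.
gmm = GaussianMixture(n_components=3, random_state=0).fit(img.reshape(-1, 1))
labels = gmm.predict(img.reshape(-1, 1))   # per-pixel class assignments
print(gmm.means_.ravel())                  # estimated class intensity means
```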

  19. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration

    2017-01-01

    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.

  20. Spatially explicit dynamic N-mixture models

    Science.gov (United States)

    Zhao, Qing; Royle, Andy; Boomer, G. Scott

    2017-01-01

    Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.

  1. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM

    1998-01-01

We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample…

  2. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  3. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R

    2002-01-01

Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guide geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increase while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.

  4. Self-assembly models for lipid mixtures

    Science.gov (United States)

    Singh, Divya; Porcar, Lionel; Butler, Paul; Perez-Salas, Ursula

    2006-03-01

Solutions of mixed long and short (detergent-like) phospholipids, referred to as "bicelle" mixtures in the literature, are known to form a variety of different morphologies based on their total lipid composition and temperature in a complex phase diagram. Some of these morphologies have been found to orient in a magnetic field, and consequently bicelle mixtures are widely used to study the structure of soluble as well as membrane-embedded proteins using NMR. In this work, we report on the low-temperature phase of the DMPC and DHPC bicelle mixture, where there is agreement on the discoid structures but where molecular packing models are still being contested. The most widely accepted packing arrangement, first proposed by Vold and Prosser, has the lipids completely segregated: DHPC in the rim and DMPC in the planar body of the disk. Using data from small angle neutron scattering (SANS) experiments, we show how the radius of the planar domain of the disks is governed by the effective molar ratio qeff of lipids in the aggregate and not the nominal molar ratio q (q = [DMPC]/[DHPC]) as has been understood previously. We propose a new quantitative (packing) model and show that in this self-assembly scheme, qeff is the real determinant of disk sizes. Based on qeff, a master equation can then scale the radii of disks from mixtures with varying q and total lipid concentration.

  5. Supervised and Unsupervised Classification Using Mixture Models

    Science.gov (United States)

    Girard, S.; Saracco, J.

    2016-05-01

This chapter is dedicated to model-based supervised and unsupervised classification. Probability distributions are defined over possible labels as well as over the observations given the labels. To this end, the basic tools are mixture models. This methodology yields a posterior distribution over the labels given the observations, which allows one to quantify the uncertainty of the classification. The role of Gaussian mixture models is emphasized, leading to the Linear Discriminant Analysis and Quadratic Discriminant Analysis methods. Some links with Fisher Discriminant Analysis and logistic regression are also established. The Expectation-Maximization algorithm is introduced and compared to the K-means clustering method. The methods are illustrated on both simulated and real datasets using the R software.
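A small illustration of the EM-versus-K-means comparison the chapter draws, sketched in Python (scikit-learn assumed) rather than R for consistency with the other examples here; the data and settings are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Two elongated Gaussian clusters: a GMM fitted by EM can capture their
# covariances, while K-means implicitly assumes spherical, equal-size clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.multivariate_normal([0, 0], [[4, 1.8], [1.8, 1]], 300),
               rng.multivariate_normal([4, 0], [[4, -1.8], [-1.8, 1]], 300)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The GMM yields soft posterior label probabilities (quantified uncertainty);
# K-means yields only hard assignments.
print(gmm.predict_proba(X[:3]).round(2))
print(km.labels_[:3])
```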

  6. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate...

  7. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

Knowledge of the thermodynamic properties and phase equilibria of mixtures containing carbon dioxide (CO2) is important in several industrial processes such as enhanced oil recovery, carbon capture and storage, and supercritical extractions, where CO2 is used as a solvent. Despite this importance, accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be due to the fact that CO2 has a large quadrupole moment which the classical models … complicated due to parameter identifiability issues. In an attempt to quantify and illustrate these issues, the uncertainties in the pure compound parameters of CO2 were investigated using qCPA as well as different CPA approaches. The approaches employ between three and five parameters. The uncertainties…

  8. Mixture-model-based signal denoising

    OpenAIRE

    Samé, Allou; Oukhellou, Latifa; Côme, Etienne; Aknin, Patrice

    2007-01-01

This paper proposes a new signal denoising methodology for dealing with asymmetrical noises. The adopted strategy is based on a regression model where the noise is supposed to be additive and distributed following a mixture of Gaussian densities. The parameter estimation is performed using a Generalized EM (GEM) algorithm. Experimental studies on simulated and real signals in the context of a diagnosis application in the railway domain reveal that the proposed approach…

  9. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    Science.gov (United States)

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  10. Monte Carlo Simulations Probing the Adsorptive Separation of Hydrogen Sulfide/Methane Mixtures Using All-Silica Zeolites.

    Science.gov (United States)

    Shah, Mansi S; Tsapatsis, Michael; Siepmann, J Ilja

    2015-11-10

    Selective removal of hydrogen sulfide (H2S) from sour natural gas mixtures is one of the key challenges facing the natural gas industry. Adsorption and pervaporation processes utilizing nanoporous materials, such as zeolites, can be alternatives to highly energy-intensive amine-based absorption processes. In this work, the adsorption behavior of binary mixtures containing H2S and methane (CH4) in seven different all-silica zeolite frameworks (CHA, DDR, FER, IFR, MFI, MOR, and MWW) is investigated using Gibbs ensemble Monte Carlo simulations at two temperatures (298 and 343 K) and pressures ranging from 1 to 50 bar. The simulations demonstrate high selectivities that, with the exception of MOR, increase with increasing H2S concentration due to favorable sorbate-sorbate interactions. The simulations indicate significant inaccuracies of predictions using unary adsorption data and ideal adsorbed solution theory. In addition, the adsorption of binary H2S/H2O mixtures in MFI is considered to probe whether the presence of H2S induces coadsorption and reduces the hydrophobic character of all-silica zeolites. The simulations show preferential adsorption of H2S from moist gases with a selectivity of about 18 over H2O.
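For reference, the H2S/CH4 adsorption selectivity reported in such simulation studies is customarily defined from the adsorbed-phase mole fractions x and the bulk-gas mole fractions y (a standard definition, assumed here rather than quoted from the paper):

    S_{\mathrm{H_2S/CH_4}} = \frac{x_{\mathrm{H_2S}}/x_{\mathrm{CH_4}}}{y_{\mathrm{H_2S}}/y_{\mathrm{CH_4}}}

so that S > 1 indicates preferential uptake of H2S by the framework.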

  11. Microscopic structure and interaction analysis for supercritical carbon dioxide-ethanol mixtures: a Monte Carlo simulation study.

    Science.gov (United States)

    Xu, Wenhao; Yang, Jichu; Hu, Yinyu

    2009-04-09

Configurational-bias Monte Carlo simulations in the isobaric-isothermal ensemble using the TraPPE-UA force field were performed to study the microscopic structures and molecular interactions of mixtures containing supercritical carbon dioxide (scCO(2)) and ethanol (EtOH). The binary vapor-liquid coexistence curves were calculated at 298.17, 333.2, and 353.2 K and are in excellent agreement with experimental results. For the first time, three important interactions, i.e., EtOH-EtOH hydrogen bonding, EtOH-CO(2) hydrogen bonding, and EtOH-CO(2) electron donor-acceptor (EDA) bonding, in the mixtures were fully analyzed and compared. The effects of EtOH mole fraction, temperature, and pressure on the three interactions were investigated and then explained by the competition of interactions between EtOH and CO(2) molecules. Analysis of the microscopic structures indicates a strong preference for the formation of EtOH-CO(2) hydrogen-bonded tetramers and pentamers at higher EtOH compositions. The distribution of aggregation sizes and types shows that a very large EtOH-EtOH hydrogen-bonded network exists in the mixtures, while only linear EtOH-CO(2) hydrogen-bonded and EDA-bonded dimers and trimers are present. Further analysis shows that the EtOH-CO(2) EDA complex is more stable than the hydrogen-bonded one.

  12. Modeling abundance using multinomial N-mixture models

    Science.gov (United States)

    Royle, Andy

    2016-01-01

Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh and other classes of models that can only be described within the multinomial N-mixture framework.
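As a concrete instance, a removal protocol with T passes and per-pass detection probability p yields multinomial cell probabilities pi_t = p(1-p)^(t-1); marginalizing the latent abundance N over a Poisson prior gives the integrated likelihood. A hedged sketch of that computation follows (this is not the BUGS or unmarked code used in the chapter; all values are illustrative).

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

def removal_loglik(y, lam, p, n_max=500):
    """Log-likelihood of removal counts y = (y_1..y_T) under a multinomial
    N-mixture: N ~ Poisson(lam); y | N ~ Multinomial with cell probabilities
    pi_t = p (1-p)^(t-1) and "never captured" probability (1-p)^T."""
    T = len(y)
    pi = p * (1 - p) ** np.arange(T)        # capture probability in each pass
    ytot = y.sum()
    obs = np.sum(y * np.log(pi)) - np.sum(gammaln(y + 1))
    ll = -np.inf
    for N in range(ytot, n_max):            # marginalize latent abundance N
        term = (poisson.logpmf(N, lam)
                + gammaln(N + 1) - gammaln(N - ytot + 1)
                + (N - ytot) * T * np.log(1 - p) + obs)
        ll = np.logaddexp(ll, term)         # stable log-sum-exp accumulation
    return ll

y = np.array([30, 18, 10])                  # counts removed in three passes
print(removal_loglik(y, lam=80.0, p=0.35))
```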

  13. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    Energy Technology Data Exchange (ETDEWEB)

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  14. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
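A minimal sketch of the paper's idea, fitting three Gaussians to RR-interval data with scikit-learn; the data below are invented stand-ins for real HRV recordings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic RR-interval series (ms) standing in for a real HRV recording.
rng = np.random.default_rng(3)
rr = np.concatenate([rng.normal(850, 25, 1500),
                     rng.normal(900, 40, 1000),
                     rng.normal(800, 15, 500)]).reshape(-1, 1)

# Three Gaussians, as the paper finds sufficient for stationary HRV statistics.
gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.0f} ms  sd={np.sqrt(v):.0f} ms")
```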

  15. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivity…

  16. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    van der Stoep, A.W.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    We present in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance,

  17. Quantum Monte Carlo study of quasiparticles in the Hubbard model

    NARCIS (Netherlands)

    Linden, W. von der; Morgenstern, I.; Raedt, H. de

    1990-01-01

    We present an improved version of the projector quantum Monte Carlo method, which has recently been proposed. This scheme allows a very precise computation of the ground-state energy of fermionic models. The “minus sign” has been treated without further approximations and does not influence the

  18. Monte Carlo investigation of the one-dimensional Potts model

    Energy Technology Data Exchange (ETDEWEB)

    Karma, A.S.; Nolan, M.J.

    1983-02-01

    Monte Carlo results are presented for a variety of one-dimensional dynamical q-state Potts models. Our calculations confirm the expected universal value z = 2 for the dynamic scaling exponent. Our results also indicate that an increase in q at fixed correlation length drives the dynamics into the scaling regime.

  19. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  20. Motor simulation via coupled internal models using sequential Monte Carlo

    OpenAIRE

    Dindo H; Zambuto D.; Pezzulo G.

    2011-01-01

    We describe a generative Bayesian model for action understanding in which inverse-forward internal model pairs are considered 'hypotheses' of plausible action goals that are explored in parallel via an approximate inference mechanism based on sequential Monte Carlo methods. The reenactment of internal model pairs can be considered a form of motor simulation, which supports both perceptual prediction and action understanding at the goal level. However, this procedure is generally considered to...

  1. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.

  2. Dirichlet-vMF Mixture Model

    OpenAIRE

    Li, Shaohua

    2017-01-01

This document is about the multi-document von Mises-Fisher mixture model with a Dirichlet prior, referred to as VMFMix. VMFMix is analogous to Latent Dirichlet Allocation (LDA) in that both can capture the co-occurrence patterns across multiple documents. The difference is that in VMFMix, the topic-word distribution is defined on a continuous n-dimensional hypersphere. Hence VMFMix is used to derive topic embeddings, i.e., representative vectors, from multiple sets of embedding vectors. An ef…

  3. Modeling the flow of activated H2 + CH4 mixture by deposition of diamond nanostructures

    Directory of Open Access Journals (Sweden)

    Plotnikov Mikhail

    2017-01-01

Full Text Available An algorithm of the direct simulation Monte Carlo method for the flow of a hydrogen and methane mixture in a cylindrical channel is developed. Heterogeneous reactions on the tungsten channel surfaces are included in the model, and their effects on the flow are analyzed. A one-dimensional approach based on the solution of equilibrium chemical kinetics equations is used to analyze gas-phase methane decomposition. The obtained results may be useful for the optimization of gas-dynamic sources of activated gas for diamond synthesis.

4. Perfect posterior simulation for mixture and hidden Markov models

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Breyer, Laird A.; Roberts, Gareth O.

    2010-01-01

In this paper we present an application of the read-once coupling from the past algorithm to problems in Bayesian inference for latent statistical models. We describe a method for perfect simulation from the posterior distribution of the unknown mixture weights in a mixture model. Our method is extended to a more general mixture problem, where unknown parameters exist for the mixture components, and to a hidden Markov model.

  5. The phase behavior of a hard sphere chain model of a binary n-alkane mixture

    Energy Technology Data Exchange (ETDEWEB)

    Malanoski, A. P. [Department of Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Monson, P. A. [Department of Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2000-02-08

Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united atom, hard sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials for the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. © 2000 American Institute of Physics.

  6. Monte Carlo modeling of pigmented lesions

    Science.gov (United States)

    Gareau, Daniel; Jacques, Steven; Krueger, James

    2014-03-01

    Colors observed in clinical dermoscopy are critical to diagnosis but the mechanisms that lead to the spectral components of diffuse reflectance are more than meets the eye: combinations of the absorption and scattering spectra of the biomolecules as well as the "structural color" effect of skin anatomy. We modeled diffuse remittance from skin based on histopathology. The optical properties of the tissue types were based on the relevant chromophores and scatterers. The resulting spectral images mimic the appearance of pigmented lesions quite well when the morphology is mathematically derived but limited when based on histopathology, raising interesting questions about the interaction between various wavelengths with various pathological anatomical features.

  7. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
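The flavor of the method can be conveyed on a scalar SDE, where geometric Brownian motion stands in for the infinite-dimensional HJM dynamics of the paper; the time-discretization error and the statistical error distinguished in the abstract correspond to the bias of the Euler step and the confidence half-width below. A hedged sketch, with all settings illustrative:

```python
import numpy as np

# Monte Carlo-Euler estimate of E[g(X_T)] for dX = a X dt + b X dW.
def mc_euler(a, b, x0, T, n_steps, n_paths, g, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + a * x * dt + b * x * dw            # Euler-Maruyama step
    est = g(x).mean()
    half_width = 1.96 * g(x).std(ddof=1) / np.sqrt(n_paths)  # statistical error
    return est, half_width

est, err = mc_euler(0.05, 0.2, 1.0, 1.0, 64, 100_000, lambda x: x)
print(est, "+/-", err, "exact:", np.exp(0.05))     # E[X_T] = x0 * exp(a*T)
```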

  8. Model Selection Methods for Mixture Dichotomous IRT Models

    Science.gov (United States)

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  9. Probabilistic drought classification using gamma mixture models

    Science.gov (United States)

    Mallya, Ganeshchandra; Tripathi, Shivam; Govindaraju, Rao S.

    2015-07-01

Drought severity is commonly reported using drought classes obtained by assigning pre-defined thresholds on drought indices. Current drought classification methods ignore modeling uncertainties and provide discrete drought classification. However, the users of drought classification are often interested in knowing inherent uncertainties in classification so that they can make informed decisions. Recent studies have used hidden Markov models (HMM) for quantifying uncertainties in drought classification. The HMM method conceptualizes drought classes as distinct hydrological states that are not observed (hidden) but affect observed hydrological variables. The number of drought classes or hidden states in the model is pre-specified, which can sometimes result in a model over-specification problem. This study proposes an alternate method for probabilistic drought classification where the number of states in the model is determined by the data. The proposed method adapts the Standard Precipitation Index (SPI) methodology of drought classification by employing a gamma mixture model (Gamma-MM) in a Bayesian framework. The method alleviates the problem of choosing a suitable distribution for fitting data in SPI analysis, quantifies modeling uncertainties, and propagates them for probabilistic drought classification. The method is tested on rainfall data over India. Comparison of the results with standard SPI shows important differences, particularly when SPI assumptions on data distribution are violated. Further, the new method is simpler and more parsimonious than the HMM-based drought classification method and can be a viable alternative for probabilistic drought classification.
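For context, the standard SPI step that the Gamma-MM generalizes fits a single gamma distribution to the precipitation sample and maps its CDF through the standard-normal quantile function. A minimal sketch (zero-rainfall handling and monthly stratification are omitted; the data are synthetic):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Minimal SPI: fit a gamma distribution to precipitation and transform
    its CDF values through the standard-normal quantile function."""
    a, loc, scale = stats.gamma.fit(precip, floc=0.0)   # fix location at 0
    u = stats.gamma.cdf(precip, a, loc=0.0, scale=scale)
    return stats.norm.ppf(u)

rain = stats.gamma.rvs(2.0, scale=50.0, size=600, random_state=4)
print(spi(rain)[:5].round(2))   # negative SPI values indicate drier-than-median
```

The paper's Gamma-MM replaces the single fitted gamma with a Bayesian gamma mixture whose number of components is learned from the data.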

  10. A Fast Incremental Gaussian Mixture Model.

    Directory of Open Access Journals (Sweden)

    Rafael Coimbra Pinto

Full Text Available This work builds upon previous efforts in online incremental learning, namely the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of learning from data streams in a single pass by improving its model after analyzing each data point and discarding it thereafter. Nevertheless, it suffers from the scalability point of view, due to its asymptotic time complexity of O(NKD³) for N data points, K Gaussian components and D dimensions, rendering it inadequate for high-dimensional data. In this work, we manage to reduce this complexity to O(NKD²) by deriving formulas for working directly with precision matrices instead of covariance matrices. The final result is a much faster and scalable algorithm which can be applied to high-dimensional tasks. This is confirmed by applying the modified algorithm to high-dimensional classification datasets.

  11. Grand canonical Monte Carlo using solvent repacking: Application to phase behavior of hard disk mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Kindt, James T., E-mail: jkindt@emory.edu [Department of Chemistry, Emory University, Atlanta, Georgia 30322 (United States)

    2015-09-28

A new “solvent repacking Monte Carlo” strategy for performing grand canonical ensemble simulations in condensed phases is introduced and applied to the study of hard-disk systems. The strategy is based on the configuration-bias approach, but uses an auxiliary biasing potential to improve the efficiency of packing multiple solvent particles in the cavity formed by removing one large solute. The method has been applied to study the coexistence of ordered and isotropic phases in three binary mixtures of hard disks with a small mole fraction (x_L < 0.02) of the larger “solute” component. A chemical potential of 12.81 ± 0.01 k_B T was found to correspond to the freezing transition of the pure hard disk “solvent.” Simulations permitted the study of partitioning of large disks between ordered and isotropic phases, which showed a distinct non-monotonic dependence on size; the isotropic phase was enriched approximately 10-fold, 20-fold, and 5-fold over the coexisting ordered phases at diameter ratios d = 1.4, 2.5, and 3, respectively. Mixing of large and small disks within both phases near coexistence was strongly non-ideal in spite of the dilution. Structures of systems near coexistence were analyzed to determine correlations between large disks’ positions within each phase, the orientational correlation length of small disks within the fluid phases, and the nature of translational order in the ordered phase. The analyses indicate that the ordered phase coexists with an isotropic phase resembling a nanoemulsion of ordered domains of small disks, with large disks enriched at the disordered domain interfaces.

  12. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    Science.gov (United States)

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  14. A Monte Carlo approach for the bouncer model

    Science.gov (United States)

    Díaz, Gabriel; Yoshida, Makoto; Leonel, Edson D.

    2017-11-01

A Monte Carlo investigation is made of a dissipative bouncer model to describe some statistical properties of the chaotic dynamics as a function of the control parameters. The dynamics of the system is described via a two-dimensional mapping for the variables velocity of the particle and phase of the moving wall at the instant of impact. A small stochastic noise is introduced in the time of flight of the particle in an attempt to investigate the evolution of the system without the need to solve transcendental equations. We show that average values along the chaotic dynamics do not depend strongly on the noise size. This allows us to propose a Monte Carlo-like simulation that yields average values for the observables with great accuracy and fast simulations.

  15. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services, such as formation lithology/mineralogy, are also provided using complex nuclear logging tools. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transport and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  16. GLIMMIX : Software for estimating mixtures and mixtures of generalized linear models

    NARCIS (Netherlands)

    Wedel, M

    2001-01-01

    GLIMMIX is a commercial WINDOWS-based computer program that implements the EM algorithm (Dempster, Laird and Rubin 1977) for the estimation of finite mixtures and mixtures of generalized linear models. The program allows for the specification of a number of distributions in the exponential family,

  17. Smoothed particle hydrodynamics model for phase separating fluid mixtures. II. Diffusion in a binary mixture

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    A previously formulated smoothed particle hydrodynamics model for a phase separating mixture is tested for the case when viscous processes are negligible and only mass and energy diffusive processes take place. We restrict ourselves to the case of a binary mixture that can exhibit liquid-liquid

  18. Monte Carlo modeling of human tooth optical coherence tomography imaging

    Science.gov (United States)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-07-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.

  19. Density matrix Monte Carlo modeling of quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian

    2017-10-01

    By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.

  20. A fitter use of Monte Carlo simulations in regression models

    Directory of Open Access Journals (Sweden)

    Alessandro Ferrarini

    2011-12-01

Full Text Available In this article, I focus on the use of Monte Carlo simulations (MCS) within regression models, an application that is very frequent in biology, ecology and economics. I am interested in highlighting a typical fault in this application of MCS, namely that the inner correlations among independent variables are not used when generating random numbers that fit their distributions. By means of an illustrative example, I provide proof that this misuse of MCS in regression models produces misleading results. Furthermore, I also provide a solution for this problem.
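The fault and its fix can be shown in a few lines: sampling each predictor from its marginal destroys the correlation structure, whereas drawing from the estimated joint distribution (a multivariate normal is assumed here purely for illustration) preserves it.

```python
import numpy as np

# "Observed" predictors with a strong built-in correlation (true rho = 0.75).
rng = np.random.default_rng(5)
X = rng.multivariate_normal([10.0, 5.0], [[4.0, 3.0], [3.0, 4.0]], 500)

mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)

# Faulty MCS: each variable sampled independently from its marginal.
naive = np.column_stack([rng.normal(mu[j], np.sqrt(cov[j, j]), 10_000)
                         for j in range(2)])          # correlations lost
# Corrected MCS: sample from the estimated joint distribution.
joint = rng.multivariate_normal(mu, cov, 10_000)      # correlations kept

print(np.corrcoef(naive.T)[0, 1].round(2))   # ~0.00
print(np.corrcoef(joint.T)[0, 1].round(2))   # ~0.75, as in the data
```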

  1. MOLECULAR SIMULATION OF THE VAPOR-LIQUID EQUILIBRIUM OF N2-NC5 MIXTURE BY MONTE CARLO SIMULATIONS

    Directory of Open Access Journals (Sweden)

    Florianne Castillo-Borja

    2013-12-01

Full Text Available This study used Monte Carlo simulations in the Gibbs ensemble to describe the liquid-vapor phase equilibrium of the nitrogen-n-pentane system for three isotherms, analyzing a wide range of pressures up to 25 MPa. The system was modeled using the Galassi-Tildesley intermolecular potential for nitrogen and the SKS potential for n-pentane, and the results were compared against experimental data. Far from the critical point region, the analyzed models favorably reproduce the shape of the phase equilibrium curve, while in the vicinity of the critical point the results tend to move away from the experimental behavior. Critical points (pressure, density and composition) were determined for the three isotherms using an extrapolation method based on scaling laws, with satisfactory results. The calculated coexistence curves are adequate even though the analyzed models do not contain optimized binary interaction parameters.

  2. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…

  3. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December 2011, and compute dollar losses and implied standard deviation losses. We compare our results to those of existing mixture models and other benchmarks like component models and jump models. Using the model confidence set test, the overall dollar root mean squared error of the best performing benchmark model is significantly larger than that of the best mixture model.

  4. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

Full Text Available Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov Chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines off-line tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well-suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.

  5. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil

    2015-01-01

We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra, obtained from the Monte Carlo simulations, are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS is employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...

  6. Monte Carlo shell model studies with massively parallel supercomputers

    Science.gov (United States)

    Shimizu, Noritaka; Abe, Takashi; Honma, Michio; Otsuka, Takaharu; Togashi, Tomoaki; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru

    2017-06-01

    We present an overview of the advanced Monte Carlo shell model (MCSM), including its recent applications to no-core shell-model calculations and to large-scale shell-model calculations (LSSM) in the usual sense. For the ab initio no-core MCSM we show recent methodological developments, which include the evaluation of energy eigenvalues in an infinitely large model space by an extrapolation method. As an example of the application of the no-core MCSM, the cluster structure of Be isotopes is discussed. Regarding LSSM applications, the triple shape coexistence in 68Ni and 70Ni and the shape transition of Zr isotopes are clarified with the visualization of the intrinsic deformation of the MCSM wave function. General aspects of the code development of the MCSM on massively parallel computers are also briefly described.

  7. Quantum Monte Carlo study of the Rabi-Hubbard model

    Science.gov (United States)

    Flottat, Thibaut; Hébert, Frédéric; Rousseau, Valéry G.; Batrouni, George Ghassan

    2016-10-01

We study, using quantum Monte Carlo (QMC) simulations, the ground-state properties of a one-dimensional Rabi-Hubbard model. The model consists of a lattice of Rabi systems coupled by a photon hopping term between near-neighbor sites. For large enough coupling between photons and atoms, the phase diagram generally consists of only two phases: a coherent phase and a compressible incoherent one separated by a quantum phase transition (QPT). We show that, as one goes deeper in the coherent phase, the system becomes unstable, exhibiting a divergence of the number of photons. The Mott phases which are present in the Jaynes-Cummings-Hubbard model are not observed in these cases due to the presence of non-negligible counter-rotating terms. We show that these two models become equivalent only when the detuning is negative and large enough, or if the counter-rotating terms are small enough.

  8. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    Science.gov (United States)

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  9. A Multilevel Mixture IRT Model with an Application to DIF

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.

    2010-01-01

    Mixture item response theory models have been suggested as a potentially useful methodology for identifying latent groups formed along secondary, possibly nuisance dimensions. In this article, we describe a multilevel mixture item response theory (IRT) model (MMixIRTM) that allows for the possibility that this nuisance dimensionality may function…

  10. A Kinetic Model for Gas Mixtures Based on a Fokker-Planck Equation

    Science.gov (United States)

    Gorji, Hossein; Jenny, Patrick

    2012-05-01

We present a generalized nonlinear Fokker-Planck equation, which describes the dynamics of rarefied monatomic gas mixture flows. The devised kinetic model leads to correct transfer of energy and momentum between gas species and to consistent evolution of molecular stresses and heat fluxes with respect to the generalized Boltzmann equation. Thus, the correct diffusion coefficient together with the mixture viscosity and mixture heat conductivity coefficients are obtained. The strength of the presented model lies in its computational efficiency, which is due to the fact that the resulting stochastic processes are continuous in time. Therefore, unlike in Direct Simulation Monte Carlo (DSMC), simulated particles here do not collide with each other, but move along independent continuous stochastic paths. Another aspect of the new Fokker-Planck model equation is that the effect of collisions is described via drift- and diffusion-type processes. Accordingly, a scheme can be derived for which the time step size limitation of the corresponding numerical simulation becomes independent of the Knudsen number. Consequently, this leads to more efficient simulations, especially at low or intermediate Knudsen numbers. Results are presented for a helium-argon mixture in a one-dimensional geometry. The calculated mixture viscosity is found to be in accordance with experimental data, which reveals the accuracy and relevance of the approach.

  11. Monte Carlo modeling of spallation targets containing uranium and americium

    Science.gov (United States)

    Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter

    2014-09-01

Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). A good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on 241Am and 243Am nuclei allows this model to be used for simulations with extended Am targets. It was demonstrated that the MCADS model can be used for calculating the values of critical mass for 233,235U, 237Np, 239Pu and 241Am. Several geometry options and material compositions (U, U + Am, Am, Am2O3) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deep subcritical targets having a neutron multiplication factor of k∼0.5. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.

  12. Evaluation of fecal mRNA reproducibility via a marginal transformed mixture modeling approach

    Directory of Open Access Journals (Sweden)

    Davidson Laurie A

    2010-01-01

Full Text Available Abstract Background Developing and evaluating new technology that enables researchers to recover gene-expression levels of colonic cells from fecal samples could be key to a non-invasive screening tool for early detection of colon cancer. The current study, to the best of our knowledge, is the first to investigate and report the reproducibility of fecal microarray data. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility and the preliminary analysis of fecal and mucosal data, we assessed the reliability of mixture density estimation and the reproducibility of fecal microarray data. Using Monte Carlo-based methods, we explored whether ICC values should be modeled as a beta-mixture or transformed first and fitted with a normal-mixture. We used outcomes from bootstrapped goodness-of-fit tests to determine which approach is less sensitive toward potential violation of distributional assumptions. Results The graphical examination of both the distributions of ICC and probit-transformed ICC (PT-ICC) clearly shows that there are two components in the distributions. For ICC measurements, which are between 0 and 1, the practice in the literature has been to assume that the data points are from a beta-mixture distribution. Nevertheless, in our study we show that the use of a normal-mixture modeling approach on PT-ICC could provide superior performance. Conclusions When modeling ICC values of gene expression levels, using a mixture of normals on the probit-transformed (PT) scale is less sensitive toward model mis-specification than using a mixture of betas. We show that a biased conclusion could be made if we follow the traditional approach and model the two sets of ICC values using a mixture of betas directly. The problematic estimation arises from the sensitivity of beta-mixtures toward model mis-specification, particularly when there are observations in the neighborhood of the boundary points, 0 or 1. Since beta-mixture modeling…
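A small sketch of the transform-then-fit approach the authors favor, using scikit-learn on synthetic ICC values; the real analysis is Monte Carlo-based and considerably more elaborate.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Two-component ICC sample on (0, 1): a low-reproducibility and a
# high-reproducibility group (synthetic stand-in for the fecal-array ICCs).
rng = np.random.default_rng(6)
icc = np.concatenate([rng.beta(2, 8, 400), rng.beta(12, 2, 400)])
icc = np.clip(icc, 1e-6, 1 - 1e-6)          # keep away from the boundaries

pt_icc = norm.ppf(icc)                      # probit transform to the real line
gmm = GaussianMixture(n_components=2, random_state=0).fit(pt_icc.reshape(-1, 1))
print(gmm.weights_.round(2), gmm.means_.ravel().round(2))
```

Fitting normals on the probit scale sidesteps the boundary sensitivity of beta-mixtures that the abstract warns about.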

  13. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    Science.gov (United States)

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided. © 2012 Society for Risk Analysis. All rights reserved.
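
    A toy sketch of the two-stage logic, with a stand-in Gaussian log-posterior rather than the emergent dose-response likelihood: stage one climbs the gradient to a posterior mode, stage two starts a Metropolis chain from that mode instead of discarding a burn-in period.

        import numpy as np

        rng = np.random.default_rng(2)

        def log_post(theta):
            return -0.5 * np.sum((theta - 3.0) ** 2)    # toy Gaussian posterior

        def grad(theta, h=1e-5):
            # central finite differences; a real GMCMC would use analytic gradients
            return np.array([(log_post(theta + h * e) - log_post(theta - h * e)) / (2 * h)
                             for e in np.eye(theta.size)])

        theta = np.zeros(2)
        for _ in range(200):                             # stage 1: climb to the mode
            theta = theta + 0.1 * grad(theta)

        chain = [theta]
        for _ in range(5000):                            # stage 2: Metropolis from the mode
            prop = chain[-1] + 0.5 * rng.standard_normal(2)
            accept = np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1])
            chain.append(prop if accept else chain[-1])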

  14. Modelling interactions in grass-clover mixtures

    NARCIS (Netherlands)

    Nassiri Mahallati, M.

    1998-01-01

    The study described in this thesis focuses on a quantitative understanding of the complex interactions in binary mixtures of perennial ryegrass (Lolium perenne L.) and white clover (Trifolium repens L.) under cutting. The first part of the study describes the dynamics of growth, production

  15. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    National Research Council Canada - National Science Library

    Haro, Gloria; Randal, Gregory; Sapiro, Guillermo

    2007-01-01

    .... The basic idea relies on modeling the high dimensional sample points as a process of Translated Poisson mixtures, with regularizing restrictions, leading to a model which includes the presence of noise...

  16. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  17. Monte Carlo modelling of Schottky diode for rectenna simulation

    Science.gov (United States)

    Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.

    2017-09-01

    Before designing a detector circuit, the extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect or tunneling. The weight of the tunneling and thermionic currents is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted from the forward-regime I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.

  18. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov, Nikolay V. [Department of Geography, Boston University, 675 Commonwealth Avenue, Boston, MA 02215 (United States)]. E-mail: shabanov@bu.edu; Huang, D. [Brookhaven National Laboratory, Environmental Sciences Department, P.O. Box 5000, Upton, NY 11973 (United States); Knjazikhin, Y. [Department of Geography, Boston University, 675 Commonwealth Avenue, Boston, MA 02215 (United States); Dickinson, R.E. [School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Myneni, Ranga B. [Department of Geography, Boston University, 675 Commonwealth Avenue, Boston, MA 02215 (United States)

    2007-09-15

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy that exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities that are direct inputs to remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute a major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model)...

  19. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    Full Text Available The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
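
    The end-use sampling scheme lends itself to a compact sketch. All distributions and magnitudes below are illustrative placeholders, not the calibrated San Ramon parameters:

        import numpy as np

        rng = np.random.default_rng(3)
        n_households = 1000

        # Each household draws its fixture parameters from assumed distributions.
        shower_lpm = rng.lognormal(mean=2.0, sigma=0.3, size=n_households)   # flow, L/min
        shower_min = rng.gamma(shape=4.0, scale=2.0, size=n_households)      # duration, min
        toilet_lpf = rng.choice([6.0, 9.0, 13.0], p=[0.5, 0.3, 0.2], size=n_households)
        flushes = rng.poisson(10.0, size=n_households)                       # flushes/day

        daily_use = shower_lpm * shower_min + toilet_lpf * flushes           # L/household/day
        print(f"mean daily indoor use: {daily_use.mean():.0f} L")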

  20. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    ...i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and numbers of dynamic components and includes the static one-component model as a special case. We exploit that the WMM...... framework provides model selection by quantifying the models' generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting, for each window length, the predictive...... performance of dFC models to their static counterparts and choosing the window length having the largest difference as most favorable for characterizing dFC. On synthetic data we find that generalizability is influenced by window length and signal-to-noise ratio. Too long windows cause dynamic states to be mixed...

  1. Monte Carlo Computational Modeling of Atomic Oxygen Interactions

    Science.gov (United States)

    Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.

    2017-01-01

    Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environment durability. Successful modeling requires that the characteristics of the environment, such as atomic oxygen energy distribution, flux, and angular distribution, be properly represented in the model. Thus, whether the atomic oxygen arrives normal or inclined to a surface, and whether it arrives from a consistent direction or sweeps across the surface (as in the case of polymeric solar array blankets), is important for determining durability. When atomic oxygen impacts a polymer surface it can react, removing a certain volume per incident atom (called the erosion yield), recombine, or be ejected as an active oxygen atom to potentially either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides, such as SiO2, to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival, how it reacts with a representative polymer (polyimide Kapton H), and how it reacts at defect sites in an oxide protective coating, such as SiO2, on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components, including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two-dimensional model...

  2. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating whether they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of a continuous potential, taking into account thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle's transverse motion determines whether or n...

  3. Exploring a Parasite-Host Model with Monte Carlo Simulations

    Science.gov (United States)

    Breecher, Nyles; Dong, Jiajia

    2011-03-01

    We explore parasite-host interactions, a less investigated subset of the well-established predator-prey model. In particular, it is not well known how the numerous parameters of the system affect its characteristics. Parasite-host systems rely on spatial interaction, as a parasite must make physical contact with the host to reproduce. Using a Monte Carlo simulation programmed in C++, we study how the speed and type of movement of the host affect the spatial and temporal distribution of the parasites. Drawing on mean-field theory, we find the exact solution for the parasite distribution with a stationary host at the center and analyze the distributions for a moving host. The findings of the study reveal the rich behavior of a non-equilibrium system and bring insights to pest control and, on a larger scale, the spreading of epidemics.

  4. Bonus-Malus System Using Finite Mixture Models

    Directory of Open Access Journals (Sweden)

    Saeed MohammadPour

    2017-09-01

    Full Text Available There is a vast literature on Bonus-Malus Systems (BMS), in which policyholders responsible for positive claims are penalised by a malus and policyholders who had no claims are rewarded by a bonus. In this paper, we present an optimal BMS using finite mixture models. We conduct a numerical study to compare the new model with current BMSs that use finite mixture models.

  5. Extending Growth Mixture Models Using Continuous Non-Elliptical Distributions

    OpenAIRE

    Wei, Yuhong; Tang, Yang; Shireman, Emilie; McNicholas, Paul D.; Steinley, Douglas L.

    2017-01-01

    Growth mixture models (GMMs) incorporate both conventional random effects growth modeling and latent trajectory classes as in finite mixture modeling; therefore, they offer a way to handle the unobserved heterogeneity between subjects in their development. GMMs with Gaussian random effects dominate the literature. When the data are asymmetric and/or have heavier tails, more than one latent class is required to capture the observed variable distribution. Therefore, a GMM with continuous non-el...

  6. Monte Carlo modeling of spatial coherence: free-space diffraction.

    Science.gov (United States)

    Fischer, David G; Prahl, Scott A; Duncan, Donald D

    2008-10-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions.
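
    The source-synthesis step can be sketched directly: draw correlated Gaussian variates whose covariance matrix encodes the desired spatial coherence function (here an assumed Gaussian coherence profile), which is the core of the copula construction before any marginal transformation:

        import numpy as np

        rng = np.random.default_rng(4)
        x = np.linspace(-1.0, 1.0, 128)
        # Assumed Gaussian coherence function with coherence length 0.1.
        coh = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * 0.1 ** 2))
        chol = np.linalg.cholesky(coh + 1e-8 * np.eye(x.size))   # jitter for stability

        field = chol @ rng.standard_normal((x.size, 1000))       # 1000 realizations
        coh_est = field @ field.T / 1000.0                       # sample coherence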

  7. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    Science.gov (United States)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

    At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is being used. However, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication but a shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); however, Inter-Symbol Interference is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in a chlorophyll-rich environment the red wavelength is absorbed less than the blue and green wavelengths.
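
    The core sampling loop of such a channel model is short. The sketch below draws exponential free paths with attenuation c = a + b and absorbs photons with probability a/c at each interaction; angular redistribution by the phase function is deliberately omitted, and the IOP values are placeholders:

        import numpy as np

        rng = np.random.default_rng(5)

        def received_fraction(a, b, link_length, n_photons=20_000):
            c = a + b                         # beam attenuation coefficient, m^-1
            alive = 0
            for _ in range(n_photons):
                travelled = 0.0
                while True:
                    travelled += rng.exponential(1.0 / c)
                    if travelled >= link_length:
                        alive += 1            # photon crossed the link
                        break
                    if rng.uniform() < a / c:
                        break                 # absorbed at the interaction point
                    # else scattered; direction change ignored in this sketch
            return alive / n_photons

        print(received_fraction(a=0.05, b=0.15, link_length=10.0))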

  8. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro

    2017-01-01

    We present a novel kinetic Monte Carlo version of the atomistic valence force field algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more detail compared to traditional kinetic Monte Carlo models...

  9. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
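
    Whatever the optimizer (EM or DREAM), the quantity being maximized is the same mixture log-likelihood. A sketch with toy forecasts, assuming Gaussian predictive kernels:

        import numpy as np
        from scipy.stats import norm

        def bma_log_likelihood(weights, sigmas, f, y):
            # f: (n_obs, n_models) member forecasts; y: (n_obs,) verifying observations
            dens = norm.pdf(y[:, None], loc=f, scale=sigmas[None, :])
            return np.sum(np.log(dens @ weights))

        rng = np.random.default_rng(6)
        f = rng.normal(size=(100, 3)) + 20.0          # toy ensemble forecasts
        y = f.mean(axis=1) + rng.normal(scale=0.5, size=100)
        print(bma_log_likelihood(np.array([0.4, 0.3, 0.3]), np.array([1.0, 1.2, 0.8]), f, y))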

  10. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for...

  11. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    Science.gov (United States)

    2008-10-01

    ...assessment, which in turn will facilitate the identification of “bad actors” in the mixture. Identification of potential toxic components and their mode... Competitive metabolic inhibition is described by RAM_x = V_max,x c_x / [K_m,x (1 + c_tot/K_x) + c_x]. • Identification of “bad actors” • Delivered dose predictions... Exposure is delivered via an implanted osmotic pump that provides a constant rate of chemical over a period of time. A total of 157 male rats will be used, resulting in euthanasia...

  12. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  13. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    Science.gov (United States)

    2007-09-01

    Translated Poisson Mixture Model for Stratification Learning. Gloria Haro, Dept. Teoria ...stratification learning. We show experiments with synthetic and real data in Section 5, including comparisons with critical literature, and finally...

  14. A quantum-statistical-mechanical extension of Gaussian mixture model

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, K [Graduate School of Information Sciences, Tohoku University, 6-3-09 Aramaki-aza-aoba, Aoba-ku, Sendai 980-8579 (Japan); Tsuda, K [Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tuebingen (Germany)], E-mail: kazu@smapip.is.tohoku.ac.jp

    2008-01-15

    We propose an extension of Gaussian mixture models from the statistical-mechanical point of view. Conventional Gaussian mixture models are formulated to divide all points in given data into classes. We introduce quantum states constructed by superposing conventional classes in linear combinations. Our extension provides a new algorithm for the classification of data by means of linear response formulas in statistical mechanics.

  15. Monte Carlo modelling of an extended DXA technique

    Science.gov (United States)

    Michael, G. J.; Henderson, C. J.

    1998-09-01

    The precision achieved in measuring bone mineral density (BMD) by commercial dual-energy x-ray absorptiometry (DXA) machines is typically better than 1%, but accuracy is considerably worse. Errors of up to 10%, due to inhomogeneous distributions of fat, have been reported. These errors arise because the DXA technique assumes a two-component model for the human body, i.e. bone mineral and soft tissue. This paper describes an extended DXA technique that uses a three-component model of human tissue and significantly reduces errors due to inhomogeneous fat distribution. In addition to two x-ray transmission measurements, a measurement of the path length of the x-ray beam within the patient is required. This provides a third equation, i.e. T = t_l + t_b + t_f, where T, t_l, t_b and t_f are the total, lean soft tissue, bone mineral and fatty tissue thicknesses respectively. Monte Carlo modelling was undertaken to make a comparison of the standard and extended DXA techniques in the presence of inhomogeneous fat distribution. Two geometries of varying complexity were simulated. In each case the extended DXA technique produced BMD measurements that were independent of soft tissue composition, whereas the standard technique produced BMD measurements that were strongly dependent on soft tissue composition. For example, in one case, the gradient of the plot of BMD versus fractional fat content was substantial for standard DXA but negligible for extended DXA. In all cases the extended DXA method produced more accurate but less precise results than the standard DXA technique.
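
    The three-equation inversion amounts to a small linear solve. The attenuation coefficients below are invented placeholders chosen only to make the system well-posed; a real system would use calibrated values at the two beam energies:

        import numpy as np

        # Rows: effective attenuation coefficients (lean, bone, fat) at the two
        # beam energies; the third equation enforces t_l + t_b + t_f = T.
        A = np.array([[0.35, 0.80, 0.30],     # low-energy coefficients (assumed)
                      [0.20, 0.30, 0.18],     # high-energy coefficients (assumed)
                      [1.00, 1.00, 1.00]])    # measured total path length T

        t_true = np.array([12.0, 2.0, 6.0])   # lean, bone, fat thicknesses, cm
        b = A @ t_true                         # synthetic "measurements"
        t_lean, t_bone, t_fat = np.linalg.solve(A, b)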

  16. Monte Carlo model for electron degradation in xenon gas

    CERN Document Server

    Mukundan, Vrinda

    2016-01-01

    We have developed a Monte Carlo model for studying the local degradation of electrons in the energy range 9-10000 eV in xenon gas. Analytically fitted forms of the electron impact cross sections for elastic and various inelastic processes are fed as input data to the model. A two-dimensional numerical yield spectrum, which gives information on the number of energy loss events occurring in a particular energy interval, is obtained as output of the model. The numerical yield spectrum is fitted analytically, thus obtaining an analytical yield spectrum. The analytical yield spectrum can be used to calculate electron fluxes, which can be further employed for the calculation of volume production rates. Using the yield spectrum, the mean energy per ion pair and the efficiencies of inelastic processes are calculated. The value of the mean energy per ion pair for Xe is 22 eV at 10 keV. Ionization dominates for incident energies greater than 50 eV and is found to have an efficiency of 65% at 10 keV. The efficiency of the excitation process is 30%...

  17. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  18. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    Science.gov (United States)

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  19. Numerical simulation of asphalt mixtures fracture using continuum models

    Science.gov (United States)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  20. Relationship between Diffusion and Chemical Exchange in Mixtures of Carbon Dioxide and an Amine-Functionalized Ionic Liquid by High Field NMR and Kinetic Monte Carlo Simulations.

    Science.gov (United States)

    Hazelbaker, Eric D; Budhathoki, Samir; Wang, Han; Shah, Jindal; Maginn, Edward J; Vasenkov, Sergey

    2014-05-15

    NMR exchange spectroscopy (EXSY) and NMR diffusion spectroscopy (PFG NMR) were applied in combination with kinetic Monte Carlo (KMC) simulations to investigate self-diffusion in a mixture of carbon dioxide and an amine-functionalized ionic liquid under conditions of an exchange of carbon dioxide molecules between the reacted and unreacted states in the mixture. EXSY studies enabled residence times of carbon dioxide molecules to be obtained in the two states, whereas PFG NMR revealed time-dependent effective diffusivities for diffusion times comparable with and larger than the residence times. Analytical treatment of the PFG NMR attenuation curves as well as fitting of the PFG NMR effective diffusivities by KMC simulations enabled determination of the diffusivities of carbon dioxide in the reacted and unreacted states. In contrast to carbon dioxide, the ion diffusivities were found to be independent of diffusion time.

  1. Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation

    Science.gov (United States)

    This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...

  2. Monte Carlo modeling and optimization of buffer gas positron traps

    Science.gov (United States)

    Marjanović, Srđan; Petrović, Zoran Lj

    2017-02-01

    Buffer gas positron traps have been used for over two decades as the prime source of slow positrons enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models as developed for physics of low temperature collision dominated plasmas, or physics of swarms to model basic performance and principles of operation of gas filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections that were mostly derived experimentally by using the same type of traps that are being studied. Our model represents in realistic geometry and fields the development of the positron ensemble from the initial beam provided by the solid neon moderator through voltage drops between the stages of the trap and through different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV so that the trap operates under conditions when excitation of the nitrogen reduces the energy of the initial beam to trap the positrons without giving them a chance to become annihilated following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator, it is accelerated by the voltage drops and forms beams at several distinct energies. In final stages the low energy loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressure and voltage drops and also by selecting to operate in a two stage mode. The model allows quantitative comparisons and test of optimization as well as development of other properties.

  3. Modeling low-coherence enhanced backscattering using Monte Carlo simulation.

    Science.gov (United States)

    Subramanian, Hariharan; Pradhan, Prabhakar; Kim, Young L; Liu, Yang; Li, Xu; Backman, Vadim

    2006-08-20

    Constructive interference between coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light lambda to the transport mean-free-path length l(s)* of a random medium. In biological media a large l(s)* (approximately 0.5-2 mm) > lambda results in an extremely small (approximately 0.001 degrees) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length Lsc < l(s)*) has been demonstrated; low spatial coherence suppresses the contribution of long path lengths and thus results in an increase of more than 100 times in the angular width of low-coherence EBS (LEBS) cones. However, a conventional diffusion approximation-based model of EBS has not been able to explain such a dramatic increase in LEBS width. We present a photon random walk model of LEBS by using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of the LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with the experimental data.

  4. Mixture toxicity in the marine environment: Model development and evidence for synergism at environmental concentrations.

    Science.gov (United States)

    Deruytter, David; Baert, Jan M; Nevejan, Nancy; De Schamphelaere, Karel A C; Janssen, Colin R

    2017-12-01

    Little is known about the effect of metal mixtures on marine organisms, especially after exposure to environmentally realistic concentrations. This information is, however, required to evaluate the need to include mixtures in future environmental risk assessment procedures. We assessed the effect of copper (Cu)-Nickel (Ni) binary mixtures on Mytilus edulis larval development using a full factorial design that included environmentally relevant metal concentrations and ratios. The reproducibility of the results was assessed by repeating this experiment 5 times. The observed mixture effects were compared with the effects predicted with the concentration addition model. Deviations from the concentration addition model were estimated using a Markov chain Monte-Carlo algorithm. This enabled the accurate estimation of the deviations and their uncertainty. The results demonstrated reproducibly that the type of interaction-synergism or antagonism-mainly depended on the Ni concentration. Antagonism was observed at high Ni concentrations, whereas synergism occurred at Ni concentrations as low as 4.9 μg Ni/L. This low (and realistic) Ni concentration was 1% of the median effective concentration (EC50) of Ni or 57% of the Ni predicted-no-effect concentration (PNEC) in the European Union environmental risk assessment. It is concluded that results from mixture studies should not be extrapolated to concentrations or ratios other than those investigated and that significant mixture interactions can occur at environmentally realistic concentrations. This should be accounted for in (marine) environmental risk assessment of metals. Environ Toxicol Chem 2017;36:3471-3479. © 2017 SETAC. © 2017 SETAC.

  5. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
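
    A bare-bones canonical (NVT) Metropolis loop for a Lennard-Jones fluid in reduced units illustrates the machinery behind such isotherm calculations; the particle count, density and temperature are generic choices, not the methane parameterization used in the study:

        import numpy as np

        rng = np.random.default_rng(8)
        n, box, t_red = 64, 6.0, 2.0                  # particles, box length, reduced T
        pos = rng.uniform(0.0, box, size=(n, 3))

        def pair_energy(i, pos):
            # Lennard-Jones energy of particle i with all others (minimum image).
            d = pos - pos[i]
            d -= box * np.round(d / box)
            r2 = np.einsum('ij,ij->i', d, d)
            r2[i] = np.inf                            # exclude self-interaction
            inv6 = 1.0 / r2 ** 3
            return np.sum(4.0 * (inv6 ** 2 - inv6))

        for sweep in range(100):
            for i in range(n):
                old_e, old_pos = pair_energy(i, pos), pos[i].copy()
                pos[i] = (pos[i] + 0.2 * rng.uniform(-1.0, 1.0, 3)) % box
                delta = pair_energy(i, pos) - old_e
                if delta > 0.0 and rng.uniform() > np.exp(-delta / t_red):
                    pos[i] = old_pos                  # reject the trial move

    Pressure estimators, cutoff corrections, and the Gibbs-ensemble swap and volume moves would sit on top of this basic move loop.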

  6. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures with CPA was presented. In this study the phase behavior of CO2-containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liquid) and densities are considered for the mixtures involved. Different approaches for modeling pure CO2 and mixtures are compared. CO2 is modeled as a non-self-associating fluid, or as a self-associating component having two, three or four association sites. Moreover, when mixtures of CO2 with polar compounds (water... are considered, improved results are obtained for binary mixtures of CO2 and water or alcohols when the solvation between CO2 and the polar compound is explicitly accounted for, whereas the model is less satisfactory when CO2 is treated as a self-associating compound.

  8. Moving target detection method based on improved Gaussian mixture model

    Science.gov (United States)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian Mixture Model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability, performing well even in special cases such as large grayscale changes.
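
    The paper's improved model is not spelled out in the record, but the baseline it builds on is readily available: OpenCV's MOG2 background subtractor is a standard per-pixel Gaussian-mixture background model, and a morphological opening plays the role of the shadow/noise clean-up step. The video path is a placeholder:

        import cv2

        cap = cv2.VideoCapture('video.avi')                   # placeholder input
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                        detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)                    # per-pixel GMM update
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # clean-up step
        cap.release()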

  9. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.

  10. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    The problem of modeling housing prices has attracted considerable attention due to its importance in terms of households' wealth and in terms of public revenues through taxation. One of the main concerns raised in both the theoretical and the empirical literature is the existence of spatial... framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index. The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that submarkets are recovered from the housing prices data rather than imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo...

  11. A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model.

    Science.gov (United States)

    Ankenmann, Robert D.; Stone, Clement A.

    Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…

  12. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    textabstractIn this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.

  14. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M.; Hofstede, F. ter; Steenkamp, J.-B.E.M.

    1997-01-01

    This paper investigates asymmetric effects of monetary policy over the business cycle. A two-state Markov Switching Model is employed to model both recessions and expansions. For the United States and Germany, strong evidence is found that monetary policy is more effective in a recession than during

  15. Improvements to mixture level tracking model

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, W.L.

    1996-04-01

    The purpose of this paper is to present the results of testing the recent improvements made to the two-phase level tracking model in RELAP5/MOD3.2. The level model was originally developed during the development of the TRAC-BWR computer code and was subsequently modified by the Pennsylvania State University (PSU). The modifications developed at PSU concern the way in which the two-phase level is moved from volume to volume as the thermal-hydraulic conditions in the system being simulated change during the course of a transient. The other components in the level tracking model remain as described in the original implementation of the model.

  16. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding bidirectional canopy reflectance model are also given.

  17. A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS

    1995-01-01

    A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of

  18. A Mixture Rasch Model with Item Response Time Components

    Science.gov (United States)

    Meyer, J. Patrick

    2010-01-01

    An examinee faced with a test item will engage in solution behavior or rapid-guessing behavior. These qualitatively different test-taking behaviors bias parameter estimates for item response models that do not control for such behavior. A mixture Rasch model with item response time components was proposed and evaluated through application to real…

  19. Spurious Latent Classes in the Mixture Rasch Model

    Science.gov (United States)

    Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.

    2011-01-01

    Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…

  20. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  1. Investigating Individual Differences in Toddler Search with Mixture Models

    Science.gov (United States)

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…

  2. Single Maneuvering Target Tracking in Clutter Based on Multiple Model Algorithm with Gaussian Mixture Reduction

    Directory of Open Access Journals (Sweden)

    Ji Zhang

    2013-10-01

    Full Text Available The measurement origin uncertainty and target (dynamic and/or measurement) model uncertainty are two fundamental problems in maneuvering target tracking in clutter. The multiple hypothesis tracker (MHT) and the multiple model (MM) algorithm are two well-known methods dealing with these two problems, respectively. In this work, we address the problem of single maneuvering target tracking in clutter by combining MHT and MM based on Gaussian mixture reduction (GMR). Different ways of combining MHT and MM for this purpose were available in previous studies, but in heuristic manners. GMR is adopted because it provides a theoretically appealing way to reduce the exponentially increasing number of measurement association possibilities and target model trajectories. The superior performance of our method, compared with the existing IMM+PDA and IMM+MHT algorithms, is demonstrated by the results of Monte Carlo simulation.
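
    The GMR step itself is easy to sketch in one dimension: repeatedly merge the pair of components with the smallest merge cost, using moment-preserving merging so the mixture's overall mean and variance are retained. The pairwise cost below is the simple weighted squared mean distance, a common stand-in for more refined criteria:

        import numpy as np

        def merge(w1, m1, v1, w2, m2, v2):
            # Moment-preserving merge of two weighted 1-D Gaussians.
            w = w1 + w2
            m = (w1 * m1 + w2 * m2) / w
            v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
            return w, m, v

        def reduce_mixture(comps, target):
            # comps: list of (weight, mean, variance) tuples.
            while len(comps) > target:
                pairs = [(i, j) for i in range(len(comps)) for j in range(i + 1, len(comps))]
                i, j = min(pairs, key=lambda p: comps[p[0]][0] * comps[p[1]][0]
                           * (comps[p[0]][1] - comps[p[1]][1]) ** 2)
                merged = merge(*comps[i], *comps[j])
                comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
            return comps

        print(reduce_mixture([(0.5, 0.0, 1.0), (0.3, 0.1, 1.2), (0.2, 3.0, 0.5)], target=2))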

  3. Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations

    Science.gov (United States)

    Tak, Hyung Suk

    The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes

  4. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA-profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent actual mixing proportions can be rediscovered in the amp...

  5. Chemical kinetic modeling of component mixtures relevant to gasoline

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Curran, H J; Pitz, W J; Dooley, S; Westbrook, C K

    2008-05-29

    Detailed kinetic models of the pyrolysis and combustion of hydrocarbon fuels are nowadays widely used in the design of internal combustion engines, and these models are effectively applied to help meet increasingly stringent environmental and energy standards. In previous studies by the combustion community, such models not only contributed to the understanding of pure component combustion, but also provided a deeper insight into the combustion behavior of complex mixtures. One of the major challenges in this field is now the definition and development of appropriate surrogate models able to mimic the actual features of real fuels. Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. Their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. Aside from the most commonly used surrogates containing only iso-octane and n-heptane, the so-called Primary Reference Fuels (PRF), new mixtures have recently been suggested to extend the reference components in surrogate mixtures to also include alkenes and aromatics. It is generally agreed that, by including representative species for all the main classes of hydrocarbons found in real fuels, it is possible to reproduce very effectively, over a wide range of operating conditions, not just the auto-ignition propensity of gasoline or diesel fuels, but also their physical properties and their combustion residuals [1]. In this work, the combustion behavior of several components relevant to gasoline surrogate formulation is computationally examined. Attention is focused on the autoignition of iso-octane, hexene and their mixtures. Some important issues relevant to the experimental and modeling investigation of such fuels are discussed with the help of rapid compression machine data and calculations. Following the model validation, the behavior of mixtures is discussed on the...

  6. Coarse-Grained Modeling of Colloid-Nanoparticle Mixtures

    Science.gov (United States)

    Denton, Alan R.; Chung, Jun Kyung

    2013-03-01

    Colloid-nanoparticle mixtures have attracted much recent attention for their rich phase behavior. The potential to independently vary size and charge ratios greatly expands the possibilities for tuning interparticle interactions and stabilizing unusual phases. Experiments have begun to explore the self-assembly and stability of colloid-nanoparticle mixtures, which are characterized by extreme size and charge asymmetries. In modeling such complex soft materials, coarse-grained methods often prove essential to surmount computational challenges posed by multiple length and time scales. We describe a hierarchical approach to modeling effective interactions in ultra-polydisperse mixtures. Using a sequential coarse-graining procedure, we show that a mixture of charged colloids and nanoparticles can be mapped onto a one-component model of pseudo-colloids interacting via a Yukawa effective pair potential and a one-body volume energy, which contributes to the free energy of the system. Nanoparticles are found to enhance electrostatic screening and to modify the volume energy. Taking the effective interactions as input to simulations and perturbation theory, we calculate structural properties and explore phase stability of highly asymmetric charged colloid-nanoparticle mixtures. This work was supported by the National Science Foundation under Grant No. DMR-1106331

  7. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter ... occurring genes in the population. CONCLUSION: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  8. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
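
    A minimal sketch of the approach using scikit-learn's EM implementation; the cluster positions and velocities below are invented for illustration and are not data from the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical stars: RA offset, Dec offset (deg) and radial velocity (km/s).
        cluster_a = rng.normal([0.00, 0.00, -5.0], [0.1, 0.1, 1.0], size=(200, 3))
        cluster_b = rng.normal([0.15, 0.05, 12.0], [0.1, 0.1, 1.5], size=(150, 3))
        stars = np.vstack([cluster_a, cluster_b])

        # EM fit of a two-component mixture; the velocity axis separates
        # clusters that overlap spatially.
        gmm = GaussianMixture(n_components=2, covariance_type="full").fit(stars)
        labels = gmm.predict(stars)            # hard assignments
        membership = gmm.predict_proba(stars)  # per-star membership probabilities
        print(gmm.means_)                      # estimated centers in (dx, dy, v_r)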

  9. Predictors of computer anxiety: a factor mixture model analysis.

    Science.gov (United States)

    Marcoulides, George A; Cavus, Hayati; Marcoulides, Laura D; Gunbatar, Mustafa Serkan

    2009-12-01

    A mixture modeling approach was used to assess the existence of latent classes in terms of the perceptions of individuals toward computer anxiety, and predictors of the identified latent classes were subsequently examined. The perceptions of individuals were measured using the Computer Anxiety Scale. Mixture models are ideally suited to represent subpopulations or classes of respondents with common patterns of responses. Using data from a sample of Turkish college students, two classes of respondents were identified and designated as occasionally uncomfortable users and anxious computerphobic users. Results indicated that the best predictors of the identified classes were variables dealing with past computer experiences.

  10. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used

  11. LCN: a random graph mixture model for community detection in functional brain networks.

    Science.gov (United States)

    Bryant, Christopher; Zhu, Hongtu; Ahn, Mihye; Ibrahim, Joseph

    2017-01-01

    The aim of this article is to develop a Bayesian random graph mixture model (RGMM) to detect the latent class network (LCN) structure of brain connectivity networks and estimate the parameters governing this structure. The use of conjugate priors for unknown parameters leads to efficient estimation, and a well-known nonidentifiability issue is avoided by a particular parameterization of the stochastic block model (SBM). Posterior computation proceeds via an efficient Markov Chain Monte Carlo algorithm. Simulations demonstrate that LCN outperforms several other competing methods for community detection in weighted networks, and we apply our RGMM to estimate the latent community structures in the functional resting brain networks of 185 subjects from the ADHD-200 sample. We find overlap in the estimated community structure across subjects, but also heterogeneity even within a given diagnosis group.

  12. Monte Carlo modeling of ultrasound probes for image guided radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Bazalova-Carter, Magdalena, E-mail: bazalova@uvic.ca [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 2Y2 (Canada); Schlosser, Jeffrey [SoniTrack Systems, Inc., Palo Alto, California 94304 (United States); Chen, Josephine [Department of Radiation Oncology, UCSF, San Francisco, California 94143 (United States); Hristov, Dimitre [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2015-10-15

    Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm³. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm² beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm² beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R² > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm³, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The

  13. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemysław Biecek

    2012-04-01

    Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and be prone to measurement errors. Likewise, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we equipped the package also with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model-fitting in all modeling variants. The package can also be applied to select the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  14. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although...
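
    The Kullback-Leibler distance between two Gaussian mixture models has no closed form, which is why such comparisons typically rely on Monte Carlo approximation; a minimal sketch with scikit-learn objects (the function name is illustrative, and this is not the paper's implementation):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def kl_mc(gmm_p, gmm_q, n=10000):
            """Monte Carlo estimate of KL(p || q) for two fitted GaussianMixture
            models: draw samples from p and average log p(x) - log q(x)."""
            x, _ = gmm_p.sample(n)
            return np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x))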

  15. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...

  16. Detecting Social Desirability Bias Using Factor Mixture Models

    Science.gov (United States)

    Leite, Walter L.; Cooper, Lou Ann

    2010-01-01

    Based on the conceptualization that social desirability bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  17. Sparse Gaussian graphical mixture model | Lotsi | Afrika Statistika

    African Journals Online (AJOL)

    Abstract. This paper considers the problem of networks reconstruction from heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well known that parameter estimation in this context is challenging due to large numbers of variables coupled with the degenerate nature of the likelihood. We propose as ...

  18. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    The objective of this research is to experiment with the use of the parametric Gaussian mixture model multi-class classifier/algorithm for a multi-class remote sensing task, implemented in MATLAB. MATLAB is a programming language just like C, C++, and Python. In this research, a computer program implemented in MATLAB is ...

  19. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  20. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference of AERA (American Educational Research Association) in April 2009. The large variability of the SAT-taker population across states makes state-by-state comparisons of SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  1. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    ...-associating mixtures. The influence on the results of the association schemes, the type of data available, the combining rules for cross-associating mixtures and the interaction parameters is discussed, also in connection to other cross-associating mixtures previously studied with the model. Finally, the capabilities...

  2. An Investigation of Growth Mixture Models for Studying the Flynn Effect

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2014-10-01

    The Flynn effect (FE) is the well-documented generational increase of mean IQ scores over time, but a methodological issue that has not received much attention in the FE literature is the heterogeneity in change patterns across time. Growth mixture models (GMMs) offer researchers a flexible latent variable framework for examining the potential heterogeneity of change patterns. The article presents: (1) a Monte Carlo investigation of the performance of the various measures of model fit for GMMs in data that resemble previous FE studies; and (2) an application of GMM to the National Intelligence Tests. The Monte Carlo study supported the use of the Bayesian information criterion (BIC) and consistent Akaike information criterion (CAIC) for model selection. The GMM application study resulted in the identification of two classes of participants that had unique change patterns across three time periods. Our studies show that GMMs, when applied carefully, are likely to identify homogeneous subpopulations in FE studies, which may aid in further understanding of the FE.
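
    A hedged sketch of the BIC-based selection step using scikit-learn; a plain Gaussian mixture over the repeated scores is only a stand-in for a full growth mixture model (which adds latent intercept and slope factors), but the model-selection logic the Monte Carlo study supports is the same.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def pick_n_classes(scores, max_classes=5):
            """scores: array with one row per examinee and one column per
            time point. Fit mixtures with 1..max_classes components and
            return the fit minimizing BIC, together with all BIC values."""
            fits = [GaussianMixture(n_components=k, n_init=5).fit(scores)
                    for k in range(1, max_classes + 1)]
            bics = [m.bic(scores) for m in fits]
            return fits[int(np.argmin(bics))], bics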

  3. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.

  4. An integral equation model for warm and hot dense mixtures

    CERN Document Server

    Starrett, C E; Daligault, J; Hamel, S

    2014-01-01

    In Starrett and Saumon [Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one "atom" in a plasma is determined using a density functional theory based average-atom (AA) model, and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e. mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  5. Improvements in Neutronics/Thermal-Hydraulics Coupling in Two-Phase Flow Systems Using Stochastic-Mixture Transport Models

    CERN Document Server

    Palmer, T S

    2003-01-01

    In this NEER project, researchers from Oregon State University have investigated the limitations of the treatment of two-phase coolants as a homogeneous mixture in neutron transport calculations. Improved methods of calculating the neutron distribution in binary stochastic mixtures have been developed over the past 10-15 years and are readily available in the transport literature. These methods are computationally more expensive than the homogeneous (or atomic mix) models, but can give much more accurate estimates of ensemble average fluxes and reaction rates provided statistical descriptions of the distributions of the two materials are known. A thorough review of the two-phase flow literature has been completed and the relevant mixture distributions have been identified. Using these distributions, we have performed Monte Carlo criticality calculations of fuel assemblies to assess the accuracy of the atomic mix approximation when compared to a resolved treatment of the two-phase coolant. To understand the ben...

  6. Self-assembly in a model colloidal mixture of dimers and spherical particles.

    Science.gov (United States)

    Prestipino, Santi; Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo

    2017-02-28

    We investigate the structure of a dilute mixture of amphiphilic dimers and spherical particles, a model relevant to the problem of encapsulating globular "guest" molecules in a dispersion. Dimers and spheres are taken to be hard particles, with an additional attraction between spheres and the smaller monomers in a dimer. Using Monte Carlo simulation, we document the low-temperature formation of aggregates of guests (clusters) held together by dimers, whose typical size and shape depend on the guest concentration χ. For low χ (less than 10%), most guests are isolated and coated with a layer of dimers. As χ progressively increases, clusters grow in size, becoming more and more elongated and polydisperse; after reaching a shallow maximum for χ≈50%, the size of clusters again decreases upon increasing χ further. In one case only (χ=50% and moderately low temperature) did the mixture relax to a fluid of lamellae, suggesting that in this case clusters are metastable with respect to crystal-vapor separation. On heating, clusters shrink until eventually the system becomes homogeneous on all scales. On the other hand, as the mixture is made denser and denser at low temperature, clusters become increasingly larger until a percolating network is formed.

  7. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for the general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly obtained expression for the slip velocity between phases allows for the computation of the sediment motion without the need to solve the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence, while an algebraic model is adopted for the turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational effort required by the complete two-fluid model.

  8. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as of charge, baryon and lepton number conservation, is checked for the HIJING Monte Carlo program in $pp$-interactions at $\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1-2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.

  9. Mixture model for biomagnetic separation in microfluidic systems

    Science.gov (United States)

    Khashan, S. A.; Alazzam, A.; Mathew, B.; Hamdan, M.

    2017-11-01

    In this paper, we show that a mixture model, with an algebraic slip velocity relating to the magnetophoresis, provides a continuum-based and cost-effective tool to simulate biomagnetic separations in microfluidics. The model is most effective for simulating magnetic separation protocols in which magnetic or magnetically labeled biological targets are within naturally dilute or diluted samples. The transport of these samples is characterized as a mixture in which the dispersed magnetic microparticles establish their magnetophoretic mobility quickly in response to the acting forces. Our simulations demonstrate the coupled particle-fluid transport and the high gradient magnetic capture (HGMC) of magnetic beads flowing through a microchannel. We also show that the mixture model, and accordingly the modeling of the slip velocity, unlike in the case of dense and/or macro-scale systems, can be further simplified by ignoring the gravitational and granular parameters. Furthermore, we show, by conducting comparative simulations, that the developed model provides an easier and viable alternative to the commonly used Lagrangian-Eulerian (particle-based) models.

  10. A RICIAN MIXTURE MODEL CLASSIFICATION ALGORITHM FOR MAGNETIC RESONANCE IMAGES

    Science.gov (United States)

    Roy, Snehashis; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.

    2009-01-01

    Tissue classification algorithms developed for magnetic resonance images commonly assume a Gaussian model on the statistics of noise in the image. While this is approximately true for voxels having large intensities, it is less true as the underlying intensity becomes smaller. In this paper, the Gaussian model is replaced with a Rician model, which is a better approximation to the observed signal. A new classification algorithm based on a finite mixture model of Rician signals is presented wherein the expectation maximization algorithm is used to find the joint maximum likelihood estimates of the unknown mixture parameters. Improved accuracy of tissue classification is demonstrated on several sample data sets. It is also shown that classification repeatability for the same subject under different MR acquisitions is improved using the new method. PMID:20126426

  11. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
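
    For concreteness, on k states the equal input model fixes all substitution rates by the stationary distribution π alone (the Felsenstein 1981 model is the k = 4 case). Under the usual normalization its rate and transition matrices take the one-parameter form below; this is the standard parameterization, not an equation quoted from the paper:

        Q_{ij} = \pi_j \ (i \neq j), \qquad Q_{ii} = \pi_i - 1,
        \qquad P_{ij}(t) = \left(e^{Qt}\right)_{ij} = e^{-t}\,\delta_{ij} + (1 - e^{-t})\,\pi_j .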

  12. A Monte Carlo study of backscattering effects in the photoelectron emission from CsI into CH$_{4}$ and Ar-CH$_{4}$ mixtures

    CERN Document Server

    Escada, J; Rachinhas, P J B M; Lopes, J A M; Santos, F P; Távora, L M N; Conde, C A N; Stauffer, A D

    2007-01-01

    Monte Carlo simulation is used to investigate photoelectron backscattering effects in the emission from a CsI photocathode into CH4 and Ar-CH4 mixtures for incident monochromatic photons with energies Eph in the range 6.8 eV to 9.8 eV (182 nm to 127 nm), and photons from a continuous VUV Hg(Ar) lamp with a spectral distribution peaked at Eph = 6.7 eV (185 nm), considering reduced applied electric fields E/N in the 0.1 Td to 40 Td range. The addition of CH4 to a noble gas efficiently increases electron transmission and drift velocity, due to vibrational excitation of the molecules at low electron energies. Results are presented for the photoelectron transmission efficiencies f, where f is the fraction of the number of photoelectrons emitted from CsI which are transmitted through the gas as compared to vacuum. The dependence of f on Eph, E/N, and mixture composition is analyzed and explained in terms of electron scattering in the different gas media, and results are compared with available measurements. Electro...

  13. Modeling, clustering, and segmenting video with mixtures of dynamic textures.

    Science.gov (United States)

    Chan, Antoni B; Vasconcelos, Nuno

    2008-05-01

    A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g. fire, steam, water, vehicle and pedestrian traffic, etc.). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g. optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.
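
    A minimal generative sketch of a single dynamic texture, i.e. the linear dynamical system x_t = A x_{t-1} + v_t, y_t = C x_t + w_t; a mixture of dynamic textures draws each sequence from one of K such components. Names and shapes are illustrative.

        import numpy as np

        def sample_dynamic_texture(A, C, Q, R, x0, n_frames, seed=0):
            """Sample n_frames observations from one dynamic texture component:
            A (state transition), C (observation matrix), Q/R (state and
            observation noise covariances), x0 (initial state)."""
            rng = np.random.default_rng(seed)
            x, frames = x0, []
            for _ in range(n_frames):
                x = A @ x + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
                frames.append(C @ x + rng.multivariate_normal(np.zeros(R.shape[0]), R))
            return np.array(frames)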

  14. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Gonzalo Vegas-Sanchez-Ferrero

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and not attractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions, and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of different natures. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.

  15. VON MISES-FISHER MIXTURE MODEL OF THE DIFFUSION ODF

    Science.gov (United States)

    McGraw, Tim; Vemuri, Baba C.; Yezierski, Bob; Mareci, Thomas

    2009-01-01

    High angular resolution diffusion imaging (HARDI) permits the computation of water molecule displacement probabilities over the sphere. This probability is often referred to as the orientation distribution function (ODF). In this paper we present a novel model for representing this diffusion ODF namely, a mixture of von Mises-Fisher (vMF) distributions. Our model is compact in that it requires very few parameters to represent complicated ODF geometries which occur specifically in the presence of heterogeneous nerve fiber orientations. We present a Riemannian geometric framework for computing intrinsic distances (in closed-form) and for performing interpolation between ODFs represented by vMF mixtures. We also present closed-form equations for entropy and variance based anisotropy measures that are then computed and illustrated for real HARDI data from a rat brain. PMID:19759891
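
    In the three-dimensional case relevant to the diffusion ODF, the von Mises-Fisher density has the simple closed form f(x; mu, kappa) = kappa exp(kappa mu.x) / (4 pi sinh kappa), and the ODF model is a convex combination of such terms; a minimal numpy sketch (function names illustrative):

        import numpy as np

        def vmf_pdf(x, mu, kappa):
            """vMF density on the unit sphere S^2; x and mu are unit vectors,
            kappa > 0 is the concentration (large kappa overflows sinh and
            would need a log-space version in practice)."""
            return kappa * np.exp(kappa * np.dot(mu, x)) / (4 * np.pi * np.sinh(kappa))

        def vmf_mixture_pdf(x, mus, kappas, weights):
            """ODF value at direction x for a mixture of vMF components."""
            return sum(w * vmf_pdf(x, m, k) for w, m, k in zip(weights, mus, kappas))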

  16. Modeling of active transmembrane transport in a mixture theory framework.

    Science.gov (United States)

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T

    2010-05-01

    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.

  17. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  18. Sampling from Dirichlet process mixture models with unknown concentration parameter: mixing issues in large data implementations.

    Science.gov (United States)

    Hastie, David I; Liverani, Silvia; Richardson, Sylvia

    We consider the question of Markov chain Monte Carlo sampling from a general stick-breaking Dirichlet process mixture model, with concentration parameter [Formula: see text]. This paper introduces a Gibbs sampling algorithm that combines the slice sampling approach of Walker (Communications in Statistics - Simulation and Computation 36:45-54, 2007) and the retrospective sampling approach of Papaspiliopoulos and Roberts (Biometrika 95(1):169-186, 2008). Our general algorithm is implemented as efficient open source C++ software, available as an R package, and is based on a blocking strategy similar to that suggested by Papaspiliopoulos (A note on posterior sampling from Dirichlet mixture models, 2008) and implemented by Yau et al. (Journal of the Royal Statistical Society, Series B (Statistical Methodology) 73:37-57, 2011). We discuss the difficulties of achieving good mixing in MCMC samplers of this nature in large data sets and investigate sensitivity to initialisation. We additionally consider the challenges when an additional layer of hierarchy is added such that joint inference is to be made on [Formula: see text]. We introduce a new label-switching move and compute the marginal partition posterior to help to surmount these difficulties. Our work is illustrated using a profile regression (Molitor et al. Biostatistics 11(3):484-498, 2010) application, where we demonstrate good mixing behaviour for both synthetic and real examples.
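
    The stick-breaking representation at the heart of this sampler is easy to state: with concentration parameter alpha, the weights are w_k = v_k * prod_{j<k} (1 - v_j) with v_k ~ Beta(1, alpha). The truncated sketch below is for intuition only; the slice and retrospective moves in the paper are precisely ways to avoid fixing the truncation level in advance.

        import numpy as np

        def stick_breaking(alpha, n_atoms, rng=None):
            """Truncated stick-breaking draw of Dirichlet process weights;
            smaller alpha concentrates mass on fewer mixture components."""
            rng = rng or np.random.default_rng()
            v = rng.beta(1.0, alpha, size=n_atoms)
            return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))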

  19. Convergence of estimators in mixture models based on missing data [KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA]

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    Mixture models can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model is developed for cure-rate analysis based on missing data. Several methods are available for analyzing missing data; one of them is the EM algorithm. This method is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after every iteration; hence, under the EM algorithm, the likelihood sequence converges whenever the likelihood is bounded above.
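
    The monotonicity property the abstract appeals to is easy to observe numerically. The sketch below is a minimal EM for a two-component Gaussian mixture (a stand-in, not the cure-rate model of the paper): the log-likelihood printed at each iteration never decreases.

        import numpy as np
        from scipy.stats import norm

        def em_two_gaussians(x, n_iter=50):
            """Minimal EM; prints the log-likelihood, which is non-decreasing."""
            w, mu, sd = 0.5, np.percentile(x, [25, 75]), np.array([x.std(), x.std()])
            for it in range(n_iter):
                # E-step: responsibilities of component 0
                p0 = w * norm.pdf(x, mu[0], sd[0])
                p1 = (1 - w) * norm.pdf(x, mu[1], sd[1])
                print(it, np.sum(np.log(p0 + p1)))   # monotone in exact arithmetic
                r = p0 / (p0 + p1)
                # M-step: weighted parameter updates
                w = r.mean()
                mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
                sd = np.sqrt([np.average((x - mu[0])**2, weights=r),
                              np.average((x - mu[1])**2, weights=1 - r)])
            return w, mu, sd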

  20. Genotoxicity of model and complex mixtures of polycyclic aromatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, K.C.; Phillips, T.D.; Onufrock, A.M.; Collie, S.L.; Huebner, H.J.; Washburn, K.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Veterinary Anatomy and Public Health

    1996-12-31

    Polycyclic aromatic hydrocarbons (PAHs) are one of the most ubiquitous classes of environmental carcinogens; however, limited information is available to describe their potential genotoxic interactions. This manuscript reports on the interactions of PAHs in complex mixtures as determined in microbial mutagenicity assays. Samples analyzed included separate 2-, 3-, and 4-ring PAH individual model fractions (IMFs) constructed to simulate the composition of a model coal tar. These were tested individually and in various combinations, including a reconstituted model fraction (RMF) composed of all three IMFs. A solvent extract of coal tar and a benzo(a)pyrene-amended extract of coal tar were also tested. The maximum mutagenic response of 1,089 revertants was induced by the RMF at a dose of 90 µg/plate with metabolic activation. At the four lowest dose levels, the response observed in the RMF sample was increased when compared to the 4-ring-IMF sample alone. However, the response observed with the RMF sample at the highest dose tested was less than was observed in the 4-ring-IMF sample tested independently. When IMF samples were combined or mixed with individual chemicals, some inhibition was observed. These data indicate that mixtures of PAHs can exhibit a variety of mutagenic interactions controlled by both the metabolism of the PAHs and by their concentration in the mixture.

  2. Theory and simulations for hard-disk models of binary mixtures of molecules with internal degrees of freedom

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1991-01-01

    A two-dimensional Monte Carlo simulation method based on the NpT ensemble and the Voronoi tessellation, which was previously developed for single-species hard-disk systems, is extended, along with a version of scaled-particle theory, to many-component mixtures. These systems are unusual in the sense that their composition is not fixed, but rather determined by a set of internal degeneracies assigned to the differently sized hard disks, where the larger disks have the higher degeneracies. Such systems are models of monolayers of molecules with internal degrees of freedom. The combined set of translational ... by the method in the case of a binary mixture, and results are presented for varying disk-size ratios and degeneracies. The results are also compared with the predictions of the extended scaled-particle theory. Applications of the model are discussed in relation to lipid monolayers spread on air...

  3. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons, and methods for microscopic image reconstruction using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging, in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of turbid tissue media.

  4. Thresholding functional connectomes by means of mixture modeling.

    Science.gov (United States)

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-01-05

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false-discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture
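
    A hedged sketch of the thresholding idea: fit a two-component mixture to the vector of edge weights, treat the component with the larger mean as the reliable-edge component, and keep edges by posterior probability. A plain Gaussian mixture and a fixed posterior cut stand in here for the paper's fit and its pseudo-FDR criterion.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def threshold_edges(partial_corrs, keep_prob=0.5):
            """partial_corrs: 1-D array of edge weights; returns a boolean
            mask of edges assigned to the 'reliable connection' component."""
            z = partial_corrs.reshape(-1, 1)
            gmm = GaussianMixture(n_components=2, n_init=5).fit(z)
            signal = int(np.argmax(gmm.means_.ravel()))  # non-null component
            return gmm.predict_proba(z)[:, signal] > keep_prob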

  5. Mixture probability distribution functions to model wind speed distributions

    Energy Technology Data Exchange (ETDEWEB)

    Kollu, Ravindra; Rayapudi, Srinivasa Rao; Pakkurthi, Krishna Mohan [J.N.T. Univ., Kakinada (India). Dept. of Electrical and Electronics Engineering; Narasimham, S.V.L. [J.N.T. Univ., Andhra Pradesh (India). Computer Science and Engineering Dept.

    2012-11-01

    Accurate wind speed modeling is critical in estimating wind energy potential for harnessing wind power effectively. The quality of wind speed assessment depends on the capability of the chosen probability density function (PDF) to describe the measured wind speed frequency distribution. The objective of this study is to describe (model) wind speed characteristics using three mixture probability density functions, Weibull-generalized extreme value (GEV), Weibull-lognormal, and GEV-lognormal, which had not been tried before. Statistical parameters such as the maximum error in the Kolmogorov-Smirnov test, root mean square error, Chi-square error, coefficient of determination, and power density error are considered as judgment criteria to assess the fitness of the probability density functions. Results indicate that the Weibull-GEV PDF is able to describe unimodal as well as bimodal wind distributions accurately, whereas the GEV-lognormal PDF is able to describe the familiar bell-shaped unimodal distribution well. Results show that mixture probability functions are better alternatives to the conventional Weibull, two-component mixture Weibull, gamma, and lognormal PDFs for describing wind speed characteristics. (orig.)
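
    A minimal sketch of one of the three mixtures, the Weibull-lognormal PDF, with a maximum-likelihood fit shown for self-containedness (the study itself fits by least squares against the histogram); parameter names are illustrative.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        def weibull_lognorm_pdf(v, w, k, lam, s, scale):
            """Two-component mixture density for wind speed v: weight w on a
            Weibull(k, lam) component, 1 - w on a lognormal component."""
            return (w * stats.weibull_min.pdf(v, k, scale=lam)
                    + (1 - w) * stats.lognorm.pdf(v, s, scale=scale))

        def fit_mixture(v):
            nll = lambda p: -np.sum(np.log(weibull_lognorm_pdf(v, *p) + 1e-300))
            x0 = [0.5, 2.0, v.mean(), 0.5, v.mean()]
            bounds = [(0.01, 0.99), (0.1, 10), (0.1, 50), (0.05, 3), (0.1, 50)]
            return minimize(nll, x0, bounds=bounds, method="L-BFGS-B")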

  6. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  7. Modeling Sheet Flow in Oscillatory Flows Using a Mixture Approach

    Science.gov (United States)

    Burdick, G. M.; Slinn, D. N.

    2004-12-01

    Understanding the transport of sediment is crucial to predicting many coastal engineering processes, such as sedimentation and erosion around structures and beach profile changes. Traditional methods for modeling sediment transport require solving separate equations for fluid and particle motion. For densely-laden flows, this can present challenges in capturing the physics of the system, as fluid-particle and particle-particle interactions must be accounted for, and can present difficulties because of the computational effort required. In this approach, fluid-particle interactions are expressed through the drag and lift forces, while adequate models for particle-particle interactions are currently being developed. We have chosen an alternate approach that assumes a system containing sediment particles can be approximated as a mixture having variable density and viscosity that depend on the local sediment concentration. Here, the interactions are expressed through the mixture viscosity and a stress-induced diffusion term. There are five governing equations that describe the flow field. They are the mixture continuity and momentum equations and a species continuity equation for the sediment. We use the control volume approach on a three-dimensional staggered grid to solve the equations numerically. The turbulent dynamics of an initially stationary densely packed sand layer (60% by volume sand) driven by a sinusoidally oscillating flow are examined and model results are compared with the experimental data of Horikawa, Watanabe, & Katori (1982). The model does a reasonable job of predicting concentration profiles and sheet flow layer thickness. Both the model and the experimental data show that a significant amount of sand is entrained during the acceleration phase of the wave cycle. This entrained sand then falls back to the bed during the deceleration phase of the wave cycle.

  8. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
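
    The quantity evaluated for every trial move is the static structure factor, which is embarrassingly parallel over the scattering vectors; that is the loop the CUDA implementation offloads. A numpy sketch of the same computation (not the NRMC code itself):

        import numpy as np

        def structure_factor(positions, q_vectors):
            """S(q) = |sum_j exp(i q . r_j)|^2 / N for an (N, 3) array of
            particle positions and an (n_q, 3) array of q vectors."""
            phases = np.exp(1j * positions @ q_vectors.T)   # shape (N, n_q)
            amp = phases.sum(axis=0)
            return (amp * amp.conj()).real / positions.shape[0]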

  9. Estimation of wind energy potential using finite mixture distribution models

    Energy Technology Data Exchange (ETDEWEB)

    Akpinar, Sinan [Physics Department, Firat University, 23279 Elazig (Turkey); Akpinar, Ebru Kavak [Mechanical Engineering Department, Firat University, 23279 Elazig (Turkey)

    2009-04-15

    In this paper, an analysis of the wind characteristics of four stations (Elazig, Elazig-Maden, Elazig-Keban, and Elazig-Agin) over a period of 8 years (1998-2005) has been carried out. The probabilistic distributions of wind speed are a critical piece of information needed in the assessment of wind energy potential, and have been conventionally described by various empirical correlations. Among the empirical correlations are the Weibull distribution and the Maximum Entropy Principle. These wind speed distributions cannot accurately represent all wind regimes observed in that region. However, this study presents a theoretical approach to the wind speed frequency distributions observed in that region through applications of a singly truncated from below Normal-Weibull mixture distribution and a two-component mixture Weibull distribution, which offer smaller relative errors in determining the annual mean wind power density. The parameters of the distributions are estimated using the least squares method and Statistica software. The suitability of the distributions is judged from the probability plot correlation coefficient R², RMSE and χ². Based on the results obtained, we conclude that the two mixture distributions proposed here provide very flexible models for wind speed studies. (author)

  10. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods when multiple nano-devices communicate with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
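
    A hedged sketch of the detection step: with Gaussian-mixture class-conditional densities for bits 0 and 1, the error probability of a threshold detector can be written in closed form and minimized numerically (a grid search here; the thesis derives an iterative algorithm based on a linear approximation instead). Names and the equiprobable-bits assumption are illustrative.

        import numpy as np
        from scipy.stats import norm

        def error_prob(tau, mix0, mix1):
            """mix0/mix1: lists of (weight, mean, std) for the received-count
            densities under bit 0 and bit 1; decide '1' when the count > tau."""
            false_alarm = sum(w * norm.sf(tau, m, s) for w, m, s in mix0)
            miss = sum(w * norm.cdf(tau, m, s) for w, m, s in mix1)
            return 0.5 * (false_alarm + miss)

        def best_threshold(mix0, mix1, grid):
            errs = [error_prob(t, mix0, mix1) for t in grid]
            return grid[int(np.argmin(errs))], min(errs)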

  11. Alternative Approaches to Structural Modeling of Ordinal Data: A Monte Carlo Study.

    Science.gov (United States)

    Coenders, Germa; Saris, Willem E.; Satorra, Albert

    1997-01-01

    A Monte Carlo study is reported that shows the comparative performance of alternative approaches under deviations from their respective assumptions in the case of structural equation models with latent variables with attention restricted to point estimates of model parameters. The conditional polychoric correlations method is shown most robust…

  12. A model for Monte Carlo simulation of low angle photon scattering in biological tissues

    CERN Document Server

    Tartari, A; Bonifazzi, C

    2001-01-01

    In order to include the molecular interference effect, a simple procedure is proposed and demonstrated to be able to update the usual cross section database for photon coherent scattering modelling in Monte Carlo codes. This effect was evaluated by measurement of coherent scattering distributions and by means of a model based on four basic materials composing biological tissues.

  13. Universality of the Ising and the S=1 model on Archimedean lattices: a Monte Carlo determination.

    Science.gov (United States)

    Malakis, A; Gulpinar, G; Karaaslan, Y; Papakonstantinou, T; Aslan, G

    2012-03-01

    The Ising models S=1/2 and S=1 are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents very well. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.
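
    For reference, the simple Metropolis baseline against which the efficient schemes are compared looks as follows for S=1/2; a square lattice is used in this sketch, whereas the paper's Archimedean lattices differ only in their neighbor lists.

        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One sweep of single-spin-flip Metropolis for the S=1/2 Ising
            model (J = 1, H = -sum over neighbor pairs of s_i s_j) with
            periodic boundaries; spins is an L x L array of +1/-1."""
            L = spins.shape[0]
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                nbrs = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                        + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * nbrs   # energy change of flipping s_ij
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1
            return spins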

  14. Bayesian Monte Carlo method for monotonic models applying the Generalized Beta distribution

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Demirbilek, Z.

    2011-01-01

    A novel Bayesian Monte Carlo method for monotonic models (BMCM) is described in this paper. The BMCM method enjoys the advantages of the recently developed method of Dynamic Bounds [1] for the reliability assessment of monotonic models, and incorporates weighted logical dependence between

  15. Monte Carlo simulation of diblock copolymer microphases by means of a 'fast' off-lattice model

    DEFF Research Database (Denmark)

    Besold, Gerhard; Hassager, O.; Mouritsen, Ole G.

    1999-01-01

    We present a mesoscopic off-lattice model for the simulation of diblock copolymer melts by Monte Carlo techniques. A single copolymer molecule is modeled as a discrete Edwards chain consisting of two blocks with vertices of type A and B, respectively. The volume interaction is formulated in terms...

  16. MC-SPAM: Monte-Carlo Synthetic-Photometry/Atmosphere-Model

    Science.gov (United States)

    Espinoza, Néstor; Jordán, Andrés

    2017-03-01

    MC-SPAM (Monte-Carlo Synthetic-Photometry/Atmosphere-Model) generates limb-darkening coefficients from models that are comparable to transit photometry; it extends the original SPAM algorithm by Howarth (2011) by taking into consideration the uncertainty on the stellar and transit parameters of the system under analysis.

  17. Advanced Monte Carlo Model for Arborescent Polyisobutylene Production in Batch Reactor

    NARCIS (Netherlands)

    Zhao, Y.R.; McAuley, K.B.; Iedema, P.D.; Puskas, J.E.

    2014-01-01

    An advanced Monte Carlo (MC) model is developed to predict the molecular weight distribution and branching level for arborescent polyisobutylene produced in a batch reactor via carbocationic copolymerization of isobutylene and an inimer. This new MC model uses differential equations and random

  18. Mixture-Model Clustering of Pathological Gait Patterns.

    Science.gov (United States)

    Dolatabadi, Elham; Mansfield, Avril; Patterson, Kara K; Taati, Babak; Mihailidis, Alex

    2017-09-01

    This study applies mixture-model clustering to spatiotemporal gait parameters in order to characterize pathological gait patterns and to generate a composite measure indicative of overall gait performance. Gait data from 68 adults with stroke (age: 61.5 ± 13.6 years) and 20 healthy adults (age: 28.8 ± 7.1 years) were used in this study. Participants performed three passes across a GAITRite mat at different time points following stroke (poststroke adults only). Mixture-model clustering grouped participants' gait patterns based on their spatiotemporal gait features, including symmetry, speed, and variability. Mixture models with different covariance matrix parameterizations and numbers of clusters were examined. The selected clustering model successfully categorized participants' spatiotemporal gait data into three clinically meaningful groups. Based on the clustering results, gait speed and variability measures varied across the three groups. Individuals in Group 1 were all symmetric and had the fastest gait velocity and the lowest variability. As expected, healthy participants were assigned to Group 1. All gait parameters were at an intermediate level in Group 2 and worst in Group 3. Moreover, the resulting cluster centers were in line with previously published clinical studies on gait. In addition to clustering, each individual was given an indexed membership (ranging from 0 to 1) in each of the three groups. These indexed memberships were proposed as a single measure that encompasses information about multiple gait parameters (symmetry, speed, and variability) and that is sensitive and responsive to improvement or deterioration over time, for example during rehabilitation.
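    A minimal sketch of this kind of analysis, assuming synthetic three-dimensional gait features and using scikit-learn's GaussianMixture: covariance parameterizations and cluster counts are scanned by BIC, and soft (indexed) memberships are read off from the selected model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hedged sketch: cluster gait features (symmetry, speed, variability) with
# GMMs, scanning covariance parameterizations and cluster counts by BIC.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 3))   # synthetic feature data
               for m in ([0.0, 1.2, 0.1], [0.4, 0.8, 0.3], [0.9, 0.4, 0.6])])

best = None
for cov in ("full", "tied", "diag", "spherical"):
    for k in range(1, 6):
        gmm = GaussianMixture(n_components=k, covariance_type=cov,
                              n_init=5, random_state=0).fit(X)
        if best is None or gmm.bic(X) < best.bic(X):
            best = gmm

membership = best.predict_proba(X)    # indexed membership in [0, 1]
print(best.covariance_type, best.n_components, membership[0].round(2))
```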

  19. Effective dielectric mixture model for characterization of diesel contaminated soil

    Energy Technology Data Exchange (ETDEWEB)

    Al-Mattarneh, H.M.A. [Tenaga National Univ., Kajang (Malaysia). Dept. of Civil Engineering; Essa Ahmad, M. [Al-Balqa Applied Univ. (Jordan). Al-Huson University College; Zain, M.F.M.; Tha, M.R. [Kebangsaan Malaysia Univ., Bangi (Malaysia)

    2007-07-01

    Human exposure to soil contaminated by diesel isomers can have serious health consequences such as neurological diseases or cancer. The potential of dielectric measuring techniques for electromagnetic characterization of contaminated soils was investigated in this paper. The purpose of the research was to develop an empirical dielectric mixture model for soil hydrocarbon contamination applications. The paper described the basic theory and elaborated on dielectric mixture theory. The analytical and empirical models were explained in simple algebraic formulas. The experimental study was then described with reference to materials, properties and experimental results. The results of the analytical models were also mathematically explained. The proposed semi-empirical model was also presented. According to the results for the electromagnetic properties of dry soil contaminated with diesel, the presence of diesel had no significant effect on the electromagnetic properties of dry soil. It was concluded that diesel made no contribution to the soil electrical conductivity, which confirmed the nonconductive character of diesel. The results for diesel-contaminated soil at saturation indicated that both the dielectric constant and the loss factor of the soil decreased with increasing diesel content. 15 refs., 2 tabs., 9 figs.

  20. Measuring free energy in spin-lattice models using parallel tempering Monte Carlo.

    Science.gov (United States)

    Wang, Wenlong

    2015-05-01

    An efficient and simple approach to measuring the absolute free energy as a function of temperature for spin-lattice models, using two-stage parallel tempering Monte Carlo and the free energy perturbation method, is discussed, and the results are compared with those of population annealing Monte Carlo using the three-dimensional Edwards-Anderson Ising spin glass model as a benchmark. This approach requires little modification of regular parallel tempering Monte Carlo codes and incurs little overhead. Numerical results show that parallel tempering, even though it uses far fewer temperatures than population annealing, can nevertheless measure the absolute free energy equally efficiently by simulating each temperature for longer times.
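    The free-energy-perturbation bookkeeping along a temperature ladder can be sketched as follows; the energy samples here are synthetic stand-ins for measurements at each parallel-tempering temperature, and the recursion used is beta_{k+1} F_{k+1} = beta_k F_k - ln < exp(-(beta_{k+1} - beta_k) E) >_k.

```python
import numpy as np

# Hedged sketch of free-energy perturbation across a temperature ladder.
# Energy samples are fake stand-ins for parallel-tempering measurements.
rng = np.random.default_rng(1)
betas = np.linspace(0.1, 1.0, 10)
energy_samples = [rng.normal(-100 * b, 10, size=5000) for b in betas]

betaF = np.empty_like(betas)
betaF[0] = 0.0  # reference: free energy assumed known at the highest T
for k in range(len(betas) - 1):
    dbeta = betas[k + 1] - betas[k]
    w = -dbeta * energy_samples[k]
    # log-sum-exp for a numerically stable ln< exp(...) >
    lnavg = np.logaddexp.reduce(w) - np.log(len(w))
    betaF[k + 1] = betaF[k] - lnavg

print(betaF[-1] / betas[-1])  # absolute free energy at the lowest temperature
```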

  1. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures. "If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work." -Short Book Reviews (Publication of the International Statistical Institute). "The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments." -Journal of the Royal S

  2. The balance model for heat transport from hydrolytic reaction mixture

    Directory of Open Access Journals (Sweden)

    Janacova Dagmar

    2017-01-01

    The content of the paper is the industrial application of enzymatic hydrolysis of tanning solid waste with a view to minimizing the price of the enzyme hydrolysate product, which is widely used. On the basis of the energy balance of the enzymatic hydrolysis, we estimated the critical minimal charge of a tanning drum. We determined this critical minimal charge using a balance model for heat transport from the reaction mixture into the environment through the reactor wall. Employing a tanning drum for the hydrolytic reaction makes it possible to process tanning wastes at the place of their origin, thus considerably enhancing the economics of the whole process.

  3. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson's algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic elements to the baseline algorithm: the learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated…

  4. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation-maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  5. A mixture copula Bayesian network model for multimodal genomic data.

    Science.gov (United States)

    Zhang, Qingyang; Shi, Xuan

    2017-01-01

    Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation-maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  6. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step, we score the feature vectors against the clusters, and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five different methods presented, one method, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the others. This method iteratively reduces the size of a GMM by successively merging pairs of component densities, with pairs selected for merger using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
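    A hedged sketch of the Runnalls-style reduction named above, in the one-dimensional case: pairs of weighted components are repeatedly merged in a moment-preserving way, choosing at each step the pair with the smallest Kullback-Leibler-based cost (toy mixture, not a trained UBM).

```python
import numpy as np

# Hedged sketch of Runnalls-style Gaussian mixture reduction (1-D case).
def merge(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def runnalls_cost(w1, m1, v1, w2, m2, v2):
    """KL-based upper bound on the dissimilarity caused by the merge."""
    w, _, v = merge(w1, m1, v1, w2, m2, v2)
    return 0.5 * (w * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

def reduce_mixture(components, target):
    comps = list(components)           # [(weight, mean, variance), ...]
    while len(comps) > target:
        pairs = [(runnalls_cost(*comps[i], *comps[j]), i, j)
                 for i in range(len(comps)) for j in range(i + 1, len(comps))]
        _, i, j = min(pairs)           # cheapest merge
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

print(reduce_mixture([(0.2, 0.0, 1.0), (0.3, 0.5, 1.2),
                      (0.25, 5.0, 0.8), (0.25, 5.5, 1.1)], target=2))
```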

  7. Global Low-Rank Image Restoration With Gaussian Mixture Model.

    Science.gov (United States)

    Zhang, Sibo; Jiao, Licheng; Liu, Fang; Wang, Shuang

    2017-06-27

    Low-rank restoration has recently attracted a lot of attention in computer vision research. Empirical studies show that exploiting the low-rank property of patch groups can lead to superior restoration performance; however, progress on global low-rank restoration has been limited because rank minimization at the image level is too strong a constraint for natural images, which seldom satisfy the low-rank condition. In this paper, we describe a flexible global low-rank restoration model which introduces local statistical properties into the rank minimization. The proposed model can effectively recover the latent global low-rank structure via the nuclear norm, as well as the fine details via a Gaussian mixture model. An alternating scheme is developed to estimate the Gaussian parameters and the restored image, and it shows excellent convergence and stability. Moreover, experiments on image and video sequence datasets show the effectiveness of the proposed method in image inpainting problems.
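    The nuclear-norm part of such an alternating scheme is typically implemented by singular-value thresholding (SVT); the sketch below shows only that step, on invented data, and omits the paper's Gaussian-mixture detail term.

```python
import numpy as np

# Hedged sketch: the singular-value thresholding (SVT) step commonly used
# to minimize a nuclear-norm penalty within an alternating scheme.
def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(8)
low_rank = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 60))  # rank-5 truth
noisy = low_rank + rng.normal(scale=0.5, size=low_rank.shape)
restored = svt(noisy, tau=10.0)   # threshold chosen to suppress noise modes
print("recovered rank:", np.linalg.matrix_rank(restored, tol=1e-6))
```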

  8. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.

  9. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    Science.gov (United States)

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.

  10. Understanding Symmetric Smoothing Filters: A Gaussian Mixture Model Perspective

    Science.gov (United States)

    Chan, Stanley H.; Zickler, Todd; Lu, Yue M.

    2017-11-01

    Many patch-based image denoising algorithms can be formulated as applying a smoothing filter to the noisy image. Expressed as matrices, the smoothing filters must be row normalized so that each row sums to unity. Surprisingly, if we apply a column normalization before the row normalization, the performance of the smoothing filter can often be significantly improved. Prior works showed that such performance gain is related to the Sinkhorn-Knopp balancing algorithm, an iterative procedure that symmetrizes a row-stochastic matrix to a doubly-stochastic matrix. However, a complete understanding of the performance gain phenomenon is still lacking. In this paper, we study the performance gain phenomenon from a statistical learning perspective. We show that Sinkhorn-Knopp is equivalent to an Expectation-Maximization (EM) algorithm of learning a Gaussian mixture model of the image patches. By establishing the correspondence between the steps of Sinkhorn-Knopp and the EM algorithm, we provide a geometrical interpretation of the symmetrization process. This observation allows us to develop a new denoising algorithm called Gaussian mixture model symmetric smoothing filter (GSF). GSF is an extension of the Sinkhorn-Knopp and is a generalization of the original smoothing filters. Despite its simple formulation, GSF outperforms many existing smoothing filters and has a similar performance compared to several state-of-the-art denoising algorithms.
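    A minimal sketch of the Sinkhorn-Knopp balancing procedure discussed above, applied to an arbitrary nonnegative matrix standing in for a smoothing filter; the GMM/EM correspondence itself is not reproduced here.

```python
import numpy as np

# Hedged sketch of Sinkhorn-Knopp: alternately normalize columns and rows
# of a nonnegative matrix until it is (nearly) doubly stochastic.
def sinkhorn_knopp(W, iters=100, tol=1e-10):
    W = np.asarray(W, dtype=float).copy()
    for _ in range(iters):
        W /= W.sum(axis=0, keepdims=True)   # column normalization
        W /= W.sum(axis=1, keepdims=True)   # row normalization
        if np.allclose(W.sum(axis=0), 1.0, atol=tol):
            break
    return W

A = np.random.default_rng(2).random((4, 4)) + 0.1  # stand-in filter matrix
D = sinkhorn_knopp(A)
print(D.sum(axis=0).round(6), D.sum(axis=1).round(6))
```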

  11. On the characterization of flowering curves using Gaussian mixture models.

    Science.gov (United States)

    Proïa, Frédéric; Pernet, Alix; Thouroude, Tatiana; Michel, Gilles; Clotault, Jérémy

    2016-08-07

    In this paper, we develop a statistical methodology applied to the characterization of flowering curves using Gaussian mixture models. Our study relies on a set of rosebushes flowering data, and Gaussian mixture models are mainly used to quantify the reblooming properties of each one. In this regard, we also suggest our own selection criterion to take into account the lack of symmetry of most of the flowering curves. Three classes are created on the basis of a principal component analysis conducted on a set of reblooming indicators, and a subclassification is made using a longitudinal k-means algorithm which also highlights the role played by the precocity of the flowering. In this way, we obtain an overview of the correlations between the features we decided to retain on each curve. In particular, results suggest the lack of correlation between reblooming and flowering precocity. The pertinent indicators obtained in this study will be a first step towards the comprehension of the environmental and genetic control of these biological processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Understanding Symmetric Smoothing Filters: A Gaussian Mixture Model Perspective.

    Science.gov (United States)

    Chan, Stanley H; Zickler, Todd; Lu, Yue M

    2017-11-01

    Many patch-based image denoising algorithms can be formulated as applying a smoothing filter to the noisy image. Expressed as matrices, the smoothing filters must be row normalized, so that each row sums to unity. Surprisingly, if we apply a column normalization before the row normalization, the performance of the smoothing filter can often be significantly improved. Prior works showed that such performance gain is related to the Sinkhorn-Knopp balancing algorithm, an iterative procedure that symmetrizes a row-stochastic matrix to a doubly stochastic matrix. However, a complete understanding of the performance gain phenomenon is still lacking. In this paper, we study the performance gain phenomenon from a statistical learning perspective. We show that Sinkhorn-Knopp is equivalent to an expectation-maximization (EM) algorithm of learning a Gaussian mixture model of the image patches. By establishing the correspondence between the steps of Sinkhorn-Knopp and the EM algorithm, we provide a geometrical interpretation of the symmetrization process. This observation allows us to develop a new denoising algorithm called Gaussian mixture model symmetric smoothing filter (GSF). GSF is an extension of the Sinkhorn-Knopp and is a generalization of the original smoothing filters. Despite its simple formulation, GSF outperforms many existing smoothing filters and has a similar performance compared with several state-of-the-art denoising algorithms.

  13. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Background: The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
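    A hedged sketch of fitting a two-component binomial mixture of gene detection probabilities by EM, in the spirit of the approach described above; the data are synthetic, and the capture-recapture correction for never-observed genes is omitted.

```python
import numpy as np
from scipy.stats import binom

# Hedged sketch: EM for a two-component binomial mixture of gene detection
# probabilities across N genomes (toy data; zero-truncation correction for
# genes never observed in any genome is not included here).
rng = np.random.default_rng(3)
N = 20
counts = np.concatenate([rng.binomial(N, 0.95, 300),   # core-like genes
                         rng.binomial(N, 0.10, 700)])  # accessory genes

pi, p = np.array([0.5, 0.5]), np.array([0.8, 0.3])
for _ in range(200):
    # E-step: responsibility of each component for each gene family
    like = np.stack([pi[k] * binom.pmf(counts, N, p[k]) for k in range(2)])
    r = like / like.sum(axis=0)
    # M-step: update mixing weights and detection probabilities
    pi = r.mean(axis=1)
    p = (r * counts).sum(axis=1) / (N * r.sum(axis=1))

print("weights:", pi.round(3), "detection probs:", p.round(3))
```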

  14. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach of these data which is complementary to dedicated analyses: By taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  15. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  16. Dynamic Monte Carlo simulation of the one-dimensional Potts model

    Energy Technology Data Exchange (ETDEWEB)

    Karma, A.S.

    1982-12-01

    Monte Carlo results are presented for a variety of one-dimensional q-state Potts models. Our calculation confirms the expected universal value z = 2 for the dynamic scaling exponent. Our results also indicate that an increase in q, at fixed correlation length, drives the dynamics into the scaling regime.
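    A minimal sketch of single-spin-flip Metropolis dynamics for a one-dimensional q-state Potts chain with periodic boundaries (toy parameters; estimating the dynamic exponent z would additionally require a relaxation-time analysis):

```python
import numpy as np

# Hedged sketch: Metropolis dynamics for a 1-D q-state Potts chain.
def metropolis_potts_1d(L=200, q=3, beta=1.5, sweeps=200, seed=6):
    rng = np.random.default_rng(seed)
    s = rng.integers(q, size=L)
    for _ in range(sweeps * L):
        i = int(rng.integers(L))
        new = int(rng.integers(q))
        left, right = s[(i - 1) % L], s[(i + 1) % L]
        # Delta E for E = -J * sum_i delta(s_i, s_{i+1}) with J = 1
        d_e = (int(s[i] == left) + int(s[i] == right)
               - int(new == left) - int(new == right))
        if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
            s[i] = new
    return s

spins = metropolis_potts_1d()
print("majority-state fraction:", np.bincount(spins).max() / len(spins))
```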

  17. Surprising convergence of the Monte Carlo renormalization group for the three-dimensional Ising model.

    Science.gov (United States)

    Ron, Dorit; Brandt, Achi; Swendsen, Robert H

    2017-05-01

    We present a surprisingly simple approach to high-accuracy calculations of the critical properties of the three-dimensional Ising model. The method uses a modified block-spin transformation with a tunable parameter to improve convergence in the Monte Carlo renormalization group. The block-spin parameter must be tuned differently for different exponents to produce optimal convergence.

  18. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  19. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    Science.gov (United States)

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…

  20. Monte Carlo modeling of non-tracking concentrator using light trapping techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, B.A.; Knasel, T.M.; Houghton, A.; Bain, C.N.

    1980-01-01

    Monte Carlo methods have been used to model the performance of a non-tracking solar collector which uses light trapping to provide concentration. Light trapping results from total internal reflection of light diffused from the bottom surface of the cover sheet of the collector. Gains of 20% to 30% are readily achievable, and have been measured on a simple prototype.

  1. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction...

  2. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast and e...

  3. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients' proportions sum to one. The thesis begins by introducing mixture

  4. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    Science.gov (United States)

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  5. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. In contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
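    The decorator idea can be illustrated as follows: a basic building block exposes random position sampling, and a decorator wraps any such model to add structure. The classes and the clumpiness recipe below are invented for illustration and are not SKIRT's actual API.

```python
import numpy as np

# Hedged sketch of a decorator-based design for density models: building
# blocks expose random_position(), and decorators wrap them to add structure.
class UniformSphere:
    def __init__(self, radius):
        self.radius = radius
    def random_position(self, rng):
        while True:                      # rejection sampling inside the cube
            p = rng.uniform(-self.radius, self.radius, 3)
            if np.dot(p, p) <= self.radius ** 2:
                return p

class ClumpDecorator:
    """Displaces a fraction of positions into compact clumps
    (illustrative only, not SKIRT's algorithm)."""
    def __init__(self, base, fraction=0.3, clump_sigma=0.05):
        self.base, self.fraction, self.sigma = base, fraction, clump_sigma
        self.centers = None
    def random_position(self, rng):
        if self.centers is None:         # lazily seed clump centers
            self.centers = [self.base.random_position(rng) for _ in range(10)]
        if rng.random() < self.fraction:
            c = self.centers[rng.integers(len(self.centers))]
            return c + rng.normal(0.0, self.sigma, 3)
        return self.base.random_position(rng)

rng = np.random.default_rng(4)
model = ClumpDecorator(UniformSphere(1.0))   # decorators can be chained
points = np.array([model.random_position(rng) for _ in range(1000)])
print(points.mean(axis=0).round(3))
```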

  6. MODELLING AND PARAMETER ESTIMATION IN REACTIVE CONTINUOUS MIXTURES: THE CATALYTIC CRACKING OF ALKANES. PART I

    OpenAIRE

    PEIXOTO, F. C.; Medeiros,J. L.

    1999-01-01

    Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  7. MODELLING AND PARAMETER ESTIMATION IN REACTIVE CONTINUOUS MIXTURES: THE CATALYTIC CRACKING OF ALKANES. PART I

    Directory of Open Access Journals (Sweden)

    PEIXOTO F. C.

    1999-01-01

    Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  8. Tractography Segmentation Using a Hierarchical Dirichlet Processes Mixture Model

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W. Eric L.; Westin, Carl-Fredrik

    2010-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learnt from data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learnt from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects without subsampling. We present results on multiple data sets, the largest of which has more than 120,000 fibers. PMID:19694256

  9. Segway 2.0: Gaussian mixture models and minibatch training.

    Science.gov (United States)

    Chan, Rachel C W; Libbrecht, Maxwell W; Roberts, Eric G; Bilmes, Jeffrey A; Noble, William Stafford; Hoffman, Michael M; Birol, Inanc

    2018-02-15

    Segway performs semi-automated genome annotation, discovering joint patterns across multiple genomic signal datasets. We discuss a major new version of Segway and highlight its ability to model data with substantially greater accuracy. Major enhancements in Segway 2.0 include the ability to model data with a mixture of Gaussians, enabling capture of arbitrarily complex signal distributions, and minibatch training, leading to better learned parameters. Segway and its source code are freely available for download at http://segway.hoffmanlab.org. We have made available scripts (https://doi.org/10.5281/zenodo.802939) and datasets (https://doi.org/10.5281/zenodo.802906) for this paper's analysis. michael.hoffman@utoronto.ca. Supplementary data are available at Bioinformatics online.

  10. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2009-01-01

    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended onto adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend...... of the MPTA onto liquids has been tested on experimental binary and ternary adsorption data. We show that, for the set of experimental data considered in this work, the MPTA model is capable of correlating binary adsorption equilibria. Based on binary adsorption data, the theory can then predict ternary...... adsorption equilibria. Good agreement with the theoretical predictions is achieved in most of the cases. Some limitations of the model are also discussed....

  11. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    Science.gov (United States)

    2011-03-01

    …neglected, and it is assumed that molecules only interact with each other by physically colliding. In other models, such as the Sutherland model, that take intermolecular forces into account, molecules can interact simply by being within a certain radius of each other. The Sutherland model adds weak

  12. Multilevel Monte Carlo and Improved Timestepping Methods in Atmospheric Dispersion Modelling

    OpenAIRE

    Katsiolides, G; Muller, EH; Scheichl, R.; Shardlow, T.; Giles, MB; Thomson, DJ

    2017-01-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an ef...

  13. Prediction of the bubble point pressure for the binary mixture of ethanol and 1,1,1,2,3,3,3-heptafluoropropane from Gibbs ensemble Monte Carlo simulations using the TraPPE force field

    Energy Technology Data Exchange (ETDEWEB)

    Rai, N; Rafferty, J L; Maiti, A; Siepmann, I

    2007-02-28

    Configurational-bias Monte Carlo simulations in the Gibbs ensemble using the TraPPE force field were carried out to predict the pressure-composition diagrams for the binary mixture of ethanol and 1,1,1,2,3,3,3-heptafluoropropane at 283.17 and 343.13 K. A new approach is introduced that allows one to scale predictions at one temperature based on the differences in Gibbs free energies of transfer between experiment and simulation obtained at another temperature. A detailed analysis of the molecular structure and hydrogen bonding for this fluid mixture is provided.

  14. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    … multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from

  15. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas the response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five of the six tested mixtures. The exception was the mixture MMI (TPO inhibitor)-KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which produced similar CA and RA predictions, making a conclusive result difficult. These results support the phenomenological similarity criterion, stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of
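    The two prediction concepts compared above can be sketched numerically: concentration addition (CA) solves sum_i c_i / EC_{E,i} = 1 for the joint effect E, while response addition (RA) combines effects as E = 1 - prod_i (1 - E_i). The dose-response curves and all parameters below are hypothetical.

```python
import numpy as np

# Hedged sketch: CA vs RA predictions for a binary mixture, assuming
# hypothetical log-logistic dose-response curves for each chemical.
def effect(c, ec50, slope):
    """Fraction of maximal effect at concentration c."""
    return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-12)) ** slope)

def inverse_effect(E, ec50, slope):
    """Concentration producing effect E (inverse of the curve above)."""
    return ec50 * (E / (1.0 - E)) ** (1.0 / slope)

c1, c2 = 2.0, 10.0                 # hypothetical mixture concentrations
p1, p2 = (5.0, 1.5), (40.0, 2.0)   # hypothetical (EC50, slope) pairs

# CA: bisection on E such that the toxic units sum to one
lo, hi = 1e-6, 1 - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    s = c1 / inverse_effect(mid, *p1) + c2 / inverse_effect(mid, *p2)
    lo, hi = (mid, hi) if s > 1 else (lo, mid)
E_ca = 0.5 * (lo + hi)

# RA: independent action
E_ra = 1 - (1 - effect(c1, *p1)) * (1 - effect(c2, *p2))
print(f"CA: {E_ca:.3f}  RA: {E_ra:.3f}")
```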

  16. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors.

    Directory of Open Access Journals (Sweden)

    Zhuo Zhang

    In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily, while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation, and methods to correct these effects, have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors and associated correction methods is much sparser. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show that the proposed method provides much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters.

  17. Robot Obstacle Avoidance Learning Based on Mixture Models

    Directory of Open Access Journals (Sweden)

    Huiwen Zhang

    2016-01-01

    We briefly survey the existing obstacle avoidance algorithms; then a new obstacle avoidance learning framework based on learning from demonstration (LfD) is proposed. The main idea is to imitate the obstacle avoidance mechanism of human beings, in which humans learn to make a decision based on the sensor information obtained by interacting with the environment. Firstly, we endow robots with obstacle avoidance experience by teaching them to avoid obstacles in different situations. In this process, a lot of data are collected as a training set; then, to encode the training set data, which is equivalent to extracting the constraints of the task, a Gaussian mixture model (GMM) is used. Secondly, a smooth obstacle-free path is generated by Gaussian mixture regression (GMR). Thirdly, a metric of imitation performance is constructed to derive a proper control policy. The proposed framework shows excellent generalization performance, which means that the robots can fulfill the obstacle avoidance task efficiently in a dynamic environment. More importantly, the framework allows learning a wide variety of skills, such as grasping and manipulation work, which makes it possible to build a robot with versatile functions. Finally, simulation experiments are conducted on a Turtlebot robot to verify the validity of our algorithms.
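    A minimal sketch of Gaussian mixture regression (GMR), the step that turns a GMM learned from demonstrations into a smooth output for a query input; the mixture parameters here are toy values, not learned from data.

```python
import numpy as np

# Hedged sketch of GMR: predict the output at a query input as a
# responsibility-weighted sum of per-component conditional means.
def gmr(weights, means, covs, x, in_dim):
    """means: (K, D); covs: (K, D, D); x: (in_dim,) query input."""
    i, o = slice(0, in_dim), slice(in_dim, means.shape[1])
    num, den = 0.0, 0.0
    for w, m, S in zip(weights, means, covs):
        Sii_inv = np.linalg.inv(S[i, i])
        diff = x - m[i]
        # responsibility of this component at the query input
        h = w * np.exp(-0.5 * diff @ Sii_inv @ diff) / \
            np.sqrt(np.linalg.det(2 * np.pi * S[i, i]))
        cond_mean = m[o] + S[o, i] @ Sii_inv @ diff   # conditional Gaussian
        num, den = num + h * cond_mean, den + h
    return num / den

# Toy 1-input/1-output mixture with two components (hypothetical values)
w = np.array([0.5, 0.5])
mu = np.array([[0.0, 0.0], [2.0, 1.0]])
cov = np.array([np.eye(2) * 0.2, np.eye(2) * 0.2])
print(gmr(w, mu, cov, np.array([1.0]), in_dim=1))
```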

  18. A CARTILAGE GROWTH MIXTURE MODEL WITH COLLAGEN REMODELING: VALIDATION PROTOCOLS

    Science.gov (United States)

    Klisch, Stephen M.; Asanbaeva, Anna; Oungoulian, Sevan R.; Masuda, Koichi; Thonar, Eugene J-MA; Davol, Andrew; Sah, Robert L.

    2009-01-01

    A cartilage growth mixture (CGM) model is proposed to address limitations of a model used in a previous study. New stress constitutive equations for the solid matrix are derived and collagen (COL) remodeling is incorporated into the CGM model by allowing the intrinsic COL material constants to evolve during growth. An analytical validation protocol based on experimental data from a recent in vitro growth study is developed. Available data included measurements of tissue volume, biochemical composition, and tensile modulus for bovine calf articular cartilage (AC) explants harvested at three depths and incubated for 13 days in 20% FBS and 20% FBS+β-aminopropionitrile. The proposed CGM model can match tissue biochemical content and volume exactly while predicting theoretical values of tensile moduli that do not significantly differ from experimental values. Also, theoretical values of a scalar COL remodeling factor are positively correlated with COL crosslink content, and mass growth functions are positively correlated with cell density. The results suggest that the CGM model may help to guide in vitro growth protocols for AC tissue via the a priori prediction of geometric and biomechanical properties. PMID:18532855

  19. Supervoxel Segmentation with Voxel-Related Gaussian Mixture Model.

    Science.gov (United States)

    Ban, Zhihua; Chen, Zhong; Liu, Jianguo

    2018-01-05

    Extending superpixel segmentation with an additional constraint on temporal consistency, supervoxel segmentation partitions video frames into atomic segments. In this work, we propose a novel scheme for supervoxel segmentation to address the problem of new and moving objects, in which segmentation is performed on every two consecutive frames so that each internal frame has two valid superpixel segmentations. This scheme provides coarse-grained parallelism, and subsequent algorithms can validate their results using the two segmentations, which further improves robustness. To implement this scheme, a voxel-related Gaussian mixture model (GMM) is proposed, in which each supervoxel is assumed to be distributed in a local region and represented by two Gaussian distributions that share the same color parameters to capture temporal consistency. Our algorithm has lower complexity with respect to frame size than the traditional GMM. According to our experiments, it also outperforms the state-of-the-art in accuracy.

  20. Two Dimensional Projection Pursuit Applied to Gaussian Mixture Model Fitting

    Directory of Open Access Journals (Sweden)

    Natella Likhterov

    2003-08-01

    In this paper we seek a Gaussian mixture model (GMM) of an n-variate probability density function. Usually the parameters of GMMs are determined by a maximum likelihood (ML) criterion. A practical deficiency of ML fitting of GMMs is poor performance when dealing with high-dimensional data, since a large sample size is needed to match the accuracy that is possible in low dimensions. We propose a method to fit the GMM to multivariate data which is based on the two-dimensional projection pursuit (PP) method. By means of simulations we compare the proposed method with a one-dimensional PP method for GMMs. We conclude that a combination of one- and two-dimensional PP methods could be useful in some applications.

  1. Bayesian Prediction under a Finite Mixture of Generalized Exponential Lifetime Model

    Directory of Open Access Journals (Sweden)

    Mohamed Mohmoud Mohamed

    2014-12-01

    In this article a heterogeneous population is represented by a mixture of two generalized exponential distributions. Using the two-sample prediction technique, Bayesian prediction bounds for future order statistics are obtained based on type II censored and complete data. A numerical example is given to illustrate the procedures, and the accuracy of the prediction intervals is investigated via extensive Monte Carlo simulation.

  2. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  3. Monte Carlo simulation of three-dimensional dilute Ising model

    Science.gov (United States)

    Chowdhury, Debashish; Stauffer, Dietrich

    1986-07-01

    A multispin coding program for site-diluted Ising models on large simple cubic lattices is described in detail. The spontaneous magnetization is computed as a function of temperature, and the critical temperature as a function of concentration is found to agree well with the data of Marro et al.(4) and Landau(3) for smaller systems.

  4. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative endpoints (density and bubble points) using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  5. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM).

    Science.gov (United States)

    Oprisiu, Ioana; Novotarskyi, Sergii; Tetko, Igor V

    2013-01-15

    The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative endpoints (density and bubble points) using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  6. The NIMO Monte Carlo model for box-air-mass factor and radiance calculations

    Science.gov (United States)

    Hay, Timothy D.; Bodeker, Greg E.; Kreher, Karin; Schofield, Robyn; Liley, J. Ben; Scherer, Martin; McDonald, Adrian J.

    2012-06-01

    A new fully spherical multiple scattering Monte Carlo radiative transfer model named NIMO (NIWA Monte Carlo model) is presented. The ray tracing algorithm is described in detail along with the treatment of scattering and absorption, and the simulation of backward adjoint trajectories. The primary application of NIMO is the calculation of box-air-mass factors (box-AMFs), which are used to convert slant column densities (SCDs) of trace gases, derived from UV-visible multiple axis Differential Optical Absorption Spectroscopy (MAX-DOAS) measurements, into vertical column densities (VCDs). Box-AMFs are also employed as weighting functions for optimal estimation retrievals of vertical trace gas profiles from SCDs. Monte Carlo models are well suited to AMF calculations at high solar zenith angles (SZA) and at low viewing elevation angles where multiple scattering is important. Additionally, the object-oriented structure of NIMO makes it easily extensible to new applications by plugging in objects for new absorbing or scattering species. Box-AMFs and radiances, calculated for various wavelengths, SZAs, viewing elevation and azimuth angles and aerosol scenarios, are compared with results from nine other models using a set of exercises from a recent radiative transfer model intercomparison. NIMO results for these simulations are well within the range of variability of the other models.

  7. Hierarchical Bayesian mixture modelling for antigen-specific T-cell subtyping in combinatorially encoded flow cytometry studies

    DEFF Research Database (Denmark)

    Lin, Lin; Chan, Cliburn; Hadrup, Sine R

    2013-01-01

    Novel uses of automated flow cytometry technology for measuring levels of protein markers on thousands to millions of cells are promoting increasing need for relevant, customized Bayesian mixture modelling approaches in many areas of biomedical research and application. In studies of immune...... in the ability to characterize variation in immune responses involving larger numbers of functionally differentiated cell subtypes. We describe novel classes of Markov chain Monte Carlo methods for model fitting that exploit distributed GPU (graphics processing unit) implementation. We discuss issues of cellular...... subtype identification in this novel, general model framework, and provide a detailed example using simulated data. We then describe application to a data set from an experimental study of antigen-specific T-cell subtyping using combinatorially encoded assays in human blood samples. Summary comments...

  8. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
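    The ranking rule described above can be sketched directly: given a fitted two-component mixture of (log) somatic cell scores, animals are ranked by the posterior probability of membership in the putatively uninfected component. All parameters below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch: posterior membership in the uninfected component of a
# two-component Gaussian mixture of log somatic cell scores.
pi_healthy = 0.7
mu_h, sd_h = 3.0, 0.8      # uninfected component (hypothetical)
mu_m, sd_m = 5.5, 1.2      # mastitic component (hypothetical)

def p_uninfected(scc_log):
    f_h = pi_healthy * norm.pdf(scc_log, mu_h, sd_h)
    f_m = (1 - pi_healthy) * norm.pdf(scc_log, mu_m, sd_m)
    return f_h / (f_h + f_m)

scores = np.array([2.5, 3.8, 4.9, 6.1])
ranking = scores[np.argsort(-p_uninfected(scores))]  # best candidates first
print(p_uninfected(scores).round(3), ranking)
```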

  9. Modeling and analysis of personal exposures to VOC mixtures using copulas.

    Science.gov (United States)

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-02-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of the fitted models is assessed using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture; however, mixture fractions were generally heterogeneous, in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver
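
    As a rough illustration of the copula idea described above — a minimal sketch covering only the Gaussian copula (one of the five candidates named), with empirical marginals and entirely hypothetical "VOC" data — the fit-and-simulate workflow might look as follows.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(x, y):
    """Estimate the Gaussian-copula correlation from two samples
    via the normal-scores (rank) transform."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1)          # empirical CDF values in (0, 1)
    v = stats.rankdata(y) / (n + 1)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.corrcoef(z1, z2)[0, 1]

def simulate_joint(x, y, rho, size=10_000, seed=None):
    """Draw joint samples with the fitted copula and empirical marginals."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=size)
    u = stats.norm.cdf(z)                    # copula samples in [0, 1]^2
    # Invert the empirical marginals by quantile lookup.
    return np.quantile(x, u[:, 0]), np.quantile(y, u[:, 1])

# Toy usage with hypothetical lognormal concentrations (not RIOPA data):
rng = np.random.default_rng(0)
benzene = rng.lognormal(0.0, 1.0, 500)
toluene = np.exp(0.7 * np.log(benzene) + rng.normal(0.0, 0.5, 500))
rho = fit_gaussian_copula(benzene, toluene)
xs, ys = simulate_joint(benzene, toluene, rho, seed=1)
```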

  10. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...... (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary...

  11. NEW MODEL FOR MINES AND TRANSPORTATION TUNNELS EXTERNAL DOSE CALCULATION USING MONTE CARLO SIMULATION.

    Science.gov (United States)

    Allam, Kh A

    2017-12-01

    In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The tunnel is modeled as a cylindrical shell of finite thickness with an entrance and with or without an exit. A photon transport model was applied for exposure dose calculations. New software implementing the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, and with the density and composition of the tunnel building material, was studied. The new model offers greater flexibility for realistic external dose calculations in any cylindrical tunnel structure. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
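
    To make the geometry concrete, here is a toy Monte Carlo sketch in the spirit of the abstract — emphatically not the authors' Delphi code. It estimates the relative dose at a point inside a cylindrical tunnel whose wall emits uniformly, using an inverse-square kernel with optional air attenuation; all parameter values are hypothetical and scattering build-up is ignored.

```python
import numpy as np

def tunnel_dose_mc(radius=2.0, length=50.0, mu_air=0.0, n=200_000,
                   detector=(0.0, 0.0, 25.0), seed=0):
    """Toy MC: relative dose at a point from gamma emitters distributed
    uniformly over a cylindrical tunnel wall (mu_air in 1/m)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # angular position on the wall
    z = rng.uniform(0.0, length, n)            # axial position on the wall
    src = np.column_stack((radius * np.cos(theta),
                           radius * np.sin(theta), z))
    r = np.linalg.norm(src - np.asarray(detector), axis=1)
    kernel = np.exp(-mu_air * r) / (4.0 * np.pi * r**2)
    area = 2.0 * np.pi * radius * length       # total emitting wall area
    return area * kernel.mean()                # relative dose rate

print(tunnel_dose_mc())                            # detector mid-tunnel
print(tunnel_dose_mc(detector=(0.0, 0.0, 0.0)))    # detector at the entrance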

  12. Effect of Phenolic Compound Mixtures on the Viability of Listeria monocytogenes in Meat Model

    OpenAIRE

    María José Rodríguez Vaquero; María Cristina Manca de Nadra; Pedro Adrián Aredes Fernández

    2011-01-01

    The aim of this work is to investigate the synergistic antibacterial effect of phenolic compound mixtures against Listeria monocytogenes in brain heart infusion (BHI) medium, and to select the best mixture for testing their antibacterial activity in a meat model system. In BHI medium, the most effective mixtures were those of gallic and caffeic acids, gallic and protocatechuic acids, and rutin and quercetin. At the concentration of 200 mg/L, the mixtures of gallic and protocatechuic, then gal...

  13. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    Science.gov (United States)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
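
    The MLMC idea above is a telescoping sum over discretization levels, with fine and coarse paths coupled through shared Brownian increments. The sketch below applies it to a toy Ornstein-Uhlenbeck SDE rather than the paper's boundary-layer model, with a crude geometric sample allocation; all parameter values are assumptions for illustration.

```python
import numpy as np

def coupled_euler(a, sigma, T, nf, n_samples, rng):
    """One MLMC level for dX = -a X dt + sigma dW, X(0) = 1: Euler paths with
    nf fine steps and nf/2 coarse steps driven by the *same* Brownian
    increments. Returns P_f - P_c (or P_f alone on the coarsest level)."""
    hf = T / nf
    dw = rng.normal(0.0, np.sqrt(hf), size=(n_samples, nf))
    xf = np.ones(n_samples)
    for k in range(nf):
        xf += -a * xf * hf + sigma * dw[:, k]
    if nf == 1:
        return xf
    xc = np.ones(n_samples)
    hc = 2.0 * hf
    for k in range(nf // 2):
        xc += -a * xc * hc + sigma * (dw[:, 2 * k] + dw[:, 2 * k + 1])
    return xf - xc

def mlmc_estimate(a=1.0, sigma=0.5, T=1.0, levels=6, n0=100_000, seed=0):
    """Telescoping MLMC estimator of E[X_T]; the exact value is exp(-a*T)."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for level in range(levels):
        n = max(n0 // 4**level, 1_000)   # correction variance decays per level
        est += coupled_euler(a, sigma, T, 2**level, n, rng).mean()
    return est

print(mlmc_estimate(), np.exp(-1.0))     # MLMC estimate vs exact mean
```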

  14. Classification of microarray data with factor mixture models.

    Science.gov (United States)

    Martella, Francesca

    2006-01-15

    The classification of a few tissue samples on a very large number of genes represents a non-standard problem in statistics but a usual one in microarray expression data analysis. In fact, the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. We consider high-density oligonucleotide microarray data, where the expression level is associated with an 'absolute call', which represents a qualitative indication of whether or not a transcript is detected within a sample. The 'absolute call' is generally not taken into consideration in analyses. In contrast to frequently used cluster analysis methods for gene expression data, we consider the problem of tissue classification and variable selection. We adopted methodologies formulated by Ghahramani and Hinton and by Rocci and Vichi for simultaneous dimensional reduction of genes and classification of tissues, trying to identify genes (denoted 'markers') that are able to distinguish between two known different classes of tissue samples. In this respect, we propose a generalization of the approach of McLachlan et al., estimating the distribution of the log LR statistic for testing the one- versus two-component hypothesis in the mixture model for each gene considered individually, using a parametric bootstrap approach. We compare conditional (on 'absolute call') and unconditional analyses performed on the dataset described in Golub et al. We show that the proposed techniques improve the classification of tissue samples with respect to known results on the same benchmark dataset. The software of Ghahramani and Hinton is written in Matlab and available as 'Mixture of Factor Analyzers' on http://www.gatsby.ucl.ac.uk/~zoubin/software.html while the software of Rocci and Vichi is available upon request from the authors.
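
    The McLachlan-style parametric bootstrap of the log LR statistic mentioned above can be sketched as follows — a minimal per-gene version using scikit-learn, with the sample sizes and bootstrap count chosen arbitrarily for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lr_stat(x):
    """-2 log LR for H0: one Gaussian component vs H1: two components."""
    x = np.asarray(x, float).reshape(-1, 1)
    g1 = GaussianMixture(n_components=1).fit(x)
    g2 = GaussianMixture(n_components=2, n_init=5).fit(x)
    return 2.0 * (g2.score(x) - g1.score(x)) * len(x)   # score() = mean loglik

def bootstrap_pvalue(x, n_boot=200, seed=0):
    """Parametric bootstrap of the LR statistic under the one-component null."""
    rng = np.random.default_rng(seed)
    observed = lr_stat(x)
    g1 = GaussianMixture(n_components=1).fit(np.asarray(x).reshape(-1, 1))
    mu = g1.means_[0, 0]
    sd = np.sqrt(g1.covariances_[0, 0, 0])
    null_stats = np.array([lr_stat(rng.normal(mu, sd, size=len(x)))
                           for _ in range(n_boot)])
    return float(np.mean(null_stats >= observed))

# Toy usage: a gene whose expression is genuinely bimodal.
rng = np.random.default_rng(1)
expr = np.concatenate([rng.normal(0, 1, 40), rng.normal(4, 1, 40)])
print(bootstrap_pvalue(expr))   # small p-value -> two components favored
```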

  15. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    Science.gov (United States)

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than, or comparable to, those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
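
    The partitioning step above can be illustrated numerically: the intersection points of the weighted component densities are exactly the gray levels where the most probable (dominant) component changes. A minimal sketch, assuming 1D gray levels and hypothetical component counts, locates them on a grid:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_partition(gray_levels, n_components=3):
    """Fit a GMM to pixel gray levels and return the gray levels where the
    dominant component changes -- a numeric stand-in for the intersection
    points of the weighted Gaussian components."""
    x = np.asarray(gray_levels, float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, n_init=3).fit(x)
    grid = np.linspace(x.min(), x.max(), 1024).reshape(-1, 1)
    dominant = gmm.predict_proba(grid).argmax(axis=1)
    changes = np.nonzero(np.diff(dominant))[0]
    return grid.ravel()[changes]

# Toy bimodal "image": dark background plus a brighter foreground.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 20, 2000)])
print(gmm_partition(pixels, n_components=2))   # one boundary between modes
```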

  16. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, the environmental risk was assessed, addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from textile, metal-mechanic and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation of the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  17. Monte Carlo modeling of ion beam induced secondary electrons

    Energy Technology Data Exchange (ETDEWEB)

    Huh, U., E-mail: uhuh@vols.utk.edu [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Cho, W. [Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100 (United States); Joy, D.C. [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Center for Nanophase Materials Science, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2016-09-15

    Ion induced secondary electrons (iSE) can produce high-resolution images, at energies ranging from a few eV to 100 keV, over a wide range of materials. The interpretation of such images requires knowledge of the secondary electron yields (iSE δ) for each of the elements and materials present and as a function of the incident beam energy. Experimental data for helium ions are currently limited to 40 elements and six compounds, while other ions are not well represented. To overcome this limitation, we propose a simple procedure based on the comprehensive work of Berger et al. Here we show that in the energy range of 10–100 keV the Berger et al. data for elements and compounds can be accurately represented by a single universal curve. The agreement between the limited experimental data that are available and the predictive model is good, and the model has been found to provide reliable yield data for a wide range of elements and compounds. - Highlights: • The Universal ASTAR Yield Curve was derived from data recently published by NIST. • IONiSE incorporated with the Curve will predict iSE yield for elements and compounds. • This approach can also handle other ion beams by changing basic scattering profile.

  18. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  19. Two-Part Factor Mixture Modeling: Application to an Aggressive Behavior Measurement Instrument

    Science.gov (United States)

    Kim, YoungKoung; Muthen, Bengt O.

    2009-01-01

    This study introduces a two-part factor mixture model as an alternative analysis approach to modeling data where strong floor effects and unobserved population heterogeneity exist in the measured items. As the name suggests, a two-part factor mixture model combines a two-part model, which addresses the problem of strong floor effects by…

  20. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    While binary (A,B) symmetric polymer mixtures in d = 3 dimensions have an unmixing critical point that belongs to the 3d Ising universality class and crosses over to mean field behavior for very long chains, the critical behavior of mixtures confined into thin film geometry falls in the 2d Ising class irrespective of chain length.

  1. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes i...

  2. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    Abstract. While binary (A,B) symmetric polymer mixtures in d = 3 dimensions have an unmixing critical point that belongs to the 3d Ising universality class and crosses over to mean field behavior for very long chains, the critical behavior of mixtures confined into thin film geometry falls in the 2d Ising class irrespective of chain ...

  3. Monte Carlo modeling of spatially complex wrist tissue for the optimization of optical pulse oximeters

    Science.gov (United States)

    Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.

    2017-02-01

    Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon the established simulation methods through more accurate representations of the simulated tissues in the wrist as well as the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source was altered to give consideration to both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.

  4. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers; here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied with different numbers of fuel rods. The three fuel configurations use 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  5. Monte Carlo path sampling approach to modeling aeolian sediment transport

    Science.gov (United States)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power whereas cellular automata-based approaches are computationally efficient

  6. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
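
    The Binder cumulant mentioned above is a standard finite-size tool: the curves U_L(T) = 1 - <m^4>/(3<m^2>^2) for different lattice sizes L cross near the critical temperature. A minimal pure-Python sketch for the 2D Ising model (deliberately slow and with hypothetical run lengths) might read:

```python
import numpy as np

def metropolis_sweep(s, beta, rng):
    """One Metropolis sweep of the 2D Ising model, periodic boundaries."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        nb = (s[(i+1) % L, j] + s[(i-1) % L, j]
              + s[i, (j+1) % L] + s[i, (j-1) % L])
        dE = 2.0 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

def binder_cumulant(L, beta, sweeps=2000, burn=500, seed=0):
    """U_L = 1 - <m^4> / (3 <m^2>^2), sampled after a burn-in period."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    m2 = m4 = 0.0
    count = 0
    for t in range(sweeps):
        metropolis_sweep(s, beta, rng)
        if t >= burn:
            m = s.mean()
            m2 += m**2
            m4 += m**4
            count += 1
    m2 /= count
    m4 /= count
    return 1.0 - m4 / (3.0 * m2**2)

# Near the exact critical point beta_c = log(1 + sqrt(2))/2 ~ 0.4407:
print(binder_cumulant(L=8, beta=0.44), binder_cumulant(L=16, beta=0.44))
```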

  7. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    Science.gov (United States)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
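
    The constrained least-squares idea above can be sketched generically: discretize the kernel relation between imaginary-time data and the spectral function, add a smoothness penalty, and solve under a positivity bound. The version below is one plausible variant (Tikhonov-regularized nonnegative least squares via SciPy), not the authors' implementation; the grids, regularization weight lam, and the synthetic test are all assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

def continue_to_real_axis(tau, g_tau, omega, beta, lam=1e-3):
    """Recover a nonnegative spectral function A(omega) from G(tau) by
    minimizing ||K A - G||^2 + lam ||D A||^2 subject to A >= 0, with the
    fermionic kernel K = exp(-tau*w) / (1 + exp(-beta*w))."""
    K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))
    K *= omega[1] - omega[0]                  # uniform quadrature weight
    n = len(omega)
    D = np.diff(np.eye(n), n=2, axis=0)       # second-difference smoothness
    A_mat = np.vstack([K, np.sqrt(lam) * D])
    b = np.concatenate([g_tau, np.zeros(D.shape[0])])
    return lsq_linear(A_mat, b, bounds=(0.0, np.inf)).x

# Tiny synthetic check: data generated from a single peak at omega = 0.5.
beta = 10.0
tau = np.linspace(0.0, beta, 30)
omega = np.linspace(-3.0, 3.0, 121)
g = np.exp(-tau * 0.5) / (1.0 + np.exp(-beta * 0.5))
A = continue_to_real_axis(tau, g, omega, beta)
print(omega[np.argmax(A)])    # recovered peak position, near 0.5
```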

  8. Investigation of Multicritical Phenomena in ANNNI Model by Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    A. K. Murtazaev

    2012-01-01

    Full Text Available The anisotropic Ising model with competing interactions is investigated in wide temperature range and |J1/J| parameters by means of Monte Carlo methods. Static critical exponents of the magnetization, susceptibility, heat capacity, and correlation radius are calculated in the neighborhood of Lifshitz point. According to obtained results, a phase diagram is plotted, the coordinates of Lifshitz point are defined, and a character of multicritical behavior of the system is detected.

  9. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...... and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example....
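
    The scalability coefficients referenced above are built from counts of Guttman errors. As a rough, hypothetical illustration (not the authors' coefficients or their MCMC p-values), the sketch below counts Guttman errors in a binary person-by-item matrix:

```python
import numpy as np

def guttman_errors(X):
    """Count Guttman errors in a binary person-by-item matrix: with items
    ordered from easiest to hardest (by total score), an error is a pair
    (easier item failed, harder item passed) within one response pattern."""
    X = np.asarray(X)
    order = np.argsort(-X.sum(axis=0))     # easiest (most 1s) first
    Xs = X[:, order]
    errors = 0
    for row in Xs:
        for j in range(len(row)):
            if row[j] == 0:
                errors += row[j + 1:].sum()   # harder items answered correctly
    return int(errors)

# A perfect Guttman pattern yields zero errors:
perfect = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(guttman_errors(perfect))   # -> 0
```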

  10. Mixture Markov regression model with application to mosquito surveillance data analysis.

    Science.gov (United States)

    Gao, Xin; Cao, Yurong R; Ogden, Nicholas; Aubin, Louise; Zhu, Huaiping P

    2017-05-01

    A mixture Markov regression model is proposed to analyze heterogeneous time series data. Mixture quasi-likelihood is formulated to model time series with mixture components and exogenous variables. The parameters are estimated by quasi-likelihood estimating equations. A modified EM algorithm is developed for the mixture time series model. The model and proposed algorithm are tested on simulated data and applied to mosquito surveillance data in Peel Region, Canada. © 2017 Her Majesty the Queen in Right of Canada. Reproduced with the permission of the Minister of Health.

  11. Monte Carlo electron source model validation for an Elekta Precise linac.

    Science.gov (United States)

    Ali, O A; Willemse, C A; Shaw, W; O'Reilly, F H J; du Plessis, F C P

    2011-05-01

    Electron radiation therapy is used frequently for the treatment of skin cancers and superficial tumors especially in the absence of kilovoltage treatment units. Head-and-neck treatment sites require accurate dose distribution calculation to minimize dose to critical structures, e.g., the eye, optic chiasm, nerves, and parotid gland. Monte Carlo simulations can be regarded as the dose calculation method of choice because it can simulate electron transport through any tissue and geometry. In order to use this technique, an accurate electron beam model should be used. In this study, a two point-source electron beam model developed for an Elekta Precise linear accelerator was validated. Monte Carlo data were benchmarked against measured water tank data for a set of regular and circular fields and at 95, 100, and 110 cm source-to-skin-distance. EDR2 Film dose distribution data were also obtained for a paranasal sinus treatment case using a Rando phantom and compared with corresponding dose distribution data obtained from Monte Carlo simulations and a CMS XiO treatment planning system. A partially shielded electron field was also evaluated using a solid water phantom and EDR2 film measurements against Monte Carlo simulations using the developed source model. The major findings were that it could accurately replicate percentage depth dose and beam profile data for water measurements at source-to-skin-distances ranging between 95 and 110 cm over beam energies ranging from 4 to 15 MeV. This represents a stand-off between 0 and 15 cm. Most percentage depth dose and beam profile data (better than 95%) agreed within 2%/2 mm and nearly 100% of the data compared within 3%/3 mm. Calculated penumbra data were within 2 mm for the 20 x 20 cm2 field compared to water tank data at 95 cm source-to-skin-distance over the above energy range. Film data for the Rando phantom case showed gamma index map data that is similar in comparison with the treatment planning system and the Monte

  12. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  13. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    Science.gov (United States)

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.

  14. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  15. Addressing the Problem of Switched Class Labels in Latent Variable Mixture Model Simulation Studies

    Science.gov (United States)

    Tueller, Stephen J.; Drotar, Scott; Lubke, Gitta H.

    2011-01-01

    The discrimination between alternative models and the detection of latent classes in the context of latent variable mixture modeling depends on sample size, class separation, and other aspects that are related to power. Prior to a mixture analysis it is useful to investigate model performance in a simulation study that reflects the research…

  16. RIM: A Random Item Mixture Model to Detect Differential Item Functioning

    Science.gov (United States)

    Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…

  17. RIM: A random item mixture model to detect Differential Item Functioning

    NARCIS (Netherlands)

    Frederickx, S.; Tuerlinckx, T.; de Boeck, P.; Magis, D.

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is

  18. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    Science.gov (United States)

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers with preliminary results, the mixture modeling approach has so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements of peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
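
    The partition-then-fit idea can be sketched compactly: split the spectrum at low-intensity valleys, fit a small Gaussian mixture to each fragment, and pool the component parameters. The thresholds, the intensity-weighted resampling, and the component-count heuristic below are all assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_and_fit(mz, intensity, gap_quantile=0.05, max_comp=5):
    """Split a spectrum into fragments at low-intensity valleys, fit a GMM
    per fragment (sampling m/z proportional to intensity), and aggregate
    the component parameters as (mean, sigma, weight) tuples."""
    low = intensity < np.quantile(intensity, gap_quantile)
    fragments, start = [], None
    for k, is_gap in enumerate(low):
        if not is_gap and start is None:
            start = k
        elif is_gap and start is not None:
            fragments.append((start, k))
            start = None
    if start is not None:
        fragments.append((start, len(mz)))

    rng = np.random.default_rng(0)
    peaks = []
    for a, b in fragments:
        w = intensity[a:b] / intensity[a:b].sum()
        sample = rng.choice(mz[a:b], size=2000, p=w).reshape(-1, 1)
        n = min(max_comp, max(1, (b - a) // 10))
        gmm = GaussianMixture(n_components=n, n_init=2).fit(sample)
        for mu, var, pi in zip(gmm.means_.ravel(),
                               gmm.covariances_.ravel(), gmm.weights_):
            peaks.append((float(mu), float(np.sqrt(var)), float(pi)))
    return fragments, peaks

# Synthetic two-peak spectrum:
mz = np.linspace(1000.0, 1100.0, 2000)
intensity = (np.exp(-0.5 * ((mz - 1020) / 0.8) ** 2)
             + 0.6 * np.exp(-0.5 * ((mz - 1065) / 1.2) ** 2) + 0.01)
frags, peaks = partition_and_fit(mz, intensity)
```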

  19. Direct Monte Carlo simulation of nanoscale mixed gas bearings

    Directory of Open Access Journals (Sweden)

    Kyaw Sett Myo

    2015-06-01

    Full Text Available The concept of sealed hard drives filled with a helium gas mixture has recently been suggested as an alternative to current hard drives, promising higher reliability and smaller position error. Therefore, it is important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo method. From the direct simulation Monte Carlo simulations, physical properties of these gas mixtures such as the mean free path and dynamic viscosity are obtained and compared with those from theoretical models; the two sets of results are comparable. Using these gas mixture properties, the bearing pressure distributions are calculated for different fractions of helium with conventional molecular gas lubrication models. The outcomes reveal that the molecular gas lubrication results agree relatively well with those of the direct simulation Monte Carlo simulations, especially for pure air, helium, or argon. For gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from the direct simulation Monte Carlo simulation.
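
    The theoretical mean free path such DSMC results are compared against follows from hard-sphere kinetic theory: lambda_i = 1 / sum_j[ pi d_ij^2 n_j sqrt(1 + m_i/m_j) ] with d_ij = (d_i + d_j)/2. A minimal sketch, with approximate textbook hard-sphere diameters as assumptions:

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_free_path(species, T=300.0, P=101_325.0):
    """Hard-sphere mean free path of each species in a gas mixture.
    `species` maps a name to (mole fraction x, diameter d [m], mass m [kg]);
    number density n_j = x_j * P / (k_B * T)."""
    n_total = P / (K_B * T)
    out = {}
    for si, (xi, di, mi) in species.items():
        inv_lam = 0.0
        for sj, (xj, dj, mj) in species.items():
            d_ij = 0.5 * (di + dj)
            inv_lam += np.pi * d_ij**2 * xj * n_total * np.sqrt(1.0 + mi / mj)
        out[si] = 1.0 / inv_lam
    return out

# Helium/air example (diameters and the 90/10 split are illustrative):
amu = 1.66053906660e-27
gases = {"He":  (0.9, 2.19e-10, 4.0 * amu),
         "air": (0.1, 3.72e-10, 29.0 * amu)}
print(mean_free_path(gases))
```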

  20. Monte Carlo Calculation of the Thermodynamic Properties of a Quantum Model : A One-Dimensional Fermion Lattice Model

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1981-01-01

    Starting from a genuine discrete version of the Feynman path-integral representation for the partition function, calculations have been made of the energy, specific heat, and the static density-density correlation functions for a one-dimensional lattice model at nonzero temperatures. A Monte Carlo

  1. Monte Carlo simulation for statistical mechanics model of ion-channel cooperativity in cell membranes.

    Science.gov (United States)

    Erdem, Riza; Aydiner, Ekrem

    2009-03-01

    Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. The results were shown to agree well with experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.

  2. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2010-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools....... To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider...

  3. An Infinite Mixture Model for Coreference Resolution in Clinical Notes

    Science.gov (United States)

    Liu, Sijia; Liu, Hongfang; Chaudhary, Vipin; Li, Dingcheng

    2016-01-01

    It is widely acknowledged that natural language processing is indispensable for processing electronic health records (EHRs). However, poor performance in relation detection tasks, such as coreference (linguistic expressions pertaining to the same entity/event), may affect the quality of EHR processing. Hence, there is a critical need to advance research on relation detection from EHRs. Most clinical coreference resolution systems are based on either supervised machine learning or rule-based methods. The need for a manually annotated corpus hampers the use of such systems at large scale. In this paper, we present an infinite mixture model method using definite sampling to resolve coreferent relations among mentions in clinical notes. A similarity measure function is proposed to determine the coreferent relations. Our system achieved a 0.847 F-measure on the i2b2 2011 coreference corpus. These promising results and the unsupervised nature of the method make it possible to apply the system in a big-data clinical setting. PMID:27595047

  4. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    Science.gov (United States)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, it is essential to have feedback about users' opinions of the resulting synthetic speech quality. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied for evaluation of different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that it is possible to combine this GMM-based statistical evaluation with the classical one in the form of listening tests, or that it can replace them. The results obtained in this way were in good correlation with the results of the conventional listening test, so they confirm the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.
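
    A GMM classifier of the kind evaluated above is typically built by fitting one mixture per voice class and scoring an utterance under each. A minimal sketch with synthetic MFCC-like features (the feature dimensionality, component count, and class labels are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm_classifier(features_by_class, n_components=8, seed=0):
    """Fit one diagonal-covariance GMM per voice class on its frame features."""
    return {label: GaussianMixture(n_components, covariance_type="diag",
                                   random_state=seed).fit(X)
            for label, X in features_by_class.items()}

def classify(models, X):
    """Assign an utterance (matrix of frame features) to the class whose
    GMM gives the highest total log-likelihood."""
    scores = {label: g.score_samples(X).sum() for label, g in models.items()}
    return max(scores, key=scores.get)

# Toy usage with synthetic 13-dimensional "cepstral" feature frames:
rng = np.random.default_rng(0)
train = {"neutral": rng.normal(0.0, 1.0, (500, 13)),
         "storytelling": rng.normal(0.5, 1.2, (500, 13))}
models = train_gmm_classifier(train)
print(classify(models, rng.normal(0.5, 1.2, (120, 13))))
```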

  5. A Monte Carlo simulation for kinetic chemotaxis models: an application to the traveling population wave

    CERN Document Server

    Yasuda, Shugo

    2015-01-01

    A Monte Carlo simulation for the chemotactic bacteria is developed on the basis of the kinetic modeling, i.e., the Boltzmann transport equation, and applied to the one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to solve the macroscopic transport of the chemical cues in the field. The simulation method can successfully reproduce the traveling population wave of bacteria which was observed experimentally. The microscopic dynamics of bacteria, e.g., the velocity autocorrelation function and velocity distribution function of bacteria, are also investigated. It is found that the bacteria which form the traveling population wave create quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed while changing the sensitivity and modulation parameters in the response function of bacteria. It is found th...
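
    The run-and-tumble Monte Carlo step can be sketched in isolation. Where the paper couples the particles to a finite-volume solver for the chemical cue, the toy below simply uses a static, hypothetical attractant profile S(x) = -|x| and a biased tumble rate; all rates and counts are assumptions.

```python
import numpy as np

def run_and_tumble(n=5000, steps=4000, dt=0.01, speed=1.0,
                   base_rate=1.0, chi=0.5, seed=0):
    """1D run-and-tumble MC: the tumble rate is lowered while a bacterium
    moves up the (static) attractant gradient, mimicking gradient climbing."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                # positions
    v = speed * rng.choice([-1.0, 1.0], n)     # run directions
    for _ in range(steps):
        # dS/dx = -sign(x); moving up-gradient means v * dS/dx > 0.
        up_gradient = v * (-np.sign(x)) > 0
        rate = np.where(up_gradient, base_rate * (1.0 - chi), base_rate)
        tumble = rng.random(n) < rate * dt
        v = np.where(tumble, speed * rng.choice([-1.0, 1.0], n), v)
        x += v * dt
    return x   # positions accumulate around the attractant peak at x = 0

print(np.std(run_and_tumble()))   # narrower spread than an unbiased walk
```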

  6. Modeling Replenishment of Ultrathin Liquid Perfluoropolyether Z Films on Solid Surfaces Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    M. S. Mayeed

    2014-01-01

    Full Text Available Applying the reptation algorithm to a simplified perfluoropolyether Z off-lattice polymer model, an NVT Monte Carlo simulation has been performed. The bulk condition was simulated first to compare the average radius of gyration with bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on flat solid carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm) for several molecular weights between two films of perfluoropolyether Z from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), at room conditions in both cases. Replenishment per Monte Carlo cycle appears to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Good agreement has been achieved here between the experimental results and the dynamics of molecules using reptation moves in ultrathin liquid films on solid surfaces.

  7. Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    CERN Document Server

    Navarro, C A; Deng, Youjin

    2015-01-01

    The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present at the low temperature regime, making it difficult for the simulation to escape from a local minimum. Replica based algorithms such as the Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the Spin Glass or Random Field models, by exchanging information between replicas of neighbor temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
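
    The replica-exchange move at the heart of the method accepts a swap of neighbouring temperatures with probability min(1, exp(dBeta * dE)). The sketch below shows just that swap step on the CPU (no GPUs, no adaptive ladder); the temperature ladder and replica energies are hypothetical placeholders.

```python
import numpy as np

def exchange_step(energies, betas, rng):
    """One replica-exchange sweep over neighbouring temperature slots.
    Returns the permutation mapping slot -> replica and the swap flags."""
    perm = np.arange(len(betas))
    accepted = np.zeros(len(betas) - 1, dtype=bool)
    for k in range(len(betas) - 1):
        d_beta = betas[k] - betas[k + 1]
        d_e = energies[perm[k]] - energies[perm[k + 1]]
        if rng.random() < min(1.0, np.exp(d_beta * d_e)):
            perm[k], perm[k + 1] = perm[k + 1], perm[k]
            accepted[k] = True
    return perm, accepted

# Toy usage: a geometric temperature ladder with made-up replica energies.
rng = np.random.default_rng(0)
betas = 1.0 / np.geomspace(1.0, 4.0, 8)
energies = rng.normal(-100.0, 5.0, 8)
print(exchange_step(energies, betas, rng))
```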

  8. Null distribution of multiple correlation coefficient under mixture normal model

    Directory of Open Access Journals (Sweden)

    Hydar Ali

    2002-01-01

    correlation coefficient, R2, when a sample is drawn from a mixture of two multivariate Gaussian populations. The moments of 1−R2 and the inverse Mellin transform have been used to derive the density of R2.

  9. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.

    Science.gov (United States)

    Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M

    2014-12-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Monte Carlo modeling of the MammoSite® treatments: Dose effects of air pockets

    Science.gov (United States)

    Huang, Yu-Huei Jessica

    In the treatment of early-stage breast cancer, MammoSite® has been used as one of the partial breast irradiation techniques after breast-conserving surgery. The MammoSite® applicator is a single catheter with an inflatable balloon at its distal end that can be placed in the resected cavity (tumor bed). The treatment is performed by delivering the Ir-192 high-dose-rate source through the center lumen of the catheter by a remote afterloader while the balloon is inflated in the tumor bed cavity. In the MammoSite® treatment, it has been found that air pockets occasionally exist and can be seen and measured in CT images. Experience has shown that about 90% of the patients have air pockets when imaged two days after the balloon placement. The acceptance criterion is that the air pocket volume be less than or equal to 10% of the planning target volume. The purpose of this study is to quantify dose errors occurring at the interface of the air pocket in MammoSite® treatments with Monte Carlo calculations, so that the dosimetric effects of the air pocket can be fully understood. Modern brachytherapy treatment planning systems typically consider patient anatomy as a homogeneous water medium, and incorrectly model lateral and backscatter radiation during treatment delivery. Heterogeneities complicate the problem and may result in overdosage to the tissue located near the medium interface. This becomes a problem in MammoSite® brachytherapy when an air pocket appears during the treatment. The resulting percentage dose difference near the air-tissue interface is hypothesized to be greater than 10% when comparing Monte Carlo N-Particle (version 5) with current treatment planning systems. The specific aims for this study are: (1) Validate Monte Carlo N-Particle (Version 5) source modeling. (2) Develop phantom. (3) Calculate phantom doses with Monte Carlo N-Particle (Version 5) and investigate dose differences between thermoluminescent dosimeter measurement, treatment planning

  11. Geometric comparison of popular mixture-model distances.

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A.

    2010-09-01

    Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or by the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to their algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
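
    The near-equivalence claimed above is easy to check numerically: to second order, for nearby points p and q on the simplex, chi-squared ≈ 2·H² ≈ 2·JS. A minimal sketch with randomly perturbed simplex points (the sampling scheme is an assumption for illustration):

```python
import numpy as np

def chi2(p, q):
    """Symmetric chi-squared distance."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q))

def hellinger_sq(p, q):
    """Squared Hellinger distance."""
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural log)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Nearby points on the probability simplex: the three values track closely.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10))
q = np.abs(p + rng.normal(0.0, 0.01, 10))
q /= q.sum()
print(chi2(p, q), 2 * hellinger_sq(p, q), 2 * js_divergence(p, q))
```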

  12. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin Huai

    2013-01-01

    Full Text Available Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (i.e., when the Stokes number St is relatively large), the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, the particle dispersion along the radial direction is considerable, and the relative velocity is very low due to the low dynamic response time. When St = 3.08, the dispersion of particles along the radial direction is very little, and most of the particles have high relative velocities along the streamwise direction.

  13. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  14. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    Science.gov (United States)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken under consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry shows a significant global dose overestimation identified on the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy accounting for post-implant dose alterations during the planning procedure.

  15. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  16. Molecular mobility with respect to accessible volume in Monte Carlo lattice model for polymers

    Science.gov (United States)

    Diani, J.; Gilormini, P.

    2017-02-01

    A three-dimensional cubic Monte Carlo lattice model is considered to test the impact of volume on the molecular mobility of amorphous polymers. Assuming classic polymer chain dynamics, the concept of locked volume limiting the accessible volume around the polymer chains is introduced. The polymer mobility is assessed by its ability to explore the entire lattice thanks to reptation motions. When recording the polymer mobility with respect to the lattice accessible volume, a sharp mobility transition is observed as witnessed during glass transition. The model ability to reproduce known actual trends in terms of glass transition with respect to material parameters, is also tested.

  17. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    Science.gov (United States)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta ≳ 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  18. Complete model description of an electron beam using ACCEPT Monte Carlo simulation code

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, D.E. [Corporate Research Process Technologies Lab., St. Paul, MN (United States); Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States)

    1993-12-31

    A 3D model of a low voltage electron beam has been constructed using the ITS/ACCEPT Monte Carlo code in order to validate the code for this application and improve upon 1D slab geometry simulations. A line source description update to the code allows complete simulation of a low voltage electron beam with any filament length. Faithful reproduction of the geometric elements involved, especially the window support structure, can account for 90-95% of the dose received by routine dosimetry. With a 3D model, dose distributions in non-web articles can be determined and the effects of equipment modifications can be anticipated in advance.

  19. Development of a Monte Carlo model for the Brainlab microMLC.

    Science.gov (United States)

    Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank

    2005-03-07

    Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator (microMLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated microMLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured, and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm, except for field sizes smaller than 1.2 cm in diameter, where agreement is within 5% due to uncertainties in measured output factors.

  20. Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system

    Energy Technology Data Exchange (ETDEWEB)

    Penchev, Petar; Maeder, Ulf; Fiebich, Martin [IMPS University of Applied Sciences, Giessen (Germany). Inst. of Medical Physics and Radiation Protection; Zink, Klemens [IMPS University of Applied Sciences, Giessen (Germany). Inst. of Medical Physics and Radiation Protection; University Hospital Marburg (Germany). Dept. of Radiotherapy and Oncology

    2015-07-01

    The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user-friendly graphical interface was developed, allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed a good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes, achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.

  1. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties in a commercial DEM code. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted values and the laboratory test values for the complex modulus shows that DEM can be used to reliably predict the viscoelastic properties of asphalt mixtures.
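
    For readers unfamiliar with Burger's model, the dynamic (complex) modulus it predicts has a closed form: a Maxwell element in series with a Kelvin-Voigt element, whose complex compliances add. The sketch below evaluates the dynamic modulus |E*| and the phase angle over frequency; the parameter values are illustrative only and are not calibrated to any asphalt mixture.

```python
import numpy as np

def burgers_dynamic_modulus(omega, E1, eta1, E2, eta2):
    """Complex modulus E*(w) of a Burgers model: a Maxwell element
    (spring E1, dashpot eta1) in series with a Kelvin-Voigt element
    (spring E2, dashpot eta2).  Compliances of elements in series add."""
    Jstar = 1/E1 + 1/(1j*omega*eta1) + 1/(E2 + 1j*omega*eta2)
    Estar = 1/Jstar
    return np.abs(Estar), np.degrees(np.angle(Estar))   # |E*| and phase angle

# illustrative parameters (MPa, MPa*s), not fitted to any material
w = np.logspace(-2, 2, 5)   # angular frequency, rad/s
dyn, phase = burgers_dynamic_modulus(w, E1=40.0, eta1=800.0, E2=20.0, eta2=50.0)
for wi, d, p in zip(w, dyn, phase):
    print(f"omega={wi:8.3f}  |E*|={d:7.2f} MPa  phase={p:5.1f} deg")
```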

  2. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  3. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
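
    The bounds themselves are analytic, but a Monte Carlo baseline is useful when checking them empirically, as the paper does with simulated Gaussian mixtures. The sketch below estimates the Amari form of the α-divergence between two univariate Gaussian mixtures by importance sampling from the first mixture; the mixtures and the choice α = 0.5 are arbitrary examples, and this is a verification baseline rather than the bound construction itself.

```python
import numpy as np
from scipy.stats import norm

def gmm_pdf(x, w, mu, sd):
    """Density of a univariate Gaussian mixture."""
    return np.sum(w * norm.pdf(x[:, None], mu, sd), axis=1)

def alpha_divergence_mc(p, q, sample_p, alpha, n=200_000, seed=1):
    """Monte Carlo estimate of D_a(p||q) = (1 - int p^a q^(1-a) dx) / (a(1-a))
    for a not in {0,1}, using x ~ p and the identity
    int p^a q^(1-a) dx = E_p[(q/p)^(1-a)]."""
    x = sample_p(n, np.random.default_rng(seed))
    hellinger_integral = ((q(x) / p(x)) ** (1.0 - alpha)).mean()
    return (1.0 - hellinger_integral) / (alpha * (1.0 - alpha))

w1, m1, s1 = np.array([.4, .6]), np.array([-1., 2.]), np.array([.5, 1.])
w2, m2, s2 = np.array([.7, .3]), np.array([0., 3.]), np.array([1., .5])
p = lambda x: gmm_pdf(x, w1, m1, s1)
q = lambda x: gmm_pdf(x, w2, m2, s2)
def sample_p(n, rng):
    comp = rng.choice(len(w1), size=n, p=w1)
    return rng.normal(m1[comp], s1[comp])
print(alpha_divergence_mc(p, q, sample_p, alpha=0.5))
```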

  4. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  5. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2010-01-01

    textabstractWe present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  6. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    Science.gov (United States)

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  7. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    Science.gov (United States)

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  8. Coupled Monte Carlo simulation and Copula theory for uncertainty analysis of multiphase flow simulation models.

    Science.gov (United States)

    Jiang, Xue; Na, Jin; Lu, Wenxi; Zhang, Yu

    2017-11-01

    Simulation-optimization techniques are effective in identifying an optimal remediation strategy. Simulation models with uncertainty, primarily in the form of parameter uncertainty with different degrees of correlation, influence the reliability of the optimal remediation strategy. In this study, an approach coupling Monte Carlo simulation and Copula theory is proposed for uncertainty analysis of a simulation model when parameters are correlated. Using the self-adaptive weight particle swarm optimization Kriging method, a surrogate model was constructed to replace the simulation model and reduce the computational burden and time consumption resulting from repeated and multiple Monte Carlo simulations. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were employed to identify whether the t Copula function or the Gaussian Copula function better matches the dependence structure of the parameters. The results show that both the AIC and BIC values of the t Copula function are less than those of the Gaussian Copula function. This indicates that the t Copula function is the optimal function for matching the dependence structure of the parameters. The outputs of the simulation model when parameter correlation was considered and when it was ignored were compared. The results show that the amplitude of the fluctuation interval when parameter correlation was considered is less than the corresponding amplitude when parameter correlation was ignored. Moreover, it was demonstrated that considering the correlation among parameters is essential for uncertainty analysis of a simulation model, and the results of uncertainty analysis should be incorporated into the remediation strategy optimization process.
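
    The Gaussian-copula half of this construction is compact enough to sketch: sample a multivariate normal with the desired correlation, push it through the normal CDF to uniforms, and then through each parameter's inverse marginal CDF. The two marginals and the correlation below are invented placeholders (e.g. a conductivity-like and a porosity-like parameter), not the study's inputs; the t copula variant the authors select additionally divides the normal draws by a chi-square-based factor.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginals, seed=0):
    """Draw correlated parameter sets via a Gaussian copula: correlated
    normals -> uniforms (normal CDF) -> each marginal's inverse CDF."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=n)
    u = stats.norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

corr = np.array([[1.0, 0.8], [0.8, 1.0]])
marg = [stats.lognorm(s=0.5, scale=1.0), stats.lognorm(s=0.3, scale=0.2)]
theta = gaussian_copula_sample(10_000, corr, marg)
rho, _ = stats.spearmanr(theta[:, 0], theta[:, 1])
print(rho)   # rank correlation is preserved through the marginal transform
```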

  9. MO-F-BRB-05: Monte Carlo Modeling of the Novalis TX Stereotactic Radiosurgery Mode.

    Science.gov (United States)

    Milroy, D; Patrocinio, H; Seuntjens, J

    2012-06-01

    To model the stereotactic mode of the Varian-Brainlab Novalis TX linear accelerator using the BEAMnrc Monte Carlo user code. Methods: The EGSnrc Monte Carlo user codes BEAMnrc and DOSXYZnrc were used for photon simulations and dose calculations, respectively. A Monte Carlo model of a Varian Clinac 21 EX was modified to model the stereotactic radiosurgery (SRS) mode of the Novalis, taking into account the smaller dimensions of the SRS flattening filter and limited field sizes. The source parameters, such as energy, size and angular spread, were readjusted following a new procedure outlined by Almberg et al, 2012. A component module, DYNVMLC, previously used to model the Varian Millennium 120 multi-leaf collimator (MLC), was reprogrammed to include the four leaf types of the Varian high definition 120-leaf MLC. Interleaf air-gap and leaf density were adjusted to match interleaf leakage profiles measured with EBT2 film. Subsequent validation included profiles, percent depth dose curves and output factors measured with ion chambers, and other film measurements. From PDD measurements, the energy of the incident electron beam was determined to be 6.6 MeV. From penumbra measurements, the electron radial intensity distribution, given as the full width at half maximum of a Gaussian distribution, was found to be 0.7 mm (cross-plane) and 0.8 mm (in-plane). From profiles in water, the mean angular spread had to be adjusted to 1.27° to achieve an acceptable match. The interleaf air-gap and the density of the leaves of the HDMLC were determined to be 0.0047 cm and 18.5 g/cm³, respectively. The Almberg procedure was successfully implemented in determining the electron beam parameters to model the Novalis TX's SRS mode. Dose profiles simulated with the new HDMLC component module agreed with measurements within 2%. © 2012 American Association of Physicists in Medicine.

  10. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
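
    A minimal version of an ODE constrained mixture model can be written down directly: each subpopulation follows the same ODE with its own kinetic rate, and single-cell readouts at each time point are modeled as a Gaussian mixture centered on the subpopulation trajectories. The one-state model, rates, and noise level below are hypothetical stand-ins for the NGF-induced Erk1/2 pathway model, chosen only to show the structure of the likelihood.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import norm
from scipy.optimize import minimize

def traj(k, t):
    """Hypothetical one-state activation ODE, dx/dt = k*(1-x), x(0)=0."""
    return odeint(lambda x, t: k * (1.0 - x), 0.0, t).ravel()

def neg_log_lik(params, t, data):
    """Two subpopulations share the ODE but differ in rate k; cells at
    each time are a two-component Gaussian mixture around the two
    subpopulation trajectories."""
    k1, k2, w, sigma = params
    m1, m2 = traj(k1, t), traj(k2, t)
    lik = w * norm.pdf(data, m1, sigma) + (1 - w) * norm.pdf(data, m2, sigma)
    return -np.log(lik).sum()

# synthetic data: 60% fast responders (k=2.0), 40% slow (k=0.3)
t = np.linspace(0, 3, 7)
rng = np.random.default_rng(2)
cells = np.where(rng.random(200) < 0.6, 2.0, 0.3)
data = np.array([traj(k, t) for k in cells]) + rng.normal(0, 0.05, (200, len(t)))
res = minimize(neg_log_lik, x0=[1.5, 0.5, 0.5, 0.1], args=(t, data),
               bounds=[(1e-2, 10), (1e-2, 10), (0.05, 0.95), (1e-3, 1)])
print(res.x)   # recovers the rates, subpopulation weight and noise level
```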

  11. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into

  12. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of the models and their applicability.

  13. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Science.gov (United States)

    Kim, S.; Nam, W.; Ahn, H.; Kim, T.; Heo, J.-H.

    2015-06-01

    Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
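
    To make the construction concrete, the sketch below fits the zero-shape special case of the generalized logistic distribution (i.e., the plain logistic) with a linear time trend in the location parameter; the shape parameter could be reinstated and given its own trend in the same way. A generic optimizer stands in for the paper's Newton-Raphson iteration, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, t, x):
    """Negative log-likelihood of a nonstationary logistic model with
    time-dependent location xi(t) = xi0 + xi1*t and scale alpha;
    -log f = log(alpha) + z + 2*log(1 + exp(-z)) with z standardized."""
    xi0, xi1, alpha = params
    if alpha <= 0:
        return np.inf
    z = (x - (xi0 + xi1 * t)) / alpha
    return np.sum(np.log(alpha) + z + 2 * np.log1p(np.exp(-z)))

# synthetic annual maxima with an upward trend in location
rng = np.random.default_rng(7)
t = np.arange(50)
x = (100 + 0.8 * t) + 12 * rng.logistic(size=50)
fit = minimize(nll, x0=[x.mean(), 0.0, x.std()], args=(t, x), method="Nelder-Mead")
print(fit.x)   # approximately [100, 0.8, 12]
```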

  14. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  15. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

    The role of association schemes is investigated in connection with CPA, while for sPC-SAFT emphasis is given to the role of different types of data in the determination of pure compound parameters suitable for mixture calculations. Moreover, the performance of CPA and sPC-SAFT for MEA-containing systems is compared. The investigation showed that vapor pressures and liquid densities were not sufficient for obtaining reliable parameters with either CPA or sPC-SAFT, but that at least one other type of information is needed. LLE data for a binary mixture of the associating component with an inert compound is very

  16. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  17. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.

  18. Bayesian Analysis of a Lipid-Based Physiologically Based Toxicokinetic Model for a Mixture of PCBs in Rats

    Directory of Open Access Journals (Sweden)

    Alan F. Sasso

    2012-01-01

    A lipid-based physiologically based toxicokinetic (PBTK) model has been developed for a mixture of six polychlorinated biphenyls (PCBs) in rats. The aim of this study was to apply population Bayesian analysis to a lipid PBTK model, while incorporating an internal exposure-response model linking enzyme induction and metabolic rate. Lipid-based physiologically based toxicokinetic models are a subset of PBTK models that can simulate concentrations of highly lipophilic compounds in tissue lipids, without the need for partition coefficients. A hierarchical treatment of population metabolic parameters and a CYP450 induction model were incorporated into the lipid-based PBTK framework, and Markov chain Monte Carlo was applied to in vivo data. A mass balance of CYP1A and CYP2B in the liver was necessary to model PCB metabolism at high doses. The linked PBTK/induction model remained on a lipid basis and was capable of modeling PCB concentrations in multiple tissues for all dose levels and dose profiles.
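
    The Bayesian machinery involved is ordinary Markov chain Monte Carlo. The sketch below runs a random-walk Metropolis sampler on a one-compartment elimination model with lognormal measurement error, a deliberately tiny stand-in for the lipid-based PBTK model; the hierarchical population structure and the CYP induction submodel are omitted, and all numbers are invented.

```python
import numpy as np

def log_post(theta, t, y):
    """Log-posterior for C(t) = C0*exp(-k*t) with lognormal error;
    theta holds (log C0, log k, log sigma), with weak normal priors."""
    logC0, logk, logsig = theta
    pred = logC0 - np.exp(logk) * t                  # log-scale prediction
    sig = np.exp(logsig)
    loglik = -0.5 * np.sum(((np.log(y) - pred) / sig) ** 2) - len(y) * np.log(sig)
    logprior = -0.5 * np.sum((theta / 10.0) ** 2)    # vague normal priors
    return loglik + logprior

def metropolis(t, y, n=20_000, step=0.05, seed=3):
    """Random-walk Metropolis over the three log-parameters."""
    rng = np.random.default_rng(seed)
    cur = np.array([np.log(y[0]), np.log(0.1), np.log(0.3)])
    lp = log_post(cur, t, y)
    chain = np.empty((n, 3))
    for i in range(n):
        prop = cur + step * rng.standard_normal(3)
        lpp = log_post(prop, t, y)
        if np.log(rng.random()) < lpp - lp:          # accept/reject
            cur, lp = prop, lpp
        chain[i] = cur
    return chain

t = np.array([1., 2., 4., 8., 16., 32.])
y = 5.0 * np.exp(-0.2 * t) * np.exp(np.random.default_rng(0).normal(0, 0.1, 6))
chain = metropolis(t, y)
print(np.exp(chain[10_000:].mean(axis=0)))   # roughly [5.0, 0.2, 0.1]
```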

  19. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    Science.gov (United States)

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and it is shown to be much better than that of existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  20. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments showed that the radiation-induced bystander effect exists in cells, tissues, or even biological organisms when irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results successfully reach agreement with the experimental data. Moreover, another bystander effect experiment is also computed by this model, and the model succeeds in predicting the results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some vital mechanisms assumed.

  1. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
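
    For the simplest series configuration (two-state units rather than the paper's three-state units), the Markov steady-state availability has a closed form that a Monte Carlo simulation can validate, mirroring the analytical-then-MC workflow described above. The failure and repair rates below are arbitrary.

```python
import numpy as np

def availability_analytic(lam, mu):
    """Steady-state availability of independent two-state repairable
    units in series: A = prod_i mu_i / (lam_i + mu_i)."""
    return np.prod([m / (l + m) for l, m in zip(lam, mu)])

def availability_mc(lam, mu, horizon=2e4, n_grid=50_000, seed=4):
    """Monte Carlo check: alternate exponential up/down times per unit
    on a common time grid and measure the fraction of time during which
    every unit is up simultaneously."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, horizon, n_grid)
    all_up = np.ones(n_grid, dtype=bool)
    for l, m in zip(lam, mu):
        up, t, is_up = np.zeros(n_grid, dtype=bool), 0.0, True
        while t < horizon:
            dt = rng.exponential(1.0 / l if is_up else 1.0 / m)
            if is_up:
                up[(grid >= t) & (grid < t + dt)] = True
            t += dt
            is_up = not is_up
        all_up &= up
    return all_up.mean()

lam, mu = [0.01, 0.02], [0.5, 0.4]   # failure / repair rates of the two units
print(availability_analytic(lam, mu), availability_mc(lam, mu))
```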

  2. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo.

    Science.gov (United States)

    Golightly, Andrew; Wilkinson, Darren J

    2011-12-06

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka-Volterra system and a prokaryotic auto-regulatory network.
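
    The core ingredient of particle MCMC is an unbiased particle-filter estimate of the marginal likelihood. The sketch below implements a bootstrap particle filter for a Lotka-Volterra system under a crude constant-noise Euler-Maruyama discretization (a simplification of the SDE approximations discussed in the paper), observing prey counts with Gaussian noise; wrapping it in a Metropolis loop over theta yields a minimal particle marginal Metropolis-Hastings sampler. All numbers are illustrative.

```python
import numpy as np

def pf_loglik(theta, obs, sigma=5.0, obs_sd=10.0, n_part=500, dt=0.01,
              steps_per_obs=10, seed=5):
    """Bootstrap particle filter log-likelihood for a Lotka-Volterra
    model with constant-noise Euler-Maruyama dynamics and Gaussian
    observations of the prey only."""
    c1, c2, c3 = theta
    rng = np.random.default_rng(seed)
    x = np.full((n_part, 2), 100.0)                     # (prey, predator)
    ll = 0.0
    for y in obs:
        for _ in range(steps_per_obs):                  # propagate particles
            prey, pred = x[:, 0], x[:, 1]
            drift = np.stack([c1 * prey - c2 * prey * pred,
                              c2 * prey * pred - c3 * pred], axis=1)
            x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
            x = np.maximum(x, 1e-3)                     # keep populations positive
        w = np.exp(-0.5 * ((y - x[:, 0]) / obs_sd) ** 2)       # bootstrap weights
        ll += np.log(w.mean() + 1e-300) - 0.5 * np.log(2 * np.pi * obs_sd ** 2)
        x = x[rng.choice(n_part, size=n_part, p=w / w.sum())]  # resample
    return ll

obs = np.array([105.0, 120.0, 150.0, 160.0, 140.0, 110.0])  # synthetic prey data
print(pf_loglik((0.5, 0.0025, 0.3), obs))
```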

  3. Modeling turbulence in underwater wireless optical communications based on Monte Carlo simulation.

    Science.gov (United States)

    Vali, Zahra; Gholami, Asghar; Ghassemlooy, Zabih; Michelson, David G; Omoomi, Masood; Noori, Hamed

    2017-07-01

    Turbulence affects the performance of underwater wireless optical communications (UWOC). Although multiple scattering and absorption have been previously investigated by means of physical simulation models, a physical simulation model for UWOC with turbulence is still lacking. In this paper, we propose a Monte Carlo simulation model for UWOC in turbulent oceanic clear water, which is far less computationally intensive than approaches based on computational fluid dynamics. The model is based on the variation of the refractive index in a horizontal link. Results show that the proposed simulation model correctly reproduces the lognormal probability density function of the received intensity for weak and moderate turbulence regimes. The results presented match well with experimental data reported for weak turbulence. Furthermore, the scintillation index and turbulence-induced power loss versus link span are exhibited for different refractive index variations.

  4. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate in the form of time series and the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth and the water flow rate in the excavation-disturbed zone (EDZ).
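
    Of the two sample-generation modes mentioned (pure random or Latin hypercube), the latter is easy to reproduce with scipy's quasi-Monte Carlo module; the two input parameters and their ranges below are invented placeholders, not the actual WIRS inputs.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of two hypothetical input parameters, e.g. a
# distribution coefficient (log-uniform) and a water flow rate (uniform).
sampler = qmc.LatinHypercube(d=2, seed=13)
u = sampler.random(n=100)                       # stratified uniforms in [0,1)^2
kd = 10 ** qmc.scale(u[:, :1], -2, 1)           # log-uniform over 0.01..10
flow = qmc.scale(u[:, 1:], 0.1, 2.0)            # uniform over 0.1..2.0
print(kd[:3].ravel(), flow[:3].ravel())
```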

  5. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.

  6. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments. These two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analyzed based on Monte Carlo simulation. The result shows that variation in σ or µ can significantly move the average frequency of asymmetric fission yields. This also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
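
    The two-intersecting-Gaussians picture translates almost directly into code: draw the light-fragment mass from one of the two curves and assign the remaining nucleons to the complementary fragment. The sketch below uses invented parameter values (and a 50/50 draw between the curves) purely to show the mechanics; the iteration-coefficient machinery of the paper is omitted.

```python
import numpy as np

def toy_fission_yields(n, A=236, mu_L=96, mu_R=140, sd_L=6, sd_R=6, r_c=118,
                       seed=6):
    """Monte Carlo sketch of the toy model: fragment masses drawn from two
    intersecting Gaussians (means mu_L/mu_R, deviations sd_L/sd_R, scission
    point r_c), the complementary fragment taking the remaining nucleons."""
    rng = np.random.default_rng(seed)
    light = np.where(rng.random(n) < 0.5,
                     rng.normal(mu_L, sd_L, n),
                     A - rng.normal(mu_R, sd_R, n))
    light = np.clip(np.rint(light), 1, r_c)   # light fragment mass number
    return light, A - light                   # heavy fragment mass number

light, heavy = toy_fission_yields(100_000)
hist, edges = np.histogram(np.concatenate([light, heavy]), bins=np.arange(60, 181))
print(f"average light-fragment mass: {light.mean():.1f}, "
      f"peak yield near A = {edges[np.argmax(hist)]:.0f}")
```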

  7. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
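
    The Potts Monte Carlo kernel underlying such models is short: each lattice site carries a grain-orientation spin, sites attempt to adopt the spin of a random neighbour, and moves are accepted by Metropolis on the change in unlike-neighbour bond energy. The 2-D sketch below shows only this coarsening kernel; the weld model layers melting, the Bézier-shaped pool and temperature gradients on top of it (in 3-D, via SPPARKS), and all parameter values here are arbitrary.

```python
import numpy as np

def potts_grain_growth(n=48, q=50, sweeps=40, kT=0.2, seed=8):
    """Minimal 2-D Potts Monte Carlo for grain growth with periodic
    boundaries: a site may adopt a random neighbour's spin, accepted by
    Metropolis on the change in unlike-neighbour bond energy."""
    rng = np.random.default_rng(seed)
    s = rng.integers(q, size=(n, n))
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    def unlike(i, j, spin):       # bond energy of site (i,j) if it held 'spin'
        return sum(spin != s[(i + di) % n, (j + dj) % n] for di, dj in nbrs)
    for _ in range(sweeps * n * n):
        i, j = int(rng.integers(n)), int(rng.integers(n))
        di, dj = nbrs[rng.integers(4)]
        cand = s[(i + di) % n, (j + dj) % n]
        dE = unlike(i, j, cand) - unlike(i, j, s[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            s[i, j] = cand
    # like-neighbour bond fraction rises from ~1/q toward 1 as grains coarsen
    return 0.5 * (np.mean(s == np.roll(s, 1, 0)) + np.mean(s == np.roll(s, 1, 1)))

print(f"like-bond fraction after coarsening: {potts_grain_growth():.2f}")
```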

  8. Statistical modeling of Optical Coherence Tomography images by asymmetric Normal Laplace mixture model.

    Science.gov (United States)

    Jorjandi, Sahar; Rabbani, Hossein; Kafieh, Raheleh; Amini, Zahra

    2017-07-01

    Optical Coherence Tomography (OCT) is known as a non-invasive and high resolution imaging modality in ophthalmology. Noise affecting the OCT images, as well as other factors, causes random behavior in these images. In this study, we introduce a new statistical model for retinal layers in healthy OCT images. This model, namely the asymmetric Normal Laplace (NL), fits well the asymmetry and heavy tails in the intensity distribution of each layer. Due to the layered structure of the retina, a mixture model is addressed. The fit is evaluated by the Kullback-Leibler divergence (KLD) and the chi-square test, along with visual results. The results show the good performance of the proposed model in fitting the data, except for the 6th and 7th layers. Using a more complicated model, e.g. a mixture model with two components, seems to be appropriate for these layers. The mentioned process for training images can then be applied to a test image by employing the Expectation Maximization (EM) algorithm to estimate the values of the parameters in the mixture model.

  9. Unsupervised Segmentation of Spectral Images with a Spatialized Gaussian Mixture Model and Model Selection

    Directory of Open Access Journals (Sweden)

    Cohen S.X.

    2014-03-01

    In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation from a real dataset.

  10. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Science.gov (United States)

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.

  12. Characterisation of radiation damage in silicon photomultipliers with a Monte Carlo model

    CERN Document Server

    Majos, S Sanchez; Pochodzalla, J

    2008-01-01

    Measured response functions and low photon yield spectra of silicon photomultipliers (SiPM) were compared to multi-photoelectron pulse-height distributions generated by a Monte Carlo model. Characteristic parameters for SiPM were derived. The devices were irradiated with 14 MeV electrons at the Mainz microtron MAMI. It is shown that the first noticeable damage consists of an increase in the rate of dark pulses and the loss of uniformity in the pixel gains. Higher radiation doses also reduced the photon detection efficiency. The results are especially relevant for applications of SiPM in fibre detectors at high luminosity experiments.

  13. Growth Mixture Modeling of Depression Symptoms Following Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    Rapson Gomez

    2017-08-01

    Growth Mixture Modeling (GMM) was used to investigate the longitudinal trajectory of groups (classes) of depression symptoms, and how these groups were predicted by the covariates of age, sex, severity, and length of hospitalization following Traumatic Brain Injury (TBI) in a group of 1074 individuals (696 males and 378 females) from the Royal Hobart Hospital who sustained a TBI. The study began in late December 2003 and recruitment continued until early 2007. Ages ranged from 14 to 90 years, with a mean of 35.96 years (SD = 16.61). The study also examined the associations between the groups and causes of TBI. Symptoms of depression were assessed using the Hospital Anxiety and Depression Scale within 3 weeks of injury, and at 1, 3, 6, 12, and 24 months post-injury. The results revealed three groups: low, high, and delayed depression. In the low group depression scores remained below the clinical cut-off at all assessment points during the 24 months post-TBI, and in the high group, depression scores were above the clinical cut-off at all assessment points. The delayed group showed an increase in depression symptoms to 12 months after injury, followed by a return to initial assessment level during the following 12 months. Covariates were found to be differentially associated with the three groups. For example, relative to the low group, the high depression group was associated with more severe TBI, being female, and a shorter period of hospitalization. The delayed group also had a shorter period of hospitalization, were younger, and sustained less severe TBI. Our findings show considerable fluctuation of depression over time, and that a non-clinical level of depression at any one point in time does not necessarily mean that the person will continue to have non-clinical levels in the future. As we used GMM, we were able to show new findings and also bring clarity to contradictory past findings on depression and TBI. Consequently, we recommend the use

  14. Numerical Simulation of Water Jet Flow Using Diffusion Flux Mixture Model

    Directory of Open Access Journals (Sweden)

    Zhi Shang

    2014-01-01

    A multidimensional diffusion flux mixture model was developed to simulate water jet two-phase flows. Through the modification of gravity using the gradients of the mixture velocity, the centrifugal force on the water droplets was able to be considered. The slip velocities between the continuous phase (gas) and the dispersed phase (water droplets) were able to be calculated through multidimensional diffusion flux velocities based on the modified multidimensional drift flux model. Through numerical simulations of a water mist spray, the model was validated by comparison with experiments and with simulations using the traditional algebraic slip mixture model.

  15. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study results, fitted by minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local
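
    A scale mixture of two zero-mean normals for z-scores can also be fitted by plain EM, which makes the role of the non-null proportion transparent; the sketch below does exactly that on synthetic GWAS-like z-scores. Note that the paper fits its model by minimizing discrepancies against resampling-based estimates rather than by EM, so this illustrates the model, not the authors' estimator.

```python
import numpy as np
from scipy.stats import norm

def em_scale_mixture(z, n_iter=200):
    """EM fit of z ~ pi0*N(0, s0^2) + (1-pi0)*N(0, s1^2);
    1 - pi0 estimates the non-null proportion."""
    pi0, s0, s1 = 0.9, 1.0, 2.0
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0, s0)
        f1 = (1 - pi0) * norm.pdf(z, 0, s1)
        r0 = f0 / (f0 + f1)                           # P(null | z)
        pi0 = r0.mean()
        s0 = np.sqrt(np.sum(r0 * z**2) / r0.sum())
        s1 = np.sqrt(np.sum((1 - r0) * z**2) / (1 - r0).sum())
    return pi0, s0, s1

rng = np.random.default_rng(9)
null = rng.normal(0, 1.0, 95_000)                     # 95% null SNPs
assoc = rng.normal(0, 2.5, 5_000)                     # 5% with inflated variance
pi0, s0, s1 = em_scale_mixture(np.concatenate([null, assoc]))
print(f"estimated non-null proportion: {1 - pi0:.3f}")
```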

  16. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  17. Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter

    Science.gov (United States)

    Dyudine, U. A.; Ingersoll, Andrew P.

    2002-01-01

    We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and are able to resolve the shape of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our 3-D Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a 3-D opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns are produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40 km horizontal flashes which are seen on Earth.
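
    The Henyey-Greenstein sampling step mentioned above has a standard closed-form inversion, shown below inside a deliberately one-dimensional (plane-parallel) toy of the lightning problem: an isotropic flash below a cloud layer of optical depth tau, with photons taking exponential steps and surviving each scattering with a single-scattering albedo. The 3-D geometry, imaging bins and Jovian cloud structure of the paper are all omitted, and the parameter values are illustrative.

```python
import numpy as np

def hg_cosines(g, n, rng):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase
    function with asymmetry parameter g (standard inversion formula)."""
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0, n)
    s = (1 - g * g) / (1 - g + 2 * g * rng.random(n))
    return (1 + g * g - s * s) / (2 * g)

def lightning_slab(n=200_000, tau=5.0, albedo=0.95, g=0.85, seed=10):
    """Isotropic flash at the bottom of a slab of optical depth tau;
    photons below z=0 are considered lost.  Returns the fraction of
    photons emerging at the cloud top."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    mu = rng.uniform(-1.0, 1.0, n)          # direction cosine
    alive = np.ones(n, dtype=bool)
    top = 0
    while alive.any():
        idx = np.flatnonzero(alive)
        z[idx] += mu[idx] * rng.exponential(size=idx.size)
        top += (alive & (z >= tau)).sum()   # escaped through the top
        alive &= (z > 0) & (z < tau)
        alive &= rng.random(n) < albedo     # absorption at each scattering
        idx = np.flatnonzero(alive)
        ct = hg_cosines(g, idx.size, rng)   # HG scattering-angle cosine
        st = np.sqrt(1 - ct ** 2)
        phi = rng.uniform(0, 2 * np.pi, idx.size)
        mu[idx] = np.clip(mu[idx] * ct + np.sqrt(1 - mu[idx] ** 2) * st * np.cos(phi),
                          -1.0, 1.0)
    return top / n

print(f"escaping fraction at cloud top: {lightning_slab():.3f}")
```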

  18. Fluorescence spectroscopy of oral tissue: Monte Carlo modeling with site-specific tissue properties.

    Science.gov (United States)

    Pavlova, Ina; Weber, Crystal Redden; Schwarz, Richard A; Williams, Michelle D; Gillenwater, Ann M; Richards-Kortum, Rebecca

    2009-01-01

    A Monte Carlo model with site-specific input is used to predict depth-resolved fluorescence spectra from individual normal, inflammatory, and neoplastic oral sites. Our goal in developing this model is to provide a computational tool to study how the morphological characteristics of the tissue affect clinically measured spectra. Tissue samples from the measured sites are imaged using fluorescence confocal microscopy; autofluorescence patterns are measured as a function of depth and tissue sublayer for each individual site. These fluorescence distributions are used as input to the Monte Carlo model to generate predictions of fluorescence spectra, which are compared to clinically measured spectra on a site-by-site basis. A lower fluorescence intensity and longer peak emission wavelength observed in clinical spectra from dysplastic and cancerous sites are found to be associated with a decrease in measured fluorescence originating from the stroma or deeper fibrous regions, and an increase in the measured fraction of photons originating from the epithelium or superficial tissue layers. The simulation approach described here can be used to suggest an optical probe design that samples fluorescence at a depth that gives optimal separation in the spectral signal measured for benign, dysplastic, and cancerous oral mucosa.

  19. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  20. Three-dimensional Monte Carlo model of pulsed-laser treatment of cutaneous vascular lesions

    Science.gov (United States)

    Milanič, Matija; Majaron, Boris

    2011-12-01

    We present a three-dimensional Monte Carlo model of optical transport in skin with a novel approach to treatment of side boundaries of the volume of interest. This represents an effective way to overcome the inherent limitations of "escape" and "mirror" boundary conditions and enables high-resolution modeling of skin inclusions with complex geometries and arbitrary irradiation patterns. The optical model correctly reproduces measured values of diffuse reflectance for normal skin. When coupled with a sophisticated model of thermal transport and tissue coagulation kinetics, it also reproduces realistic values of radiant exposure thresholds for epidermal injury and for photocoagulation of port wine stain blood vessels in various skin phototypes, with or without application of cryogen spray cooling.

  1. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Capizzo, M C; Sperandeo-Mineo, R M; Zarcone, M [UoP-PERG, University of Palermo Physics Education Research Group and Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it

    2008-05-15

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels.

  2. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
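
    In its simplest form the idea reads as follows: in an F2 cross, a trait influenced by one QTL is a three-component normal mixture whose mixing proportions are fixed at the Mendelian 1:2:1 ratio, and EM estimates the genotype means and common variance. The sketch below implements that special case on synthetic data; in the general method the proportions become marker-conditional probabilities per individual.

```python
import numpy as np
from scipy.stats import norm

def qtl_mixture_em(y, props=(0.25, 0.5, 0.25), n_iter=100):
    """EM for a 3-component normal mixture with mixing proportions fixed
    by Mendelian segregation; estimates genotype means and common SD."""
    props = np.asarray(props)
    mu = np.quantile(y, [0.2, 0.5, 0.8])          # crude initial means
    sd = y.std()
    for _ in range(n_iter):
        r = props * norm.pdf(y[:, None], mu, sd)  # E-step: responsibilities
        r /= r.sum(axis=1, keepdims=True)
        mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)   # M-step
        sd = np.sqrt((r * (y[:, None] - mu) ** 2).sum() / len(y))
    return mu, sd

rng = np.random.default_rng(11)
geno = rng.choice(3, size=600, p=[0.25, 0.5, 0.25])     # qq / Qq / QQ
y = np.array([8.0, 10.0, 12.0])[geno] + rng.normal(0, 1.0, 600)
print(qtl_mixture_em(y))   # roughly ([8, 10, 12], 1.0)
```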

  3. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    OpenAIRE

    PEIXOTO, F. C.; Medeiros,J. L.

    1999-01-01

    Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.

  4. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since

  5. Monte Carlo modeling of Standard Model multi-boson production processes for √s = 13 TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    We present the Monte Carlo (MC) setup used by ATLAS to model multi-boson processes in √s = 13 TeV proton-proton collisions. The baseline Monte Carlo generators are compared with each other in key kinematic distributions of the processes under study. Sample normalization and systematic uncertainties are discussed.

  6. Bayes estimation of the mixture of hazard-rate model

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, K.K.; Krishna, Hare; Singh, Bhupendra

    1997-01-01

    Engineering systems are subject to continuous stresses and shocks which may (or may not) cause a change in the failure pattern of the system with unknown probability q (= 1 - p), 0 < p < 1. Conceptualising a mixture of hazard-rate (failure-rate) patterns to represent this realistic situation, the corresponding failure-time distribution is derived. Classical and Bayesian estimation of the parameters and reliability characteristics of this failure-time distribution is the subject matter of the present study.

  7. Null distribution of multiple correlation coefficient under mixture normal model

    OpenAIRE

    Ali, Hydar; Nagar, Daya K.

    2002-01-01

    The multiple correlation coefficient is used in a large variety of statistical tests and regression problems. In this article, we derive the null distribution of the square of the sample multiple correlation coefficient, R^2, when a sample is drawn from a mixture of two multivariate Gaussian populations. The moments of 1 - R^2 and the inverse Mellin transform have been used to derive the density of R^2.

  8. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  9. Comparison of the Batho, ETAR and Monte Carlo dose calculation methods in CT based patient models.

    Science.gov (United States)

    du Plessis, F C; Willemse, C A; Lötter, M G; Goedhals, L

    2001-04-01

    This paper shows the contribution that Monte Carlo methods make in regard to dose distribution calculations in CT based patient models and the role they play as a gold standard for evaluating other dose calculation algorithms. The EGS4 based BEAM code was used to construct a generic 8 MV accelerator to obtain a series of x-ray field sources. These were used in the EGS4 based DOSXYZ code to generate beam data in a mathematical water phantom to set up a beam model in a commercial treatment planning system (TPS), CADPLAN V.2.7.9. Dose distributions were calculated with the Batho and ETAR inhomogeneity correction algorithms in head/sinus, lung, and prostate patient models for 2 × 2, 5 × 5, and 10 × 10 cm2 open x-ray beams. Corresponding dose distributions calculated with DOSXYZ were used as a benchmark. The dose comparisons are expressed in terms of 2D isodose distributions, percentage depth dose data, and dose difference volume histograms (DDVHs). Results indicated that the Batho and ETAR methods contained inaccuracies of 20%-70% in the maxillary sinus region in the head model. Large lung inhomogeneities irradiated with small fields gave rise to absorbed dose deviations of 10%-20%. It is shown for a 10 × 10 cm2 field that DOSXYZ models lateral scatter in lung that is not present in the Batho and ETAR methods. The ETAR and Batho methods are accurate to within 3% in a prostate model. We showed how the performance of these inhomogeneity correction methods can be understood in realistic patient models using validated Monte Carlo codes such as BEAM and DOSXYZ.

  10. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo Park

    2016-02-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effect on item parameters and examinee ability.

  11. L-moments based assessment of a mixture model for frequency analysis of rainfall extremes

    Directory of Open Access Journals (Sweden)

    V. Tartaglia

    2005-01-01

    In the framework of the regional analysis of hydrological extreme events, a probabilistic mixture model, given by a convex combination of two Gumbel distributions, is proposed for modelling rainfall extremes. The paper concerns the statistical methodology for parameter estimation. With this aim, theoretical expressions for the L-moments of the mixture model are defined. The expressions have been tested on the time series of the annual maximum height of daily rainfall recorded in Tuscany (central Italy).
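
    For readers unfamiliar with L-moments, the sketch below estimates the first two L-moments and the L-skewness of a sample via probability-weighted moments, the standard first step of the methodology mentioned above. The synthetic "annual maxima" are drawn from a single Gumbel distribution (whose theoretical L-skewness is ln(9/8)/ln 2 ≈ 0.17); the Tuscan rainfall data and the two-component Gumbel mixture of the paper are not reproduced.

        import numpy as np

        def sample_l_moments(x):
            """First two sample L-moments and the L-skewness ratio."""
            x = np.sort(np.asarray(x, float))
            n = len(x)
            i = np.arange(1, n + 1)
            # unbiased probability-weighted moments b0, b1, b2
            b0 = x.mean()
            b1 = np.sum((i - 1) / (n - 1) * x) / n
            b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
            l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
            return l1, l2, l3 / l2

        rng = np.random.default_rng(3)
        annual_maxima = rng.gumbel(30.0, 10.0, 200)   # synthetic rainfall maxima, mm
        print(sample_l_moments(annual_maxima))        # L-skewness near 0.17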

  12. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    Directory of Open Access Journals (Sweden)

    Orlov Alexey

    2016-01-01

    This article presents the development of a mathematical model of the nonstationary separation processes occurring in gas centrifuge cascades for the separation of multicomponent isotope mixtures. The model was used to calculate the parameters of a gas centrifuge cascade for the separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge cascades for the separation of multicomponent isotope mixtures.

  13. Level densities of heavy nuclei in the shell model Monte Carlo approach

    Directory of Open Access Journals (Sweden)

    Alhassid Y.

    2016-01-01

    Nuclear level densities are necessary input to the Hauser-Feshbach theory of compound nuclear reactions. However, the microscopic calculation of level densities in the presence of correlations is a challenging many-body problem. The configuration-interaction shell model provides a suitable framework for the inclusion of correlations and shell effects, but the large dimensionality of the many-particle model space has limited its application in heavy nuclei. The shell model Monte Carlo method enables calculations in spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods and has proven to be a powerful tool in the microscopic calculation of level densities. We discuss recent applications of the method in heavy nuclei.

  14. Three-dimensional Monte Carlo model of the coffee-ring effect in evaporating colloidal droplets

    Science.gov (United States)

    Crivoi, A.; Duan, Fei

    2014-03-01

    The residual deposits usually left near the contact line after a pinned sessile colloidal droplet evaporates are commonly known as the "coffee-ring" effect. However, there have been few attempts to simulate the effect, and a realistic, fully three-dimensional (3D) model has been lacking, since the complexity of the drying process limits further investigation. Here we develop a stochastic method to model particle deposition in an evaporating pinned sessile colloidal droplet. The 3D Monte Carlo model is developed in the spherical-cap-shaped droplet. In the algorithm, the analytical equations of fluid flow are used to calculate the probability distributions for the biased random walk, associated with the drift-diffusion equations. We obtain 3D coffee-ring structures as the final results of the simulation and analyze the dependence of the ring profile on the particle volumetric concentration and sticking probability.

  15. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitment are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov model will enable optimization of the recruitment process and the estimation and calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
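
    A minimal continuous-time recruitment simulation of the kind described can be written in a few lines: assuming each centre recruits according to a Poisson process, pooled inter-arrival times are exponential with the summed rate. The centre count, rate and recruitment target below are hypothetical, and the paper's Markov-model refinements are not reproduced.

        import numpy as np

        rng = np.random.default_rng(4)

        def recruitment_duration(n_patients=200, n_centres=10, rate=0.5, n_sim=10_000):
            """Monte Carlo distribution of the time needed to recruit n_patients.
            'rate' is patients per time unit per centre; pooled Poisson arrivals
            give exponential inter-arrival times with the summed rate."""
            waits = rng.exponential(1.0 / (n_centres * rate), size=(n_sim, n_patients))
            durations = waits.sum(axis=1)
            return durations.mean(), np.percentile(durations, 95)

        mean_t, q95 = recruitment_duration()
        print(f"expected duration {mean_t:.1f}, 95th percentile {q95:.1f} time units")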

  16. Monte Carlo simulations of a supersymmetric matrix model of dynamical compactification in non perturbative string theory

    CERN Document Server

    Anagnostopoulos, Konstantinos N; Nishimura, Jun

    2012-01-01

    The IKKT or IIB matrix model has been postulated to be a non-perturbative definition of superstring theory. It has the attractive feature that spacetime is dynamically generated, making possible a scenario of dynamical compactification of extra dimensions, which in the Euclidean model manifests itself as spontaneous breaking (SSB) of the SO(10) rotational invariance. In this work we study, using Monte Carlo simulations, the six-dimensional version of the Euclidean IIB matrix model. The simulations are found to be plagued by a strong complex-action problem, and the factorization method is used for effective sampling and for computing expectation values of the extent of spacetime in various dimensions. Our results are consistent with calculations using the Gaussian expansion method, which predict SSB to SO(3)-symmetric vacua, a finite universal extent of the compactified dimensions and a finite spacetime volume.

  17. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    In recent years, there has been an increase in the application of distributed, physically-based and integrated hydrological models. Many questions regarding how to properly calibrate and validate distributed models and assess the uncertainty of the estimated parameters and the spatially-distributed … -site validation must complement the usual time validation. In this study, we develop, through an application, a comprehensive framework for multi-criteria calibration and uncertainty assessment of distributed, physically-based, integrated hydrological models. A revised version of the generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining …

  18. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient-market hypothesis. The results showed that the static evaluation is inferior to the dynamic one, indicating that the dynamic analysis supports the efficient-market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since the model is capable of accommodating the market's strong dynamics.
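
    A hedged sketch of the VaR machinery discussed above: simulate one-step-ahead returns from a GARCH(1,1) volatility model and read the VaR off the simulated loss quantile. The GARCH parameters and initial conditions are illustrative, not estimates from the Brazilian portfolio studied in the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        # illustrative GARCH(1,1) parameters (not fitted to the paper's data)
        omega, alpha, beta = 1e-6, 0.08, 0.90

        def mc_var(s2_0, r_0, horizon=1, n_sim=100_000, level=0.99):
            """Value-at-Risk from simulated GARCH(1,1) return paths."""
            s2, r = np.full(n_sim, s2_0), np.full(n_sim, r_0)
            for _ in range(horizon):
                s2 = omega + alpha * r ** 2 + beta * s2     # conditional variance
                r = np.sqrt(s2) * rng.standard_normal(n_sim)
            return -np.quantile(r, 1 - level)               # loss quantile

        print(f"99% 1-day VaR: {mc_var(s2_0=2e-4, r_0=0.01):.4f}")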

  19. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach … from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently …

  20. Monte Carlo studies of dynamical compactification of extra dimensions in a model of nonperturbative string theory

    CERN Document Server

    Anagnostopoulos, Konstantinos N; Nishimura, Jun

    2015-01-01

    The IIB matrix model has been proposed as a non-perturbative definition of superstring theory. In this work, we study the Euclidean version of this model in which extra dimensions can be dynamically compactified if a scenario of spontaneously breaking the SO(10) rotational symmetry is realized. Monte Carlo calculations of the Euclidean IIB matrix model suffer from a very strong complex action problem due to the large fluctuations of the complex phase of the Pfaffian which appears after integrating out the fermions. We employ the factorization method in order to achieve effective sampling. We report on preliminary results that can be compared with previous studies of the rotational symmetry breakdown using the Gaussian expansion method.

  1. Monte Carlo modelling of germanium crystals that are tilted and have rounded front edges

    Energy Technology Data Exchange (ETDEWEB)

    Gasparro, Joel [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium); Hult, Mikael [EC-JRC-IRMM, Institute for Reference Materials and Measurements, Retieseweg 111, B-2440 Geel (Belgium)], E-mail: mikael.hult@ec.europa.eu; Johnston, Peter N. [Applied Physics, Royal Melbourne Institute of Technology, GPO Box 2476V, Melbourne 3001 (Australia); Tagziria, Hamid [EC-JRC-IPSC, Institute for the Protection and the Security of the Citizen, Via E. Fermi 1, I-21020 Ispra (Italy)

    2008-09-01

    Gamma-ray detection efficiencies and cascade summing effects in germanium detectors are often calculated using Monte Carlo codes based on a computer model of the detection system. Such a model can never fully replicate reality and it is important to understand how various parameters affect the results. This work concentrates on quantifying two issues, namely (i) the effect of having a Ge-crystal that is tilted inside the cryostat and (ii) the effect of having a model of a Ge-crystal with rounded edges (bulletization). The effect of the tilting is very small (of the order of one per mille) when the tilting angles are within a realistic range. The effect of the rounded edges is, however, relatively large (5-10% or higher), particularly for gamma-ray energies below 100 keV.

  2. A geometrical model for the Monte Carlo simulation of the TrueBeam linac.

    Science.gov (United States)

    Rodriguez, M; Sempau, J; Fogliata, A; Cozzi, L; Sauerwein, W; Brualla, L

    2015-06-07

    Monte Carlo simulation of linear accelerators (linacs) depends on the accurate geometrical description of the linac head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files of the flattening-filter-free (FFF) beams tallied at a plane located just upstream of the jaws. Yet, Monte Carlo simulations based on third-party tallied phase spaces are subject to limitations. In this work, an experimentally based geometry developed for the simulation of the FFF beams of the Varian TrueBeam linac is presented. The Monte Carlo geometrical model of the TrueBeam linac uses information provided by Varian that reveals large similarities between the TrueBeam machine and the Clinac 2100 downstream of the jaws. Thus, the upper part of the TrueBeam linac was modeled by introducing modifications to the Varian Clinac 2100 linac geometry. The most important of these modifications is the replacement of the standard flattening filters by ad hoc thin filters. These filters were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements performed with a diode detector for radiation fields ranging from 3 × 3 to 40 × 40 cm2 at depths of maximum dose of 5 and 10 cm. Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error and the gamma index. The same comparisons were performed for dose profiles obtained from Monte Carlo simulations using the phase-space files distributed by Varian for the TrueBeam linac as the sources of particles. Results of comparisons show a good agreement of the dose for the ansatz geometry similar to that obtained for the simulations with the TrueBeam phase-space files for all fields and depths considered, except for

  4. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even … studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved.

  5. The Impact of Ignoring a Level of Nesting Structure in Multilevel Mixture Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2012-01-01

    Mixture modeling has gained more attention among practitioners and statisticians in recent years. However, when researchers analyze their data using finite mixture model (FMM), some may assume that the units are independent of each other even though it may not always be the case. This article used simulation studies to examine the impact of ignoring a higher nesting structure in multilevel mixture models. Results indicate that the misspecification results in lower classification accuracy of individuals, less accurate fixed effect estimates, inflation of lower level variance estimates, and less accurate standard error estimates in each subpopulation, the latter result of which in turn affects the accuracy of tests of significance for the fixed effects. The magnitude of the intraclass correlation (ICC) coefficient has a substantial impact. The implication for applied researchers is that it is important to model the multilevel data structure in mixture modeling.

  6. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    This paper presents research on the effect of high-power ultrasound on the rheological and foaming properties of model ice-cream mixtures. The model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe with a 12.7 mm tip diameter was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. From the results it can be concluded that all model mixtures exhibit non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower viscosities for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, the foam stability index and the minimal viscosity. Model mixtures treated only with ultrasound showed a minimal increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Mixtures with a higher protein content also showed higher foam stability. The optimal treatment time was determined to be 10 minutes.
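
    Since the abstract reports a flow index and a consistency coefficient, the sketch below shows how the two are typically extracted from a power-law (Ostwald-de Waele) fit, tau = K * gamma_dot**n, via linear regression in log-log space. The shear-rate/shear-stress pairs are invented for illustration; they are chosen so that n > 1, matching the dilatant behaviour reported above.

        import numpy as np

        # hypothetical shear-rate (1/s) / shear-stress (Pa) pairs
        gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
        tau = np.array([1.2, 9.0, 25.0, 200.0, 560.0])

        # log tau = log K + n * log gamma_dot  ->  ordinary least squares
        n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
        print(f"flow index n = {n:.2f} (n > 1: dilatant, as reported)")
        print(f"consistency coefficient K = {np.exp(logK):.2f} Pa·s^n")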

  7. Diluents and lean mixture combustion modeling for SI engines with a quasi-dimensional model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, W.; Davis, G.C. [Ford Motor Co., Dearborn, MI (United States); Hall, M.J.; Matthews, R.D. [Univ. of Texas, Austin, TX (United States)

    1995-12-31

    Lean-mixture combustion might be an important feature in the next generation of SI engines, while diluents have already played a key role in reducing emissions and fuel consumption. Lean-burn modeling is correspondingly important for the engine modeling tools that are used in new engine development. The effect of flame strain on flame speed is believed to be significant, especially under lean mixture conditions. Current quasi-dimensional engine models usually do not include flame strain effects and tend to predict burn rates that are too high under lean-burn conditions. An attempt was made to model flame strain effects in quasi-dimensional SI engine models. The Ford model GESIM was used as the platform. A new strain rate model was developed with the Lewis number effect included. A 2.5L V6 4-valve engine and a 4.6L V8 2-valve modular engine were used to validate the modified turbulent entrainment combustion model in GESIM. Results showed that the current GESIM can differ by as much as 10 crank angle degrees from test data. The modified GESIM predicts burn duration to within 1-2 crank angle degrees of experimental data, which is considered very good for engine models.

  8. S-factor calculations for mouse models using Monte-Carlo simulations.

    Science.gov (United States)

    Bitar, A; Lisbona, A; Bardiès, M

    2007-12-01

    Targeted radionuclide therapy applications require the use of small animals for preclinical experiments. Accurate dose estimation is needed in such animals to explore and analyze the toxicity of injected radiopharmaceuticals. We developed two numerical models to allow for more accurate mouse dosimetry. A frozen nude mouse (30 g) was sliced and digital photographs were taken during the operation. More than 30 organs and tissues were identified and manually segmented. A digital (voxel-based) and a mathematical model were constructed from the segmented images. Important organs were simulated as radiation sources using the Monte-Carlo code MCNP4C. Monoenergetic photons from 0.005 to 2 MeV and monoenergetic electrons from 0.1 to 2.5 MeV were simulated. Activity was assumed to be uniform in all source organs. Results from monoenergetic emissions were integrated over emission spectra. Radionuclide S-factors (Gy/Bq.s) were calculated by taking into account both electron and photon contributions. A comparison of the results obtained with the voxel-based and mathematical models was carried out. The voxel-based model was then used to revise dosimetric results obtained previously under the assumption that all emitted energy is absorbed locally. For (188)Re, the self-absorbed doses in xenografted tumors were 39-69% lower than those obtained by assuming local energy deposition. The voxel-based model represents a more realistic anatomical approach. The rapid advancement of computer science and new features added to Monte-Carlo codes permit a considerable reduction of computational run time. Cross-doses should not be neglected when medium- to high-energy beta emitters are used for preclinical experiments on mice.
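
    Once the Monte Carlo absorbed fractions are in hand, the S-factor computation itself is simple arithmetic; a MIRD-style sketch with invented yields, energies, absorbed fractions and target mass is shown below.

        import numpy as np

        MEV_TO_J = 1.602e-13
        y = np.array([0.7, 0.3])      # emissions per decay (invented spectrum)
        E = np.array([0.8, 2.1])      # mean energy per emission, MeV (invented)
        phi = np.array([0.9, 0.4])    # absorbed fractions, target <- source (invented)
        m_target = 1.0e-4             # target mass, kg (a 100 mg tumour)

        # MIRD formalism: S = sum_i y_i * E_i * phi_i / m_target
        S = np.sum(y * E * phi) * MEV_TO_J / m_target
        print(f"S-factor: {S:.3e} Gy/(Bq·s)")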

  9. Coexisting pulses in a model for binary-mixture convection

    Energy Technology Data Exchange (ETDEWEB)

    Riecke, H.; Rappel, W. [Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, Illinois 60208 (United States)]|[Department of Physics, Northeastern University, 111 Dana Research Center, Boston, Massachusetts 02115 (United States)

    1995-11-27

    We address the striking coexistence of localized waves ("pulses") of different lengths, which was observed in recent experiments and full numerical simulations of binary-mixture convection. Using a set of extended Ginzburg-Landau equations, we show that this multiplicity finds a natural explanation in terms of the competition of two distinct, physical localization mechanisms; one arises from dispersion and the other from a concentration mode. This competition is absent in the standard Ginzburg-Landau equation. It may also be relevant in other waves coupled to a large-scale field. © 1995 The American Physical Society.

  10. Mixture drug-count response model for the high-dimensional drug combinatory effect on myopathy.

    Science.gov (United States)

    Wang, Xueying; Zhang, Pengyue; Chiang, Chien-Wei; Wu, Hengyi; Shen, Li; Ning, Xia; Zeng, Donglin; Wang, Lei; Quinney, Sara K; Feng, Weixing; Li, Lang

    2018-02-20

    Drug-drug interactions (DDIs) are a common cause of adverse drug events (ADEs). The electronic medical record (EMR) database and the FDA's adverse event reporting system (FAERS) database are the major data sources for mining and testing ADE-associated DDI signals. Most DDI data mining methods focus on pair-wise drug interactions, and methods to detect high-dimensional DDIs in medical databases are lacking. In this paper, we propose 2 novel mixture drug-count response models for detecting high-dimensional drug combinations that induce myopathy. The "count" indicates the number of drugs in a combination. One model is called the fixed probability mixture drug-count response model with a maximum risk threshold (FMDRM-MRT). The other is called the count-dependent probability mixture drug-count response model with a maximum risk threshold (CMDRM-MRT), in which the mixture probability is count dependent. Compared with the previous mixture drug-count response model (MDRM) developed by our group, these 2 new models show a better likelihood in detecting high-dimensional drug combinatory effects on myopathy. CMDRM-MRT identified and validated (54; 374; 637; 442; 131) 2-way to 6-way drug interactions, respectively, which induce myopathy in both the EMR and FAERS databases. We further demonstrate that FAERS data capture a much higher maximum myopathy risk than EMR data do. The consistency of the 2 mixture models' parameters and local false discovery rate estimates is evaluated through statistical simulation studies. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Quantum Monte Carlo study of the Retinal Minimal Model C5H6NH2+.

    Science.gov (United States)

    Coccia, Emanuele; Guidoni, Leonardo

    2012-11-05

    In this work, we study the electronic and geometrical properties of the ground state of the Retinal Minimal Model C5H6NH2+ using the variational Monte Carlo (VMC) method by means of the Jastrow antisymmetrized geminal power (JAGP) wavefunction. A full optimization of all wavefunction parameters, including the coefficients and exponents of the atomic basis, has been achieved, giving converged geometries with a compact and correlated wavefunction. The relaxed geometries of the cis and trans isomers present a pronounced bond length alternation pattern characterized by a C=C central double bond slightly shorter than that reported by the CASPT2 structures. The comparison between different basis sets indicates converged values of geometrical parameters, energy differences, and dipole moments even when the smallest wavefunction is used. The compactness of the wavefunction as well as the scalability of VMC optimization algorithms on massively parallel computers opens the way to perform full structural optimizations of conjugated biomolecules of hundreds of electrons by correlated methods like Quantum Monte Carlo. Copyright © 2012 Wiley Periodicals, Inc.

  12. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing power of computers, it has become possible to plan a patient's treatment more precisely. However, the simulation time must be increased to reduce the statistical uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation is time consuming. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using virtual sources on the 201 channels and compared measurements with simulations using the virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than with the original source code, and there was no statistically significant difference in the simulated results.

  13. Estimation of Stochastic Frontier Models with Fixed Effects through Monte Carlo Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Grigorios Emvalomatis

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are updated using information from the data and are robust to possible correlation of the group-specific constant terms with the explanatory variables. Monte Carlo experiments are performed in the specific context of stochastic frontier models to examine and compare the sampling properties of the proposed estimator with those of the random-effects and correlated random-effects estimators. The results suggest that the estimator is unbiased even in short panels. An application to a cross-country panel of EU manufacturing industries is presented as well. The proposed estimator produces a distribution of efficiency scores suggesting that these industries are highly efficient, while the other estimators suggest much poorer performance.

  14. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.

  15. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
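
    For reference, the standard single-spin-flip Metropolis algorithm for the planar Ising model, the baseline on top of which the paper's "holographic" boundary condition is built, can be sketched as follows. Periodic boundaries are used here; the lattice size, inverse temperature (set near the critical value ln(1+sqrt(2))/2 ≈ 0.4407) and sweep count are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def ising_metropolis(L=32, beta=0.4407, n_sweeps=300):
            """Single-spin-flip Metropolis on a periodic L x L lattice (J = 1)."""
            s = rng.choice([-1, 1], size=(L, L))
            for _ in range(n_sweeps):
                for _ in range(L * L):
                    i, j = rng.integers(0, L, 2)
                    nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                          + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                    dE = 2 * s[i, j] * nb          # energy change of flipping s[i, j]
                    if dE <= 0 or rng.random() < np.exp(-beta * dE):
                        s[i, j] = -s[i, j]
            return s

        spins = ising_metropolis()
        print("mean |magnetization| per site:", abs(spins.mean()))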

  16. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands much further research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of thorium-lead (Th-Pb) fuel assembly numerical models for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous-energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  17. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    Energy Technology Data Exchange (ETDEWEB)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first-principles-based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
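
    To make the object being solved concrete: for a small state space the Markovian master equation dp/dt = K p can be solved directly with a matrix exponential, as in the toy sketch below; tensor-train methods such as the one described scale this direct approach to state spaces far too large for an explicit generator matrix. The three-state generator here is invented (columns sum to zero, off-diagonal rates non-negative) and has nothing to do with the CO oxidation model.

        import numpy as np
        from scipy.linalg import expm

        # invented 3-state generator: columns sum to zero, off-diagonals >= 0
        K = np.array([[-1.0,  0.5,  0.0],
                      [ 1.0, -1.5,  0.3],
                      [ 0.0,  1.0, -0.3]])
        p0 = np.array([1.0, 0.0, 0.0])    # start in state 0

        # dp/dt = K p  =>  p(t) = expm(K t) p0
        print(expm(K * 10.0) @ p0)        # state probabilities at t = 10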

  18. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres of a specified radius, and was considered as a whole mixture with uniform contact properties for all the distinct elements. The dynamic modulus and phase angle from uniaxial complex modulus tests of the asphalt mixtures in the laboratory have been collected. A macro-scale Burger's model was first established and the input parameters of Burger's contact model were calibrated by fitting … The ball density effect on the internal stress distribution of the asphalt mixture model has been studied when using this method. Furthermore, the internal stresses under dynamic loading have been studied. The agreement between the predicted and the laboratory test results of the complex modulus shows …

  19. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures containing supercritical methane, argon, nitrogen, and carbon dioxide gases at high pressure. With this model it is unnecessary to solve the time-consuming density profile equations of the gradient theory model. The model has been tested on a number of mixtures at low and high pressures. The results show … with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  20. Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models

    National Research Council Canada - National Science Library

    Vögele, Anna Magdalena; Zsoldos, Rebeka R; Krüger, Björn; Licka, Theresia

    2016-01-01

    … It is based on fitting Gaussian mixture models (GMMs) to surface EMG (sEMG) data. This approach enables researchers/users to isolate parts of the overall muscle activation within locomotion EMG data …

  1. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model a calculation method was obtained which is used to analyze hydrodynamic state and composition of hydrocarbon mixture in each ith section of the pipeline when temperature-pressure and hydraulic conditions change.

  2. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

    This study shows the development of a predictive natural gas mixture combustion model for conventional combustion (ignition) engines. The model was based on resolving two zones: one containing the unburned mixture and the other the combustion products. Energy and matter conservation equations were solved for each crank angle in each zone. The nonlinear differential equations for each phase's energy (covering compression, combustion and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enabled studying the composition of different natural gas mixtures and evaluating combustion in the presence of dry and humid air. Validation results are shown against experimental data, demonstrating the software's precision and accuracy. The results include cylinder pressure, unburned and burned mixture temperatures, burned mass fraction and combustion reaction heat for the modelled engine running on a natural gas mixture.
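
    The integration scheme named above is the classical fourth-order Runge-Kutta method; a minimal sketch is given below. The two-zone right-hand side is a made-up stand-in purely to show the stepping loop, not the paper's energy and mass conservation equations.

        import numpy as np

        def rk4_step(f, t, y, h):
            """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # made-up stand-in for the two-zone energy balance: y = [T_unburned, T_burned]
        def rhs(theta, y):
            return np.array([-0.01 * y[0], 0.02 * (y[0] - 0.5 * y[1])])

        y = np.array([600.0, 2200.0])              # illustrative temperatures, K
        for theta in np.arange(0.0, 90.0, 1.0):    # crank angle, degrees
            y = rk4_step(rhs, theta, y, 1.0)
        print(y)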

  4. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, fitting it to the real data. Second, we present its application in risk analysis, using the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
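
    A hedged sketch of the two risk measures named above for a two-component normal mixture: VaR is the tail quantile, obtained here by root-finding on the mixture CDF, and CVaR is the expected loss beyond VaR, estimated by Monte Carlo. The mixture weights, means and standard deviations are invented, not the FBMKLCI estimates.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import norm

        # invented two-component normal mixture for monthly returns
        w, mu1, s1, mu2, s2 = 0.8, 0.01, 0.03, -0.02, 0.10

        def mix_cdf(x):
            return w * norm.cdf(x, mu1, s1) + (1 - w) * norm.cdf(x, mu2, s2)

        def var_cvar(level=0.95, n_sim=1_000_000):
            q = brentq(lambda x: mix_cdf(x) - (1 - level), -5, 5)   # tail quantile
            rng = np.random.default_rng(7)
            comp = rng.random(n_sim) < w
            r = np.where(comp, rng.normal(mu1, s1, n_sim), rng.normal(mu2, s2, n_sim))
            return -q, -r[r <= q].mean()          # (VaR, CVaR) as positive losses

        print(var_cvar())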

  5. Evaluation of model constant sensitivities for subfilter mixture fraction variance using adjoint and sensitivity derivative approaches

    Science.gov (United States)

    Griffin, Kevin; Mueller, Michael

    2017-11-01

    The subfilter mixture fraction variance is a critical quantity in Large Eddy Simulation (LES) models for turbulent nonpremixed combustion. In the transport equation for the subfilter mixture fraction variance, two terms require modeling: the subfilter mixture fraction dissipation rate and the subfilter scalar flux. Conventional models for both of these terms require specification of model constants: the subfilter mixture fraction dissipation rate model constant and the subfilter turbulent Schmidt number. In this work, two approaches are compared for computing the sensitivity of the subfilter mixture fraction variance to these two model constants. In the first approach, explicit transport equations are derived and solved for the sensitivity derivatives. In the second approach, the sensitivity is obtained from the continuous adjoint equation of the subfilter mixture fraction variance. To stabilize the forward solution of the adjoint equation in LES, an efficient bootstrapping approach is proposed. The two methods are applied to a non-reacting nonpremixed bluff body flow, and the relative magnitudes of the two model constant sensitivities are discussed. The two methods are compared in terms of computational cost and apparent accuracy.

  6. AO3D: A Monte Carlo code for modeling of environmental light propagation

    Energy Technology Data Exchange (ETDEWEB)

    Bates, David E. [Hawaii Institute of Geophysics and Planetology, School of Ocean and Environmental Technology, University of Hawaii, 2525 Correa Road, Honolulu, HI 96822 (United States)], E-mail: bates@higp.hawaii.edu; Porter, John N. [Hawaii Institute of Geophysics and Planetology, School of Ocean and Environmental Technology, University of Hawaii, 2525 Correa Road, Honolulu, HI 96822 (United States)

    2008-07-15

    A Monte Carlo radiative transfer program (Atmosphere-Ocean 3-Dimensional, AO3D) for modeling the coupled atmosphere-ocean environment has been developed. The code allows for the specification of optical properties for the atmosphere, land and ocean. Light rays are tracked as they pass between the atmosphere and the ocean, reflect off the ocean surface, the ocean floor, and off land, or are absorbed. In this version the polarization of light rays is not considered. The optical properties of each horizontally homogeneous layer within the atmosphere and ocean can be set on a layer-by-layer basis with a choice of phase functions, absorption and scattering coefficients, and index of refraction. A wind-dependent Cox and Munk ocean surface realization (with whitecaps) is implemented to model refraction and reflection from surface waves. Either spherical- or flat-Earth models can be used, and all refraction and reflection are accounted for. The AO3D model has been tested by parts, and as a whole by comparison with single- and total-scattering calculations from other radiative transfer codes. Comparisons with Monte Carlo calculations by Adams and Kattawar (agreement in TOA radiance within the published precision ~2%), MODTRAN4 (agreement in spherical-shell atmosphere (SSA) sky radiance within about 2%) and Coupled DIScrete Ordinate Radiative Transfer (COART) (agreement in plane-parallel (PP) sky radiance within 2%) are shown. Sun photometer measurements (including large air mass values) at the Mauna Loa Observatory are compared to AO3D simulations (for a spherical Earth) and suggest that a thin aerosol layer was present above the observatory at the time of the measurements.

  7. Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.

    Science.gov (United States)

    Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2017-12-01

    It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results which are focused on four model variants suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.

  8. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM identifies only one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis is used to justify the assumptions of the error model used in the Bayesian calibration. The results indicate that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and that the resulting likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
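
    As a hedged illustration of the differential-evolution proposal at the heart of DREAM-type samplers, the sketch below updates each of several parallel chains by jumping along the difference of two other randomly chosen chains. The log_post target is a hypothetical stand-in for the DALEC posterior, not the actual model, and the scaling factor follows the usual DE-MC recommendation.

        # Sketch of a differential-evolution Metropolis sweep (the core DREAM move).
        # log_post is a placeholder target, not the DALEC model.
        import numpy as np

        def log_post(theta):
            return -0.5 * np.sum(theta**2)  # hypothetical standard-normal posterior

        def de_metropolis_sweep(chains, rng, eps=1e-6):
            n_chains, dim = chains.shape
            gamma = 2.38 / np.sqrt(2 * dim)  # standard DE-MC scaling factor
            for i in range(n_chains):
                others = [j for j in range(n_chains) if j != i]
                r1, r2 = rng.choice(others, size=2, replace=False)
                prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
                       + eps * rng.standard_normal(dim)
                if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                    chains[i] = prop
            return chains

        rng = np.random.default_rng(0)
        chains = rng.standard_normal((10, 4))  # 10 chains, 4 parameters
        for _ in range(1000):
            chains = de_metropolis_sweep(chains, rng)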

  9. A mixture theory model for a particulate suspension flow in a thermal non-equilibrium context

    Energy Technology Data Exchange (ETDEWEB)

    Martins-Costa, M.L. [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Lab. de Mecanica Teorica e Aplicada; Gama, R.M. Saldanha da [Laboratorio Nacional de Computacao Cientifica (LNCC), Rio de Janeiro, RJ (Brazil)

    1998-07-01

    The present work proposes a local model for a particulate suspension flow employing the continuum theory of mixtures, specially developed to deal with multiphase phenomena. The flow of a Newtonian fluid with small solid particles in suspension, in which thermal non-equilibrium is allowed, is described as a mixture of solid and fluid constituents coexisting in superposition. Thermodynamically consistent constitutive hypotheses are derived in order to build an adequate model for suspensions. (author)

  10. A micromechanical finite element model for linear and damage-coupled viscoelastic behaviour of asphalt mixture

    Science.gov (United States)

    Dai, Qingli; Sadd, Martin H.; You, Zhanping

    2006-09-01

    This study presents a finite element (FE) micromechanical modelling approach for the simulation of linear and damage-coupled viscoelastic behaviour of asphalt mixture. Asphalt mixture is a composite material of graded aggregates bound with mastic (asphalt and fine aggregates). The microstructural model of asphalt mixture incorporates an equivalent lattice network structure whereby intergranular load transfer is simulated through an effective asphalt mastic zone. The finite element model integrates the ABAQUS user material subroutine with continuum elements for the effective asphalt mastic and rigid body elements for each aggregate. A unified approach is proposed using the Schapery non-linear viscoelastic model for the rate-independent and rate-dependent damage behaviour. A finite element incremental algorithm with a recursive relationship for three-dimensional (3D) linear and damage-coupled viscoelastic behaviour is developed. This algorithm is used in a 3D user-defined material model for the asphalt mastic to predict global linear and damage-coupled viscoelastic behaviour of asphalt mixture. For the linear viscoelastic study, the creep stiffnesses of mastic and asphalt mixture at different temperatures are measured in the laboratory. A regression-fitting method is employed to calibrate generalized Maxwell models with Prony series and generate master stiffness curves for mastic and asphalt mixture. A computational model is developed with image analysis of the sectioned surface of a test specimen. The viscoelastic prediction of mixture creep stiffness with the calibrated mastic material parameters is compared with the mixture master stiffness curve over a reduced time period. In regard to damage-coupled viscoelastic behaviour, cyclic loading responses of linear and rate-independent damage-coupled viscoelastic materials are compared. Effects of particular microstructure parameters on the rate-independent damage-coupled viscoelastic behaviour are also investigated with the finite element model.

  11. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  12. Monte Carlo based modeling of indocyanine green bolus tracking in the adult human head

    Science.gov (United States)

    Elliott, Jonathan T.; Diop, Mamadou; Tichauer, Kenneth M.; Lee, Ting-Yim; St. Lawrence, Keith

    2011-02-01

    The use of near-infrared spectroscopy (NIRS) is increasingly being investigated in critical care settings to assess cerebral hemodynamics, because of its potential for guiding therapy during the recovery period following brain injury. Cerebral blood flow (CBF) can be quantified by NIRS using indocyanine green (ICG) as an intravascular tracer. However, extracting accurate measurements from complex tissue geometries, such as the human head, is challenging and has hindered clinical application. With the development of fast Monte Carlo simulations that can take into account a priori anatomical information (e.g. near-infrared light propagation in tissue from MRI or CT imaging data), it is now possible to investigate signal contamination arising from the extracerebral layers, which can confound NIRS-CBF measurements. Here, we present a theoretical model that combines Monte Carlo simulations of broadband time-resolved near-infrared measurements with indicator-dilution theory to model time-dependent changes in light propagation following ICG bolus injection. Broadband, time-resolved near-infrared spectroscopy measurements were simulated for three source-detector positions. Individual simulations required 56 seconds for 5×10^8 photons, and a set of simulations consisting of baseline measurements at 40 wavelengths, and single-wavelength measurements at 160 time-points, required on average 3.4 hours. To demonstrate the usefulness of our model, the propagation of errors associated with varying both the scalp blood flow and the scalp thickness was investigated. For each simulation the data were analyzed using four independent approaches: the simple-subtraction blood flow index (ΔBF_ISS), the time-resolved variance time-to-peak (ΔTTP_TR), and absolute and relative CBF with depth-resolved NIRS (CBF_DR and ΔCBF_DR), to assess cerebral hemodynamics.

  13. Markov model for characterizing neuropsychologic impairment and Monte Carlo simulation for optimizing efavirenz therapy.

    Science.gov (United States)

    Bisaso, Kuteesa R; Mukonzo, Jackson K; Ette, Ene I

    2015-11-01

    The study was undertaken to develop a pharmacokinetic-pharmacodynamic model to characterize efavirenz-induced neuropsychologic impairment, given preexistent impairment, which can be used for the optimization of efavirenz therapy via Monte Carlo simulations. The modeling was performed with NONMEM 7.2. A 1-compartment pharmacokinetic model was fitted to efavirenz concentration data from 196 Ugandan patients treated with a 600-mg daily efavirenz dose. Pharmacokinetic parameters and area under the curve (AUC) were derived. Neuropsychologic evaluation of the patients was done at baseline and in week 2 of antiretroviral therapy. A discrete-time 2-state first-order Markov model was developed to describe neuropsychologic impairment. Efavirenz AUC, day 3 efavirenz trough concentration, and female sex increased the probability (P01) of neuropsychologic impairment. Efavirenz oral clearance (CL/F) increased the probability (P10) of resolution of preexistent neuropsychologic impairment. The predictive performance of the reduced (final) model, given the data, incorporating AUC on P01 and CL/F on P10, showed that the model adequately characterized the neuropsychologic impairment observed with efavirenz therapy. Simulations with the developed model predicted a 7% overall reduction in neuropsychologic impairment probability at 450 mg of efavirenz. We recommend a reduction in efavirenz dose from 600 to 450 mg, because the 450-mg dose has been shown to produce sustained antiretroviral efficacy. © 2015, The American College of Clinical Pharmacology.
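
    The discrete-time two-state Markov structure is simple to simulate. The sketch below uses logistic transition probabilities, with P01 driven by exposure (AUC) and P10 by clearance (CL/F); all coefficient values are hypothetical illustrations, not the fitted NONMEM estimates.

        # Sketch of a discrete-time two-state Markov model of impairment status
        # (0 = unimpaired, 1 = impaired). Coefficients are hypothetical.
        import numpy as np

        def transition_probs(auc, cl, b0=-2.0, b_auc=0.005, g0=-1.0, g_cl=0.05):
            p01 = 1.0 / (1.0 + np.exp(-(b0 + b_auc * auc)))  # unimpaired -> impaired
            p10 = 1.0 / (1.0 + np.exp(-(g0 + g_cl * cl)))    # impaired -> resolved
            return p01, p10

        def simulate(state, auc, cl, n_weeks, rng):
            p01, p10 = transition_probs(auc, cl)
            trajectory = [state]
            for _ in range(n_weeks):
                p_impaired_next = p01 if state == 0 else 1.0 - p10
                state = int(rng.random() < p_impaired_next)
                trajectory.append(state)
            return trajectory

        rng = np.random.default_rng(1)
        print(simulate(state=0, auc=300.0, cl=10.0, n_weeks=8, rng=rng))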

  14. Boson-fermion mixture model of superconductivity and pseudogap phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Mamedov, Tofik

    2004-08-01

    A concept describing the origin of the pseudogap phase of high-T_c superconducting cuprates is discussed. Based on the idea of an electron-composite boson mixture existing in cuprates below some value T_p, an analytical expression for T_p is first obtained. It is shown that T_p depends on the interaction parameter responsible for the transformation of two electrons into a composite boson, as well as on the boson formation energy. Second, the composite boson condensation temperature T_c, defined as the temperature below which the density of condensed bosons ceases to be zero, is found. The reason why T_p and T_c may depend so differently on the interaction parameter is addressed.

  15. A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2016-01-01

    In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein to the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…

  16. Evaluation of Structural Equation Mixture Models: Parameter Estimates and Correct Class Assignment

    Science.gov (United States)

    Tueller, Stephen; Lubke, Gitta

    2010-01-01

    Structural equation mixture models (SEMMs) are latent class models that permit the estimation of a structural equation model within each class. Fitting SEMMs is illustrated using data from 1 wave of the Notre Dame Longitudinal Study of Aging. Based on the model used in the illustration, SEMM parameter estimation and correct class assignment are…

  17. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  18. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Ca...

  19. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
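
    The LOD computation contrasts the maximized log-likelihood of a normal mixture (genotype classes at the putative locus) with that of a single normal under the null. A minimal sketch with toy phenotype data and a fixed mixing weight; the data and starting values are illustrative, not from a real cross:

        # Sketch of the LOD score as a log10 likelihood ratio of a two-component
        # normal mixture versus a single normal. Data are synthetic.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 100)])

        def negloglik_mixture(params, y, w=0.5):
            mu1, mu2, log_sd = params
            sd = np.exp(log_sd)
            lik = w * norm.pdf(y, mu1, sd) + (1 - w) * norm.pdf(y, mu2, sd)
            return -np.sum(np.log(lik))

        def negloglik_single(params, y):
            mu, log_sd = params
            return -np.sum(norm.logpdf(y, mu, np.exp(log_sd)))

        ll_mix = -minimize(negloglik_mixture, [0.0, 1.0, 0.0], args=(y,)).fun
        ll_null = -minimize(negloglik_single, [0.0, 0.0], args=(y,)).fun
        lod = (ll_mix - ll_null) / np.log(10)  # LOD = log10 likelihood ratio
        print(f"LOD = {lod:.2f}")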

  20. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    Science.gov (United States)

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test via simulations.

  1. A Three-dimensional Two-phase Mixture Model for Sediment Transport

    Science.gov (United States)

    Huang, Hai; Zhong, Deyu; Zhang, Hongwu; Zhang, Yinglong J.; Li, Xiaonan

    2017-04-01

    Suspended load often constitutes a large portion of the total load in a fluvial river. In classical fluvial numerical models, flows carrying suspended sediment are usually modeled by Reynolds-averaged equations borrowed directly from classical single-phase fluid dynamics, with an advection-diffusion equation for the sediment and a single-phase turbulence model adopted to close the equations. Due to the omission of the effect of the sediment on the fluid, results from the classical models can deviate significantly from experimental and field observations. In this paper, we develop a three-dimensional numerical model based on two-phase mixture theory to study sediment-laden flows. The two-phase mixture equations are closed by a two-phase mixture turbulence model derived from a two-fluid turbulence model. The two-phase mixture model therefore inherits the essential capabilities of two-fluid models in considering inter-phase interaction, but without solving the full set of governing equations for the two-fluid models. The two-phase mixture equations have a similar form to the governing equations of classical fluvial hydraulics, thus allowing us to embed the two-phase mixture model into SCHISM, a 3D unstructured-grid model for oceans, estuaries and rivers. We verify the new model with a set of experiments, and the results show that the new model is valid for sediment-laden flows covering a wide range of particle diameters and concentrations. We also apply the new model to the study of representative flood events in the Lower Yellow River (LYR), and investigate sediment distributions, velocity profiles, circulation flows in river bends, flood propagation, and erosion and deposition patterns. The computed water surface elevation, cross-sectional bathymetry and sediment concentration show good agreement with the measured data.

  2. Gaussian Mixture Model with Variable Components for Full Waveform LiDAR Data Decomposition and RJMCMC Algorithm

    Directory of Open Access Journals (Sweden)

    ZHAO Quanhua

    2015-12-01

    Full waveform LiDAR data record the signal of the backscattered laser pulse. The elevation and energy information of ground targets can be effectively obtained by decomposition of the full waveform LiDAR data; waveform decomposition is therefore the key to full waveform LiDAR data processing. However, determining the number of components in waveform decomposition is a central and difficult problem. To this end, this paper presents a method which can determine the number automatically. First, a given full waveform is modeled on the assumption that the energy recorded at elevation points follows a Gaussian mixture distribution. A constraint function is defined to steer the model toward fitting the waveform, and a corresponding Gibbs probability distribution is constructed from this function. The Bayesian paradigm is followed to build the waveform decomposition model. Then an RJMCMC (reversible jump Markov chain Monte Carlo) scheme is used to simulate the decomposition model, which determines the number of components and decomposes the waveform into a group of Gaussian distributions. In the RJMCMC algorithm, the move types include updating the parameter vector, splitting or merging Gaussian components, and the birth or death of a Gaussian component. Results obtained from ICESat-GLAS waveform data of different areas show that the proposed algorithm is efficient and promising.
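
    The decomposition target itself is easy to state: the waveform is a sum of Gaussian components. The sketch below fits a fixed number of components by nonlinear least squares on synthetic data; the paper's contribution, choosing that number automatically via RJMCMC birth/death and split/merge moves, is omitted here.

        # Sketch of Gaussian waveform decomposition with a fixed component count.
        # Synthetic data; the component count is chosen by RJMCMC in the paper.
        import numpy as np
        from scipy.optimize import curve_fit

        def waveform(t, *params):
            # params = (amp1, mu1, sigma1, amp2, mu2, sigma2, ...)
            y = np.zeros_like(t)
            for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return y

        t = np.linspace(0, 100, 400)
        rng = np.random.default_rng(3)
        truth = waveform(t, 1.0, 30, 4, 0.6, 55, 6)   # two synthetic returns
        noisy = truth + 0.02 * rng.standard_normal(t.size)

        p0 = [0.8, 25, 5, 0.5, 60, 5]                 # rough initial guesses
        popt, _ = curve_fit(waveform, t, noisy, p0=p0)
        print(np.round(popt, 2))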

  3. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    This paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  4. Monte Carlo modeling of cavity imaging in pure iron using back-scatter electron scanning microscopy

    Science.gov (United States)

    Yan, Qiang; Gigax, Jonathan; Chen, Di; Garner, F. A.; Shao, Lin

    2016-11-01

    Backscattered electrons (BSE) in a scanning electron microscope (SEM) can produce images of subsurface cavity distributions as a nondestructive characterization technique. Monte Carlo simulations were performed to understand the mechanism of void imaging and to identify key parameters for optimizing void resolution. The modeling explores iron targets of different thicknesses and electron beams of different energies, beam sizes, and scan pitches, evaluated for voids of different sizes and depths below the surface. The results show that the void image contrast is primarily caused by a discontinuity in the energy spectra of backscattered electrons, due to the increased outward path lengths of those electrons which penetrate voids and are backscattered at deeper depths. The size resolution of voids at specific depths and the maximum detection depth for specific void sizes are derived as functions of electron beam energy. The results are important for image optimization and data extraction.

  5. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (241Am, 133Ba, 22Na, 60Co, 57Co, 137Cs and 152Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from experimental data decreased from 18% to 4%.

  6. Monte Carlo methods for optimizing the piecewise constant Mumford-Shah segmentation model

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi; Sashida, Satoshi; Okabe, Yutaka [Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397 (Japan); Lee, Hwee Kuan, E-mail: leehk@bii.a-star.edu.sg [Bioinformatics Institute, 30 Biopolis Street, No. 07-01, Matrix, Singapore 138671 (Singapore)

    2011-02-15

    Natural images are depicted in a computer as pixels on a square grid, and neighboring pixels are generally highly correlated. This representation can be mapped naturally to a statistical physics framework on a square lattice. In this paper, we develop an effective use of statistical mechanics to solve the image segmentation problem, an outstanding problem in image processing. Our Monte Carlo method, using several advanced techniques including block-spin transformation, Eden clustering, and simulated annealing, seeks the solution of the celebrated Mumford-Shah image segmentation model. In particular, the advantage of our method is prominent for the case of multiphase segmentation. Our results verify that statistical physics can be a very efficient approach to image processing.

  7. New software library of geometrical primitives for modelling of solids used in Monte Carlo detector simulations

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present our effort to create a new software library of geometrical primitives, which are used for solid modelling in Monte Carlo detector simulations. We plan to replace and unify the current geometrical primitive classes in the CERN software projects Geant4 and ROOT with this library. Each solid is represented by a C++ class with methods suited for measuring the distances of particles from the surface of a solid and for determining whether particles are located inside, outside, or on the surface of the solid. We use a numerical tolerance for determining whether particles are located on the surface. The class methods also contain basic support for visualization. We use dedicated test suites for validation of the shape codes. These include special performance and numerical-comparison tests that help with the analysis of possible candidate class methods and verify that our new implementation proposals are designed and implemented properly. Currently, bridge classes are u...

  8. Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene

    CERN Document Server

    Smith, Dominik

    2013-01-01

    In this work, results are presented of Hybrid Monte Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant α of the system crosses above a certain threshold α_C. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made, and it is established that α_C is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.

  9. Model of electronic energy relaxation in the test-particle Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Roblin, P.; Rosengard, A. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes d`Enrichissement; Nguyen, T.T. [Compagnie Internationale de Services en Informatique (CISI) - Centre d`Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1994-12-31

    We previously presented a new test-particle Monte Carlo method (1) (which we call PTMC), an iterative method for solving the Boltzmann equation, now improved and well suited to collisional steady gas flows. Here, we apply a statistical method described by Anderson (2) to treat electronic-translational energy transfer by a collisional process in atomic uranium vapor. For our study, only three of its many energy levels are considered: 0 cm^-1, 620 cm^-1, and an average level grouping the upper levels. After presenting two-dimensional results, we apply this model to the evaporation of uranium by electron bombardment and show that the PTMC results, for given initial electronic temperatures, are in good agreement with experimental radial velocity measurements. (author). 12 refs., 1 fig.

  10. Modeling of vision loss due to vitreous hemorrhage by Monte Carlo simulation.

    Science.gov (United States)

    Al-Saeed, Tarek A; El-Zaiat, Sayed Y

    2014-08-01

    Vitreous hemorrhage is the leakage of blood into the vitreous humor, which results from various diseases. Vitreous hemorrhage leads to vision problems ranging from mild to severe cases in which blindness occurs. Since erythrocytes are the major scatterers in blood, we model light propagation in vitreous humor with erythrocytes randomly distributed in it. We consider the total medium (vitreous humor plus erythrocytes) as a turbid medium and apply Monte Carlo simulation. Then, we calculate the parameters characterizing vision loss due to vitreous hemorrhage. This work shows that an increase in the volume fraction of erythrocytes results in a decrease of the total transmittance of the vitreous body and an increase in the radius of maximum transmittance, the width of the circular strip of bright area, and the radius of the shadow area.
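
    A minimal photon random-walk sketch in a homogeneous turbid slab illustrates the kind of Monte Carlo involved; isotropic scattering and the optical coefficients below are simplifying assumptions, not the vitreous/erythrocyte values used in the paper.

        # Sketch of photon transport through a turbid slab with survival weighting.
        # Coefficients and geometry are illustrative placeholders.
        import numpy as np

        rng = np.random.default_rng(4)
        mu_s, mu_a, thickness = 10.0, 0.1, 1.0  # per-mm scattering/absorption; slab in mm
        mu_t = mu_s + mu_a
        n_photons, transmitted = 20000, 0.0

        for _ in range(n_photons):
            z, uz, weight = 0.0, 1.0, 1.0
            while weight > 1e-4:
                step = -np.log(rng.random()) / mu_t  # free path from Beer-Lambert
                z += uz * step
                if z < 0.0:                  # escaped back through the entry surface
                    break
                if z > thickness:            # escaped through the far surface
                    transmitted += weight
                    break
                weight *= mu_s / mu_t        # absorb a fraction at the interaction site
                uz = 2.0 * rng.random() - 1.0  # isotropic rescattering (cos theta)

        print("total transmittance ~", transmitted / n_photons)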

  11. Magnetic phase diagram of the anisotropic double-exchange model: a Monte Carlo study

    CERN Document Server

    Yi, H S; Hur, N H

    2000-01-01

    The magnetic phase diagram of highly anisotropic double-exchange model systems is investigated as a function of the ratio of the anisotropic hopping integrals, i.e., t_c/t_ab, on a three-dimensional lattice by using Monte Carlo calculations. The magnetic domain structure at low temperature is found to be a generic property of the strong anisotropy region. Moreover, the t_c/t_ab ratio is crucial in determining the anisotropic charge transport due to the relative spin orientation of the magnetic domains. As a result, we show that the anisotropic hopping integral is the most likely cause of the magnetic domain structure. It is noted that the competition between the reduced interlayer double-exchange coupling and the thermal frustration of the ordered two-dimensional ferromagnetic layer seems to be crucial in understanding the properties of layered manganites.

  12. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    H. Machguth

    2008-12-01

    By means of Monte Carlo simulations we calculated the uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before the uncertainty assessment, the model was tuned to the observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters, with their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e., or approximately 10% of total melt, over the investigated time span. In order to provide a first-order estimate of the variability in uncertainty depending on the quality of the input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well-calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.

  13. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    Science.gov (United States)

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate the application of our tool in two case studies and compare our results to output obtained from a more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
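
    The underlying Monte Carlo workflow is generic: sample parameters from assumed uncertainty ranges, run the forward model for each draw, and summarize the spread of predictions. A sketch with an exponential-decay placeholder standing in for a transient-storage solver; the parameter ranges are invented for illustration.

        # Sketch of Monte Carlo parameter sampling for forward-model uncertainty.
        # forward_model is a placeholder, not the OTIS solver.
        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0, 10, 50)

        def forward_model(k, a):
            return a * np.exp(-k * t)  # stand-in for a transient-storage simulation

        # Sample parameters from assumed uncertainty ranges.
        ks = rng.uniform(0.1, 0.5, size=1000)
        amps = rng.uniform(0.8, 1.2, size=1000)
        runs = np.array([forward_model(k, a) for k, a in zip(ks, amps)])

        lower, median, upper = np.percentile(runs, [2.5, 50, 97.5], axis=0)
        print("width of 95% band at t=5:", (upper - lower)[np.searchsorted(t, 5)])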

  14. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modeling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the selected volume of reactor for modelling, which determines the number of initial molecules) is very important in this method, because Monte Carlo calculations are based on random number generation and reaction probability determinations. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system.

  15. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of artificial neural networks. The book serves as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these is used to give a complete perspective on the principles and the key factors that govern the thermodiffusion process.

  16. Flash Point Measurements and Modeling for Ternary Partially Miscible Aqueous-Organic Mixtures

    OpenAIRE

    Liaw, Horng-Jang; Gerbaud, Vincent; Wu, Hsuan-Ta

    2010-01-01

    Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. This paper presents the first flash point measurements and modeling for partially miscible aqueous-organic mixtures: the ternary type-I mixtures water + ethanol + 1-butanol and water + ethanol + 2-butanol, and the type-II mixture water + 1-butanol + 2-butanol. Results reveal that the flash points are constant along each tie line. Handling the non-ideality of the liquid phase through the use of...

  17. Effect of Phenolic Compound Mixtures on the Viability of Listeria monocytogenes in Meat Model

    Directory of Open Access Journals (Sweden)

    María José Rodríguez Vaquero

    2011-01-01

    The aim of this work is to investigate the synergistic antibacterial effect of phenolic compound mixtures against Listeria monocytogenes in brain heart infusion (BHI) medium, and to select the best mixtures for testing their antibacterial activity in a meat model system. In BHI medium, the most effective mixtures were those of gallic and caffeic acids, gallic and protocatechuic acids, and rutin and quercetin. At a concentration of 200 mg/L, the mixtures of gallic and protocatechuic acids, gallic and caffeic acids, and quercetin and rutin reduced the number of inoculated cells. At a concentration of 100 mg/L, only the quercetin and rutin mixture produced the same synergistic effect. These combinations were selected for testing in meat. At 20 °C, 100 mg/L of the gallic and protocatechuic, gallic and caffeic, and rutin and quercetin mixtures decreased the growth of L. monocytogenes compared to the control. The inhibitory effect of the gallic and protocatechuic acid mixture increased at a concentration of 200 mg/L. The death of inoculated cells was observed in the treatment with 100 mg/L of all combinations at 4 °C. With the addition of 200 mg/L of these combinations, the lethal effect increased. Gallic and caffeic acid, and rutin and quercetin, were the most effective mixtures, since after 14 days of incubation no viable cells of Listeria monocytogenes were detected. The lowest decimal reduction times of 1.0 and 0.95 day were found for the gallic and caffeic acid and the rutin and quercetin mixtures, respectively. These results demonstrate that phenolic compound mixtures have a synergistic antilisterial effect with an important bacterial reduction in meat. Therefore, it is possible to search for strategies that combine the synergistic antimicrobial effects of phenolic compounds with their natural biological properties.

  18. GPU accelerated Monte Carlo simulation of the 2D and 3D Ising model

    Science.gov (United States)

    Preis, Tobias; Virnau, Peter; Paul, Wolfgang; Schneider, Johannes J.

    2009-07-01

    The compute unified device architecture (CUDA) is a programming approach for performing scientific calculations on a graphics processing unit (GPU) as a data-parallel computing device. The programming interface allows algorithms to be implemented using extensions to the standard C language. With a continuously increasing number of cores in combination with a high memory bandwidth, a recent GPU offers substantial resources for general purpose computing. First, we apply this new technology to Monte Carlo simulations of the two-dimensional ferromagnetic square lattice Ising model. By implementing a variant of the checkerboard algorithm, results are obtained up to 60 times faster on the GPU than on a current CPU core. An implementation of the three-dimensional ferromagnetic cubic lattice Ising model on a GPU is able to generate results up to 35 times faster than on a current CPU core. As proof of concept we calculate the critical temperature of the 2D and 3D Ising model using finite size scaling techniques. Theoretical results for the 2D Ising model and previous simulation results for the 3D Ising model can be reproduced.
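
    The checkerboard idea is that the square lattice splits into two interleaved sublattices whose sites are mutually non-adjacent, so all sites of one color can be updated simultaneously, which is what maps well to a GPU. A vectorized NumPy sketch of the same update (CPU-side, for illustration only; lattice size and temperature are arbitrary choices):

        # Sketch of a checkerboard Metropolis sweep for the 2D Ising model.
        import numpy as np

        rng = np.random.default_rng(6)
        L, beta = 64, 0.44                       # lattice size, inverse temperature
        spins = rng.choice([-1, 1], size=(L, L))
        checker = (np.indices((L, L)).sum(axis=0) % 2).astype(bool)

        def sweep(spins):
            for color in (checker, ~checker):
                # Sum of the four periodic neighbors of every site.
                nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
                       + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * spins * nbr           # energy cost of flipping each site
                flip = (rng.random((L, L)) < np.exp(-beta * dE)) & color
                spins[flip] *= -1                # all same-color sites flip in parallel
            return spins

        for _ in range(200):
            spins = sweep(spins)
        print("magnetization per spin:", spins.mean())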

  19. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, proposed by the authors, is a more practical method that has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, so a projection operator is applied to obtain angular momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated and selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of 50Mn nuclei. The level structure of 48Cr, for which exact energies are known, can be calculated with absolute energy eigenvalues accurate to within 200 keV. 56Ni is a self-conjugate nucleus with Z=N=28. Results of shell model calculations of the structure of 56Ni using nuclear model interactions are reported. (K.I.)

  20. Shock structure and temperature overshoot in macroscopic multi-temperature model of mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Madjarević, Damir, E-mail: damirm@uns.ac.rs; Simić, Srboljub, E-mail: ssimic@uns.ac.rs [Department of Mechanics, Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad (Serbia); Ruggeri, Tommaso, E-mail: tommaso.ruggeri@unibo.it [Department of Mathematics and Research Center of Applied Mathematics, University of Bologna, Via Saragozza 8, 40123 Bologna (Italy)

    2014-10-15

    The paper discusses the shock structure in a macroscopic multi-temperature model of gaseous mixtures, recently established within the framework of extended thermodynamics. The study is restricted to weak and moderate shocks in a binary mixture of ideal gases with negligible viscosity and heat conductivity. The model predicts the existence of a temperature overshoot of the heavier constituent, like more sophisticated approaches, but also highlights its non-monotonic behavior, not documented in other studies. This phenomenon is explained as a consequence of weak energy exchange between the constituents, due either to a large mass difference or to large rarefaction of the mixture. In the range of small Mach numbers it is also shown that the shock thickness (or equivalently, the inverse of the Knudsen number) decreases with increasing Mach number, as well as when the mixture tends to behave like a single-component gas (small mass difference and/or presence of one constituent in traces).

  1. Clustering in linear mixed models with approximate Dirichlet process mixtures using EM algorithm

    OpenAIRE

    Heinzl, Felix; Tutz, Gerhard

    2013-01-01

    In linear mixed models, the assumption of normally distributed random effects is often inappropriate and unnecessarily restrictive. The proposed approximate Dirichlet process mixture assumes a hierarchical Gaussian mixture that is based on the truncated version of the stick-breaking presentation of the Dirichlet process. In addition to weakening the distributional assumptions, the specification allows the identification of clusters of observations with a similar random effects structure. An Expectat...

  2. Development of Energy-based Damage and Plasticity Models for Asphalt Concrete Mixtures

    OpenAIRE

    Onifade, Ibrahim

    2017-01-01

    Characterizing the full range of damage and plastic behaviour of asphalt mixtures under varying strain rates and stress states is a complex and challenging task. This is partly due to the strain-rate- and temperature-dependent nature of the material, as well as to the variation in the properties of the constituent materials that make up the composite asphalt mixture. Existing stress-based models for asphalt concrete materials are developed based on mechanics principles, but these m...

  3. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are applied, and a dynamic system for the time evolution of the moments of the mixture's dimensionless concentration distribution function (DCDF) is obtained. The time behavior of the DCDF is recovered by successive estimation of scaled gamma distributions using the moment time data.
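
    The gamma-estimation step amounts to moment matching. Assuming the first two raw moments of the DCDF are available at a given time, a sketch of recovering the gamma parameters follows (the moment values are invented for illustration):

        # Sketch of fitting a gamma distribution to the first two raw moments.
        import numpy as np
        from scipy.stats import gamma

        m1, m2 = 1.8, 4.5            # illustrative first and second raw moments
        var = m2 - m1**2
        shape = m1**2 / var          # gamma shape k = mean^2 / variance
        scale = var / m1             # gamma scale theta = variance / mean

        dist = gamma(a=shape, scale=scale)
        print("check: mean", dist.mean(), "variance", dist.var())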

  4. Mixture Models for the Analysis of Repeated Count Data.

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Böckenholt, U

    1995-01-01

    Repeated count data showing overdispersion are commonly analysed using a Poisson model with a varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An…

  5. On mixture model complexity estimation for music recommender systems

    NARCIS (Netherlands)

    Balkema, W.; van der Heijden, Ferdinand; Meijerink, B.

    2006-01-01

    Content-based music navigation systems are in need of robust music similarity measures. Current similarity measures model each song with the same model parameters. We propose methods to efficiently estimate the required number of model parameters of each individual song. First results of a study on

  6. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  7. A mixture model for the joint analysis of latent developmental trajectories and survival

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Hout, A. van den

    2011-01-01

    A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous time survival data, and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in

  8. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    Science.gov (United States)

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  9. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixtures method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values, because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead, due to the consideration of multiple computer models, that is suitable to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
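
    The central construction, an input-dependent convex combination of model outputs, can be sketched as follows. The two "models" and the logistic weight parameterization are illustrative placeholders; in the paper the weight-function parameters are inferred in the Bayesian framework rather than fixed.

        # Sketch of an input-dependent mixture of two computer-model outputs.
        import numpy as np

        def model_a(x):
            return np.sin(x)             # hypothetical low-fidelity model

        def model_b(x):
            return np.sin(x) + 0.1 * x   # hypothetical alternative-physics model

        def mixture_prediction(x, w_params):
            a0, a1 = w_params            # weight-function parameters (calibrated in
            logit = a0 + a1 * x          # the paper via Bayesian inference; fixed here)
            w = 1.0 / (1.0 + np.exp(-logit))
            return w * model_a(x) + (1.0 - w) * model_b(x)

        x = np.linspace(0, 5, 6)
        print(mixture_prediction(x, w_params=(2.0, -1.0)))  # model_a dominates at small x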

  10. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts was used in each Monte Carlo step to re-calculate next-generation body heights. Network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures. Taller individuals were favored to migrate into network hubs, while backward migration of the same number of individuals was random and not biased towards body height. In the null model there were no road connections, so height information could not be exchanged between districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later

  11. SU-E-T-468: Gamma Knife Perfexion Dosimetry: A Monte Carlo Model of One Sector.

    Science.gov (United States)

    Best, R; Gersh, J; Wiant, D; Bourland, J

    2012-06-01

    We have implemented a Monte Carlo (MC) based dose computation model of one sector of the Gamma Knife Perfexion (GK PFX) using the Penelope MC dosimetry codes. The single-sector simulation was rotated about the z-axis to model all eight GK sectors. GK dosimetric aspects examined include: 1) output factors (OF) for each of the three GK collimator sizes (4, 8, 16 mm), 2) OFs for each source row and collimator size, and 3) dose distribution profiles along the x- and z-axes, compared to film measurements and dose calculations from the Leksell GammaPlan (LGP) workstation. We defined the internal GK PFX geometry in Penelope with the aid of vendor-supplied proprietary information. A single source per row was modeled for five rows for each of the 3 collimators (15 beams modeled). MC simulations were carried out on a Linux cluster. Phase space files (PSFs) were collected for the 15 modeled collimators, then rotated about the z-axis to model the sector of 24 sources per collimator. 3D dose distributions from the MC model, film, and LGP DICOM-RT dose exports were analyzed using Matlab. For OF calculations, a 16 cm diameter dosimetry sphere was modeled with a virtual detector volume at its center. Good agreement is found for row and total output factors; computed dose profiles agree along the x-axis and differ on the inferior side of the z-axis. Detailed geometric representations (radiation source, device components) of the GK PFX are required for high-fidelity MC simulations. Calculated GK PFX OF values depend on the simulated detector volume size (4 mm OF most dependent). Our model shows strong agreement for the GK PFX OFs and dose profile curves compared to reference values. Non-disclosure agreement for proprietary information with Elekta AB. No financial contribution. © 2012 American Association of Physicists in Medicine.

  12. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed a simplified gradient theory (SGT) model for computing surface tensions. With this model, it is not required to solve the time-consuming density profile equations of the gradient theory model. The SRK EOS was applied to calculate the properties of the homogeneous fluid. First, the SGT model was used to predict surface tensions of 34 binary mixtures with an overall average absolute deviation of 3.46%. The results show good agreement between the predicted and experimental surface tensions. Next, the SGT model was applied to correlate surface tensions of binary mixtures containing alcohols, water and/or glycerol...

  13. Evidence in support of Gaussian mixture models for sparse time varying ocean acoustic response functions.

    Science.gov (United States)

    Gendron, Paul J

    2017-04-01

    A hierarchical Gaussian mixture model has been proposed to characterize sparse space-time varying shallow water acoustic response functions [Gendron, J. Acoust. Soc. Am. 139, 1923-1937 (2016)]. Considered here is an extension of this model to a uniform linear vertical array in order to provide an empirical validation of the mixture model for receivers of small aperture. An acoustic environment between source and moving receiver is predicated on a small proportion of relatively coherent paths obeying an ensemble frequency-angle-Doppler distribution. Provided are quantile-quantile plots of the discrete mixture model versus the empirical channel coefficients that lend credence to its use as a sparse model for acoustic response functions.

  14. Probabilistic clustering and shape modelling of white matter fibre bundles using regression mixtures.

    Science.gov (United States)

    Ratnarajah, Nagulan; Simmons, Andy; Hojjatoleslami, Ali

    2011-01-01

    We present a novel approach for probabilistic clustering of white matter fibre pathways using curve-based regression mixture modelling techniques in 3D curve space. The clustering algorithm is based on a principled method for probabilistic modelling of a set of fibre trajectories as individual sequences of points generated from a finite mixture model consisting of multivariate polynomial regression model components. Unsupervised learning is carried out using maximum likelihood principles. Specifically, a conditional mixture model is used together with an EM algorithm to estimate cluster membership. The result of clustering is a probabilistic assignment of fibre trajectories to each cluster and an estimate of cluster parameters. A statistical shape model is calculated for each clustered fibre bundle using the fitted parameters of the probabilistic clustering. We illustrate the potential of our clustering approach on synthetic and real data.
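
    A rough sketch of the EM idea for a mixture of polynomial regressions over curves sampled on a common grid: each curve is softly assigned to the component whose polynomial fits it best. The degree, fixed noise variance, and synthetic data are illustrative simplifications of the paper's 3D multivariate setting.

        # Sketch of EM for a two-component mixture of polynomial regressions.
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 20)
        X = np.vander(t, 3)                              # shared quadratic design matrix
        curves = np.vstack([t**2 + 0.05 * rng.standard_normal((30, 20)),
                            1 - t + 0.05 * rng.standard_normal((30, 20))])

        n, K, sigma2 = len(curves), 2, 0.01
        resp = rng.dirichlet(np.ones(K), size=n)         # soft assignments, n x K
        for _ in range(50):
            # M-step: component coefficients from responsibility-weighted mean curves.
            pis = resp.mean(axis=0)
            ybar = (resp.T @ curves) / resp.sum(axis=0)[:, None]
            betas = np.linalg.lstsq(X, ybar.T, rcond=None)[0]
            # E-step: responsibilities from Gaussian residuals around each fitted curve.
            resid = curves[:, :, None] - (X @ betas)[None, :, :]
            loglik = np.log(pis)[None, :] - 0.5 * (resid**2).sum(axis=1) / sigma2
            resp = np.exp(loglik - loglik.max(axis=1, keepdims=True))
            resp /= resp.sum(axis=1, keepdims=True)

        print("curves per cluster:", np.bincount(resp.argmax(axis=1)))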

  15. Learning Kinetic Monte Carlo Models of Condensed Phase High Temperature Chemistry from Molecular Dynamics

    Science.gov (United States)

    Yang, Qian; Sing-Long, Carlos; Chen, Enze; Reed, Evan

    2017-06-01

    Complex chemical processes, such as the decomposition of energetic materials and the chemistry of planetary interiors, are typically studied using large-scale molecular dynamics simulations that run for weeks on high performance parallel machines. These computations may involve thousands of atoms forming hundreds of molecular species and undergoing thousands of reactions. It is natural to wonder whether this wealth of data can be utilized to build more efficient, interpretable, and predictive models. In this talk, we will use techniques from statistical learning to develop a framework for constructing Kinetic Monte Carlo (KMC) models from molecular dynamics data. We will show that our KMC models can not only extrapolate the behavior of the chemical system by as much as an order of magnitude in time, but can also be used to study the dynamics of entirely different chemical trajectories with a high degree of fidelity. Then, we will discuss three different methods for reducing our learned KMC models, including a new and efficient data-driven algorithm using L1-regularization. We demonstrate our framework throughout on a system of high-temperature high-pressure liquid methane, thought to be a major component of gas giant planetary interiors.
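
    For context, the kinetic Monte Carlo step that such learned models execute is the classic Gillespie algorithm: given per-reaction rates, draw which reaction fires and when. The two toy methane reactions and their rate constants below are illustrative placeholders, not quantities learned from MD.

        # Sketch of the Gillespie (kinetic Monte Carlo) algorithm on two toy reactions.
        import numpy as np

        rng = np.random.default_rng(8)
        counts = {"CH4": 100, "H2": 0, "C2H6": 0}
        k_fwd, k_rev = 1e-3, 5e-4     # hypothetical rate constants

        t = 0.0
        for _ in range(200):
            # Propensities from mass action on current molecule counts.
            a = np.array([k_fwd * counts["CH4"] * (counts["CH4"] - 1) / 2,  # 2 CH4 -> C2H6 + H2
                          k_rev * counts["C2H6"] * counts["H2"]])           # C2H6 + H2 -> 2 CH4
            a_tot = a.sum()
            if a_tot == 0:
                break
            t += rng.exponential(1.0 / a_tot)     # waiting time to the next event
            if rng.random() < a[0] / a_tot:       # choose which reaction fires
                counts["CH4"] -= 2; counts["C2H6"] += 1; counts["H2"] += 1
            else:
                counts["CH4"] += 2; counts["C2H6"] -= 1; counts["H2"] -= 1

        print(t, counts)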

  16. Assessment of a partial-equilibrium/Monte Carlo model for turbulent SYNGAS flames

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.M.; Gulati, A.

    1988-01-01

    Calculations and data for a turbulent jet flame of 40% CO, 30% H2, and 30% N2 in coflowing air are compared extensively. The calculations are based on a partial-equilibrium model for the oxyhydrogen radical pool including CO, and on a velocity-composition joint probability density function (pdf), which closes the turbulent flux and mean chemical source terms. The pdf is joint between the three velocity components and two thermochemical scalars needed to describe partial-equilibrium conditions. The equation is solved numerically by a Monte Carlo technique. The data used are major species concentrations and temperature from pulsed Raman scattering. Difficulties with Raman measurements at high temperatures and of measuring CO2 directly are discussed. The Raman signals are taken from previous studies but here are corrected for high-temperature effects and CO2 vibrational spectra. Temperatures are obtained from the instantaneous density of the major species rather than from the Stokes/anti-Stokes ratio, which is more affected by chemiluminescence. The level of agreement between the model and the data is more favorable to the partial-equilibrium model than previously thought. The relative simplicity of the partial-equilibrium model makes it a candidate for practical calculations.

  17. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations

    Science.gov (United States)

    Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.

    2010-11-01

    We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed or recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
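
    The core of any such MC optical transport model is a weighted photon random walk. The sketch below implements only that core, in an infinite homogeneous medium with isotropic scattering; it omits the paper's 3D voxel grid, boundary handling and Henyey-Greenstein phase function, and the optical coefficients are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    mu_a, mu_s = 1.0, 10.0          # absorption / scattering coefficients (1/mm)
    mu_t = mu_a + mu_s
    n_photons, n_bins, dr = 5000, 50, 0.1

    deposited = np.zeros(n_bins)    # absorbed energy vs. radial distance

    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])
        weight = 1.0
        while weight > 1e-3:        # production codes use Russian roulette here
            step = -np.log(rng.random()) / mu_t          # sample free path
            pos = pos + step * direction
            absorbed = weight * mu_a / mu_t              # deposit a weight fraction
            r = np.linalg.norm(pos)
            if r < n_bins * dr:
                deposited[int(r / dr)] += absorbed
            weight -= absorbed
            # Isotropic scattering; tissue models use Henyey-Greenstein instead.
            cos_t = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])

    print("energy fraction absorbed within 5 mm:", deposited.sum() / n_photons)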

  18. Structure-reactivity modeling using mixture-based representation of chemical reactions

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
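
    A minimal sketch of the mixture representation itself: each side of the reaction is summed into a single descriptor vector, and the reaction is encoded either as the concatenation of the two vectors or as their difference. The toy fragment-count descriptor below merely stands in for SiRMS simplex descriptors, and the SMILES strings are illustrative.

    import numpy as np

    def molecule_descriptor(smiles):
        """Toy fragment-count descriptor (a crude stand-in for SiRMS)."""
        vocab = ["C", "O", "N", "Cl", "="]
        return np.array([smiles.count(f) for f in vocab], dtype=float)

    def reaction_descriptor(reactants, products, mode="difference"):
        r = sum(molecule_descriptor(m) for m in reactants)
        p = sum(molecule_descriptor(m) for m in products)
        if mode == "difference":
            return p - r                       # products minus reactants
        return np.concatenate([r, p])          # concatenated representation

    # Illustrative E2-style reaction: CC-Cl + OH-  ->  C=C + Cl- + H2O.
    print(reaction_descriptor(["CCCl", "[OH-]"], ["C=C", "[Cl-]", "O"]))
    print(reaction_descriptor(["CCCl", "[OH-]"], ["C=C", "[Cl-]", "O"],
                              mode="concatenate"))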

  19. Monte Carlo modeling in CT-based geometries: dosimetry for biological modeling experiments with particle beam radiation.

    Science.gov (United States)

    Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A

    2014-03-01

    The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.

  20. A virtual photon energy fluence model for Monte Carlo dose calculation.

    Science.gov (United States)

    Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan

    2003-03-01

    The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using the measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take the off-axis softening into account. The VEF model is implemented, together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator), into the XVMC dose calculation engine. The implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments are performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water.

  1. Toxicity assessment of organic contaminants: evaluation of mixture effects in model industrial mixtures using 2^n full factorial design.

    Science.gov (United States)

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2008-10-01

    Toxic organic chemicals present in industrial effluents were screened to design mixtures for examining the significant main and interaction effects among mixture components. A set of five four-component mixtures was selected by examining effluents from organic chemical, textile-dye, pulp-paper and petroleum refinery industries. The screening was based on their discharge, solubility, toxicity and volatility. A 2^n full factorial approach was used in designing the mixtures, containing components at two dose levels, EC10 (-) and EC40 (+). Each mixture resulted in 16 combinations. Mixture toxicity was measured using the Vibrio fischeri bioluminescence inhibition assay. The main effects and binary, ternary and quaternary interaction effects were determined, and the significance of effects was evaluated using the normal order score and multifactor ANOVA. The organic chemicals retained after screening included acetaldehyde, aniline, n-butanol, p-cresol, catechol, ethylbenzene, naphthalene, phenol, 1,2,4-trimethylbenzene and o-xylene. In all mixtures, the magnitude of the main effects was more significant than that of the interaction effects. The trend in the main effect of components in any mixture was affected by the trends in the physico-chemical properties of the components, i.e., partition coefficient, molecular size and polarity. In some mixtures, a component with significantly higher concentration and significantly lower toxicity was found to depict a relatively high main effect, as observed for acetaldehyde in mixture I and n-butanol in mixture III. The normal order score approach failed to identify the significant interaction effects that could be identified using multifactor ANOVA. In general, the binary interactions were more significant than the ternary and quaternary interactions.
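
    The design and effect-estimation logic of a 2^n factorial study can be sketched in a few lines: with factors coded as -1/+1, the effect of any factor subset is twice the mean of the response multiplied by the product of the corresponding columns. The responses below are fabricated, not the assay data of the study.

    import itertools
    import numpy as np

    levels = np.array(list(itertools.product([-1, 1], repeat=4)))  # 16 runs
    names = ["A", "B", "C", "D"]

    # Fabricated toxicity responses (e.g., % bioluminescence inhibition).
    rng = np.random.default_rng(3)
    y = (20 + 8 * levels[:, 0] + 5 * levels[:, 1]       # main effects of A, B
         + 2 * levels[:, 0] * levels[:, 2]              # A x C interaction
         + rng.normal(0, 1, 16))

    # Effect of a factor subset = 2 * mean(response * product of its columns).
    for size in (1, 2):
        for combo in itertools.combinations(range(4), size):
            contrast = levels[:, list(combo)].prod(axis=1)
            effect = 2 * np.mean(y * contrast)
            print("x".join(names[i] for i in combo), round(effect, 2))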

  2. Unsupervised learning of broad phonetic classes with a statistical mixture model

    Science.gov (United States)

    Lin, Ying

    2004-05-01

    Unsupervised learning of broad phonetic classes by infants was simulated using a statistical mixture model. A mixture model assumes that data are generated by a certain number of different sources; in this case, broad phonetic classes. With the phonetic labels removed, hand-transcribed segments from the TIMIT database were used in model-based clustering to obtain data-driven classes. Simple hidden Markov models were chosen to be the components of the mixture, with mel-cepstral coefficients as the front end. The mixture model was trained using an expectation-maximization-like (EM-like) algorithm. The EM-like algorithm was initialized by a K-means procedure and then applied to estimate the parameters of the mixture model after iteratively partitioning the clusters. The results of running this algorithm on the TIMIT segments suggest that the partitions may be interpreted as gradient acoustic features, and that to some degree the resulting clusters correspond to knowledge-based phonetic classes. Although such correspondences are rather rough, a careful examination of the clusters showed that the class membership of some sounds is highly dependent on their phonetic contexts. Thus, the clusters may reflect the preliminary phonological categories formed during language learning in early childhood.

  3. Environmental dose rate heterogeneity of beta radiation and its implications for luminescence dating: Monte Carlo modelling and experimental validation

    DEFF Research Database (Denmark)

    Nathan, R.P.; Thomas, P.J.; Jain, M.

    2003-01-01

    The recent development of rapid single sand-sized grain analyses in luminescence dating has necessitated the accurate interpretation of De distributions to recover a representative De acquired since the last bleaching event. Beta heterogeneity may adversely affect the variance and symmetry of De distributions, and it is important to characterise this effect, both to ensure that dose distributions are not misinterpreted and that an accurate beta dose rate is employed in dating calculations. In this study, we make a first attempt at providing a description of potential problems in heterogeneous environments and identify the likely size of these effects on De distributions. The study employs the MCNP 4C Monte Carlo electron/photon transport model, supported by an experimental validation of the code in several case studies. We find good agreement between the experimental measurements and the Monte Carlo simulations.

  4. Monitoring and modeling of ultrasonic wave propagation in crystallizing mixtures

    Science.gov (United States)

    Marshall, T.; Challis, R. E.; Tebbutt, J. S.

    2002-05-01

    The utility of ultrasonic compression wave techniques for monitoring crystallization processes is investigated in a study of the seeded crystallization of copper(II) sulfate pentahydrate from aqueous solution. Simple models are applied to predict crystal yield, crystal size distribution and the changing nature of the continuous phase. A scattering model is used to predict the ultrasonic attenuation as crystallization proceeds. Experiments confirm that the modeled attenuation is in agreement with measured results.

  5. Accurate Monte Carlo modelling of the back compartments of SPECT cameras

    Science.gov (United States)

    Rault, E.; Staelens, S.; Van Holen, R.; De Beenhouwer, J.; Vandenberghe, S.

    2011-01-01

    Today, new single photon emission computed tomography (SPECT) reconstruction techniques rely on accurate Monte Carlo (MC) simulations to optimize reconstructed images. However, existing MC scintillation camera models which usually include an accurate description of the collimator and crystal, lack correct implementation of the gamma camera's back compartments. In the case of dual isotope simultaneous acquisition (DISA), where backscattered photons from the highest energy isotope are detected in the imaging energy window of the second isotope, this approximation may induce simulation errors. Here, we investigate the influence of backscatter compartment modelling on the simulation accuracy of high-energy isotopes. Three models of a scintillation camera were simulated: a simple model (SM), composed only of a collimator and a NaI(Tl) crystal; an intermediate model (IM), adding a simplified description of the backscatter compartments to the previous model and a complete model (CM), accurately simulating the materials and geometries of the camera. The camera models were evaluated with point sources (67Ga, 99mTc, 111In, 123I, 131I and 18F) in air without a collimator, in air with a collimator and in water with a collimator. In the latter case, sensitivities and point-spread functions (PSFs) simulated in the photopeak window with the IM and CM are close to the measured values (error below 10.5%). In the backscatter energy window, however, the IM and CM overestimate the FWHM of the detected PSF by 52% and 23%, respectively, while the SM underestimates it by 34%. The backscatter peak fluence is also overestimated by 20% and 10% with the IM and CM, respectively, whereas it is underestimated by 60% with the SM. The results show that an accurate description of the backscatter compartments is required for SPECT simulations of high-energy isotopes (above 300 keV) when the backscatter energy window is of interest.

  6. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals.

  7. EURADOS intercomparison on measurements and Monte Carlo modelling for the assessment of americium in a USTUR leg phantom.

    Science.gov (United States)

    Lopez, M A; Broggio, D; Capello, K; Cardenas-Mendez, E; El-Faramawy, N; Franck, D; James, A C; Kramer, G H; Lacerenza, G; Lynch, T P; Navarro, J F; Navarro, T; Perez, B; Rühm, W; Tolmachev, S Y; Weitzenegger, E

    2011-03-01

    A collaboration of the EURADOS working group on 'Internal Dosimetry' and the United States Transuranium and Uranium Registries (USTUR) has taken place to carry out an intercomparison on measurements and Monte Carlo modelling determining americium deposited in the bone of a USTUR leg phantom. Preliminary results and conclusions of this intercomparison exercise are presented here.

  8. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors.

  9. A finite mixture model for genotype and environment interactions: Detecting latent population heterogeneity

    Science.gov (United States)

    Gillespie, Nathan A.; Neale, Michael C.

    2013-01-01

    Approaches such as DeFries-Fulker extremes regression (LaBuda et al., 1986) are commonly used in genetically informative studies to assess whether familial resemblance varies as a function of the scores of pairs of twins. While useful for detecting such effects, formal modelling of differences in variance components as a function of pairs' trait scores is rarely attempted. We therefore present a finite mixture model which specifies that the population consists of latent groups which may differ in i) their means, and ii) the relative impact of genetic and environmental factors on within-group variation and covariation. This model may be considered a special case of a factor mixture model, which combines the features of a latent class model with those of a latent trait model. Various models for the class membership of twin pairs may be employed, including additive genetic, common environment, specific environment or major locus (QTL) factors. Simulation results based on variance components derived from Turkheimer and colleagues (2003) illustrate the impact of factors such as the difference in group means and variance components on the feasibility of correctly estimating the parameters of the mixture model. Model-fitting analyses estimated group heritability as .49, which is significantly greater than heritability for the rest of the population in early childhood. These results suggest that factor mixture modelling is sufficiently robust for detecting heterogeneous populations even when group mean differences are modest. PMID:16790151

  10. Mixtures of skewed Kalman filters

    KAUST Repository

    Kim, Hyoungmoon

    2014-01-01

    Normal state-space models are prevalent, but to increase the applicability of the Kalman filter, we propose mixtures of skewed, and extended skewed, Kalman filters. To do so, the closed skew-normal distribution is extended to a scale mixture class of closed skew-normal distributions. Some basic properties are derived and a class of closed skew-t distributions is obtained. Our suggested family of distributions is skewed and has heavy tails too, so it is appropriate for robust analysis. Our proposed special sequential Monte Carlo methods use a random mixture of the closed skew-normal distributions to approximate a target distribution. Hence it is possible to handle skewed and heavy tailed data simultaneously. These methods are illustrated with numerical experiments. © 2013 Elsevier Inc.
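
    For readers unfamiliar with sequential Monte Carlo, the sketch below shows a generic bootstrap particle filter on a linear-Gaussian toy model. It illustrates only the propagate-weight-resample machinery; the paper's filters replace the Gaussian components with (extended) closed skew-normal mixtures, which this sketch does not implement.

    import numpy as np

    rng = np.random.default_rng(4)
    T, N = 50, 1000
    phi, q, r_var = 0.9, 0.5, 1.0   # AR(1) coefficient, process/observation noise

    # Simulate a hidden trajectory and noisy observations.
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + np.sqrt(q) * rng.standard_normal()
        y[t] = x[t] + np.sqrt(r_var) * rng.standard_normal()

    particles = rng.standard_normal(N)
    means = []
    for t in range(1, T):
        particles = phi * particles + np.sqrt(q) * rng.standard_normal(N)  # propagate
        logw = -0.5 * (y[t] - particles) ** 2 / r_var                      # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)                        # filtered posterior mean
        particles = rng.choice(particles, size=N, p=w)     # multinomial resampling

    rmse = np.sqrt(np.mean((np.array(means) - x[1:]) ** 2))
    print("filter RMSE:", round(float(rmse), 3))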

  11. Using physiologically based pharmacokinetic models to estimate the health risk of mixtures of trihalomethanes from reclaimed water.

    Science.gov (United States)

    Niu, Zhiguang; Zang, Xue; Zhang, Ying

    2015-03-21

    To estimate the health risk of mixtures of trihalomethanes (THMs) from reclaimed water during toilet flushing, the interaction-based hazard index (HI) and the mixture carcinogenic risk (CRM) according to tissue dose were computed through the integrated use of both an exposure concentration model and a physiologically based pharmacokinetic (PBPK) model of THMs. Monte Carlo simulations were employed to implement the probabilistic risk analysis and sensitivity analysis. Nine samples collected from J Water Reclamation Plant (JWRP) in Tianjin, China were analyzed. The results indicated that the mean interaction-based HI (0.85) was lower than the acceptable risk level (1); the probability that the interaction-based HI exceeds the acceptable risk level is 22.97%. For carcinogenic risk, the CRM ranges from 9.41×10^-7 to 3.54×10^-5, with a mean of 5.49×10^-6, and the probability of exceeding the acceptable risk level (1×10^-6) is near 100%. The interaction-based HI values for samples no. 1, 5, and 7 exceeded 1, while the CRM values for all samples exceeded 1×10^-6. Consequently, more attention should be paid to reclaimed water used for flushing toilets, even though the non-carcinogenic effect is relatively small. Furthermore, sensitivity analysis showed that the concentration of DBCM had the greatest impact on both the carcinogenic and non-carcinogenic risk. Copyright © 2014 Elsevier B.V. All rights reserved.
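
    The probabilistic risk step can be sketched as follows: sample doses from assumed distributions and report the mean hazard index and its exceedance probability. The sketch uses a plain additive hazard index and fabricated lognormal dose parameters, not the paper's interaction-based index or PBPK-derived tissue doses.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    # Lognormal doses for four THMs and their reference doses (all fabricated).
    doses = rng.lognormal(mean=[-5.5, -6.0, -6.5, -6.7], sigma=0.6, size=(n, 4))
    rfd = np.array([0.01, 0.02, 0.02, 0.03])

    hi = (doses / rfd).sum(axis=1)     # simple additive hazard index per draw
    print("mean HI:", round(hi.mean(), 2))
    print("P(HI > 1):", round((hi > 1).mean(), 4))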

  12. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  13. Transportation Cost Assessment by Means of a Monte Carlo Simulation in a Transshipment Model

    Directory of Open Access Journals (Sweden)

    Gordana Dukić

    2008-09-01

    Full Text Available The task of transport management is to organize the transport of goods from a number of sources to a number of destinations with minimum total costs. The basic transportation model assumes direct transport of goods from a source to a destination with constant unit transportation costs. In practice, however, goods are frequently transported through several transient points where they need to be transshipped. In such circumstances transport planning and organization become increasingly complex. This is especially noticeable in water transport. Most of the issues are directly connected to port operations, as they are the transshipment hubs. Since transportation is under a number of influences, in today's turbulent operating conditions the assumption of fixed unit transportation costs cannot be taken as realistic. In order to improve decision making in the transportation domain, this paper presents a stochastic transshipment model in which the cost estimate is based on Monte Carlo simulation. Simulated values of unit costs are used to devise an adequate linear programming model, the solving of which determines the values of total minimum transportation costs. After repeating the simulation a sufficient number of times, the distribution of total minimum costs can be formed, which is the basis for the pertinent confidence interval estimation. It follows that the design, testing and application of the presented model require a combination of quantitative optimization methods, simulation and elements of inferential statistics, all with the support of computers and adequate software.
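
    A minimal sketch of the approach: sample unit costs, solve the resulting transshipment linear program, and collect the distribution of minimum total cost to form a confidence interval. The two-source, two-hub, two-destination network and all cost parameters below are fabricated.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(6)

    # Two sources, two transshipment hubs, two destinations (balanced: 70 units).
    supply, demand = [30, 40], [25, 45]
    base_cost = np.array([2, 3, 3, 2, 4, 6, 5, 3], dtype=float)
    # Variables: [s1h1, s1h2, s2h1, s2h2, h1d1, h1d2, h2d1, h2d2]
    A_eq = [
        [1, 1, 0, 0, 0, 0, 0, 0],     # ship all of supply 1
        [0, 0, 1, 1, 0, 0, 0, 0],     # ship all of supply 2
        [1, 0, 1, 0, -1, -1, 0, 0],   # flow balance at hub 1
        [0, 1, 0, 1, 0, 0, -1, -1],   # flow balance at hub 2
        [0, 0, 0, 0, 1, 0, 1, 0],     # meet demand 1
        [0, 0, 0, 0, 0, 1, 0, 1],     # meet demand 2
    ]
    b_eq = supply + [0, 0] + demand

    mins = []
    for _ in range(1000):
        c = rng.normal(base_cost, 0.4)               # simulated unit costs
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8)
        mins.append(res.fun)

    mins = np.array(mins)
    lo, hi = np.percentile(mins, [2.5, 97.5])
    print(f"mean minimum cost {mins.mean():.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")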

  14. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

    Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has become a global public health threat. What makes the disease worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is verified and the basic reproduction number is calculated. Besides, stability conditions are checked and finally simulation is done using both the Euler method and the Markov Chain Monte Carlo (MCMC) method, one of the top ten most influential algorithms. Different rates of vaccination and quarantine are examined to predict their effect on the infected population over time. The results show that quarantine and vaccination are very effective ways to control the Ebola epidemic. Our study also suggests that an individual who survives a first infection is less likely to contract the Ebola virus a second time. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of the Ebola epidemic.
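
    The deterministic part of such a model can be sketched with a simple Euler integration of an SIR-type system extended with vaccination and quarantine. The compartment structure and all rates below are illustrative stand-ins, not the article's actual equations.

    beta, gamma, v, kappa = 0.35, 0.10, 0.02, 0.08   # made-up rates (per day)
    S, I, Q, R = 0.99, 0.01, 0.0, 0.0                # initial population fractions
    dt, days = 0.1, 200

    peak = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I
        dS = -new_inf - v * S                  # infection and vaccination
        dI = new_inf - (gamma + kappa) * I     # recovery and quarantine
        dQ = kappa * I - gamma * Q             # quarantined individuals recover
        dR = gamma * (I + Q) + v * S
        S, I, Q, R = S + dt * dS, I + dt * dI, Q + dt * dQ, R + dt * dR
        peak = max(peak, I)

    print("peak infected fraction:", round(peak, 4))
    print("toy reproduction number:", round(beta / (gamma + kappa), 2))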

  15. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
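
    The motivation for Monte Carlo quantification can be sketched on a tiny fault tree: when basic-event probabilities are high, the rare-event (sum of minimal-cut-set products) approximation overestimates the top-event probability, while direct sampling does not. The probabilities and cut sets below are fabricated.

    import numpy as np

    rng = np.random.default_rng(7)
    p = np.array([0.3, 0.4, 0.5])          # high basic-event failure probabilities

    # Toy fault tree: TOP = 2-out-of-3, minimal cut sets {AB, BC, AC}.
    cut_sets = [(0, 1), (1, 2), (0, 2)]

    # Rare-event approximation: sum of minimal-cut-set probabilities.
    rare = sum(np.prod(p[list(cs)]) for cs in cut_sets)

    # Direct Monte Carlo sampling of all basic-event combinations.
    n = 1_000_000
    fails = rng.random((n, 3)) < p
    top = np.zeros(n, dtype=bool)
    for cs in cut_sets:
        top |= fails[:, list(cs)].all(axis=1)

    print(f"rare-event approximation: {rare:.3f}, Monte Carlo: {top.mean():.3f}")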

  16. Monte Carlo modeling of multileaf collimators using the GEANT4 code

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used for patient irradiation in radiotherapy is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the production of the radiation beam by the Linac is simulated and the phase space is generated. The phase space contains information such as the energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the transport of particles (sampled from the phase space) through certain configurations of the irradiation field is simulated to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity-modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  17. Study on Quantification Method Based on Monte Carlo Sampling for Multiunit Probabilistic Safety Assessment Models

    Directory of Open Access Journals (Sweden)

    Kyemin Oh

    2017-06-01

    Full Text Available In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.

  18. Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport

    Science.gov (United States)

    Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.

    2017-10-01

    UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer due in part to its efficient fully implicit solution of coupled fluid neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to "extract" a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful "intermediate" algorithm. The JFNK method also requires calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when a MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.

  19. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants high thrust-to-weight ratio, high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems are missing any throttling capability at run-time, since pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties throughout the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.

  20. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  1. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng

    2004-12-01

    Full Text Available We propose a self-splitting Gaussian mixture learning (SGML) algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM) based on a self-splitting validity measure, the Bayesian information criterion (BIC). It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of SGML to perform model-based clustering and to automatically determine the model complexity of the speaker GMMs for speaker identification.
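
    The BIC-driven stopping rule can be sketched with an off-the-shelf EM fit: fit GMMs with increasing component counts and keep the one minimizing BIC. This uses scikit-learn's GaussianMixture on synthetic data rather than the SGML splitting procedure itself.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(8)
    X = np.vstack([rng.normal(m, 0.5, (200, 2)) for m in (-3, 0, 3)])  # 3 clusters

    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, 7)]
    bics = [g.bic(X) for g in fits]
    best_k = int(np.argmin(bics)) + 1
    print("BIC per k:", [round(b) for b in bics], "-> chosen k:", best_k)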

  2. Modeling of a heat pump charged with a non-azeotropic refrigerant mixture

    Science.gov (United States)

    Domanski, P.

    1986-01-01

    An analysis of the vapor compression cycle and the main components of an air-to-air heat pump charged with a binary non-azeotropic mixture has been performed for steady-state operation. The general heat pump simulation model HPBI has been formulated which is based on independent, analytical models of system components and the logic linking them together. The logic of the program requires an iterative solution of refrigerant pressure and enthalpy balances, and refrigerant mixture and individual mixture component mass inventories. The modeling effort emphasis was on the local thermodynamic phenomena which were described by fundamental heat transfer equations and equation of state relationships among material properties. In the compressor model several refrigerant locations were identified and the processes taking place between these locations accounted for all significant heat and pressure losses.

  3. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavily tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  4. An analytic method for the placebo-based pattern-mixture model.

    Science.gov (United States)

    Lu, Kaifeng

    2014-03-30

    Pattern-mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data. The placebo-based pattern-mixture model (Little and Yau, Biometrics 1996; 52:1324-1333) treats missing data in a transparent and clinically interpretable manner and has been used as sensitivity analysis for monotone missing data in longitudinal studies. The standard multiple imputation approach (Rubin, Multiple Imputation for Nonresponse in Surveys, 1987) is often used to implement the placebo-based pattern-mixture model. We show that Rubin's variance estimate of the multiple imputation estimator of treatment effect can be overly conservative in this setting. As an alternative to multiple imputation, we derive an analytic expression of the treatment effect for the placebo-based pattern-mixture model and propose a posterior simulation or delta method for the inference about the treatment effect. Simulation studies demonstrate that the proposed methods provide consistent variance estimates and outperform the imputation methods in terms of power for the placebo-based pattern-mixture model. We illustrate the methods using data from a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.
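
    For reference, the standard multiple imputation combination discussed above follows Rubin's rules: the total variance is the mean within-imputation variance plus (1 + 1/m) times the between-imputation variance. The estimates below are fabricated.

    import numpy as np

    est = np.array([1.12, 0.98, 1.05, 1.20, 0.91])       # per-imputation estimates
    var = np.array([0.040, 0.045, 0.038, 0.050, 0.042])  # their squared SEs
    m = len(est)

    qbar = est.mean()                 # combined point estimate
    w = var.mean()                    # within-imputation variance
    b = est.var(ddof=1)               # between-imputation variance
    total = w + (1.0 + 1.0 / m) * b   # Rubin's total variance

    print(f"estimate {qbar:.3f}, standard error {np.sqrt(total):.3f}")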

  5. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.

  6. Mixture priors for Bayesian performance monitoring 2: variable-constituent model

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, Corwin L. [Statwood Consulting, 2905 Covington Road, Silver Spring, MD 20910 (United States)]. E-mail: cory@StatwoodConsulting.com; Youngblood, Robert W. [ISL, Inc., 11140 Rockville Pike, Suite 500, Rockville, MD 20852 (United States)]. E-mail: ryoungblood@islinc.com

    2005-08-01

    This paper uses mixture priors for Bayesian assessment of performance. In any Bayesian performance assessment, a prior distribution for performance parameter(s) is updated based on current performance information. The performance assessment is then based on the posterior distribution for the parameter(s). This paper uses a mixture prior, a mixture of conjugate distributions, which is itself conjugate and which is useful when performance may have changed recently. The present paper illustrates the process using simple models for reliability, involving parameters such as failure rates and demand failure probabilities. When few failures are observed the resulting posterior distributions tend to resemble the priors. However, when more failures are observed, the posteriors tend to change character in a rapid nonlinear way. This behavior is arguably appropriate for many applications. Choosing realistic parameters for the mixture prior is not simple, but even the crude methods given here lead to estimators that show qualitatively good behavior in examples.
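
    The updating mechanics can be sketched for a demand failure probability with a two-component beta mixture prior: each component updates conjugately, and the mixture weights are re-weighted by each component's marginal likelihood (a beta-binomial term). The prior parameters and data below are illustrative, not the paper's.

    from scipy.stats import betabinom

    # Two-component prior: "nominal performance" vs. "degraded performance".
    weights = [0.9, 0.1]
    params = [(1.0, 99.0), (5.0, 45.0)]      # (alpha, beta); prior means 0.01, 0.10

    x, n = 4, 50                             # observed failures in n demands

    # Marginal likelihood of the data under each beta component.
    marg = [betabinom.pmf(x, n, a, b) for a, b in params]

    # Posterior mixture weights: prior weight times marginal likelihood.
    norm = sum(w * m for w, m in zip(weights, marg))
    post_w = [w * m / norm for w, m in zip(weights, marg)]

    # Each component updates conjugately to (alpha + x, beta + n - x).
    for w, (a, b) in zip(post_w, params):
        a2, b2 = a + x, b + n - x
        print(f"weight {w:.3f}, component posterior mean {a2 / (a2 + b2):.4f}")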

  7. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). Those characteristics have made these mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids improve the performance of JT cycles by an order of magnitude. Optimization of JT cycles has earned substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design is a critical point in these cycles; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied to a fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.

  8. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    Science.gov (United States)

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

    Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7 d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e., realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn only = 18; Ni only = 17; Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on chronic metal mixture toxicity.
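
    The IA combination rule at the core of the MMBM is simple enough to state in code: the predicted fractional mixture effect is one minus the product of the unaffected fractions. The single-metal effect levels below are fabricated; in the MMBM they would come from the metal-specific bioavailability models.

    def independent_action(effects):
        """Combine single-metal fractional effects (0-1) assuming strict
        non-interaction (response multiplication)."""
        unaffected = 1.0
        for e in effects:
            unaffected *= 1.0 - e
        return 1.0 - unaffected

    # E.g. 30% (Ni), 20% (Zn) and 10% (Pb) reproductive inhibition individually:
    print(independent_action([0.30, 0.20, 0.10]))   # 1 - 0.7*0.8*0.9 = 0.496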

  9. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been burdened by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel-cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements.

  10. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  11. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated for suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones in a vertical 2D framework. The VOF method for describing the water-air free surface and a topography response model are also coupled to the mixture model. The bed-load transport rate and suspended-load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the direction of wave propagation, while deposition dominates on the slope facing towards it, indicating an onshore migration tendency.

  12. Monte Carlo mathematical modeling of the interactions between light and skin tissue of newborns

    Science.gov (United States)

    Pushkareva, Alexandra; Kozyreva, Olga

    2017-02-01

    A model of the interactions between light and skin tissue was reviewed. For the present study, the skin of newborns was examined, and the characteristics of newborn skin tissue were taken into account in the modeling. In the developed model the skin is represented by three layers: the epidermis, the basal layer and the dermis, with layer thicknesses corresponding to the structure of newborn skin. The absorbance of each layer in the visible and near-infrared regions of the spectrum was determined by the absorption of three main skin chromophores: blood, melanin and water. Formulas for the scattering and absorption coefficients of blood are given in this study. This paper presents a study of the effect of blood oxygenation on the diffusely scattered radiation signal for three source-detector separations: 0.3 mm, 0.6 mm and 1.5 mm. The calculations were performed using Monte Carlo mathematical modeling, and a detailed description of the model is given. The adequacy of the suggested model was tested by comparing the calculated characteristics with experimental results obtained by means of a double integrating sphere. The results show that the wavelength range providing sufficiently accurate measurements depends on the distance between the source and the receiver of radiation, and specific data are provided: for a distance of 0.3 mm this range is 700-780 nm and 950-1000 nm; for 0.6 mm it is 640-670 nm, 760-780 nm and 850-870 nm; and for 1.5 mm it is 620-740 nm.
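
    A minimal sketch of the kind of photon random walk such a model rests on is given below, assuming isotropic scattering (layered skin models typically use the Henyey-Greenstein phase function instead) and placeholder optical properties for the three layers; it tallies photons re-emitted near a 0.3 mm source-detector separation. None of the numbers are the paper's.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative three-layer skin: (thickness [cm], mu_a [1/cm], mu_s [1/cm]).
        # These optical properties are placeholders, not the paper's values.
        layers = [(0.002, 3.0, 150.0),   # epidermis
                  (0.001, 5.0, 120.0),   # basal layer
                  (0.200, 1.0, 100.0)]   # dermis
        boundaries = np.cumsum([0.0] + [t for t, _, _ in layers])

        def layer_index(z):
            for i in range(len(layers)):
                if z < boundaries[i + 1]:
                    return i
            return None  # transmitted below the modeled slab

        def trace_photon():
            """Random walk with isotropic scattering; returns exit radius or None."""
            pos = np.zeros(3)                      # photon enters at the origin
            direction = np.array([0.0, 0.0, 1.0])  # heading into the skin
            for _ in range(10_000):
                i = layer_index(pos[2])
                if i is None:
                    return None                    # transmitted
                _, mu_a, mu_s = layers[i]
                mu_t = mu_a + mu_s
                pos = pos + direction * rng.exponential(1.0 / mu_t)
                if pos[2] < 0.0:                   # re-emitted through the surface
                    return float(np.hypot(pos[0], pos[1]))
                if rng.random() < mu_a / mu_t:
                    return None                    # absorbed
                v = rng.normal(size=3)             # isotropic scattering
                direction = v / np.linalg.norm(v)

        n, hits = 50_000, 0
        r_det, dr = 0.03, 0.005  # detector ring at 0.3 mm separation (cm units)
        for _ in range(n):
            r = trace_photon()
            if r is not None and abs(r - r_det) < dr:
                hits += 1
        print(f"fraction detected near r = {r_det} cm: {hits / n:.2e}")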

  13. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cases......- udders relative to SCS from IMI+ udders. Further, the genetic correlation between SCS of IMI- and SCS of IMI+ was 0.61, and heritability for liability to putative mastitis was 0.07. Models B2 and C allocated approximately 30% of SCS records to IMI+, but for model B1 this fraction was only 10......%. The correlation between estimated breeding values for liability to putative mastitis based on the model (SCS for model A) and estimated breeding values for liability to clinical mastitis from the national evaluation was greatest for model B1, followed by models A, C, and B2. This may be explained by model B1...
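
    The LNM machinery involves liabilities, thresholds and genetic effects; the sketch below only illustrates the underlying mixture-allocation idea, fitting a two-component normal mixture to synthetic SCS-like data by EM and reporting the fraction of records allocated to the high-SCS (putative IMI+) component. All numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic SCS-like data: a low "healthy" and a high "infected" component.
        scs = np.concatenate([rng.normal(2.5, 0.8, 700), rng.normal(4.5, 1.0, 300)])

        # EM for a two-component normal mixture (illustrative only; the LNM model
        # additionally ties component membership to a genetic liability threshold).
        w, mu, sd = np.array([0.5, 0.5]), np.array([2.0, 5.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: posterior probability of each record under each component
            dens = np.exp(-0.5 * ((scs[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update weights, means and standard deviations
            nk = resp.sum(axis=0)
            w = nk / len(scs)
            mu = (resp * scs[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (scs[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("fraction allocated to the high-SCS (IMI+) component:", w[1].round(3))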

  14. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2015-03-01

    Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between OI and the logarithm of the odor activity value (OAV, the ratio between the chemical concentration and the odor threshold) of individual odorants, the OI of each unmixed component was replaced with the logarithm of its OAV. The interaction coefficient (cosα), which represents the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture can be predicted directly with the modified Vector Model after the usual quantitative analysis. The modified Vector Model was found to be applicable to odor mixtures whose constituents share the same chemical functional groups and have similar molecular structures.
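
    Under one plausible reading of the abstract, a binary-mixture prediction combines the constituents' lnOAV values by the vector-sum rule and maps the result back to OI through the fitted linear relation. The sketch below implements that reading; the OAVs, cosα and linear coefficients are all hypothetical, not values from the paper.

        import math

        def mixture_oi(oav_1, oav_2, cos_alpha, a, b):
            """Modified Vector Model for a binary mixture (sketch).

            The classic Vector Model combines unmixed intensities I1, I2 as
            sqrt(I1**2 + I2**2 + 2*I1*I2*cos_alpha); here each intensity is
            the lnOAV of the constituent, mapped back to OI via the assumed
            linear relation OI = a + b * lnOAV.
            """
            i1, i2 = math.log(oav_1), math.log(oav_2)
            ln_oav_mix = math.sqrt(i1**2 + i2**2 + 2.0 * i1 * i2 * cos_alpha)
            return a + b * ln_oav_mix

        # Hypothetical odor activity values and interaction coefficient.
        print(mixture_oi(oav_1=50.0, oav_2=20.0, cos_alpha=0.4, a=1.0, b=0.6))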

  15. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Fang, Di

    2015-01-01

    Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between OI and the logarithm of the odor activity value (OAV, the ratio between the chemical concentration and the odor threshold) of individual odorants, the OI of each unmixed component was replaced with the logarithm of its OAV. The interaction coefficient (cosα), which represents the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture can be predicted directly with the modified Vector Model after the usual quantitative analysis. The modified Vector Model was found to be applicable to odor mixtures whose constituents share the same chemical functional groups and have similar molecular structures. PMID:25760055

  16. The Robust EM-type Algorithms for Log-concave Mixtures of Regression Models.

    Science.gov (United States)

    Hu, Hao; Yao, Weixin; Wu, Yichao

    2017-07-01

    Finite mixture of regression (FMR) models can be reformulated as incomplete-data problems and estimated via the expectation-maximization (EM) algorithm. The main drawback is the strong parametric assumption, such as FMR models with normally distributed residuals; the estimates may be biased if the model is misspecified. To relax the parametric assumption about the component error densities, a new method is proposed that estimates the mixture regression parameters by assuming only that the components have log-concave error densities, without specifying a parametric family. Two EM-type algorithms for mixtures of regression models with log-concave error densities are proposed. Numerical studies compare the performance of these algorithms with the normal-mixture EM algorithm. When the component error densities are not normal, the new methods have much smaller MSEs than the standard normal-mixture EM algorithm; when the underlying component error densities are normal, the new methods perform comparably to the normal EM algorithm.
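
    For orientation, the sketch below implements the standard normal-mixture-of-regressions EM baseline against which the paper's algorithms are compared; the robust variants would replace the normal density in the E-step with a log-concave density re-estimated from the current residuals. Data and starting values are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic two-component mixture-of-regressions data.
        n = 500
        x = rng.uniform(0, 10, n)
        z = rng.random(n) < 0.4
        y = np.where(z, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0, 1.0, n)
        X = np.column_stack([np.ones(n), x])

        K = 2
        beta = np.array([[0.0, 1.0], [5.0, -0.5]])  # initial coefficients
        sigma = np.ones(K)
        w = np.full(K, 1.0 / K)
        for _ in range(100):
            # E-step: responsibilities under the current normal component densities
            resid = y[:, None] - X @ beta.T
            dens = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: weighted least squares per component
            for k in range(K):
                Wk = resp[:, k]
                WX = X * Wk[:, None]
                beta[k] = np.linalg.solve(X.T @ WX, WX.T @ y)
                sigma[k] = np.sqrt((Wk * (y - X @ beta[k]) ** 2).sum() / Wk.sum())
            w = resp.mean(axis=0)

        print("component coefficients:\n", beta.round(2))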

  17. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    Science.gov (United States)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

    The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve image quality and, at the same time, to reduce the dose to the patient through preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, the material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation that does not depend on access to proprietary data or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF), which is adjusted during the calculations until the calculated MC ratio CTDIw/CTDI100,a matches the ratio determined by measurements or found in publications for that scanner. Once the scanner-specific value of the SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a on 11 different CT scanners, and for CTDI100,c, CTDI100,p and their ratio on 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms
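
    A sketch of the calibration loop this describes is given below: filter thickness as a sigmoid Boltzmann function of fan angle scaled by the SFF, and a bisection on the SFF until a simulated CTDIw/CTDI100,a ratio matches a measured target. The actual method runs an EGSnrc MC transport calculation at each step; here that step is replaced by a crude hypothetical surrogate, and every parameter is an illustrative placeholder rather than proprietary scanner data.

        import numpy as np

        def bowtie_thickness(theta_deg, sff, t_min=0.5, t_max=6.0, theta0=12.0, k=3.0):
            """PMMA thickness vs. fan angle as a sigmoid Boltzmann function (sketch).

            t(theta) = t_max - (t_max - t_min) / (1 + exp((|theta| - theta0) / k)),
            scaled by the scanner filter factor (SFF). All parameters here are
            illustrative placeholders, not proprietary scanner data.
            """
            t = t_max - (t_max - t_min) / (1.0 + np.exp((np.abs(theta_deg) - theta0) / k))
            return sff * t

        def simulated_ctdi_ratio(sff):
            """Stand-in for the EGSnrc Monte Carlo run that would compute
            CTDIw/CTDI100,a for a given bowtie filter model."""
            theta = np.linspace(-25, 25, 101)
            attenuation = np.exp(-0.18 * bowtie_thickness(theta, sff))  # rough mu_PMMA
            return 0.6 + 0.4 * attenuation.mean()  # hypothetical monotone mapping

        # Bisection on the SFF until the simulated ratio matches the measured one.
        target = 0.85
        lo, hi = 0.1, 3.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if simulated_ctdi_ratio(mid) > target:
                lo = mid  # too little filtration -> increase the SFF
            else:
                hi = mid
        print(f"calibrated SFF ~ {0.5 * (lo + hi):.3f}")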

  18. Model-based approaches to synthesize microarray data : A unifying review using mixture of SEMs

    NARCIS (Netherlands)

    Martella, F.; Vermunt, J.K.

    2013-01-01

    Several statistical methods are nowadays available for the analysis of gene expression data recorded through microarray technology. In this article, we take a closer look at several Gaussian mixture models which have recently been proposed to model gene expression data. It can be shown that these

  19. On Inclusion of Covariates for Class Enumeration of Growth Mixture Models

    Science.gov (United States)

    Li, Libo; Hser, Yih-Ing

    2011-01-01

    In this article, we directly question the common practice in growth mixture model (GMM) applications of relying exclusively on fitting the model without covariates for GMM class enumeration. We provide theoretical and simulation evidence to demonstrate that excluding covariates from GMM class enumeration can be problematic in many cases. Based…

  20. Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by