WorldWideScience

Sample records for models introduce mixtures

  1. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Full Text Available Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, so that unwanted program effects can be identified and addressed in time and the positive elements of the project's impact can be encouraged. In short, different evaluation models are used to minimize the losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  2. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data of the kind commonly found in screening cohorts assembled from electronic health records. Absolute and relative risks can be estimated for simple random sampling and stratified sampling (both superpopulation and finite-population approaches to the target population are supported). Non-parametric (absolute risks only), semi-parametric, weakly parametric (using B-splines), and some fully parametric (such as logistic-Weibull) models are supported.

  3. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  4. Introducing and modeling inefficiency contributions

    DEFF Research Database (Denmark)

    Asmild, Mette; Kronborg, Dorte; Matthews, Kent

    2016-01-01

    The paper introduces the so-called inefficiency contributions, which are defined as the relative contributions from specific variables to the overall levels of inefficiency. A statistical model for distinguishing the inefficiency contributions between subgroups is proposed, and the method is illustrated on a data set on Chinese banks.

  5. Modelling of a homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (two densities, two velocities, two temperatures) but more accurate than the standard four-equation mixture models (two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, making it possible to follow both a mixture velocity and a relative velocity. The relative velocity is not tracked by a conservation law but by a closure law (a drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
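
The mixture variables the abstract refers to can be written in standard two-phase notation (volume fractions α_k, phase densities ρ_k, velocities u_k). This is only the usual bookkeeping, not the authors' specific closure:

```latex
\rho = \alpha_1 \rho_1 + \alpha_2 \rho_2, \qquad
\rho\, u = \alpha_1 \rho_1 u_1 + \alpha_2 \rho_2 u_2, \qquad
u_r = u_1 - u_2 .
```

In an HEM-type model, u_r is not evolved by its own conservation law; it is given algebraically by a drift relation derived from the drag terms.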

  6. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement in the performance of the MPC compared to the current...

  7. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose a nonparametric finite mixture of regression models. We study the identifiability of the proposed models and develop an estimation procedure employing kernel regression. We further systematically study the sampling properties of the proposed estimators and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite-sample performance of the proposed estimation procedure. The proposed methodology is further illustrated with an empirical analysis of the US house price index data.
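
The alternating structure of such an EM fit can be illustrated with a minimal parametric sketch: two linear components with Gaussian noise, E-step responsibilities, and M-step weighted least squares. The data, starting values, and variance floor below are hypothetical choices for the demo; the paper's actual estimator is kernel-based and nonparametric.

```python
import math

def normal_pdf(r, s):
    # Gaussian density of a residual r with standard deviation s
    return math.exp(-r * r / (2 * s * s)) / math.sqrt(2 * math.pi * s * s)

def em_mix_linreg(xs, ys, iters=200):
    # Two components y ~ a + b*x + N(0, s^2); hypothetical starting values
    comps = [{"a": 0.0, "b": 1.0, "s": 0.5, "pi": 0.5},
             {"a": 0.0, "b": -1.0, "s": 0.5, "pi": 0.5}]
    n = len(xs)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x, y in zip(xs, ys):
            ws = [c["pi"] * normal_pdf(y - c["a"] - c["b"] * x, c["s"])
                  for c in comps]
            tot = sum(ws) or 1e-300
            resp.append([w / tot for w in ws])
        # M-step: weighted least squares for each component
        for k, c in enumerate(comps):
            w = [resp[i][k] for i in range(n)]
            sw = sum(w)
            mx = sum(wi * x for wi, x in zip(w, xs)) / sw
            my = sum(wi * y for wi, y in zip(w, ys)) / sw
            sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
            sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
            c["b"] = sxy / sxx
            c["a"] = my - c["b"] * mx
            var = sum(wi * (y - c["a"] - c["b"] * x) ** 2
                      for wi, x, y in zip(w, xs, ys)) / sw
            c["s"] = max(math.sqrt(var), 1e-3)  # floor keeps the E-step stable
            c["pi"] = sw / n
    return comps

# Hypothetical noise-free demo data: the two lines y = 2x and y = -2x
xs = [i / 10 for i in range(1, 31)] * 2
ys = [2 * x for x in xs[:30]] + [-2 * x for x in xs[30:]]
comps = em_mix_linreg(xs, ys)
```

With clearly separated components the responsibilities harden over the iterations and each component recovers one of the two lines.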

  8. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  9. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    International Nuclear Information System (INIS)

    Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.

    2007-01-01

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy which exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities that are direct input to remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects simulated with the SMRT model for three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy, as their optical properties constitute a major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model...

  10. Introducing Waqf Based Takaful Model in India

    Directory of Open Access Journals (Sweden)

    Syed Ahmed Salman

    2014-03-01

    Full Text Available Objective – Waqf is a unique feature of the socioeconomic system of Islam in a multi-religious and developing country like India. India is a country rich in waqf assets, and the history of waqf in India can be traced back 800 years. Most researchers suggest how waqf can be used as a tool to mitigate the poverty of Muslims. India has the third-largest Muslim population after Indonesia and Pakistan. However, the majority of Muslims belong to the low-income group and are in need of help. It is believed that waqf can be utilized for the betterment of the Indian Muslim community. Among the available uses of waqf assets, the main objective of this paper is to introduce a waqf-based takaful model in India. In addition, how this proposed model can be adopted in India is highlighted. Methods – Library research is applied, since this paper relies on secondary data, by thoroughly reviewing the most relevant literature. Result – India, as a country rich in waqf assets, should fully utilize these resources to help Muslims through takaful. Conclusion – In this study, we have proposed a waqf-based takaful model for India combining the concepts of mudarabah and wakalah. We recommend this model based on the background and situation of the country. Since we have not tested the viability of this model in India, future research should test it. Keywords: Waqf, Takaful, Poverty, India

  11. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  12. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients' proportions sum to one. The thesis begins by introducing mixture...

  13. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...

  14. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    ...association between prices that can be attributed, among others, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled at both a global and a local... The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that the submarkets are recovered from the housing price data rather than imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo...

  15. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single-phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations, which place restrictions on the possible material response during the motion and phase change. In modeling the structured multiphase mixtures, the modeling of intrinsic motion of grains or particles is accomplished by adjoining to the basic set of field equations the additional balance equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of the additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures, and they are usually derived through very special assumptions about the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, the Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations needed to model the structural characteristics of a mixture. Utilizing a volume averaging procedure, it is possible not only to derive the basic set of field equations discussed above, but also a very general set of additional balance equations for modeling of the structural properties of the mixture.

  16. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and the antioxidant response were determined. Metal mixture treatment increased both the damage markers and the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase, and this finding correlated significantly with the generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while in the promotion phase a total block of the transforming capacity was observed. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  17. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  18. Probabilistic mixture-based image modelling

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 482-500 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:CESNET(CZ) 387/2010; GA MŠk(CZ) 2C06019; GA ČR(CZ) GA103/11/0335 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF texture modelling * discrete distribution mixtures * Bernoulli mixture * Gaussian mixture * multi-spectral texture modelling Subject RIV: BD - Theory of Information Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/haindl-0360244.pdf

  19. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  20. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    Science.gov (United States)

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  1. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model, both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
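
The Plackett-Luce likelihood that serves as the mixture component is simple to compute: each ranked item is chosen with probability proportional to its worth among the items not yet ranked. A minimal sketch with hypothetical item worths follows; estimating the weights and worths is the Bayesian machinery described above.

```python
from itertools import permutations

def plackett_luce_prob(ranking, support):
    """Probability of `ranking` (best to worst) under a Plackett-Luce model
    with positive item worths `support`; a partial (top-t) ranking is handled
    by simply stopping after its last stage."""
    remaining = list(support)
    p = 1.0
    for item in ranking:
        p *= support[item] / sum(support[j] for j in remaining)
        remaining.remove(item)
    return p

def mixture_prob(ranking, weights, supports):
    # Finite mixture: weighted sum of component Plackett-Luce likelihoods.
    return sum(w * plackett_luce_prob(ranking, s)
               for w, s in zip(weights, supports))

worths = {"a": 2.0, "b": 1.0, "c": 1.0}  # hypothetical worth parameters
# P(a > b > c) = (2/4) * (1/2) * (1/1) = 0.25
p_abc = plackett_luce_prob(("a", "b", "c"), worths)
```

Because the stage-wise choice probabilities normalize over the remaining items, the probabilities of all full rankings sum to one, which makes the model easy to sanity-check.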

  2. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper, Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large, their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this...

  4. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days, since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
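
For contrast with KLIEP-style direct ratio fitting, the naive alternative — estimate each density separately, then divide — is easy to sketch in one dimension with single moment-matched Gaussians. GM-KLIEP is designed to avoid exactly this two-step plug-in; all names and data here are hypothetical.

```python
import math

def fit_gaussian(xs):
    # Moment-matching fit of a single 1-D Gaussian: a deliberately crude
    # density estimate (KLIEP-style methods skip this step entirely).
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def importance(x, numer_sample, denom_sample):
    """Estimate w(x) = p_nu(x) / p_de(x) by plugging in two fitted Gaussians."""
    mu_n, var_n = fit_gaussian(numer_sample)
    mu_d, var_d = fit_gaussian(denom_sample)
    return gauss_pdf(x, mu_n, var_n) / gauss_pdf(x, mu_d, var_d)

numer = [0.5, 1.0, 1.5]   # hypothetical sample from the numerator density
denom = [-0.5, 0.0, 0.5]  # hypothetical sample from the denominator density
```

The weakness this illustrates is that estimation error in the denominator density is amplified by the division, which is the motivation for modelling the ratio directly.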

  5. Text document classification based on mixture models

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Roč. 40, č. 3 (2004), s. 293-304 ISSN 0023-5954 R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : text classification * text categorization * multinomial mixture model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.224, year: 2004

  6. A turbulence model in mixtures. First part: Statistical description of mixture

    International Nuclear Information System (INIS)

    Besnard, D.

    1987-03-01

    Classical theory of mixtures gives a model for molecular mixtures. This kind of model is based on a small-gradient approximation for concentration, temperature, and pressure. We present here a mixture model allowing for large gradients in the flow. We also show that, with a local balance assumption between material diffusion and flow-gradient evolution, we obtain a model similar to those mentioned above.

  7. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
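
A linear combination of Gaussians of the kind described here is easy to sketch; the component parameters below are hypothetical stand-ins over RR intervals in seconds, not the values fitted in the paper.

```python
import math

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    # components: list of (weight, mean, std); the weights must sum to one
    return sum(w * gauss_pdf(x, mu, s) for w, mu, s in components)

# Three hypothetical components (weight, mean, std) over RR intervals in seconds
components = [(0.6, 0.9, 0.05), (0.3, 1.0, 0.10), (0.1, 1.2, 0.15)]
```

Since the weights sum to one, the mixture is itself a proper density; each component then contributes an interpretable share of the overall variability.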

  8. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
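
The component structure of a Poisson mixture can be made concrete with a small EM sketch. This is only the intercept-only core of the idea; the paper's models additionally include regression covariates and concomitant variables, and the counts below are hypothetical.

```python
import math

def poisson_pmf(k, lam):
    # exp(-lam) * lam**k / k!, computed in log space for stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def em_poisson_mixture(data, lam=(1.0, 5.0), pi=0.5, iters=200):
    # Two-component Poisson mixture fitted by EM; hypothetical starting values
    lam1, lam2 = lam
    n = len(data)
    for _ in range(iters):
        # E-step: responsibility of component 1 for each count
        r = []
        for k in data:
            p1 = pi * poisson_pmf(k, lam1)
            p2 = (1.0 - pi) * poisson_pmf(k, lam2)
            r.append(p1 / (p1 + p2))
        # M-step: weighted means and mixing proportion
        s1 = sum(r)
        lam1 = sum(ri * k for ri, k in zip(r, data)) / s1
        lam2 = sum((1 - ri) * k for ri, k in zip(r, data)) / (n - s1)
        pi = s1 / n
    return pi, lam1, lam2

# Hypothetical counts: a low-rate group (around 1) and a high-rate group (around 10)
data = [0, 1, 2] * 10 + [8, 10, 12] * 10
pi, lam1, lam2 = em_poisson_mixture(data)
```

The two recovered rates play the role of the "componentwise" risks in the abstract: each observation is assigned softly to a high- or low-rate component.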

  9. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  11. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with a single index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the newly proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  12. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has greatly drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.

  13. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  14. mixtools: An R Package for Analyzing Mixture Models

    Directory of Open Access Journals (Sweden)

    Tatiana Benaglia

    2009-10-01

    Full Text Available The mixtools package for R provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.

  15. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…

  16. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and multi-response models. Robust mixture designs and mixture designs in blocks have also been reviewed. Moreover, some applications of mixture desig...

  17. Modeling the effects of binary mixtures on survival in time.

    NARCIS (Netherlands)

    Baas, J.; van Houte, B.P.P.; van Gestel, C.A.M.; Kooijman, S.A.L.M.

    2007-01-01

    In general, effects of mixtures are difficult to describe, and most of the models in use are descriptive in nature and lack a strong mechanistic basis. The aim of this experiment was to develop a process-based model for the interpretation of mixture toxicity measurements, with effects of binary

  18. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
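The model comparison described above can be sketched in miniature. The following plain-Python example (illustrative only, not the authors' code) fits a two-component Poisson mixture by EM and compares its BIC against a single Poisson model on bimodal count data:

```python
import math
import random

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pois_draw(lam, rng):
    # Knuth's product-of-uniforms Poisson sampler.
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def em_poisson_mixture(counts, n_iter=300):
    """Two-component Poisson mixture fitted by EM."""
    lam = [min(counts) + 0.5, max(counts) + 0.5]  # crude initialization
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: component responsibilities for each count.
        resp = []
        for k in counts:
            p = [w[j] * pois_pmf(k, lam[j]) for j in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted mean updates.
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(counts)
            lam[j] = sum(r[j] * k for r, k in zip(resp, counts)) / nj
    loglik = sum(math.log(w[0] * pois_pmf(k, lam[0])
                          + w[1] * pois_pmf(k, lam[1])) for k in counts)
    return w, lam, loglik

rng = random.Random(1)
counts = ([pois_draw(2.0, rng) for _ in range(200)]
          + [pois_draw(10.0, rng) for _ in range(200)])
n = len(counts)

w, lam, ll_mix = em_poisson_mixture(counts)
bic_mix = -2.0 * ll_mix + 3 * math.log(n)   # 3 free parameters: w, lam1, lam2

mean = sum(counts) / n                      # single-Poisson MLE
ll_one = sum(math.log(pois_pmf(k, mean)) for k in counts)
bic_one = -2.0 * ll_one + 1 * math.log(n)
```

On this two-regime sample the mixture's BIC is far lower than the single Poisson's, mirroring the BIC-based selection used in the paper.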

  19. Modeling of Multicomponent Mixture Separation Processes Using Hollow fiber Membrane

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sin-Ah; Kim, Jin-Kuk; Lee, Young Moo; Yeo, Yeong-Koo [Hanyang University, Seoul (Korea, Republic of)

    2015-02-15

    So far, most research on the modeling of membrane separation processes has focused on binary feed mixtures. In actual separation operations, however, binary feeds are rare and most separation processes involve multicomponent feed mixtures. In this work, models for membrane separation processes treating multicomponent feed mixtures are developed. Various model types are investigated and the validity of the proposed models is analysed based on experimental data obtained using hollow fiber membranes. The proposed separation models show quick convergence and exhibit good tracking performance.

  20. Modelling interactions in grass-clover mixtures

    NARCIS (Netherlands)

    Nassiri Mahallati, M.

    1998-01-01

    The study described in this thesis focuses on a quantitative understanding of the complex interactions in binary mixtures of perennial ryegrass (Lolium perenne L.) and white clover (Trifolium repens L.) under cutting. The first part of the study describes the dynamics of growth, production

  1. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    Knowledge of the thermodynamic properties and phase equilibria of mixtures containing carbon dioxide (CO2) is important in several industrial processes such as enhanced oil recovery, carbon capture and storage, and supercritical extractions, where CO2 is used as a solvent. Despite this importance...

  2. Modeling of columnar and equiaxed solidification of binary mixtures

    International Nuclear Information System (INIS)

    Roux, P.

    2005-12-01

    This work deals with the modelling of dendritic solidification in binary mixtures. Large-scale phenomena are represented by volume averaging of the local conservation equations. This method allows the partial differential equations of the averaged fields, and the closure problems associated with the deviations, to be derived rigorously. Such problems can be solved numerically on periodic cells, representative of dendritic structures, in order to give a precise evaluation of macroscopic transfer coefficients (drag coefficients, exchange coefficients, diffusion-dispersion tensors...). The method had already been applied to a model of a columnar dendritic mushy zone, and it is extended here to the case of equiaxed dendritic solidification, where solid grains can move. The two-phase flow is modelled with an Eulerian-Eulerian approach and the novelty is to account for the dispersion of solid velocity through the kinetic agitation of the particles. A coupling of the two models is proposed thanks to an original adaptation of the columnar model, allowing for undercooling calculation: a solid-liquid interfacial area density is introduced and calculated. Finally, direct numerical simulations of crystal growth are proposed with a diffuse interface method for a representation of local phenomena. (author)

  3. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    Science.gov (United States)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  4. Introducing Students to Gas Chromatography-Mass Spectrometry Analysis and Determination of Kerosene Components in a Complex Mixture

    Science.gov (United States)

    Pacot, Giselle Mae M.; Lee, Lyn May; Chin, Sung-Tong; Marriott, Philip J.

    2016-01-01

    Gas chromatography-mass spectrometry (GC-MS) and GC-tandem MS (GC-MS/MS) are useful in many separation and characterization procedures. GC-MS is now a common tool in industry and research, and increasingly, GC-MS/MS is applied to the measurement of trace components in complex mixtures. This report describes an upper-level undergraduate experiment…

  5. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
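The Dirichlet process prior underlying such models is often explained through its sequential "Chinese restaurant process" view, under which the number of mixture components is not fixed in advance but grows with the data. A minimal sampling sketch of that prior (illustrative only, not the paper's implementation):

```python
import random

def crp_partition(n, alpha, rng):
    """Table sizes from a Chinese restaurant process with concentration
    alpha -- the sequential view of a Dirichlet process partition prior."""
    tables = []  # tables[k] = number of items seated at table k
    for i in range(n):
        # New table with probability alpha / (i + alpha),
        # existing table k with probability tables[k] / (i + alpha).
        r = rng.random() * (i + alpha)
        if r < alpha:
            tables.append(1)
        else:
            r -= alpha
            for k, size in enumerate(tables):
                if r < size:
                    tables[k] += 1
                    break
                r -= size
    return tables

rng = random.Random(42)
tables = crp_partition(1000, 2.0, rng)
```

The expected number of occupied tables grows only logarithmically with n (roughly alpha * ln(n)), which is why the model can adapt its effective number of components to the data.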

  6. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  7. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alchohols, but is unable to predict intensity for

  8. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  9. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  10. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

    Potential applications of ILs require the knowledge of the physicochemical properties of ionic liquid (IL) mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model only contains one new characteristic parameter, which can be determined using one experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified by a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied in a wider range of applications.

  11. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liqu...

  12. Modelling of spark to ignition transition in gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Akram, M.

    1996-10-01

    This thesis pertains to the models for studying sparking in chemically inert gases. The processes taking place in a spark to flame transition can be segregated into physical and chemical processes, and this study is focused on physical processes. The plasma is regarded as a single-substance material. One- and two-dimensional models are developed. The transfer of electrical energy into thermal energy of the gas and its redistribution in space and time along with the evolution of a plasma kernel is studied in the time domain ranging from 10 ns to 40 μs. In the case of ultra-fast sparks, the propagation of the shock and its reflection from a rigid wall is presented. The influence of electrode shape and the gap size on the flow structure development is found to be a dominating factor. It is observed that the flow structure that has developed in the early stage more or less prevails at later stages and strongly influences the shape and evolution of the hot kernel. The electrode geometry and configuration are responsible for the development of the flow structure. The strength of the vortices generated in the flow field is influenced by the power input to the gap and their location of emergence is dictated by the electrode shape and configuration. The heat transfer after 2 μs in the case of ultra-fast sparks is dominated by convection and diffusion. The strong mixing produced by hydrodynamic effects and the electrode geometry give the indication that the magnetic pinch effect might be negligible. Finally, a model for a multicomponent gas mixture is presented. The chemical kinetics mechanism for dissociation and ionization is introduced. 56 refs

  13. Introducing MERGANSER: A Flexible Framework for Ecological Niche Modeling

    Science.gov (United States)

    Klawonn, M.; Dow, E. M.

    2015-12-01

    Ecological Niche Modeling (ENM) is a collection of techniques to find a "fundamental niche", the range of environmental conditions suitable for a species' survival in the absence of inter-species interactions, given a set of environmental parameters. Traditional approaches to ENM face a number of obstacles including limited data accessibility, data management problems, computational costs, interface usability, and model validation. The MERGANSER system, which stands for Modeling Ecological Residency Given A Normalized Set of Environmental Records, addresses these issues through powerful data persistence and flexible data access, coupled with a clear presentation of results and fine-tuned control over model parameters. MERGANSER leverages data measuring 72 weather related phenomena, land cover, soil type, population, species occurrence, general species information, and elevation, totaling over 1.5 TB of data. To the best of the authors' knowledge, MERGANSER uses higher-resolution spatial data sets than previously published models. Since MERGANSER stores data in an instance of Apache SOLR, layers generated in support of niche models are accessible to users via simplified Apache Lucene queries. This is made even simpler via an HTTP front end that generates Lucene queries automatically. Specifically, a user need only enter the name of a place and a species to run a model. Using this approach to synthesizing model layers, the MERGANSER system has successfully reproduced previously published niche model results with a simplified user experience. Input layers for the model are generated dynamically using OpenStreetMap and SOLR's spatial search functionality. Models are then run using either user-specified or automatically determined parameters after normalizing them into a common grid. Finally, results are visualized in the web interface, which allows for quick validation. Model results and all surrounding metadata are also accessible to the user for further study.

  14. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part V: Multicomponent mixtures containing CO2 and alcohols

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2015-01-01

    of CPA for ternary and multicomponent CO2 mixtures containing alcohols (methanol, ethanol or propanol), water and hydrocarbons. This work belongs to a series of studies aiming to arrive at a single "engineering approach" for applying CPA to acid gas mixtures, without introducing significant changes to the model. In this direction, CPA results were obtained using various approaches, i.e. different association schemes for pure CO2 (assuming that it is a non-associating compound, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling mixtures of CO2 with water and alcohols (only use of one interaction parameter kij, or assuming cross-association interactions and obtaining the relevant parameters either via a combining rule or using an experimental value for the cross-association energy). It is concluded that CPA is a powerful model...

  15. An Active Learning Exercise for Introducing Agent-Based Modeling

    Science.gov (United States)

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  16. How to Introduce Mathematic Modeling in Industrial Design Education

    NARCIS (Netherlands)

    Langereis, G.R.; Hu, J.; Feijs, L.M.G.; Stillmann, G.A.; Kaiser, G.; Blum, W.B.; Brown, J.P.

    2013-01-01

    With competency based learning in a project driven environment, we are facing a different perspective of how students perceive mathematical modelling. In this chapter, a model is proposed where conventional education is seen as a process from mathematics to design, while competency driven approaches

  17. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...
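A toy version of supervised Gaussian classification can be sketched with one feature per pixel and one Gaussian per class — a deliberate simplification of the full Gaussian mixture model, with all names and training values below made up for illustration:

```python
import math
import statistics

def fit_class_gaussians(samples):
    """samples: dict label -> list of 1-D feature values (e.g. one band's
    reflectance per labeled training pixel).
    Returns dict label -> (prior, mean, standard deviation)."""
    total = sum(len(vals) for vals in samples.values())
    return {label: (len(vals) / total,
                    statistics.fmean(vals),
                    statistics.stdev(vals))
            for label, vals in samples.items()}

def classify(x, params):
    """Maximum-posterior label under class-conditional normal densities."""
    def log_post(label):
        prior, mu, sd = params[label]
        z = (x - mu) / sd
        # log prior + log N(x; mu, sd), dropping the shared constant.
        return math.log(prior) - math.log(sd) - 0.5 * z * z
    return max(params, key=log_post)

train = {"water":      [0.10, 0.20, 0.15, 0.12, 0.18],
         "vegetation": [0.70, 0.80, 0.75, 0.72, 0.78]}
params = fit_class_gaussians(train)
```

A new pixel is then assigned to whichever class gives it the highest posterior score, which is the core of the supervised step described above.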

  18. Investigating Individual Differences in Toddler Search with Mixture Models

    Science.gov (United States)

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…

  19. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
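The standard bivariate Frank copula on which the paper's mixture family builds has a simple closed form. A sketch of the plain Frank copula only (not the Frank mixture construction itself):

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta), theta != 0.
    Positive theta induces positive dependence between the margins."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta
```

Two defining copula properties are easy to check numerically: the margins are uniform, C(u, 1) = u and C(u, 0) = 0, and for theta > 0 the copula exceeds the independence copula u*v, reflecting positive correlation.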

  20. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    Science.gov (United States)

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
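The Box-Cox "transform both sides" ingredient can be illustrated separately from the isobologram itself. A minimal sketch with hypothetical helper names (not the authors' code):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of y > 0; lam -> 0 recovers log(y)."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def tbs_residuals(observed, predicted, lam):
    """'Transform both sides': apply the same power transform to the
    observed responses and to the model predictions before taking
    residuals, to improve homogeneity and normality of the errors."""
    return [box_cox(o, lam) - box_cox(p, lam)
            for o, p in zip(observed, predicted)]
```

Because both sides receive the same transform, a perfect fit still gives zero residuals for any lambda; what changes is the error structure around an imperfect fit.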

  1. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).

  2. Introducing positive discrimination in predictive models (Chapter 14)

    NARCIS (Netherlands)

    Calders, T.G.K.; Verwer, S.E.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter we give three solutions for the discrimination-aware classification problem that are based upon Bayesian classifiers. These classifiers model the complete probability distribution by making strong independence assumptions. First we discuss the necessity of having discrimination-free

  3. Introducing the Collaborative Learning Modeling Language (ColeML)

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe

    2014-01-01

    in this area, represented by, for example, the Workflow Management Coalition (Hollingsworth, 1995) and the very widespread standard Business Process Modeling and Notation (BPMN), has been criticized on the basis of research in knowledge work processes. Inspiration for ColeML is found in this research area...

  4. Models for the computation of opacity of mixtures

    International Nuclear Information System (INIS)

    Klapisch, Marcel; Busquet, Michel

    2013-01-01

    We compare four models for the partial densities of the components of mixtures. These models yield different opacities as shown on polystyrene, acrylic and polyimide in local thermodynamical equilibrium (LTE). Two of these models, the ‘whole volume partial pressure’ model (M1) and its modification (M2) are not thermodynamically consistent (TC). The other two models are TC and minimize free energy. M3, the ‘partial volume equal pressure’ model, uses equality of chemical potential. M4 uses commonality of free electron density. The latter two give essentially identical results in LTE, but M4’s convergence is slower. M4 is easily generalized to non-LTE conditions. Non-LTE effects are shown by the variation of the Planck mean opacity of the mixtures with temperature and density. (paper)

  5. Introducing serendipity in a social network model of knowledge diffusion

    International Nuclear Information System (INIS)

    Cremonini, Marco

    2016-01-01

    Highlights:
    • Serendipity as a control mechanism for knowledge diffusion in social networks.
    • Local communication enhanced in the periphery of a network.
    • Prevalence of hub nodes in the network core mitigated.
    • Potential disruptive effect on network formation of uncontrolled serendipity.

    Abstract: In this paper, we study serendipity as a possible strategy to control the behavior of an agent-based network model of knowledge diffusion. The idea of considering serendipity in a strategic way has first been explored in Network Learning and Information Seeking studies. After presenting the major contributions of serendipity studies to digital environments, we discuss the extension to our model: agents are enriched with random topics for establishing new communication according to different strategies. The results show how important network properties could be influenced, like reducing the prevalence of hubs in the network’s core and increasing local communication in the periphery, similar to the effects of more traditional self-organization methods. Therefore, from this initial study, when serendipity is opportunistically directed, it appears to be an effective and applicable approach to social network control.

  6. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  7. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek

    2012-04-01

    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we have also equipped the package with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model-fitting in all modeling variants. The package can also be applied to select the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  8. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  9. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

    Two association models, the CPA and sPC-SAFT equations of state, are applied to binary mixtures containing alkanolamines and hydrocarbons or water. CPA is applied to mixtures of MEA and DEA, while sPC-SAFT is applied to MEA–n-heptane liquid–liquid equilibria and MEA–water vapor–liquid equilibria. T...

  10. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  11. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although...
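
    Since the Kullback-Leibler distance between two Gaussian mixture models has no closed form, one standard option is a Monte Carlo estimate obtained by sampling from one mixture. A minimal 1-D sketch (not the paper's implementation; the example mixtures are made up):

    ```python
    import math, random

    def gmm_pdf(x, comps):
        """comps: list of (weight, mean, sd)."""
        return sum(w * math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
                   for w, mu, sd in comps)

    def gmm_sample(comps, rng):
        """Draw one sample: pick a component by weight, then draw from it."""
        r, acc = rng.random(), 0.0
        for w, mu, sd in comps:
            acc += w
            if r <= acc:
                return rng.gauss(mu, sd)
        return rng.gauss(comps[-1][1], comps[-1][2])

    def kl_mc(p, q, n=20000, seed=0):
        """Monte Carlo estimate of KL(p || q), sampling from p."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            x = gmm_sample(p, rng)
            total += math.log(gmm_pdf(x, p) / gmm_pdf(x, q))
        return total / n

    p = [(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)]   # bimodal mixture
    q = [(1.0, 0.0, 2.0)]                      # single broad Gaussian
    ```

    The estimate of KL(p || p) is exactly zero by construction, which makes a useful sanity check.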

  12. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  13. Detecting Math Anxiety with a Mixture Partial Credit Model

    Science.gov (United States)

    Ölmez, Ibrahim Burak; Cohen, Allan S.

    2017-01-01

    The purpose of this study was to investigate a new methodology for detection of differences in middle grades students' math anxiety. A mixture partial credit model analysis revealed two distinct latent classes based on homogeneities in response patterns within each latent class. Students in Class 1 had less anxiety about apprehension of math…

  14. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  15. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.

  16. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  17. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
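
    The heavier tail of a Poisson mixture relative to a single Poisson of the same mean shows up already in the moments: a mixture of Poissons is overdispersed, with variance strictly above the mean. A small deterministic check (the component rates and weights are illustrative, not taken from the paper):

    ```python
    def poisson_mixture_moments(comps):
        """Mean and variance of a Poisson mixture.
        comps: list of (weight, lam); for Poisson(lam), E[X] = lam and E[X^2] = lam + lam^2."""
        mean = sum(w * lam for w, lam in comps)
        second = sum(w * (lam + lam * lam) for w, lam in comps)
        return mean, second - mean * mean

    # illustrative two-component mixture (rates and weights are made up)
    mean, var = poisson_mixture_moments([(0.9, 4.0), (0.1, 40.0)])
    ```

    A single Poisson has variance equal to its mean, so the excess `var - mean` here quantifies the extra tail mass the mixture can model.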

  18. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed: the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  19. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  20. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict multicomponent adsorption equilibria on the basis of single-component adsorption data. We compare the MPTA and IAST models to a large set of experimental data, obtaining reasonably good agreement with the experimental data and a high degree of predictability. Some limitations of both models are also discussed.

  1. Hydrogenic ionization model for mixtures in non-LTE plasmas

    International Nuclear Information System (INIS)

    Djaoui, A.

    1999-01-01

    The Hydrogenic Ionization Model for Mixtures (HIMM) is a non-Local Thermodynamic Equilibrium (non-LTE), time-dependent ionization model for laser-produced plasmas containing mixtures of elements (species). In this version, both collisional and radiative rates are taken into account. An ionization distribution for each species which is consistent with the ambient electron density is obtained by use of an iterative procedure in a single calculation for all species. Energy levels for each shell having a given principal quantum number and for each ion stage of each species in the mixture are calculated using screening constants. Steady-state non-LTE as well as LTE solutions are also provided. The non-LTE rate equations converge to the LTE solution at sufficiently high densities or as the radiation temperature approaches the electron temperature. The model is particularly useful at low temperatures, where convergence problems are usually encountered in our previous models. We apply our model to typical situations in x-ray laser research, laser-produced plasmas and inertial confinement fusion. Our results compare well with previously published results for a selenium plasma. (author)

  2. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  3. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
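
    For a single Gaussian component, the "conditioned model" idea reduces to the standard conditional-Gaussian formulas: conditioning on the second coordinate shifts the mean of the first by sigma12/sigma22 times the offset and shrinks its variance by sigma12^2/sigma22. A bivariate sketch (not the XDGMM API; names and numbers are assumptions):

    ```python
    def condition_gaussian_2d(mu, cov, x2):
        """Condition a bivariate Gaussian (mu, cov) on its second coordinate equaling x2.
        Returns the mean and variance of the first coordinate given x2."""
        (m1, m2), ((s11, s12), (_, s22)) = mu, cov
        mean = m1 + s12 / s22 * (x2 - m2)
        var = s11 - s12 * s12 / s22
        return mean, var

    # unit correlation block: observing x2 = 2 pulls the conditional mean of x1 to 2
    mean, var = condition_gaussian_2d((0.0, 0.0), ((2.0, 1.0), (1.0, 1.0)), 2.0)
    ```

    For a full mixture, the same update is applied per component and the weights are rescaled by each component's marginal likelihood of the observed value.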

  4. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  5. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  6. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, the mixture model is developed for cure rate analysis based on missing data. Several methods are available for the analysis of missing data; one of them is the EM algorithm. This method is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for a model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after each iteration, and hence the sequence of likelihoods converges whenever the likelihood is bounded from below.
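
    The monotonicity property stated above (the log-likelihood never decreases across EM iterations) can be checked numerically. A self-contained sketch for a two-component 1-D Gaussian mixture, with illustrative data and initial values not taken from the study:

    ```python
    import math, random

    def pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def loglik(data, params):
        (p0, mu0, sd0), (p1, mu1, sd1) = params
        return sum(math.log(p0 * pdf(x, mu0, sd0) + p1 * pdf(x, mu1, sd1)) for x in data)

    def em_step(data, params):
        (p0, mu0, sd0), (p1, mu1, sd1) = params
        # E-step: responsibility of component 0 for each point
        r = [p0 * pdf(x, mu0, sd0) / (p0 * pdf(x, mu0, sd0) + p1 * pdf(x, mu1, sd1))
             for x in data]
        # M-step: closed-form weighted updates
        n0 = sum(r)
        n1 = len(data) - n0
        mu0 = sum(ri * x for ri, x in zip(r, data)) / n0
        mu1 = sum((1 - ri) * x for ri, x in zip(r, data)) / n1
        sd0 = math.sqrt(sum(ri * (x - mu0) ** 2 for ri, x in zip(r, data)) / n0)
        sd1 = math.sqrt(sum((1 - ri) * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        return [(n0 / len(data), mu0, sd0), (n1 / len(data), mu1, sd1)]

    rng = random.Random(1)
    data = [rng.gauss(-2, 1) for _ in range(100)] + [rng.gauss(3, 1) for _ in range(100)]
    params = [(0.5, -1.0, 2.0), (0.5, 1.0, 2.0)]
    lls = []
    for _ in range(20):
        lls.append(loglik(data, params))
        params = em_step(data, params)
    ```

    Each pass through `em_step` performs one E-step and one M-step, and the recorded log-likelihoods form a non-decreasing sequence, exactly the ascent property the abstract relies on for convergence.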

  7. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  8. Determining of migraine prognosis using latent growth mixture models.

    Science.gov (United States)

    Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan

    2011-04-01

    This paper presents a retrospective study to classify patients into subtypes of the treatment according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, since it accounts for population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study data have been based on 10-year computer-based follow-up data of the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described for the severity, frequency, and duration of headache separately, and the probabilities of each subgroup were estimated by using latent growth mixture models. The SAS PROC TRAJ procedure, a semiparametric and group-based mixture modeling approach, was applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in the patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related with increasing age and unilateral pain. Nausea and photophobia were also related with headache duration. Nausea, vomiting and photophobia were the most significant factors to identify developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  9. Spatially adaptive mixture modeling for analysis of FMRI time series.

    Science.gov (United States)

    Vincent, Thomas; Risser, Laurent; Ciuciu, Philippe

    2010-04-01

    Within-subject analysis in fMRI essentially addresses two problems, the detection of brain regions eliciting evoked activity and the estimation of the underlying dynamics. In Makni et al. (2005) and Makni et al. (2008), a detection-estimation framework has been proposed to tackle these problems jointly, since they are connected to one another. In the Bayesian formalism, detection is achieved by modeling activating and nonactivating voxels through independent mixture models (IMM) within each region, while hemodynamic response estimation is performed at a regional scale in a nonparametric way. Instead of IMMs, in this paper we take advantage of spatial mixture models (SMM) for their nonlinear spatial regularizing properties. The proposed method is unsupervised and spatially adaptive in the sense that the amount of spatial correlation is automatically tuned from the data, and this setting automatically varies across brain regions. In addition, the level of regularization is specific to each experimental condition since both the signal-to-noise ratio and the activation pattern may vary across stimulus types in a given brain region. These aspects require the precise estimation of multiple partition functions of underlying Ising fields. This is addressed efficiently by first using path sampling for a small subset of fields and then a recently developed fast extrapolation technique for the large remaining set. Simulation results emphasize that detection relying on supervised SMM outperforms its IMM counterpart and that unsupervised spatial mixture models achieve similar results without any hand-tuning of the correlation parameter. On real datasets, the gain is illustrated in a localizer fMRI experiment: brain activations appear more spatially resolved using SMM in comparison with classical general linear model (GLM)-based approaches, while estimating a specific parcel-based HRF shape. Our approach therefore validates the treatment of unsmoothed fMRI data without fixed GLM

  10. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary and three quaternary mixtures are considered. It is shown that overall excellent correlation for binary mixtures and satisfactory prediction results for multicomponent systems are obtained. There are significant differences between the various modeling approaches and the best results are obtained when...

  11. Effective dielectric mixture model for characterization of diesel contaminated soil

    International Nuclear Information System (INIS)

    Al-Mattarneh, H.M.A.

    2007-01-01

    Human exposure to soil contaminated by diesel isomers can have serious health consequences, such as neurological diseases or cancer. The potential of dielectric measurement techniques for electromagnetic characterization of contaminated soils was investigated in this paper. The purpose of the research was to develop an empirical dielectric mixture model for soil hydrocarbon contamination applications. The paper described the basic theory and elaborated on dielectric mixture theory. The analytical and empirical models were explained with simple algebraic formulas. The experimental study was then described with reference to materials, properties and experimental results. The results of the analytical models were also mathematically explained. The proposed semi-empirical model was also presented. According to the results for the electromagnetic properties of dry soil contaminated with diesel, the presence of diesel had no significant effect on the electromagnetic properties of dry soil. It was concluded that diesel made no contribution to the soil electrical conductivity, which confirmed the nonconductive character of diesel. The results for diesel-contaminated soil at saturation indicated that both the dielectric constant and loss factor of the soil decreased with increasing diesel content. 15 refs., 2 tabs., 9 figs
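
    A commonly used empirical dielectric mixing rule, given here only as a generic illustration and not necessarily the semi-empirical model developed in the paper, is the complex refractive index model (CRIM): the square root of the mixture permittivity is the volume-weighted sum of the square roots of the constituent permittivities. All permittivity values below are assumptions:

    ```python
    import math

    def crim_permittivity(fractions_eps):
        """CRIM mixing rule: sqrt(eps_mix) = sum_i v_i * sqrt(eps_i).
        fractions_eps: list of (volume_fraction, relative_permittivity)."""
        assert abs(sum(v for v, _ in fractions_eps) - 1.0) < 1e-9
        return sum(v * math.sqrt(e) for v, e in fractions_eps) ** 2

    # illustrative three-phase mix of soil solids, air, and diesel (made-up values)
    eps_mix = crim_permittivity([(0.6, 4.7), (0.2, 1.0), (0.2, 2.1)])
    ```

    Because the rule is a weighted average in refractive-index space, the mixture permittivity always falls between the smallest and largest constituent permittivities.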

  12. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features, and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.

  13. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  14. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
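
    The Chao lower bound mentioned above depends only on the number of distinct individuals observed and the counts of those seen exactly once (f1) and exactly twice (f2), via N-hat = S_obs + f1^2 / (2 f2). A small sketch; the bias-corrected fallback for f2 = 0 is a standard variant, not taken from this paper:

    ```python
    from collections import Counter

    def chao_lower_bound(sightings):
        """Chao (1984) lower bound for population size from a single list.
        sightings: iterable of individual identifiers, repeats allowed."""
        counts = Counter(sightings)
        s_obs = len(counts)                      # distinct individuals observed
        freq = Counter(counts.values())
        f1, f2 = freq.get(1, 0), freq.get(2, 0)
        if f2 == 0:                              # bias-corrected variant avoids division by zero
            return s_obs + f1 * (f1 - 1) / 2.0
        return s_obs + f1 * f1 / (2.0 * f2)

    # 7 distinct individuals, 4 seen once and 3 seen twice
    est = chao_lower_bound(list("aabbcdefgg"))
    ```

    The estimator targets a lower bound, which is why, as the abstract notes, its coverage properties differ from estimators that target the population size itself.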

  15. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    NARCIS (Netherlands)

    Martens, M.A.W.; Janssen, M.J.; Ruijssenaars, A.J.J.M.; Riksen-Walraven, J.M.A.

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deafblindness.

  16. A mixture model for robust registration in Kinect sensor

    Science.gov (United States)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from the drawback of low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision with a mixture model that can handle multiple images within a nonparametric model. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). Estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  17. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.
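
    The DP prior underlying a DPM controls the number of clusters only implicitly: under the Chinese restaurant process representation, observation i starts a new cluster with probability alpha / (alpha + i - 1), so the expected number of clusters for n subjects is the sum of those probabilities, growing roughly like alpha * log(n). A quick check:

    ```python
    def expected_dp_clusters(n, alpha):
        """Expected number of clusters under a DP(alpha) prior for n observations
        (Chinese restaurant process: obs i starts a new cluster w.p. alpha/(alpha+i-1))."""
        return sum(alpha / (alpha + i) for i in range(n))

    e10 = expected_dp_clusters(10, 1.0)      # harmonic number H_10 when alpha = 1
    e1000 = expected_dp_clusters(1000, 1.0)  # roughly log(1000) + Euler's constant
    ```

    The slow logarithmic growth is what makes treating the partition as a model selection problem tractable: even for large samples, the plausible number of clusters stays small.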

  18. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  19. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is learned automatically from the data via a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Clustering disaggregated load profiles using a Dirichlet process mixture model

    International Nuclear Information System (INIS)

    Granell, Ramon; Axon, Colin J.; Wallom, David C.H.

    2015-01-01

    Highlights: • We show that the Dirichlet process mixture model is scalable. • Our model does not require the number of clusters as an input. • Our model creates clusters only from the features of the demand profiles. • We have used both residential and commercial data sets. - Abstract: The increasing availability of substantial quantities of power-use data in both the residential and commercial sectors raises the possibility of mining the data to the advantage of both consumers and network operators. We present a Bayesian non-parametric model to cluster load profiles from households and business premises. Evaluations show that our model performs as well as other popular clustering methods, but unlike most other methods it does not require the number of clusters to be predetermined by the user. We used the so-called ‘Chinese restaurant process’ method to solve the model, making use of the Dirichlet-multinomial distribution. The number of clusters grew logarithmically with the quantity of data, making the technique suitable for scaling to large data sets. We were able to show that the model could distinguish features such as nationality, household size, and type of dwelling between the cluster memberships.
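
    A minimal sketch of the ‘Chinese restaurant process’ construction used to solve such models (a generic illustration, not the authors' implementation): each new profile joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to a concentration parameter alpha. The alpha/(i + alpha) chance of opening a new cluster is what makes the number of clusters grow logarithmically with the quantity of data.

```python
import random

def crp_partition(n, alpha, rng):
    """Seat n customers at tables under a Chinese restaurant process
    with concentration parameter alpha; returns customers per table."""
    counts = []  # occupancy of each table (cluster)
    for i in range(n):
        # P(existing table k) ∝ counts[k]; P(new table) ∝ alpha
        weights = counts + [alpha]
        r = rng.random() * (i + alpha)  # total weight so far is i + alpha
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)   # open a new table
        else:
            counts[k] += 1
    return counts

rng = random.Random(0)
tables = crp_partition(1000, alpha=2.0, rng=rng)
# expected number of tables grows roughly like alpha * log(n)
```

    With alpha = 2 and 1000 customers, a draw typically produces on the order of a dozen clusters, never a number fixed in advance.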

  1. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  2. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  3. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, either consistent with the external classification of the tissues or with background biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/

  4. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which yielded similar CA and RA predictions, making it difficult to draw any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  5. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    International Nuclear Information System (INIS)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio

    2013-01-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which yielded similar CA and RA predictions, making it difficult to draw any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of
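
    The two competing prediction rules can be made concrete. In the sketch below each mixture component is assigned an illustrative Hill dose-response curve (an assumption for the example; the study works with measured IT4C responses): concentration addition (CA) solves sum_i c_i / EC_x,i = 1 for the mixture effect x, whereas response addition (RA) combines effects as 1 - prod_i (1 - E_i).

```python
import math

def hill_effect(c, ec50, h=1.0):
    """Fractional effect (0..1) of a single agent at concentration c."""
    return c ** h / (c ** h + ec50 ** h)

def inv_hill(effect, ec50, h=1.0):
    """Concentration of an agent alone that produces a given effect."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / h)

def response_addition(concs, ec50s):
    """RA (independent action): combine component effects multiplicatively."""
    unaffected = 1.0
    for c, e50 in zip(concs, ec50s):
        unaffected *= 1.0 - hill_effect(c, e50)
    return 1.0 - unaffected

def concentration_addition(concs, ec50s, tol=1e-9):
    """CA: solve sum_i c_i / EC_x,i = 1 for the mixture effect x by bisection."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        s = sum(c / inv_hill(mid, e50) for c, e50 in zip(concs, ec50s))
        if s > 1.0:   # toxic units exceed 1: the true mixture effect is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    A useful sanity check is the "sham mixture": an agent combined with itself must reproduce the single-agent effect under CA, e.g. concentration_addition([0.5, 0.5], [1.0, 1.0]) returns the effect of a unit dose of that agent alone.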

  6. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying models generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting for each window length the predictive...... together whereas short windows are more unstable and influenced by noise and we find that our heuristic correctly identifies an adequate level of complexity. On single subject resting state fMRI data we find that dynamic models generally outperform static models and using the proposed heuristic points...

  7. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  8. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distributions is performed using finite mixture models. • Two methods to reconstruct the output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of models), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values, such as finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
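
    As a minimal illustration of the reconstruction step (the Expectation Maximization part only; the k-means variant and the RELAP-5 data are not reproduced here), the sketch below fits a two-component one-dimensional Gaussian mixture by EM to a bimodal sample standing in for a multimodal code output. The extreme-value initialization is a cheap stand-in for a proper initialization.

```python
import math
import random

def em_gmm_1d(x, iters=100):
    """Fit a 2-component 1-D Gaussian mixture by Expectation Maximization."""
    mu = [min(x), max(x)]   # crude but well-separated initial means
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    n = len(x)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for xi in x:
            ps = [w[j] * math.exp(-(xi - mu[j]) ** 2 / (2 * var[j]))
                  / math.sqrt(2 * math.pi * var[j]) for j in range(2)]
            s = ps[0] + ps[1]
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means, and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            var[j] = sum(r[j] * (xi - mu[j]) ** 2
                         for r, xi in zip(resp, x)) / nj + 1e-6
    return w, mu, var

# bimodal sample standing in for a multimodal code output distribution
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(200)]
        + [rng.gauss(5.0, 0.5) for _ in range(200)])
w, mu, var = em_gmm_1d(data)
```

    The fitted means land near the two modes, so the mixture recovers the bimodal shape that a single tolerance interval would miss.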

  9. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  10. A Community of Practice Model for Introducing Mobile Tablets to University Faculty

    Science.gov (United States)

    Drouin, Michelle; Vartanian, Lesa Rae; Birk, Samantha

    2014-01-01

    We examined the effectiveness of a community of practice (CoP) model for introducing tablets to 139 faculty members at a higher education institution. Using a CoP within a systems model, we used large- and small-group mentorship to foster collaboration among faculty members. Most faculty members agreed that the project was well organized and…

  11. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Riksen-Walraven, J. Marianne

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deaf-blindness. The model is theoretically underpinned,…

  12. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Echerichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  13. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part VI. Multicomponent mixtures with glycols relevant to oil and gas and to liquid or supercritical CO2 transport applications

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2016-01-01

    to data on ternary and multicomponent mixtures) to model the phase behaviour of ternary and quaternary systems with CO2 and glycols. It is concluded that CPA performs satisfactorily for most multicomponent systems considered. Some differences between the various modelling approaches are observed....... This work is the last part of a series of studies, which aim to arrive in a single "engineering approach" for applying CPA to acid gas mixtures, without introducing significant changes to the model. An overall assessment, based also on the obtained results of this series (Tsivintzelis et al., 2010, 2011...

  14. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    Full Text Available Abstract The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative (density and bubble point) endpoints using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  15. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  16. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel-region level, and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  17. Induced polarization of clay-sand mixtures: experiments and modeling

    International Nuclear Information System (INIS)

    Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.

    2012-01-01

    were performed with a cylindrical four-electrode sample holder (a PVC cylinder 30 cm in length and 19 cm in diameter) associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. These electrodes were installed at 10 cm from the base of the sample holder and regularly spaced (every 90 degrees). The results illustrate the strong impact of the cation exchange capacity (CEC) of the clay minerals upon the complex conductivity. The amplitude of the in-phase conductivity of the kaolinite-clay samples is strongly dependent on the saturating-fluid salinity for all volumetric clay fractions, whereas the in-phase conductivity of the smectite-clay samples is quite independent of the salinity, except at low clay contents (5% and 1% of clay in volume). This is due to the strong and constant surface conductivity of smectite associated with its very high CEC. The quadrature conductivity increases steadily with the CEC and the clay content. We observe that the frequency dependence of the quadrature conductivity of sand-kaolinite mixtures is more important than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents (1% in volume of kaolinite clay). This is due to the constant surface site density of Na counter-ions in the Stern layer of clay materials. At the lowest clay content (1%), the magnitude of the quadrature conductivity increases with the salinity, as expected for silica sands. In this case, the surface site density of Na counter-ions in the Stern layer increases with salinity. The experimental data show good agreement with the values predicted by our spectral induced polarization (SIP) model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization.
We use the differential effective medium theory to calculate the complex

  18. Modelling the effect of mixture components on permeation through skin.

    Science.gov (United States)

    Ghafourian, T; Samaras, E G; Brooks, J D; Riviere, J E

    2010-10-15

    A vehicle influences the concentration of penetrant within the membrane, affecting its diffusivity in the skin and rate of transport. Despite the considerable effort devoted to understanding and modelling the skin absorption of chemicals, reliable estimation of the skin penetration potential from formulations remains a challenging objective. In this investigation, quantitative structure-activity relationship (QSAR) modelling was employed to relate the skin permeation of compounds to the chemical properties of the mixture ingredients and the molecular structures of the penetrants. The skin permeability dataset consisted of permeability coefficients of 12 different penetrants each blended in 24 different solvent mixtures measured from finite-dose diffusion cell studies using porcine skin. Stepwise regression analysis resulted in a QSAR employing two penetrant descriptors and one solvent property. The penetrant descriptors were the octanol/water partition coefficient, logP, and the ninth-order path molecular connectivity index, and the solvent property was the difference between boiling and melting points. The negative relationship between skin permeability coefficient and logP was attributed to the fact that most of the drugs in this particular dataset are extremely lipophilic in comparison with the compounds in the common skin permeability datasets used in QSAR. The findings show that compounds formulated in vehicles with a small gap between boiling and melting points are expected to show higher permeation through skin. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.396. The chemical space of the dataset was compared with that of the known skin permeability datasets and gaps were identified for future skin permeability measurements. Copyright 2010 Elsevier B.V. All rights reserved.

  19. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, the environmental risk was assessed, addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation of the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  20. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk, k = 1, …, K, and help identify which model is most supported by the observed data, Ỹ = (ỹ1, …, ỹn). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ|Mk), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ|Mk) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ|Mk) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
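
The evidence estimator can be sketched in miniature. Below, a Gaussian-mixture proposal is used for plain importance sampling (a simplification of the bridge sampling GAME actually employs) on a one-dimensional unnormalized Gaussian target whose true evidence is √(2π); the target, proposal parameters, and sample size are all illustrative assumptions.

```python
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def target_unnorm(theta):
    # Stand-in for the unnormalized posterior p(Y|theta) * p(theta);
    # a standard normal kernel, whose true normalizing constant is sqrt(2*pi).
    return math.exp(-0.5 * theta * theta)

# Two-component Gaussian mixture proposal; in GAME this mixture is fitted
# to posterior samples produced by MCMC.
weights, mus, sigmas = [0.5, 0.5], [0.0, 0.0], [1.2, 2.0]

def proposal_pdf(x):
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

def sample_proposal():
    k = 0 if random.random() < weights[0] else 1
    return random.gauss(mus[k], sigmas[k])

random.seed(0)
draws = [sample_proposal() for _ in range(100_000)]
evidence = sum(target_unnorm(t) / proposal_pdf(t) for t in draws) / len(draws)
print(evidence)  # close to sqrt(2*pi) ≈ 2.5066
```

Because the mixture proposal has heavier tails than the target, the importance weights stay bounded and the estimate is stable; that is the same design consideration that motivates fitting the proposal to the posterior samples.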

  1. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    to mean field behavior for very long chains, the critical behavior of mixtures confined into thin film geometry falls in the 2d Ising class irrespective of chain length. … AB interface does not approach the wall; (b) corresponds to a temperature … Very recently, these theoretical studies have been extended to polymer mixtures.

  2. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results. In addition, Bayesian methods also show consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion. Identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed that there is a negative effect between rubber prices and stock market prices for all selected countries.
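
A minimal sketch of the component-selection step, assuming a one-dimensional Gaussian mixture fitted by EM (the paper uses Bayesian fitting; EM is substituted here only to keep the example self-contained): BIC = -2 log L + p log n is computed for k = 1…3 and the smallest value wins.

```python
import math, random

def gmm_em_loglik(data, k, iters=100):
    """Fit a 1-D Gaussian mixture with k components by EM; return the log-likelihood."""
    data = sorted(data)
    n = len(data)
    mus = [data[(2 * j + 1) * n // (2 * k)] for j in range(k)]  # quantile init
    sigmas = [1.0] * k
    pis = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        resp, ll = [], 0.0
        for x in data:                      # E-step: responsibilities per point
            dens = [pis[j] * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2)
                    / (sigmas[j] * math.sqrt(2 * math.pi)) for j in range(k)]
            tot = sum(dens)
            ll += math.log(tot)
            resp.append([d / tot for d in dens])
        for j in range(k):                  # M-step: update weights, means, sds
            nj = max(sum(r[j] for r in resp), 1e-9)
            pis[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return ll

random.seed(42)
data = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(8, 1) for _ in range(100)]
n = len(data)
# BIC = -2 log L + p log n with p = 3k - 1 free parameters; smaller is better.
bic = {k: -2 * gmm_em_loglik(data, k) + (3 * k - 1) * math.log(n) for k in (1, 2, 3)}
best_k = min(bic, key=bic.get)
print("BIC selects k =", best_k)
```

With two well-separated clusters, k = 2 gains a large likelihood improvement over k = 1, while k = 3 gains too little to pay its extra penalty of 3 log n.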

  3. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  4. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  5. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
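
The binomial N-mixture likelihood underlying these models can be sketched directly: the latent abundance N is marginalized over a (truncated) Poisson prior at each site. The simulation below, with assumed values λ = 5 and p = 0.5 and a coarse grid search in place of a proper optimizer, is illustrative only.

```python
import math, random

def site_likelihood(counts, lam, p, n_max=40):
    """Marginal likelihood of repeated counts at one site under a binomial
    N-mixture model: N ~ Poisson(lam), y_j | N ~ Binomial(N, p)."""
    total = 0.0
    for N in range(max(counts), n_max + 1):
        pois = math.exp(-lam) * lam ** N / math.factorial(N)
        binom = 1.0
        for y in counts:
            binom *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += pois * binom
    return total

# Simulate 150 sites x 3 visits with assumed truth lam = 5, p = 0.5.
random.seed(7)
sites = []
for _ in range(150):
    N = sum(1 for _ in range(1000) if random.random() < 5 / 1000)  # approx Poisson(5)
    sites.append([sum(1 for _ in range(N) if random.random() < 0.5) for _ in range(3)])

def neg_loglik(lam, p):
    return -sum(math.log(site_likelihood(c, lam, p)) for c in sites)

# Coarse grid search stands in for a proper optimizer.
grid = [(lam, p) for lam in (3, 4, 5, 6, 7) for p in (0.3, 0.4, 0.5, 0.6, 0.7)]
lam_hat, p_hat = min(grid, key=lambda lp: neg_loglik(*lp))
print("grid MLE:", lam_hat, p_hat)
```

The product λp (the expected count) is always well identified; separating λ from p relies on the repeated visits, which is exactly where the identifiability concerns discussed above arise.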

  6. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  7. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of the human information model (HIM) as a cooperative system. Here we continue with the HIM design in detail. In our design, we first introduce the Content/Form computing system, a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) as its basic self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  8. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a

  9. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...

  10. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    Science.gov (United States)

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
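
The zero-truncation step, before any mixing distribution is added, amounts to renormalizing the Poisson pmf by P(Y ≥ 1) = 1 - e^(-λ). A minimal sketch (the inverse Gaussian heterogeneity proposed in the paper is omitted here):

```python
import math

def zt_poisson_pmf(y, lam):
    """P(Y = y | Y >= 1): the Poisson pmf renormalized by P(Y >= 1) = 1 - exp(-lam)."""
    if y < 1:
        return 0.0
    return math.exp(-lam) * lam ** y / (math.factorial(y) * (1.0 - math.exp(-lam)))

lam = 2.3  # assumed illustrative trip rate
mass = sum(zt_poisson_pmf(y, lam) for y in range(1, 60))
mean = sum(y * zt_poisson_pmf(y, lam) for y in range(1, 60))
print(mass)   # total probability over the positive counts, equals 1
print(mean, lam / (1.0 - math.exp(-lam)))  # ZTP mean has closed form lam / (1 - e^-lam)
```

Restricting support to y ≥ 1 reflects the sampling design described above: only respondents who actually visited a site contribute observations.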

  11. Real time tracking by LOPF algorithm with mixture model

    Science.gov (United States)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and avoid unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational cost. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. We use both the contour cue to select the particles and the color cue to describe the targets in the mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
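
The Bhattacharyya-based matching used to score target candidates can be sketched as follows; the histograms are invented stand-ins for the color/contour features described above.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def normalize(h):
    s = float(sum(h))
    return [v / s for v in h]

# Invented 5-bin histograms standing in for the target and one candidate.
target_model = normalize([4, 10, 30, 10, 4])
candidate = normalize([3, 12, 28, 11, 5])

bc = bhattacharyya(target_model, candidate)
distance = math.sqrt(1.0 - bc)  # the dissimilarity metric derived from BC
print(bc, distance)
```

A coefficient of 1 (distance 0) means identical histograms; a tracker weights each particle by how small this distance is at the particle's location.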

  12. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
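
The conditional-prediction idea can be sketched for a two-component univariate Gaussian mixture: an individual's predicted value is the responsibility-weighted combination of the class means. All parameter values here are assumed for illustration only.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical two-class mixture: weights, means, and sds are assumed values.
pis, mus, sigmas = [0.6, 0.4], [0.0, 5.0], [1.0, 1.0]

def responsibilities(y):
    """Posterior probability of each latent class given an observed score y."""
    num = [p * normal_pdf(y, m, s) for p, m, s in zip(pis, mus, sigmas)]
    tot = sum(num)
    return [v / tot for v in num]

def conditional_prediction(y):
    """Individual predicted value: responsibility-weighted mix of class means."""
    return sum(r * m for r, m in zip(responsibilities(y), mus))

print(responsibilities(0.5))        # dominated by the first class
print(conditional_prediction(4.8))  # close to the second class mean
```

The marginal predicted value, by contrast, would use the prior weights π rather than the responsibilities, ignoring the individual's own observation.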

  13. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  14. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain...... Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians......Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying...

  15. Introducing a model for competitiveness of suppliers in supply chain through game theory approach

    Directory of Open Access Journals (Sweden)

    Hengameh Cighary Deljavan

    2012-10-01

    Full Text Available The purpose of the present study is to introduce a model for the competitiveness of suppliers in a supply chain through a game theory approach in one of the automobile companies of Iran. In this study, the game is based on price and non-price factors, and the company is going to estimate the real profit obtained from collaboration with each of the supply chain members. This happens by considering the governing competitive conditions based on game theory before entering a bid for the purchase of the α piece as a spare part among 8 companies supplying this piece as the supply chain members. According to experts in this industry, quality is the main non-price competitiveness factor after price. Among current research models, the model introduced by Lu and Tsao (2011) [Lu, J.C., Tsao, Y.C., & Charoensiriwath, C. (2011). Competition under manufacturer service and retail price. Economic Modelling, 28, 1256-1264.], with two manufacturers and one distributor, being appropriate for the research data, has been considered as the basis, implemented for the case study, and then extended to n manufacturers and one common retailer. Following price elasticity of demand, the potential size of the market or maximum product demand, retailer price, production price, wholesale price, demand amount, and manufacturer and retailer profit are estimated under three scenarios: Manufacturer Stackelberg, Retailer Stackelberg and Vertical Nash. By comparing them, price balance points and optimum levels of service are specified and the better optimum scenario can be determined. Sensitivity analysis is performed for the new model, and manufacturers are ranked based on manufacturer profit, retailer profit and customer satisfaction. Finally, in addition to introducing the game model, this research analyzes customer satisfaction, which was a missing element in the previous models.

  16. Introducing Mudbox

    CERN Document Server

    Kermanikian, Ara

    2010-01-01

    One of the first books on Autodesk's new Mudbox 3D modeling and sculpting tool! Autodesk's Mudbox was used to create photorealistic creatures for The Dark Knight, The Mist, and other films. Now you can join the crowd interested in learning this exciting new digital modeling and sculpting tool with this complete guide. Get up to speed on all of Mudbox's features and functions, learn how to sculpt and paint, and master the art of using effective workflows to make it all go easier. Introduces Autodesk's Mudbox, an exciting 3D modeling and sculpting tool that enables you to create photorealistic

  17. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  18. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  19. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...

  20. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.

  1. Pattern-mixture models for analyzing normal outcome data with proxy respondents.

    Science.gov (United States)

    Shardell, Michelle; Hicks, Gregory E; Miller, Ram R; Langenberg, Patricia; Magaziner, Jay

    2010-06-30

    Studies of older adults often involve interview questions regarding subjective constructs such as perceived disability. In some studies, when subjects are unable (e.g. due to cognitive impairment) or unwilling to respond to these questions, proxies (e.g. relatives or other care givers) are recruited to provide responses in place of the subject. Proxies are usually not approached to respond on behalf of subjects who respond for themselves; thus, for each subject, data from only one of the subject or proxy are available. Typically, proxy responses are simply substituted for missing subject responses, and standard complete-data analyses are performed. However, this approach may introduce measurement error and produce biased parameter estimates. In this paper, we propose using pattern-mixture models that relate non-identifiable parameters to identifiable parameters to analyze data with proxy respondents. We posit three interpretable pattern-mixture restrictions to be used with proxy data, and we propose estimation procedures using maximum likelihood and multiple imputation. The methods are applied to a cohort of elderly hip-fracture patients. (c) 2010 John Wiley & Sons, Ltd.

  2. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Full Text Available Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  3. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  4. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  5. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into

  6. Gassmann Modeling of Acoustic Properties of Sand-clay Mixtures

    Science.gov (United States)

    Gurevich, B.; Carcione, J. M.

    The feasibility of modeling elastic properties of a fluid-saturated sand-clay mixture rock is analyzed by assuming that the rock is composed of macroscopic regions of sand and clay. The elastic properties of such a composite rock are computed using two alternative schemes. The first scheme, which we call the composite Gassmann (CG) scheme, uses Gassmann equations to compute elastic moduli of the saturated sand and clay from their respective dry moduli. The effective elastic moduli of the fluid-saturated composite rock are then computed by applying one of the mixing laws commonly used to estimate elastic properties of composite materials. In the second scheme, which we call the Berryman-Milton (BM) scheme, the elastic moduli of the dry composite rock matrix are computed from the moduli of dry sand and clay matrices using the same composite mixing law used in the first scheme. Next, the saturated composite rock moduli are computed using the equations of Brown and Korringa, which, together with the expressions for the coefficients derived by Berryman and Milton, provide an extension of Gassmann equations to rocks with a heterogeneous solid matrix. For both schemes, the moduli of the dry homogeneous sand and clay matrices are assumed to obey Krief's velocity-porosity relationship. As a mixing law we use the self-consistent coherent potential approximation proposed by Berryman. The calculated dependence of compressional and shear velocities on porosity and clay content for a given set of parameters using the two schemes depends on the distribution of total porosity between the sand and clay regions. If the distribution of total porosity between sand and clay is relatively uniform, the predictions of the two schemes in the porosity range up to 0.3 are very similar to each other. For higher porosities and medium-to-large clay content the elastic moduli predicted by the CG scheme are significantly higher than those predicted by the BM scheme. This difference is explained by the fact
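
The Gassmann fluid-substitution step shared by both schemes can be sketched as a single formula; the moduli and porosity below are assumed illustrative values (quartz grains, water, a moderately stiff dry frame), not parameters from the study.

```python
def gassmann_saturated_modulus(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the dry-rock modulus via Gassmann's equation.
    k_dry: dry-frame modulus, k_min: mineral (grain) modulus,
    k_fl: pore-fluid modulus, phi: porosity. Shear modulus is unchanged."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Assumed illustrative values in GPa: quartz, water, 25% porosity sand.
k_min, k_fl, k_dry, phi = 36.0, 2.25, 8.0, 0.25
k_sat = gassmann_saturated_modulus(k_dry, k_min, k_fl, phi)
print(round(k_sat, 2))  # stiffer than dry, softer than solid quartz
```

Saturation stiffens the rock (k_sat > k_dry) but can never exceed the mineral modulus, which is the sanity check applied below.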

  7. Diffusion models for mixtures using a stiff dissipative hyperbolic formalism

    OpenAIRE

    Boudin , Laurent; Grec , Bérénice; Pavan , Vincent

    2018-01-01

    In this article, we are interested in a system of fluid equations for mixtures with a stiff relaxation term of Maxwell-Stefan diffusion type. We use the formalism developed by Chen, Levermore, and Liu in [4] to obtain a limit system of Fick type where the species velocities tend to align to a bulk velocity when the relaxation parameter remains small.

  8. Analysis of errors introduced by geographic coordinate systems on weather numeric prediction modeling

    Directory of Open Access Journals (Sweden)

    Y. Cao

    2017-09-01

    Full Text Available Most atmospheric models, including the Weather Research and Forecasting (WRF) model, use a spherical geographic coordinate system to internally represent input data and perform computations. However, most geographic information system (GIS) input data used by the models are based on a spheroid datum because it better represents the actual geometry of the earth. WRF and other atmospheric models use these GIS input layers as if they were in a spherical coordinate system without accounting for the difference in datum. When GIS layers are not properly reprojected, latitudinal errors of up to 21 km in the midlatitudes are introduced. Recent studies have suggested that for very high-resolution applications, the difference in datum in the GIS input data (e.g., terrain, land use, orography) should be taken into account. However, the magnitude of errors introduced by the difference in coordinate systems remains unclear. This research quantifies the effect of using a spherical vs. a spheroid datum for the input GIS layers used by WRF to study greenhouse gas transport and dispersion in northeast Pennsylvania.
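
    The 21 km figure can be checked directly: it is the maximum offset between geodetic latitude on the WGS84 spheroid and the geocentric latitude a spherical model would assign, which peaks in the midlatitudes. A minimal sketch (standard WGS84 eccentricity, nominal 111.2 km per degree of latitude):

```python
import math

# WGS84 first eccentricity squared
E2 = 0.00669437999014

def datum_shift_km(lat_deg):
    """Offset (km) between geodetic latitude on the WGS84 spheroid and the
    geocentric latitude of the same point on a spherical earth model."""
    phi = math.radians(lat_deg)
    geocentric = math.atan((1.0 - E2) * math.tan(phi))
    return abs(math.degrees(phi - geocentric)) * 111.2

# The offset vanishes at the equator and pole and peaks near 45 degrees
peak = max(datum_shift_km(lat) for lat in range(0, 91))
```

    The maximum comes out at roughly 21 km, consistent with the error magnitude quoted above.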

  9. Sound speed models for a noncondensible gas-steam-water mixture

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1984-01-01

    An analytical expression is derived for the homogeneous equilibrium speed of sound in a mixture of noncondensible gas, steam, and water. The expression is based on the Gibbs free energy interphase equilibrium condition for a Gibbs-Dalton mixture in contact with a pure liquid phase. Several simplified models are discussed including the homogeneous frozen model. These idealized models can be used as a reference for data comparison and also serve as a basis for empirically corrected nonhomogeneous and nonequilibrium models
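
    The paper's closed-form expression is not reproduced in the abstract; the qualitative behavior of a homogeneous frozen model can, however, be illustrated with the classical Wood's equation for a two-phase mixture (illustrative air/water properties, not the authors' derivation):

```python
def wood_sound_speed(alpha_g, rho_g, c_g, rho_l, c_l):
    """Homogeneous frozen mixture sound speed via Wood's equation:
    1/(rho_m * a**2) = alpha_g/(rho_g*c_g**2) + (1-alpha_g)/(rho_l*c_l**2),
    where alpha_g is the gas volume fraction and rho_m the mixture density."""
    rho_m = alpha_g * rho_g + (1.0 - alpha_g) * rho_l
    inv = alpha_g / (rho_g * c_g ** 2) + (1.0 - alpha_g) / (rho_l * c_l ** 2)
    return (1.0 / (rho_m * inv)) ** 0.5

# Air-water mixture at 50% void fraction: far slower than either pure phase
a_mix = wood_sound_speed(0.5, rho_g=1.2, c_g=340.0, rho_l=1000.0, c_l=1500.0)
```

    The striking feature, useful as a reference for data comparison, is that the mixture sound speed (here a few tens of m/s) lies far below that of either pure phase.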

  10. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.
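
    The shared-threshold mechanism can be sketched with a minimal hazard-based survival model under concentration addition; all parameter values here are hypothetical, not those fitted by the authors:

```python
import math

def survival(concs, potencies, threshold, kill_rate, t):
    """Survival probability under a shared internal threshold: the summed
    toxic-unit concentration causes mortality only once it exceeds the
    common threshold (same mode of action)."""
    total = sum(c * p for c, p in zip(concs, potencies))
    hazard = kill_rate * max(0.0, total - threshold)
    return math.exp(-hazard * t)

potencies = [1.0, 1.0, 1.0, 1.0]   # four PAHs, same mode of action
threshold = 1.0                    # shared no-effect level (toxic units)
# Each component alone (0.4 TU) stays below the threshold, so no mortality,
# but the four together (1.6 TU) exceed it.
alone = survival([0.4, 0.0, 0.0, 0.0], potencies, threshold, 0.5, 10.0)
mixture = survival([0.4] * 4, potencies, threshold, 0.5, 10.0)
```

    This reproduces the qualitative observation in the abstract: a mixture can be lethal in a concentration range where no individual component shows any effect.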

  11. Model-based experimental design for assessing effects of mixtures of chemicals

    International Nuclear Information System (INIS)

    Baas, Jan; Stefanowicz, Anna M.; Klimek, Beata; Laskowski, Ryszard; Kooijman, Sebastiaan A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  12. Introducing renewable energy and industrial restructuring to reduce GHG emission: Application of a dynamic simulation model

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

    Highlights: • Renewable energy development is expanded and introduced into socioeconomic activities. • A dynamic optimization simulation model is developed based on an input–output approach. • Regional economic, energy and environmental impacts are assessed dynamically. • Industrial and energy structures are adjusted optimally for GHG emission reduction. - Abstract: Specifying renewable energy development as new energy industries to be introduced into current socioeconomic activities, this study develops a dynamic simulation model with an input–output approach to make a comprehensive assessment of the impacts on economic development, energy consumption and GHG emission under distinct levels of GHG emission constraints, involving targeted GHG emission reduction policies (ERPs) and industrial restructuring. The model is applied to Jilin City to conduct 16 terms of dynamic simulation with GRP as the objective function, subject to mass, value and energy balances, aided by an extended input–output table with renewable energy industries introduced. Simulation results indicate that achievement of the GHG emission reduction target is contributed to collectively by renewable energy industries, ERPs and industrial restructuring, which reshape the terminal energy consumption structure with a larger proportion of renewable energy. Wind power, hydropower and biomass combustion power industries account for larger shares of the power generation structure, implying better industrial prospects. Mining, chemical, petroleum processing, non-metal, metal and thermal power industries are the major targets for industrial restructuring. This method is crucial for understanding the role of renewable energy development in GHG mitigation efforts and other energy-related planning settings, allowing exploration of the optimal relationships among socioeconomic activities and facilitating the simultaneous pursuit of economic development, energy utilization and environmental preservation.
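
    At its core, such a simulation solves a Leontief quantity model and attaches emission coefficients to sectoral output. A two-sector toy version (all coefficients invented for illustration, not taken from the Jilin City study):

```python
def leontief_output(a, demand):
    """Solve the Leontief quantity model x = A x + d, i.e. x = (I - A)^-1 d,
    by direct 2x2 inversion of (I - A)."""
    m00, m01 = 1.0 - a[0][0], -a[0][1]
    m10, m11 = -a[1][0], 1.0 - a[1][1]
    det = m00 * m11 - m01 * m10
    x0 = (m11 * demand[0] - m01 * demand[1]) / det
    x1 = (m00 * demand[1] - m10 * demand[0]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # technical coefficients, conventional energy sector
     [0.1, 0.1]]   # technical coefficients, renewable energy sector
x = leontief_output(A, demand=[100.0, 50.0])
ghg_intensity = [0.9, 0.1]   # emissions per unit output (toy values)
emissions = sum(e * xi for e, xi in zip(ghg_intensity, x))
```

    Shifting final demand toward the low-intensity renewable sector lowers the emissions total, which is the mechanism an emission-constrained optimization exploits.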

  13. Introducing Modeling Transition Diagrams as a Tool to Connect Mathematical Modeling to Mathematical Thinking

    Science.gov (United States)

    Czocher, Jennifer A.

    2016-01-01

    This study contributes a methodological tool to reconstruct the cognitive processes and mathematical activities carried out by mathematical modelers. Represented as Modeling Transition Diagrams (MTDs), individual modeling routes were constructed for four engineering undergraduate students. Findings stress the importance and limitations of using…

  14. On Two Mixture-Based Clustering Approaches Used in Modeling an Insurance Portfolio

    Directory of Open Access Journals (Sweden)

    Tatjana Miljkovic

    2018-05-01

    Full Text Available We review two complementary mixture-based clustering approaches for modeling unobserved heterogeneity in an insurance portfolio: the generalized linear mixed cluster-weighted model (CWM) and mixture-based clustering for an ordered stereotype model (OSM). The latter is for modeling ordinal variables, and the former is for modeling losses as a function of mixed-type covariates. The article extends the idea of mixture modeling to a multivariate classification for the purpose of testing unobserved heterogeneity in an insurance portfolio. The application of both methods is illustrated on a well-known French automobile portfolio, in which the model fitting is performed using the expectation-maximization (EM) algorithm. Our findings show that these mixture-based clustering methods can be used to further test unobserved heterogeneity in an insurance portfolio and as such may be considered in insurance pricing, underwriting, and risk management.
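
    Model fitting in both approaches relies on the EM algorithm; a bare-bones EM for a two-component univariate Gaussian mixture (synthetic data, not the French portfolio) shows the E- and M-steps:

```python
import math
import random

def em_two_gaussians(xs, iters=50):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(xs), max(xs)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each point
        resp = []
        for x in xs:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            resp.append(p[0] / (p[0] + p[1]))
        # M-step: update weights, means and standard deviations
        for k, r in ((0, resp), (1, [1.0 - ri for ri in resp])):
            n_k = sum(r)
            w[k] = n_k / len(xs)
            mu[k] = sum(ri * x for ri, x in zip(r, xs)) / n_k
            sd[k] = max(1e-6, math.sqrt(
                sum(ri * (x - mu[k]) ** 2 for ri, x in zip(r, xs)) / n_k))
    return w, mu, sd

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(200)] + \
     [random.gauss(8.0, 1.0) for _ in range(200)]
w, mu, sd = em_two_gaussians(xs)
```

    With well-separated components the estimated means recover the two latent subgroups, the same logic the CWM and OSM fits apply to richer likelihoods.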

  15. Enzyme-polyelectrolyte complexes in water-ethanol mixtures: negatively charged groups artificially introduced into alpha-chymotrypsin provide additional activation and stabilization effects.

    Science.gov (United States)

    Kudryashova, E V; Gladilin, A K; Vakurov, A V; Heitz, F; Levashov, A V; Mozhaev, V V

    1997-07-20

    Formation of noncovalent complexes between alpha-chymotrypsin (CT) and a polyelectrolyte, polybrene (PB), has been shown to produce two major effects on enzymatic reactions in binary mixtures of polar organic cosolvents with water. (i) At moderate concentrations of organic cosolvents (10% to 30% v/v), enzymatic activity of CT is higher than in aqueous solutions, and this activation effect is more significant for CT in complex with PB (5- to 7-fold) than for free enzyme (1.5- to 2.5-fold). (ii) The range of cosolvent concentrations that the enzyme tolerates without complete loss of catalytic activity is much broader. For enhancement of enzyme stability in the complex with the polycation, the number of negatively charged groups in the protein has been artificially increased by using chemical modification with pyromellitic and succinic anhydrides. Additional activation effect at moderate concentrations of ethanol and enhanced resistance of the enzyme toward inactivation at high concentrations of the organic solvent have been observed for the modified preparations of CT in the complex with PB as compared with an analogous complex of the native enzyme. Structural changes behind alterations in enzyme activity in water-ethanol mixtures have been studied by the method of circular dichroism (CD). Protein conformation of all CT preparations has not changed significantly up to 30% v/v of ethanol where activation effects in enzymatic catalysis were most pronounced. At higher concentrations of ethanol, structural changes in the protein have been observed for different forms of CT that were well correlated with a decrease in enzymatic activity. (c) 1997 John Wiley & Sons, Inc. Biotechnol Bioeng 55: 267-277, 1997.

  16. Introducing memory and association mechanism into a biologically inspired visual model.

    Science.gov (United States)

    Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng

    2014-09-01

    A famous biologically inspired hierarchical model (HMAX model), which was proposed recently and corresponds to V1 to V4 of the ventral pathway in primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model is able to achieve a set of position- and scale-tolerant recognition, which is a central problem in pattern recognition. In this paper, based on some other biological experimental evidence, we introduce the memory and association mechanism into the HMAX model. The main contributions of the work are: 1) mimicking the active memory and association mechanism and adding the top down adjustment to the HMAX model, which is the first try to add the active adjustment to this famous model and 2) from the perspective of information, algorithms based on the new model can reduce the computation storage and have a good recognition performance. The new model is also applied to object recognition processes. The primary experimental results show that our method is efficient with a much lower memory requirement.

  17. Model-based experimental design for assessing effects of mixtures of chemicals

    NARCIS (Netherlands)

    Baas, J.; Stefanowicz, A.M.; Klimek, B.; Laskowski, R.; Kooijman, S.A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for

  18. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
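
    For an F2 population, the Mendelian mixing proportions are 1:2:1 over the QTL genotypes; a minimal version of such a mixture density (illustrative genotype means and residual variance, not values from the paper) is:

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def f2_trait_density(x, mus, sd):
    """Trait density in an F2 population: the QTL genotypes QQ, Qq, qq
    segregate 1:2:1, so the phenotype follows a three-component normal
    mixture with Mendelian mixing proportions."""
    props = (0.25, 0.5, 0.25)
    return sum(p * normal_pdf(x, mu, sd) for p, mu in zip(props, mus))

# Additive QTL: genotype means 10, 12, 14 and residual sd 1
d = f2_trait_density(12.0, mus=(10.0, 12.0, 14.0), sd=1.0)
```

    QTL mapping then maximizes the product of such mixture densities over the observed phenotypes, with mixing proportions conditioned on flanking marker genotypes.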

  19. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    …the term Model Based Engineering (MBE), Model Driven Engineering (MDE), or Model-Based Systems Engineering…

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
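
    The mechanics of IPD are easy to see in a two-parameter logistic (2PL) item: a drift in difficulty between administrations changes the response probability for the same ability. The numbers below are illustrative, not TIMSS estimates:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: probability that an examinee with
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Same examinee, same item, but the item has drifted harder (b: 0.0 -> 0.5)
p_1999 = p_correct(theta=0.0, a=1.2, b=0.0)
p_2007 = p_correct(theta=0.0, a=1.2, b=0.5)
```

    Treating the drifted item as invariant, as pre-calibrated linking does, misattributes this probability shift to a change in examinee ability.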

  2. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo ePark

    2016-02-01

    Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effect on item parameters and examinee ability.

  3. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    Directory of Open Access Journals (Sweden)

    Orlov Alexey

    2016-01-01

    Full Text Available This article presents the results of the development of a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used for calculating the parameters of a gas centrifuge cascade for separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model is adequate for describing nonstationary separation processes in gas centrifuge cascades for separation of multicomponent isotope mixtures.

  4. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    Science.gov (United States)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  5. Introducing a boreal wetland model within the Earth System model framework

    Science.gov (United States)

    Getzieh, R. J.; Brovkin, V.; Reick, C.; Kleinen, T.; Raddatz, T.; Raivonen, M.; Sevanto, S.

    2009-04-01

    Wetlands of the northern high latitudes with their low temperatures and waterlogged conditions are prerequisite for peat accumulation. They store at least 25% of the global soil organic carbon and constitute currently the largest natural source of methane. These boreal and subarctic peat carbon pools are sensitive to climate change since the ratio of carbon sequestration and emission is closely dependent on hydrology and temperature. Global biogeochemistry models used for simulations of CO2 dynamics in the past and future climates usually ignore changes in the peat storages. Our approach aims at the evaluation of the boreal wetland feedback to climate through the CO2 and CH4 fluxes on decadal to millennial time scales. A generic model of organic matter accumulation and decay in boreal wetlands is under development in the MPI for Meteorology in cooperation with the University of Helsinki. Our approach is to develop a wetland model which is consistent with the physical and biogeochemical components of the land surface module JSBACH as a part of the Earth System model framework ECHAM5-MPIOM-JSBACH. As prototypes, we use modelling approach by Frolking et al. (2001) for the peat dynamics and the wetland model by Wania (2007) for vegetation cover and plant productivity. An initial distribution of wetlands follows the GLWD-3 map by Lehner and Döll (2004). First results of the modelling approach will be presented. References: Frolking, S. E., N. T. Roulet, T. R. Moore, P. J. H. Richard, M. Lavoie and S. D. Muller (2001): Modeling Northern Peatland Decomposition and Peat Accumulation, Ecosystems, 4, 479-498. Lehner, B., Döll P. (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296 (1-4), 1-22. Wania, R. (2007): Modelling northern peatland land surface processes, vegetation dynamics and methane emissions. PhD thesis, University of Bristol, 122 pp.

  6. Introducing vaccination against serogroup B meningococcal disease: an economic and mathematical modelling study of potential impact.

    Science.gov (United States)

    Christensen, Hannah; Hickman, Matthew; Edmunds, W John; Trotter, Caroline L

    2013-05-28

    Meningococcal disease remains an important cause of morbidity and mortality worldwide. The first broadly effective vaccine against group B disease (which causes considerable meningococcal disease in Europe, the Americas and Australasia) was licensed in the EU in January 2013; our objective was to estimate the potential impact of introducing such a vaccine in England. We developed two models to estimate the impact of introducing a new 'MenB' vaccine. The cohort model assumes the vaccine protects against disease only; the transmission dynamic model also allows the vaccine to protect against carriage (accounting for herd effects). We used these, and economic models, to estimate the case reduction and cost-effectiveness of a number of different vaccine strategies. We estimate 27% of meningococcal disease cases could be prevented over the lifetime of an English birth cohort by vaccinating infants at 2,3,4 and 12 months of age with a vaccine that prevents disease only; this strategy could be cost-effective at £9 per vaccine dose. Substantial reductions in disease (71%) can be produced after 10 years by routinely vaccinating infants in combination with a large-scale catch-up campaign, using a vaccine which protects against carriage as well as disease; this could be cost-effective at £17 per vaccine dose. New 'MenB' vaccines could substantially reduce disease in England and be cost-effective if competitively priced, particularly if the vaccines can prevent carriage as well as disease. These results are relevant to other countries, with a similar epidemiology to England, considering the introduction of a new 'MenB' vaccine. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. Ice cream model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe tip of 12.7 mm diameter was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. From the results it can be concluded that all model mixtures show non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower viscosities for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, foam stability index and minimal viscosity. Ice cream model mixtures treated only with ultrasound showed the smallest increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Also, ice cream mixtures with a higher protein content showed higher foam stability. The optimal treatment time was determined to be 10 minutes.
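
    Flow index, consistency coefficient and apparent viscosity are related through the power-law model, and dilatant behavior corresponds to a flow index greater than 1. A sketch with illustrative (not measured) values:

```python
def apparent_viscosity(k, n, shear_rate):
    """Power-law (Ostwald-de Waele) fluid: shear stress = k * rate**n, so the
    apparent viscosity is k * rate**(n - 1).
    n < 1: pseudoplastic (shear-thinning); n > 1: dilatant (shear-thickening)."""
    return k * shear_rate ** (n - 1.0)

k, n = 0.05, 1.3   # consistency coefficient (Pa.s**n) and flow index, illustrative
low = apparent_viscosity(k, n, 10.0)    # apparent viscosity at 10 1/s
high = apparent_viscosity(k, n, 100.0)  # apparent viscosity at 100 1/s
```

    For a dilatant mixture the apparent viscosity rises with shear rate, which is why it must be reported together with the shear rate at which it was measured.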

  8. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    Energy Technology Data Exchange (ETDEWEB)

    Snodin, Benedict E. K., E-mail: benedict.snodin@chem.ox.ac.uk; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K., E-mail: jonathan.doye@chem.ox.ac.uk [Physical and Theoretical Chemistry Laboratory, Department of Chemistry, University of Oxford, South Parks Road, Oxford OX1 3QZ (United Kingdom); Randisi, Ferdinando [Life Sciences Interface Doctoral Training Center, South Parks Road, Oxford OX1 3QU (United Kingdom); Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Šulc, Petr [Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10065 (United States); Ouldridge, Thomas E. [Department of Mathematics, Imperial College, 180 Queen’s Gate, London SW7 2AZ (United Kingdom); Tsukanov, Roman; Nir, Eyal [Department of Chemistry and the Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University of the Negev, Beer Sheva (Israel); Louis, Ard A. [Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom)

    2015-06-21

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na{sup +}] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  9. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    International Nuclear Information System (INIS)

    Snodin, Benedict E. K.; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K.; Randisi, Ferdinando; Šulc, Petr; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.

    2015-01-01

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  10. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank; Sun, Ke

    2017-01-01

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified

  11. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...

  12. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

Based on the developed model, a calculation method was obtained which is used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline when temperature-pressure and hydraulic conditions change.

  14. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

Full Text Available This study shows the development of a predictive natural gas mixture combustion model for conventional combustion (ignition) engines. The model was based on resolving two areas; one having unburned combustion mixture and another having combustion products. Energy and matter conservation equations were solved for each crankshaft turn angle for each area. Nonlinear differential equations for each phase's energy (considering compression, combustion and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enabled studying different natural gas components' composition and evaluating combustion in the presence of dry and humid air. Validation results are shown against experimental data, demonstrating the precision and accuracy of the results so produced. The results showed cylinder pressure, unburned and burned mixture temperature, burned mass fraction and combustion reaction heat for the engine being modelled using a natural gas mixture.

  15. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

Roč. 37, č. 24 (2013), s. 9970-9984 ISSN 0307-904X R&D Projects: GA TA ČR TA01030123 Institutional support: RVO:67985556 Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy Subject RIV: BC - Control Systems Theory Impact factor: 2.158, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  16. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model can fit the data well and can perform better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
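As a sketch of the risk measures described in this abstract: for a fitted two-component normal mixture, VaR at level α is the α-quantile of the mixture CDF, and CVaR is the conditional tail expectation, which has a closed form per normal component. The mixture parameters below are illustrative placeholders, not the paper's fitted values.

```python
import math

def norm_cdf(x, mu, sigma):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_cdf(x, comps):
    # comps: list of (weight, mean, std) tuples
    return sum(w * norm_cdf(x, mu, s) for w, mu, s in comps)

def var_cvar(comps, alpha=0.05):
    # VaR: solve F(q) = alpha by bisection on the left tail of returns
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, comps) < alpha:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    # CVaR: E[X | X <= q]; per normal component,
    # E[X 1{X<=q}] = mu*Phi(z) - sigma*phi(q) with z = (q-mu)/sigma
    tail_mean = sum(
        w * (mu * norm_cdf(q, mu, s) - s * norm_pdf(q, mu, s))
        for w, mu, s in comps
    )
    return q, tail_mean / alpha
```

For a single standard normal component this reproduces the textbook values VaR₀.₀₅ ≈ −1.645 and CVaR₀.₀₅ ≈ −2.063; a genuinely two-component mixture shifts both measures toward the heavier tail.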

  17. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
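The Box-Cox transformation through which this model introduces skewness handling is the standard power transform; a minimal sketch of the transform itself (the λ selection procedure described in the abstract is not shown):

```python
import math

def box_cox(x, lam):
    # Box-Cox power transform; requires x > 0
    # lam != 0: (x^lam - 1) / lam;  lam == 0: log(x) (the continuous limit)
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam
```
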

  18. Introducing Model Based Systems Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

…to a model-centric approach. The AGM was developed using the iGrafx tool with BPMN notation, as shown in Figure 14. It provides a time-sequenced perspective on the process.

  19. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  20. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Directory of Open Access Journals (Sweden)

    Tomoki Tokuda

Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  1. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Science.gov (United States)

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  2. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  3. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through some applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
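The "no fixed number of components" property of the Dirichlet process can be illustrated with the Chinese restaurant process, the seating scheme that underlies Gibbs sampling for such models. This is a generic illustration of the prior over partitions, not the authors' sampler:

```python
import random

def crp_partition(n, alpha, seed=0):
    # Chinese restaurant process: customer i joins existing table t with
    # probability n_t / (i + alpha), or opens a new table with
    # probability alpha / (i + alpha). The number of tables (components)
    # is not fixed in advance.
    rng = random.Random(seed)
    tables = []  # occupancy counts
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1
                break
        else:
            tables.append(1)  # new table
    return tables
```

The concentration parameter alpha controls how readily new components appear; the expected number of tables grows only logarithmically in n.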

  4. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

The numerical results of modeling of shock wave loading of mixtures with the SiO₂ component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of a Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is one of the components of the medium. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with the experimental data derived by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is described.

  5. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  6. Finite element modeling of single-walled carbon nanotubes with introducing a new wall thickness

    International Nuclear Information System (INIS)

    Jalalahmadi, B; Naghdabadi, R

    2007-01-01

A three-dimensional finite element (FE) model for armchair, zigzag and chiral single-walled carbon nanotubes (SWCNTs) is proposed. By considering the covalent bonds as connecting elements between carbon atoms, a nanotube is simulated as a space frame-like structure. Here, the carbon atoms act as joints of the connecting elements. To create the FE models, nodes are placed at the locations of carbon atoms and the bonds between them are modeled using three-dimensional elastic beam elements. Using the Morse atomic potential, the elastic moduli of the beam elements are obtained by considering a linkage between molecular and continuum mechanics. Also, a new wall thickness (bond diameter) equal to 0.1296 nm is introduced. In order to demonstrate the applicability of the FE model and the new wall thickness, the influence of tube wall thickness, diameter and chirality on the Young's modulus of SWCNTs is investigated. It is found that the choice of wall thickness significantly affects the calculation of Young's modulus. For the values of wall thickness used in the literature, the estimated Young's moduli agree very well with the corresponding theoretical results and experimental measurements. We also investigate the dependence of the elastic moduli on the diameter and chirality of the nanotube. The larger the tube diameter, the higher the Young's modulus of the SWCNT. The Young's modulus of chiral SWCNTs is found to be generally larger than that of armchair and zigzag SWCNTs. The presented results demonstrate that the proposed FE model and wall thickness may provide a valuable tool for studying the mechanical behavior of carbon nanotubes and their application in nano-composites.

  7. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties… modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted…

  8. A Generic Model for Prediction of Separation Performance of Olefin/Paraffin Mixture by Glassy Polymer Membranes

    Directory of Open Access Journals (Sweden)

    A.A. Ghoreyshi

    2008-02-01

Full Text Available The separation of olefin/paraffin mixtures is an important process in petrochemical industries, which is traditionally performed by low-temperature distillation with a high energy consumption, or by complex extractive distillation and adsorption techniques. The membrane separation process is emerging as an alternative to traditional separation processes owing to its low energy demand and simple operation. From investigations made by various researchers on polymeric membranes, it is found that special glassy polymers are suitable materials for olefin/paraffin mixture separation. In this regard, having some knowledge of the possible transport mechanism of these processes would play a significant role in their design and applications. In this study, the separation behavior of olefin/paraffin mixtures through glassy polymers was modeled by three different approaches: the so-called dual transport model, the basic adsorption-diffusion theory and the general Maxwell-Stefan formulation. The systems chosen to validate the developed transport models are the separation of an ethane-ethylene mixture by a 6FDA-6FpDA polyimide membrane and a propane-propylene mixture by a 6FDA-TrMPD polyimide membrane, for which the individual sorption and permeation data are available in the literature. A critical examination of the dual transport model shows that it clearly fails to predict even the proper trend for selectivities. The adjustment of permeabilities by accounting for the contribution of non-selective bulk flow in the transport model introduced no improvement in the predictability of the model. The modeling results based on the basic adsorption-diffusion theory revealed that in this approach an acceptable result is attainable only by using mixed permeability data, which fades out the advantage of predicting multicomponent separation performance from pure component data. Finally, the results obtained from the model developed based on the Maxwell-Stefan formulation approach show…

  9. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    Science.gov (United States)

    Portner, H.; Wolf, A.; Bugmann, H.

    2009-04-01

Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually gained more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors, a prominent one being soil temperature [Berg and Laskowski(2005)]. One uncertainty concerns the response function used to describe the sensitivity of soil organic matter decomposition to temperature. This relationship is often described by one out of a set of similar exponential functions, but it has not been investigated how uncertainties in the choice of the response function influence the long-term predictions of biogeochemical models. We built upon the well-established LPJ-GUESS model [Smith et al.(2001)]. We tested five candidate functions and calibrated them against eight datasets from different Ameriflux and CarboEuropeIP sites [Hibbard et al.(2006)]. We used a simple Exponential function with a constant Q10, the Arrhenius function, the Gaussian function [Tuomi et al.(2008), O'Connell(1990)], the Van't Hoff function [Van't Hoff(1901)] and the Lloyd&Taylor function [Lloyd and Taylor(1994)]. We assessed the impact of uncertainty in the model formulation of the temperature response on estimates of present and future long-term carbon storage in ecosystems, and hence on the CO2 feedback potential to the atmosphere. We specifically investigated the relative importance of model formulation and the error introduced by using different data sets for the parameterization. Our results suggested that the Exponential and Arrhenius functions are inappropriate, as they overestimated the respiration rates at lower temperatures. The Gaussian, Van't Hoff and Lloyd&Taylor functions all fit the observed data better, whereby the Gaussian and Van't Hoff functions underestimated the response at higher temperatures. We suggest that the…
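The candidate temperature response functions compared in this study have standard forms; a sketch of three of them, normalised to a reference respiration r10 at 10 °C (the Lloyd & Taylor constants below are the commonly used published values; the other parameter values are illustrative, not the study's calibrated ones):

```python
import math

def resp_q10(t_c, r10=1.0, q10=2.0):
    # simple exponential with constant Q10, reference temperature 10 degC
    return r10 * q10 ** ((t_c - 10.0) / 10.0)

def resp_arrhenius(t_c, a, ea=53000.0):
    # Arrhenius form; Ea in J/mol, gas constant R = 8.314 J/(mol K)
    return a * math.exp(-ea / (8.314 * (t_c + 273.15)))

def resp_lloyd_taylor(t_c, r10=1.0, e0=308.56, t0=227.13):
    # Lloyd & Taylor (1994); temperatures handled in kelvin internally,
    # normalised so that the function equals r10 at 10 degC (283.15 K)
    t_k = t_c + 273.15
    return r10 * math.exp(e0 * (1.0 / (283.15 - t0) - 1.0 / (t_k - t0)))
```

All three are increasing in temperature; the differences the study highlights lie in their curvature at the low- and high-temperature ends.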

  10. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by the minimization of an extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, and PSNR reached 3 dB.
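The EM estimation of mixture parameters used here follows the standard alternation of responsibilities (E-step) and weighted moment updates (M-step); a minimal one-dimensional, two-component sketch (the initialisation and fixed iteration count are simplifying assumptions, and the article's multivariate case adds covariance matrices):

```python
import math

def em_gmm_1d(data, iters=50):
    # EM for a two-component 1-D Gaussian mixture
    lo, hi = min(data), max(data)
    w, mu, var = [0.5, 0.5], [lo, hi], [1.0, 1.0]  # crude initialisation
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates of weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var
```

Each iteration is guaranteed not to decrease the log-likelihood, which is the property the article relies on for parameter estimation.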

  11. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of the artificial neural network. The book will serve as a single complete reference to understand the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  12. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    Full Text Available APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersing conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed by using grey correlation method, and the APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have bigger rutting depth ratios of air-dry to immersing conditions, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air void, and immersing conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy, and the correlation coefficient between the predicted and the measured rutting depths is 96.3%.

  13. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties… and the reliability of which has been validated. The dynamic modulus of asphalt mixtures was predicted by conducting Discrete Element simulation under dynamic strain control loading. In order to reduce the calculation time, a method based on the frequency–temperature superposition principle has been implemented. The ball density effect on the internal stress distribution of the asphalt mixture model has been studied when using this method. Furthermore, the internal stresses under dynamic loading have been studied. The agreement between the predicted and the laboratory test results of the complex modulus shows…

  14. Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves

    Science.gov (United States)

    Wang, Qiang; Ma, Shuxian; Yue, Dong

    2018-04-01

Composite materials have comprehensively better properties than traditional materials, and have therefore been more and more widely used, especially because of their higher strength-to-weight ratio. However, the damage of composite structures is usually varied and complicated. In order to ensure the security of these structures, it is necessary to monitor and distinguish the structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved to be effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies the damage identification approach for composite structures using the Lamb wave and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and the parameter estimation, are introduced. Multi-statistical characteristic parameters of the excited Lamb waves are extracted, and a parameter space with reduced dimensions is adopted by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.

  15. Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering

    Directory of Open Access Journals (Sweden)

    M. H. Savoji

    2014-09-01

Full Text Available Gaussian Mixture Models (GMMs) of power spectral densities of speech and noise are used with explicit Bayesian estimations in Wiener filtering of noisy speech. No assumption is made on the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined systems of equations whose solutions lead to the first estimates of the speech and noise power spectra. The noise source is also identified and the input SNR estimated in this first step. These first estimates are then refined using approximate but explicit MMSE and MAP estimation formulations. The refined estimates are then used in a Wiener filter to reduce noise and enhance the noisy speech. The proposed schemes show good results. Nevertheless, it is shown that the MAP explicit solution, introduced here for the first time, reduces the computation time to less than one third, with a slightly higher improvement in SNR and PESQ score and also less distortion, in comparison to the MMSE solution.
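Once the speech and noise power spectra are estimated, the Wiener filter itself reduces to the per-bin gain H = S/(S+N) applied to the noisy spectrum; a minimal sketch of that final step (the GMM-based spectrum estimation described above is not shown):

```python
def wiener_gain(speech_psd, noise_psd):
    # frequency-domain Wiener gain H = S / (S + N), computed per bin
    return [s / (s + n) if s + n > 0 else 0.0
            for s, n in zip(speech_psd, noise_psd)]

def apply_gain(noisy_mag, gain):
    # attenuate each spectral bin of the noisy signal by its gain
    return [g * m for g, m in zip(gain, noisy_mag)]
```

Bins where the estimated speech power dominates pass nearly unchanged (gain near 1), while noise-dominated bins are suppressed (gain near 0).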

  16. Excess Properties of Aqueous Mixtures of Methanol: Simple Models Versus Experiment

    Czech Academy of Sciences Publication Activity Database

    Vlček, Lukáš; Nezbeda, Ivo

    roč. 131-132, - (2007), s. 158-162 ISSN 0167-7322. [International Conference on Solution Chemistry /29./. Portorož, 21.08.2005-25.08.2005] R&D Projects: GA AV ČR(CZ) IAA4072303; GA AV ČR(CZ) 1ET400720409 Institutional research plan: CEZ:AV0Z40720504 Keywords : aqueous mixtures * primitive models * water-alcohol mixtures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.982, year: 2007

  17. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of the moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the time data of the moments.
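Recovering a gamma distribution from moment data, as done for the DCDF here, can be sketched with the method of moments for the two-parameter gamma (shape k, scale θ), using mean = kθ and variance = kθ². This is a generic sketch; the paper's successive-estimation scheme over time is not reproduced:

```python
def gamma_from_moments(m1, m2):
    # method of moments for a gamma distribution:
    # m1 = E[X] = k*theta,  m2 = E[X^2],  variance = k*theta**2
    var = m2 - m1 * m1
    theta = var / m1
    k = m1 / theta
    return k, theta
```

Repeating this fit at each time step, with the moments supplied by the dynamic system, yields the evolving DCDF approximation the abstract describes.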

  18. Does introduced fauna influence soil erosion? A field and modelling assessment.

    Science.gov (United States)

    Hancock, G R; Lowry, J B C; Dever, C; Braggins, M

    2015-06-15

Pigs (Sus scrofa) are recognised as having significant ecological impacts in many areas of the world including northern Australia. The full consequences of the introduction of pigs are difficult to quantify as the impacts may only be detected over the long term and there is a lack of quantitative information on the impacts of feral pigs globally. In this study the effect of feral pigs is quantified in an undisturbed catchment in the monsoonal tropics of northern Australia. Over a three-year period, field data showed that the areal extent of pig disturbance ranged from 0.3-3.3% of the survey area. The mass of material exhumed through these activities ranged from 4.3 t ha⁻¹ yr⁻¹ to 36.0 t ha⁻¹ yr⁻¹. The findings demonstrate that large introduced species such as feral pigs are disturbing large areas as well as exhuming considerable volumes of soil. A numerical landscape evolution and soil erosion model was used to assess the effect of this disturbance on catchment scale erosion rates. The modelling demonstrated that simulated pig disturbance in previously undisturbed areas produced lower erosion rates compared to those areas which had not been impacted by pigs. This is attributed to the pig disturbance increasing surface roughness and trapping sediment. This suggests that in this specific environment, disturbance by pigs does not enhance erosion. However, this conclusion is prefaced by two important caveats. First, the long-term impact of soil disturbance is still very uncertain. Secondly, modelling results show a clear differentiation between those from an undisturbed environment and those from a post-mining landscape, in which pig disturbance may enhance erosion. Copyright © 2015. Published by Elsevier B.V.

  19. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  20. A globally accurate theory for a class of binary mixture models

    Science.gov (United States)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  1. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of representing a reaction as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating the product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
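The "mixture" representation can be sketched with toy fragment descriptors. The descriptor names and counts below are hypothetical placeholders; only the concatenation/difference construction follows the abstract:

```python
def mixture_descriptor(molecules, keys):
    """Descriptor of a mixture: sum each descriptor value over all molecules."""
    return [sum(mol.get(k, 0.0) for mol in molecules) for k in keys]

def reaction_features(reactants, products, keys, mode="difference"):
    """Reaction feature vector from the two mixture descriptors:
    either the product-minus-reactant difference or their concatenation."""
    r = mixture_descriptor(reactants, keys)
    p = mixture_descriptor(products, keys)
    if mode == "difference":
        return [pi - ri for pi, ri in zip(p, r)]
    return r + p  # concatenation

# Hypothetical fragment counts for an E2-like reaction (illustrative only)
reactants = [{"C-Br": 1, "C-H": 5}, {"O-H": 1}]
products = [{"C=C": 1, "C-H": 4}, {"O-H": 2}]
keys = ["C-Br", "C-H", "C=C", "O-H"]
diff = reaction_features(reactants, products, keys)
```

Note that no reaction-center atoms are marked anywhere; the difference vector encodes which fragments appear and disappear.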

  2. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suited to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
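The core idea of input-dependent mixture weights can be sketched deterministically; the softmax weight functions and toy models below are illustrative assumptions and do not reproduce the paper's Bayesian inference:

```python
import math

def softmax(zs):
    """Normalize scores into weights that sum to one."""
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def mixture_prediction(x, models, weight_fns):
    """Combine model outputs with input-dependent weights w_i(x),
    so each model can dominate in the region where it is most accurate."""
    w = softmax([g(x) for g in weight_fns])
    return sum(wi * f(x) for wi, f in zip(w, models))

# Two toy "computer models"; the weights favour the first for small x
models = [lambda x: x, lambda x: x ** 2]
weight_fns = [lambda x: -x, lambda x: x]
```

In the paper the weight functions and calibration parameters are inferred jointly from experimental data rather than fixed as here.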

  3. Monitoring and modeling of ultrasonic wave propagation in crystallizing mixtures

    Science.gov (United States)

    Marshall, T.; Challis, R. E.; Tebbutt, J. S.

    2002-05-01

    The utility of ultrasonic compression wave techniques for monitoring crystallization processes is investigated in a study of the seeded crystallization of copper(II) sulfate pentahydrate from aqueous solution. Simple models are applied to predict crystal yield, crystal size distribution and the changing nature of the continuous phase. A scattering model is used to predict the ultrasonic attenuation as crystallization proceeds. Experiments confirm that the modeled attenuation is in agreement with measured results.

  4. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
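A minimal sketch of the element-wise classification idea: assume a two-process mixture of 2PL item response functions in which the probability of the "fast" process is a logistic function of log response time. This is a simplification of the paper's model, and all parameter names and values are hypothetical:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def irt_prob(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return logistic(a * (theta - b))

def element_posterior(y, log_rt, theta, fast_item, slow_item, gamma0, gamma1):
    """Posterior probability that a single response y (0/1) came from the
    'fast' process, given the subject's log response time. Unlike classic
    mixture modeling, this classifies one element of the response vector."""
    pi_fast = logistic(gamma0 + gamma1 * log_rt)  # class prob from RT

    def lik(item):
        p = irt_prob(theta, *item)
        return p if y == 1 else 1.0 - p

    num = pi_fast * lik(fast_item)
    den = num + (1.0 - pi_fast) * lik(slow_item)
    return num / den
```

When the two processes imply different item difficulties, the same response receives different posterior class probabilities at different response times, which is the within-subject heterogeneity the abstract describes.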

  5. Modeling of the effect of intentionally introduced traps on hole transport in single-crystal rubrene

    KAUST Repository

    Dacuña, Javier; Desai, Amit; Xie, Wei; Salleo, Alberto

    2014-01-01

    Defects have been intentionally introduced in a rubrene single crystal by means of two different mechanisms: ultraviolet ozone (UVO) exposure and x-ray irradiation. A complete drift-diffusion model based on the mobility edge (ME) concept, which takes into account asymmetries and nonuniformities in the semiconductor, is used to estimate the energetic and spatial distribution of trap states. The trap distribution for pristine devices can be decomposed into two well-defined regions: a shallow region ascribed to structural disorder and a deeper region ascribed to defects. UVO and x-ray irradiation increase the hole trap concentration in the semiconductor with different energetic and spatial signatures. The former creates traps near the top surface in the 0.3-0.4 eV region, while the latter induces a wider distribution of traps extending from the band edge with a spatial distribution that peaks near the top and bottom interfaces. In addition to inducing hole trap states in the transport gap, both processes are shown to reduce the mobility with respect to a pristine crystal. © 2014 American Physical Society.

  6. Modeling of the effect of intentionally introduced traps on hole transport in single-crystal rubrene

    KAUST Repository

    Dacuña, Javier

    2014-06-05

    Defects have been intentionally introduced in a rubrene single crystal by means of two different mechanisms: ultraviolet ozone (UVO) exposure and x-ray irradiation. A complete drift-diffusion model based on the mobility edge (ME) concept, which takes into account asymmetries and nonuniformities in the semiconductor, is used to estimate the energetic and spatial distribution of trap states. The trap distribution for pristine devices can be decomposed into two well-defined regions: a shallow region ascribed to structural disorder and a deeper region ascribed to defects. UVO and x-ray irradiation increase the hole trap concentration in the semiconductor with different energetic and spatial signatures. The former creates traps near the top surface in the 0.3-0.4 eV region, while the latter induces a wider distribution of traps extending from the band edge with a spatial distribution that peaks near the top and bottom interfaces. In addition to inducing hole trap states in the transport gap, both processes are shown to reduce the mobility with respect to a pristine crystal. © 2014 American Physical Society.

  7. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
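The traditional scores mentioned above are computed from a 2x2 forecast/observation contingency table. A sketch using the standard definitions of the Equitable Threat Score and log-odds ratio (textbook forms, not code from the study):

```python
import math

def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Equitable Threat Score and log-odds ratio from a 2x2
    forecast/observation contingency table.

    ETS = (H - H_r) / (H + M + F - H_r), where H_r = (H+M)(H+F)/N is the
    number of hits expected by chance. Log-odds ratio = ln(H*CN / (M*F)).
    """
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    log_odds = math.log((hits * correct_negatives) / (misses * false_alarms))
    return ets, log_odds
```

A perfect forecast gives ETS = 1; a forecast no better than chance gives ETS = 0, which is what makes the score "equitable".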

  8. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part VI. Multicomponent mixtures with glycols relevant to oil and gas and to liquid or supercritical CO_2 transport applications

    International Nuclear Information System (INIS)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2016-01-01

    Highlights: • The CPA EoS was applied to predict the phase behaviour of multicomponent mixtures containing CO₂, glycols, water and alkanes. • Mixtures relevant to oil and gas, CO₂ capture and liquid or supercritical CO₂ transport applications were investigated. • Results are presented using various modelling approaches/association schemes. • The predictive ability of the model was evaluated against experimental data. • Conclusions for the best modelling approach are drawn. - Abstract: In this work the Cubic Plus Association (CPA) equation of state is applied to multicomponent mixtures containing CO₂ with alkanes, water, and glycols. Various modelling approaches are used, i.e. different association schemes for pure CO₂ (assuming that it is a non-associating compound, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling mixtures of CO₂ with other hydrogen bonding fluids (use of only one interaction parameter k_ij, or assuming cross-association interactions and obtaining the relevant parameters either via a combining rule or using an experimental value for the cross-association energy). Initially, new binary interaction parameters were estimated for (CO₂ + glycol) binary mixtures. With the binary parameters obtained from the binary systems, the model was applied in a predictive way (i.e. no parameters were adjusted to data on ternary and multicomponent mixtures) to model the phase behaviour of ternary and quaternary systems with CO₂ and glycols. It is concluded that CPA performs satisfactorily for most multicomponent systems considered. Some differences between the various modelling approaches are observed. This work is the last part of a series of studies which aim to arrive at a single “engineering approach” for applying CPA to acid gas mixtures, without introducing significant changes to the model. An overall assessment, based also on the obtained results of this series (Tsivintzelis
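The binary interaction parameter and cross-association combining rules mentioned in the abstract take simple algebraic forms. A sketch of the standard geometric-mean rule for the energy parameter and the CR-1 combining rule commonly used with CPA (parameter values below are placeholders, not the paper's fitted values):

```python
import math

def cross_energy_a(a_i, a_j, k_ij):
    """Geometric-mean combining rule for the cross energy parameter,
    corrected by one binary interaction parameter k_ij:
    a_ij = sqrt(a_i * a_j) * (1 - k_ij)."""
    return math.sqrt(a_i * a_j) * (1.0 - k_ij)

def cr1_association(eps_i, eps_j, beta_i, beta_j):
    """CR-1 combining rule for cross-association: arithmetic mean of the
    association energies and geometric mean of the association volumes."""
    return 0.5 * (eps_i + eps_j), math.sqrt(beta_i * beta_j)
```

When an experimental cross-association energy is available, it replaces the arithmetic-mean value returned by the CR-1 rule, which is one of the modelling choices the abstract compares.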

  9. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  10. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  11. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    International Nuclear Information System (INIS)

    Teng, S.; Tebby, C.; Barcellini-Couget, S.; De Sousa, G.; Brochot, C.; Rahmani, R.; Pery, A.R.R.

    2016-01-01

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentrations in target organs vary over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, the mixtures showed no interactions, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  12. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    Energy Technology Data Exchange (ETDEWEB)

    Teng, S.; Tebby, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Barcellini-Couget, S. [ODESIA Neosciences, Sophia Antipolis, 400 route des chappes, 06903 Sophia Antipolis (France); De Sousa, G. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Brochot, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Rahmani, R. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Pery, A.R.R., E-mail: alexandre.pery@agroparistech.fr [AgroParisTech, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France); INRA, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France)

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentrations in target organs vary over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, the mixtures showed no interactions, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  13. Modelling time course gene expression data with finite mixtures of linear additive models.

    Science.gov (United States)

    Grün, Bettina; Scharl, Theresa; Leisch, Friedrich

    2012-01-15

    A model class of finite mixtures of linear additive models is presented. The component-specific parameters in the regression models are estimated using regularized likelihood methods. The advantages of the regularization are that (i) the pre-specified maximum degrees of freedom for the splines is less crucial than for unregularized estimation and that (ii) for each component individually a suitable degree of freedom is selected in an automatic way. The performance is evaluated in a simulation study with artificial data as well as on a yeast cell cycle dataset of gene expression levels over time. The latest release version of the R package flexmix is available from CRAN (http://cran.r-project.org/).

  14. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
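The robustness argument rests on the Student's-t density's heavy tails: an outlying point retains non-negligible likelihood and therefore distorts the fit far less than under a Gaussian. A small univariate sketch of this comparison (the registration algorithm itself is not reproduced here):

```python
import math

def student_t_pdf(x, mu, sigma, nu):
    """Univariate Student's-t density with nu degrees of freedom;
    heavier-tailed than a Gaussian with the same location and scale."""
    c = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * sigma
    )
    return c * (1 + ((x - mu) / sigma) ** 2 / nu) ** (-(nu + 1) / 2)

def gaussian_pdf(x, mu, sigma):
    """Standard Gaussian density for comparison."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# At a 6-sigma outlier the t density (nu = 3) dwarfs the Gaussian density,
# so a t-mixture assigns the point some likelihood instead of forcing a
# centroid to chase it.
ratio = student_t_pdf(6.0, 0.0, 1.0, 3.0) / gaussian_pdf(6.0, 0.0, 1.0)
```

In the mixture-model setting this tail behavior is what keeps centroid updates stable when the data contain gross outliers.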

  15. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    Science.gov (United States)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  16. The phase behavior of a hard sphere chain model of a binary n-alkane mixture

    International Nuclear Information System (INIS)

    Malanoski, A. P.; Monson, P. A.

    2000-01-01

    Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united atom, hard sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials for the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. (c) 2000 American Institute of Physics

  17. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb and 50 ppb v/v) of each component, except for the COPD markers, whose concentration in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and only for a limited number of basic mixture components.
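PCA as used to interpret the multi-sensor responses can be illustrated in miniature. Below is a closed-form first principal component for two sensor channels, a two-dimensional toy rather than the study's six-sensor data pipeline:

```python
import math

def first_pc_2d(data):
    """First principal component of 2-D readings (e.g. two sensor channels),
    via the closed-form largest eigenpair of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    if sxy != 0:
        v = (sxy, lam - sxx)  # eigenvector for lam when off-diagonal is nonzero
    elif sxx >= syy:
        v = (1.0, 0.0)
    else:
        v = (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm), lam
```

Projecting each measured mixture onto the leading components is what produces the score plots used to judge whether marker-containing mixtures separate from the background.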

  18. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition at liquid-vapor equilibrium. Moreover, the bubble and dew points are separated; this gap is called the temperature glide (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles, where using a zeotropic mixture as the working fluid can improve performance by an order of magnitude. Optimization of JT cycles has therefore gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in these cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained with this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with experimental data published by Barraza et al. (2016) and show good agreement.

  19. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  20. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and considers the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. The proposed model considers the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity for the two cases, one in which sand is the smaller particle and one in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
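The final prediction step can be sketched with the Kozeny-Carman relation. The constant c = 180 and the harmonic-mean (specific-surface weighted) effective grain size used below are common textbook choices, assumed here rather than taken from the abstract:

```python
def kozeny_carman(phi_e, d_eff, c=180.0):
    """Permeability (same length units squared as d_eff) from effective
    porosity and effective grain diameter via the Kozeny-Carman relation:
    k = d_eff^2 * phi_e^3 / (c * (1 - phi_e)^2)."""
    return d_eff ** 2 * phi_e ** 3 / (c * (1.0 - phi_e) ** 2)

def effective_grain_size(fractions, sizes):
    """Harmonic mean grain size of the gravel/sand/clay components,
    weighted by their volume fractions (which should sum to one)."""
    return 1.0 / sum(f / d for f, d in zip(fractions, sizes))
```

The harmonic mean makes the effective grain size, and hence the predicted permeability, very sensitive to small fractions of fine clay, while a small gravel fraction shifts it upward, which is the qualitative behavior the abstract reports.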

  1. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles separately: a modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography reaction model are coupled. The bed load transport rate and suspended load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope against the wave propagation direction, while deposition dominated on the slope towards wave propagation, indicating an onshore migration tendency. The computation results also show that

  2. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    Science.gov (United States)

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
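The CA and IA predictions discussed above can be sketched numerically. A minimal Python illustration, assuming hypothetical two-parameter log-logistic dose-response curves for each metal (the EC50 and slope values are placeholders, not the paper's fitted parameters):

```python
# Sketch of concentration addition (CA) and independent action (IA)
# mixture-effect predictions. Dose-response curves are illustrative
# 2-parameter log-logistic functions, not the paper's fitted models.

def effect(conc, ec50, slope):
    # Fraction affected at a given concentration (log-logistic curve).
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def inverse_effect(e, ec50, slope):
    # Concentration producing effect level e (inverse of the curve above).
    return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

def predict_ia(concs, ec50s, slopes):
    # IA: probabilities of remaining unaffected multiply across chemicals.
    p_unaffected = 1.0
    for c, m, s in zip(concs, ec50s, slopes):
        p_unaffected *= 1.0 - effect(c, m, s)
    return 1.0 - p_unaffected

def predict_ca(concs, ec50s, slopes):
    # CA: solve sum_i c_i / EC_{x,i} = 1 for the mixture effect x
    # (toxic units summing to one), by bisection on (0, 1).
    def excess_toxic_units(e):
        return sum(c / inverse_effect(e, m, s)
                   for c, m, s in zip(concs, ec50s, slopes)) - 1.0
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_toxic_units(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With two metals each present at its own EC50 and unit slopes, CA predicts a mixture effect of 2/3 while IA predicts 0.75; comparing such predictions against observed survival is the essence of the analysis described above.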

  3. Modeling of nanoscale liquid mixture transport by density functional hydrodynamics

    Science.gov (United States)

    Dinariev, Oleg Yu.; Evseev, Nikolay V.

    2017-06-01

    Modeling of multiphase compositional hydrodynamics at nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is a method based on density functional theory and continuum mechanics. This method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at nanoscale. The new DFH extension is based on the introduction of external potentials for chemical components. These potentials are localized in the vicinity of solid surfaces and take into account the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate modeling capabilities.

  4. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    Full Text Available Abstract – This paper introduces the ‘Interactive Model’ of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases – ‘Multimodal Critical Analysis of Scripts’ (MuCrAS and ‘Multimodal Re-Textualization of Scripts’ (MuReTS. Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004. By resorting to Critical Discourse Analysis (Fairclough 1995, 2001, to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006, the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application shall be discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  5. Using peer teaching to introduce the Pharmaceutical Care Model to incoming pharmacy students.

    Science.gov (United States)

    Kolar, Claire; Hager, Keri; Janke, Kristin K

    2018-02-01

    The aim of this initiative was to design and evaluate a peer teaching activity where pairs of second-year pharmacy students introduced the Pharmaceutical Care Model and discussed success in the broader first-year pharmacy curriculum with pairs of first year students. Second-year pharmacy students individually created concept maps illustrating the main components of pharmaceutical care to be used as teaching tools with first-year students. First-year students were given a brief introduction to pharmaceutical care by faculty and prepared questions to ask their second-year colleagues. Two second-year students were then matched with two first-year students for a two-part peer teaching event. Each student completed documentation of the peer experience, which included questions about the effectiveness of the teaching, changes to be made in the future, and the usefulness of the exercise. The documentation was analyzed via content analysis and instructors evaluated the concept maps based on their effectiveness as a teaching tool for novices. A rubric was used to evaluate 166 concept maps of which 145 were rated good, 18 were rated as better, and 3 as best. Themes emerging from the content analysis included: positive impact of teaching and learning pharmaceutical care, value of broader curriculum discussion, and beneficial first- and second-year connections. A structured peer teaching event outside the traditional classroom setting can create a space for: teaching and learning to occur, student-student connections to be made, and advice on the curriculum to be shared. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only f...

  7. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  8. Growth Kinetics and Modeling of Direct Oxynitride Growth with NO-O2 Gas Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Everist, Sarah; Nelson, Jerry; Sharangpani, Rahul; Smith, Paul Martin; Tay, Sing-Pin; Thakur, Randhir

    1999-05-03

    We have modeled growth kinetics of oxynitrides grown in NO-O2 gas mixtures from first principles using modified Deal-Grove equations. Retardation of oxygen diffusion through the nitrided dielectric was assumed to be the dominant growth-limiting step. The model was validated against experimentally obtained curves with good agreement. Excellent uniformity, which exceeded expected values, was observed.
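The classical Deal-Grove relation underlying such growth models can be sketched compactly (the paper's modifications and diffusion-retardation term are not reproduced; A, B and tau are illustrative fitting constants, not the reported values):

```python
import math

# Classical Deal-Grove thickness solution: solves x^2 + A*x = B*(t + tau)
# for the film thickness x(t). A, B, tau are illustrative fitting
# constants; the paper's modified (oxynitride) equations are not shown.
def deal_grove_thickness(t, A, B, tau=0.0):
    return 0.5 * A * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)
```

As a sanity check: with A = B = 1 and tau = 0, the thickness reaches 1 at t = 2, since 1² + 1·1 = 2.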

  9. Differential expression among tissues in morbidly obese individuals using a finite mixture model under BLUP approach

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Trabzuni, Daniah; Bonder, Marc Jan

    effects of the interactions between tissues and probes using BLUP (Best Linear Unbiased Prediction) linear models correcting for gender, which were subsequently used in a finite mixture model to detect DE genes in each tissue. This approach evades the multiple-testing problem and is able to detect...

  10. Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by

  11. Rheology of petrolatum-paraffin oil mixtures : Applications to analogue modelling of geological processes

    NARCIS (Netherlands)

    Duarte, João C.; Schellart, Wouter P.; Cruden, Alexander R.

    2014-01-01

    Paraffins have been widely used in analogue modelling of geological processes. Petrolatum and paraffin oil are commonly used to lubricate model boundaries and to simulate weak layers. In this paper, we present rheological tests of petrolatum, paraffin oil and several homogeneous mixtures of the two.

  12. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
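The EM scheme described above can be illustrated in miniature. A hedged sketch, assuming one-dimensional pixel intensities and Gaussian classes, with per-pixel mixing weights standing in for the spatially variant label probabilities (no spatial smoothness prior is included, for brevity):

```python
import math

# Minimal EM sketch for a spatially variant finite mixture model:
# each pixel carries its own mixing weights, so the per-pixel posterior
# itself yields an unambiguous label. Gaussian classes, 1-D intensities.
def em_pixel_labels(pixels, k, iters=50):
    n = len(pixels)
    lo, hi = min(pixels), max(pixels)
    means = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    var = [1.0] * k
    w = [[1.0 / k] * k for _ in range(n)]   # per-pixel mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of class j for pixel i,
        # which also becomes that pixel's updated mixing weight.
        for i, x in enumerate(pixels):
            dens = [w[i][j] * math.exp(-(x - means[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens) or 1e-300
            w[i] = [d / s for d in dens]
        # M-step: update class means and variances.
        for j in range(k):
            wj = sum(w[i][j] for i in range(n)) or 1e-300
            means[j] = sum(w[i][j] * pixels[i] for i in range(n)) / wj
            var[j] = max(1e-6, sum(w[i][j] * (pixels[i] - means[j]) ** 2
                                   for i in range(n)) / wj)
    # Maximum likelihood label per pixel, directly from its own weights.
    return [max(range(k), key=lambda j: w[i][j]) for i in range(n)]
```

On two well-separated intensity clusters this converges in a few iterations and assigns each cluster a single consistent label, mirroring the unambiguous labeling property claimed in the abstract.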

  13. Modelling of phase equilibria for associating mixtures using an equation of state

    International Nuclear Information System (INIS)

    Ferreira, Olga; Brignole, Esteban A.; Macedo, Eugenia A.

    2004-01-01

    In the present work, the group contribution with association equation of state (GCA-EoS) is extended to represent phase equilibria in mixtures containing acids, esters, and ketones, with water, alcohols, and any number of inert components. Association effects are represented by a group-contribution approach. Self- and cross-association between the associating groups present in these mixtures are considered. The GCA-EoS model is compared to the group-contribution method MHV2, which does not explicitly take association effects into account. The results obtained with the GCA-EoS model are, in general, more accurate than those achieved by the MHV2 equation, while using fewer parameters. Model predictions are presented for binary self- and cross-associating mixtures.

  14. Implications of introducing realistic fire response traits in a Dynamic Global Vegetation Model

    Science.gov (United States)

    Kelley, D.; Harrison, S. P.; Prentice, I. C.

    2013-12-01

    Bark thickness is a key trait protecting woody plants against fire damage, while the ability to resprout is a trait that confers competitive advantage over non-resprouting individuals in fire-prone landscapes. Neither trait is well represented in fire-enabled dynamic global vegetation models (DGVMs). Here we describe a version of the Land Processes and eXchanges (LPX-Mv1) DGVM that incorporates both of these traits in a realistic way. From a synthesis of a large number of field studies, we show there is considerable innate variability in bark thickness between species within a plant-functional type (PFT). Furthermore, bark thickness is an adaptive trait at ecosystem level, increasing with fire frequency. We use the data to specify the range of bark thicknesses characteristic of each model PFT. We allow this distribution to change dynamically: thinner-barked trees are killed preferentially by fire, shifting the distribution of bark thicknesses represented in a model grid cell. We use the PFT-specific bark-thickness probability range for saplings during re-establishment. Since it is rare to destroy all trees in a grid cell, this treatment results in average bark thickness increasing with fire frequency and intensity. Resprouting is a prominent adaptation of temperate and tropical trees in fire-prone areas. The ability to resprout from above-ground tissue (apical or epicormic resprouting) results in the fastest recovery of total biomass after disturbance; resprouting from basal or below-ground meristems results in slower recovery, while non-resprouting species must regenerate from seed and therefore take the longest time to recover. Our analyses show that resprouting species have thicker bark than non-resprouting species. Investment in resprouting is accompanied by reduced efficacy of regeneration from seed. 
We introduce resprouting PFTs in LPX-Mv1 by specifying an appropriate range of bark thickness, allowing resprouters to survive fire and regenerate vegetatively in

  15. HTCC - a heat transfer model for gas-steam mixtures

    International Nuclear Information System (INIS)

    Papadimitriou, P.

    1983-01-01

    The mathematical model HTCC (Heat Transfer Coefficient in Containment) has been developed for RALOC to determine, after a loss-of-coolant accident, the local heat transfer coefficients for transfer between the containment atmosphere and the walls of the reactor building. The model considers the current room and wall temperatures, the concentrations of steam and non-condensible gases, geometry data, and fluid-dynamic and thermodynamic parameters, and from these determines the heat transfer mechanisms due to convection, radiation and condensation. The HTCC is implemented in the RALOC program. Comparative analyses of computed temperature profiles, for HEDL standard problems A and B on hydrogen distribution, and of computed temperature profiles determined during the heat-up phase in the CSE-A5 experiment show good agreement with experimental data. (orig.) [de

  16. Factoring variations in natural images with deep Gaussian mixture models

    OpenAIRE

    van den Oord, Aäron; Schrauwen, Benjamin

    2014-01-01

    Generative models can be seen as the swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods is that they often are harder to scale. As ...

  17. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  18. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB models for classification of fetal heart rate (FHR recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP and the Chinese restaurant process with finite capacity (CRFC. Two mixture models were inferred from real recordings, one representing healthy fetuses and the other non-healthy ones. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  19. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Carlo simulations evaluate the properties of the estimation procedures. Results show that the proposed model is viable and flexible for purposes of forecasting volatility. Model uncertainty is accounted for by employing Bayesian model averaging. Bayesian model averaging provides very competitive forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications.

  20. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    International Nuclear Information System (INIS)

    Almasi, Mohammad

    2014-01-01

    Graphical abstract: experimental and calculated partial molar volumes V̄_m,1 of MIK with (♦) 2-PrOH, (♢) 2-BuOH, (●) 2-PenOH at T = 298.15 K; (—) PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between experimental data and values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with polar solvents, namely 2-propanol, 2-butanol and 2-pentanol, were measured at 7 temperatures (293.15–323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated, owing to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.
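The excess molar volume quoted above follows from a standard relation between the measured mixture density and the pure-component densities. A small sketch with illustrative, not measured, values:

```python
# Excess molar volume of a binary mixture from densities:
#   V_E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2
# (x: mole fractions, M: molar masses, rho: densities). The numbers in
# the usage below are illustrative, not the paper's measurements.
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2
```

If the mixture density equals its ideal-solution value, the excess volume vanishes; a denser-than-ideal mixture gives the negative V_m^E discussed in the abstract.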


  2. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro.

    Directory of Open Access Journals (Sweden)

    Niels Hadrup

    Full Text Available Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA, independent action (IA and generalized concentration addition (GCA models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot

  3. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples...... with the posterior score function. From an application point of view this methology is easy to apply, since the optimal estimating function G(;Xt1 ; : : : ;Xtn ) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methology is particularly...

  4. Application of fuzzy logic to determine the odour intensity of model gas mixtures using electronic nose

    Science.gov (United States)

    Szulczyński, Bartosz; Gębicki, Jacek; Namieśnik, Jacek

    2018-01-01

    The paper presents the possibility of applying fuzzy logic to determine the odour intensity of model ternary gas mixtures (α-pinene, toluene and triethylamine) using an electronic nose prototype. The results obtained using fuzzy logic algorithms were compared with the values obtained using a multiple linear regression (MLR) model and sensory analysis. The studies showed that the electronic nose prototype, together with the fuzzy logic pattern recognition system, can be successfully used to estimate the odour intensity of the tested gas mixtures. The results obtained using fuzzy logic were correct in 68% of cases.

  5. A Mixture Model of Consumers' Intended Purchase Decisions for Genetically Modified Foods

    OpenAIRE

    Kristine M. Grimsrud; Robert P. Berrens; Ron C. Mittelhammer

    2006-01-01

    A finite probability mixture model is used to analyze the existence of multiple market segments for a pre-market good. The approach has at least two principal benefits. First, the model is capable of identifying likely market segments and their differentiating characteristics. Second, the model can be used to estimate the discount different consumer groups require to purchase the good. The model is illustrated using stated preference survey data collected on consumer responses to the potentia...

  6. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    Science.gov (United States)

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentrations in target organs vary over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, the mixtures showed no interactions, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Ion swarm data for electrical discharge modeling in air and flue gas mixtures

    International Nuclear Information System (INIS)

    Nelson, D.; Benhenni, M.; Eichwald, O.; Yousfi, M.

    2003-01-01

    The first step of this work is the determination of the elastic and inelastic ion-molecule collision cross sections for the main ions (N2+, O2+, CO2+, H2O+ and O−) usually present in either air or flue gas discharges. The cross section sets obtained, given for ion kinetic energies not exceeding 100 eV, correspond to the interactions of each ion with its parent molecule (symmetric case) or a non-parent molecule (asymmetric case). Using these cross section sets, it is then possible to obtain the ion swarm data for gas mixtures involving N2, CO2, H2O and O2 molecules in any relative proportions. These ion swarm data are obtained from an optimized Monte Carlo method well adapted to ion transport in gas mixtures. This also allows us to show clearly that the classical linear approximations usually applied to ion swarm data in mixtures, such as Blanc's law, are far from valid. The ion swarm data are then given for three gas mixtures: dry air (80% N2, 20% O2), a ternary gas mixture (82% N2, 12% CO2, 6% O2) and a typical flue gas (76% N2, 12% CO2, 6% O2, 6% H2O). From these reliable ion swarm data, electrical discharge modeling for a wire-to-plane electrode configuration has been carried out in the three mixtures at atmospheric pressure for different applied voltages. Under the same discharge conditions, large discrepancies in streamer formation and propagation have been observed among the three mixtures. They are due to deviations not only between the different effective electron-molecule ionization rates but also between the ion transport properties, mainly because of the presence of a highly polar molecule such as H2O. This emphasizes the necessity of properly considering ion transport in discharge modeling.
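Blanc's law, the linear approximation the abstract argues breaks down for these mixtures, is simple to state. A sketch with illustrative mobilities (not values from the paper):

```python
# Blanc's law: the classical linear estimate of an ion's mobility K in a
# gas mixture from its mobilities K_i in the pure component gases:
#   1 / K_mix = sum_i x_i / K_i   (x_i: mole fractions).
# The mobilities used below are illustrative only; the abstract's point
# is precisely that this approximation can fail badly.
def blanc_mobility(fractions, mobilities):
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(x / k for x, k in zip(fractions, mobilities))
```

The estimate reduces to the pure-gas mobility for a single component and lies between the component mobilities otherwise, which is exactly the linearity the Monte Carlo results call into question.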

  8. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
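The core likelihood of a censored geometric model can be sketched compactly. Assuming each person quits after any given item with probability theta, and counts are right-censored at the N items available, a simplified maximum-likelihood stand-in for the paper's hierarchical Bayesian treatment looks like:

```python
import math

# Censored geometric likelihood: a person collects item after item and
# quits with probability theta after each; a count equal to n_max means
# the items ran out (right-censoring), not a choice to quit.
def censored_geom_loglik(theta, counts, n_max):
    ll = 0.0
    for k in counts:
        if k < n_max:                      # chose to quit after item k
            ll += math.log(theta) + (k - 1) * math.log(1.0 - theta)
        else:                              # exhausted the items: censored
            ll += (n_max - 1) * math.log(1.0 - theta)
    return ll

def fit_theta(counts, n_max, grid=1000):
    # Crude grid-search MLE; the paper instead infers theta (and mixture
    # structure) with Bayesian methods.
    thetas = [(i + 0.5) / grid for i in range(grid)]
    return max(thetas, key=lambda t: censored_geom_loglik(t, counts, n_max))
```

With no censoring, the grid MLE recovers the textbook geometric estimate n / sum(k); when every count hits the ceiling, the fitted quit probability is driven toward zero, which is the "forced to quit" case the abstract describes.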

  9. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat to the safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
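    The first model, an internal energy balance for a closed vessel, implies the familiar ideal-gas pressure magnification for adiabatic isochoric complete combustion. A sketch with illustrative numbers (the temperatures and mole ratio below are assumptions for illustration, not values from the paper):

```python
def pressure_magnification(p1, t1, t2, n1, n2):
    """Ideal-gas pressure ratio for combustion in a closed rigid vessel:
    constant volume implies P2/P1 = (n2 * T2) / (n1 * T1).
    Temperatures in kelvin, amounts in moles."""
    return p1 * (n2 * t2) / (n1 * t1)

# Illustrative numbers only: a hydrogen-air mixture at 300 K burning to a
# hypothetical adiabatic temperature of 2700 K with a 0.94 product/reactant
# mole ratio:
p2 = pressure_magnification(101325.0, 300.0, 2700.0, 1.0, 0.94)
```

    Dissociation, which the abstract emphasizes, lowers the effective flame temperature and changes the mole ratio, which is why the equilibrium (element-potential) model matters near stoichiometric compositions.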

  10. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model adapted to the cause-specific hazard or the subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias, or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness and to control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
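    The pattern-mixture idea can be shown in miniature: stratify by missing-data pattern, estimate within each pattern, and combine the pattern-specific estimates weighted by the observed pattern proportions. A toy NumPy sketch (simulated data, not the trial data, and without the LMM machinery):

```python
import numpy as np

# Two dropout patterns with different outcome means (illustrative values):
rng = np.random.default_rng(0)
completers = rng.normal(5.0, 1.0, 70)   # pattern 1: stayed in the study
dropouts   = rng.normal(7.0, 1.0, 30)   # pattern 2: later dropouts (observed part)

# Pattern-mixture estimate: pattern-specific means, weighted by the
# observed pattern proportions.
n = len(completers) + len(dropouts)
w1, w2 = len(completers) / n, len(dropouts) / n
overall = w1 * completers.mean() + w2 * dropouts.mean()
```

    A complete-case analysis would report only `completers.mean()`; the gap between that and `overall` is the bias that informative missingness introduces, which is what including the pattern indicator as a fixed effect in the LMM diagnoses.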

  12. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    International Nuclear Information System (INIS)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar

    2016-01-01

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat to the safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  13. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    OpenAIRE

    Orlov Alexey; Ushakov Anton; Sovach Victor

    2016-01-01

    This article presents results of the development of a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used to calculate the parameters of a gas centrifuge cascade for separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge casca...

  14. Mathematical model of nonstationary hydraulic processes in gas centrifuge cascade for separation of multicomponent isotope mixtures

    OpenAIRE

    Orlov, Aleksey Alekseevich; Ushakov, Anton; Sovach, Victor

    2017-01-01

    The article presents results of the development of a mathematical model of nonstationary hydraulic processes in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used to calculate the parameters of a gas centrifuge cascade for separation of silicon isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model adequately describes nonstationary hydraulic processes in gas centrifuge cascades for separation...

  15. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part IV. Applications to mixtures of CO2 with alkanes

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios

    2015-01-01

    The thermodynamic properties of pure gaseous, liquid or supercritical CO2 and CO2 mixtures with hydrocarbons and other compounds such as water, alcohols, and glycols are very important in many processes in the oil and gas industry. Design of such processes requires use of accurate thermodynamic...... models, capable of predicting the complex phase behavior of multicomponent mixtures as well as their volumetric properties. In this direction, over the last several years, the cubic-plus-association (CPA) thermodynamic model has been successfully used for describing volumetric properties and phase...

  16. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than one. We extended binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
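    The decomposition the authors exploit is that overall per-individual detection probability is the product of availability (surface activity) and conditional capture probability, feeding a binomial observation model. A minimal sketch of that building block (illustrative, not the full hierarchical model):

```python
import math

def detection_prob(p_avail, p_cond):
    """Per-individual detection probability as the product of availability
    (e.g. salamander surface activity) and conditional capture probability."""
    return p_avail * p_cond

def count_pmf(y, n_site, p_avail, p_cond):
    """Binomial observation model: probability of counting y of the n_site
    animals present, given the two detectability components."""
    p = detection_prob(p_avail, p_cond)
    return math.comb(n_site, y) * p ** y * (1 - p) ** (n_site - y)
```

    Separating the two factors is what lets covariates act on the right component: time-since-rainfall on availability, cover objects on conditional capture.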

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    Science.gov (United States)

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. Soot modeling of counterflow diffusion flames of ethylene-based binary mixture fuels

    KAUST Repository

    Wang, Yu; Raj, Abhijeet Dhayal; Chung, Suk-Ho

    2015-01-01

    of ethylene and its binary mixtures with methane, ethane and propane based on the method of moments. The soot model has 36 soot nucleation reactions from 8 PAH molecules including pyrene and larger PAHs. Soot surface growth reactions were based on a modified

  19. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  20. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
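    A basic N-mixture likelihood marginalizes the latent abundance out of a Poisson-binomial hierarchy; the paper's hierarchical observation process for call-in surveys is an extension of this. A sketch of the standard single-survey version (not the authors' group-response model):

```python
import math

def nmixture_lik(y, lam, p, n_max=150):
    """Likelihood of one count y under a basic N-mixture model:
    N ~ Poisson(lam), y | N ~ Binomial(N, p), with the latent abundance N
    summed out up to a truncation point n_max."""
    total = 0.0
    for n in range(y, n_max + 1):
        # Poisson pmf in log space (lgamma avoids overflow for large n)
        log_pois = n * math.log(lam) - lam - math.lgamma(n + 1)
        binom = math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += math.exp(log_pois) * binom
    return total
```

    A quick sanity check on this construction: thinning a Poisson(lam) population with detection probability p makes the marginal count Poisson(lam * p).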

  1. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  2. Modelling and simulation of an energy transport phenomenon in a solid-fluid mixture

    International Nuclear Information System (INIS)

    Costa, M.L.M.; Sampaio, R.; Gama, R.M.S. da.

    1989-08-01

    In the present work a model for a local description of the energy transfer phenomenon in a binary (solid-fluid) saturated mixture is proposed. The heat transfer in a saturated flow (through a porous medium) between two parallel plates is simulated by using the Finite Volumes Method. (author) [pt

  3. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments

    Science.gov (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd

    2015-01-01

    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  4. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    The Marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite

  5. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Presentation: Finite Mixture Models for Sub-pixel Coastal Land Cover Classification. M. Ritchie, M. Lück-Vogel, P. Debba, V. Goodall. ISRSE-37, Tshwane, South Africa, 10 May 2017. [Slide residue: study area in False Bay, South Africa (Strand, Gordon's Bay), WorldView-2 imagery; per-class training and validation sample counts for bare/urban, herbaceous vegetation, shadow, sparse vegetation, water and woody vegetation; Maximum Likelihood Classification (MLC); Gaussian Mixture Discriminant Analysis (GMDA); t-distribution Mixture Discriminant...]

  6. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    -wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via...... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local...
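    The scale mixture of two zero-mean normals described here has a standard by-product: a local false-discovery rate for each test statistic. A sketch with illustrative parameters (the weight and variances below are assumptions, not fitted values from the CD or SZ meta-analyses):

```python
import math

def normal_pdf(z, sd):
    """Density of a zero-mean normal with standard deviation sd."""
    return math.exp(-z * z / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))

def mixture_pdf(z, pi0, sd0, sd1):
    """Scale mixture of two zero-mean normals for GWAS test statistics:
    null component with sd0 (weight pi0), non-null with larger sd1."""
    return pi0 * normal_pdf(z, sd0) + (1.0 - pi0) * normal_pdf(z, sd1)

def local_fdr(z, pi0, sd0, sd1):
    """Posterior probability that a statistic of size z came from the null."""
    return pi0 * normal_pdf(z, sd0) / mixture_pdf(z, pi0, sd0, sd1)
```

    Fitting pi0, sd0, sd1 to observed statistics (the paper does this by matching nonparametric replication estimates) then gives the non-null proportion and per-SNP replication probabilities.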

  7. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    Thermodynamic properties and phase equilibria of associating mixtures cannot often be satisfactorily modelled using conventional models such as cubic equations of state. CPA (cubic-plus-association) is an equation of state (EoS), which combines the SRK EoS with the association term of SAFT. For non......-alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... thermodynamic models especially those combining cubic EoS with local composition activity coefficient models are included. (C) 2007 Elsevier B.V. All rights reserved....

  8. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  9. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  10. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, eve...... studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved....... though pure-component liquid densities and vapour pressures are predicted equally accurately for the associating compound. As was the case in the study of pure components, there exists some confusion in the literature about the correct interpretation and comparison of experimental data and theoretical...

  11. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  12. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
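    The fitting procedure described, several regression lines fit to latent subgroups with each slope estimating an isotope ratio, can be sketched for two components with known noise. This is a simplified illustration on synthetic data: the real method also drops small groups during EM to select the number of ratios, which is omitted here.

```python
import numpy as np

def em_two_slopes(x, y, b_init=(0.5, 2.0), sigma=0.05, n_iter=50):
    """EM for a two-component mixture of regression lines through the origin,
    y = b_k * x + noise. Each slope b_k plays the role of one isotope ratio;
    the noise sd sigma is treated as known to keep the sketch short."""
    b = np.asarray(b_init, dtype=float)
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each line for each point (log-domain
        # normalization avoids underflow for points far from a line)
        resid = y[:, None] - x[:, None] * b[None, :]
        logd = np.log(w) - 0.5 * (resid / sigma) ** 2
        logd -= logd.max(axis=1, keepdims=True)
        dens = np.exp(logd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope and mixing weight per component
        b = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
        w = r.mean(axis=0)
    return b, w

# Synthetic two-ratio data (illustrative, not the certified particle data):
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 2.0, 200)                      # denominator-isotope signal
true_b = np.where(rng.random(200) < 0.5, 1.0, 3.0)  # two underlying ratios
y = true_b * x + rng.normal(0.0, 0.05, 200)         # numerator-isotope signal
slopes, weights = em_two_slopes(x, y)
```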

  13. A general mixture theory. I. Mixtures of spherical molecules

    Science.gov (United States)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the composition and the unlike interactions dependence of mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike interaction parameters. In this first part of the work, details are presented for the first and second levels of approximations for spherical molecules. The second order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter, gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Anderson perturbation theory.
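    For additive hard-sphere mixtures, the standard reference compressibility factor is the Boublik-Mansoori-Carnahan-Starling-Leland (BMCSL) equation of state, which reduces to Carnahan-Starling in the pure limit. The abstract does not state which benchmark the authors compared against; BMCSL is sketched here as the usual baseline:

```python
import math

def bmcsl_z(rho, x, d):
    """Compressibility factor of an additive hard-sphere mixture from the
    BMCSL equation of state. rho: total number density, x: mole fractions,
    d: hard-sphere diameters."""
    # Moments of the diameter distribution: xi_n = (pi/6) * rho * sum_i x_i d_i^n
    xi = [math.pi / 6.0 * rho * sum(xj * dj ** n for xj, dj in zip(x, d))
          for n in range(4)]
    one = 1.0 - xi[3]   # xi[3] is the packing fraction
    return (6.0 / (math.pi * rho)) * (xi[0] / one
                                      + 3.0 * xi[1] * xi[2] / one ** 2
                                      + (3.0 - xi[3]) * xi[2] ** 3 / one ** 3)
```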

  14. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-06-01

    Full Text Available Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  15. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-01-01

    Full Text Available Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  16. Deposition behaviour of model biofuel ash in mixtures with quartz sand. Part 1: Experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Mischa Theis; Christian Mueller; Bengt-Johan Skrifvars; Mikko Hupa; Honghi Tran [Aabo Akademi Process Chemistry Centre, Aabo (Finland). Combustion and Materials Chemistry

    2006-10-15

    Model biofuel ash of well-defined size and melting properties was fed into an entrained flow reactor (EFR) to simulate the deposition behaviour of commercially applied biofuel mixtures in large-scale boilers. The aim was to obtain consistent experimental data that can be used for validation of computational fluid dynamics (CFD)-based deposition models. The results showed that while up to 80 wt% of the feed was lost to the EFR wall, the composition of the model ash particles collected at the reactor exit did not change. When model ashes were fed into the reactor individually, the ash particles were found to be sticky when they contained more than 15 wt% molten phase. When model ashes were fed in mixtures with silica sand, it was found that only a small amount of sand particles was captured in the deposits; the majority rebounded upon impact. The presence of sand in the feed mixture reduced the deposit buildup by more than could be expected from linear interpolation between the model ash and the sand. The results suggested that sand addition to model ash may prevent deposit buildup through erosion. 22 refs., 6 figs., 3 tabs.

  17. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    Full Text Available In urban policy-making, identifying the determinants of carbon dioxide emissions in Chinese cities is a pressing issue. The common method is the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emissions. However, little work discusses estimation accuracy, especially under non-normal error distributions and heterogeneity among cities' emissions. To improve estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of asymmetric Laplace distributions (ALDs) to approximate the true distribution of the error term. Meanwhile, a purpose-built two-layer EM algorithm is used to obtain the estimators. We test robustness by comparing the results of five different models and find that the ALD mixture model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.

  18. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM; Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  19. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    International Nuclear Information System (INIS)

    Belmonte-Beitia, Juan; Perez-Garcia, Victor M.; Vekslerchik, Vadym

    2007-01-01

    In this paper, we study a system of coupled nonlinear Schrödinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons, and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions, and we examine how plane waves destabilize to form periodic waves.

  20. On Partial Defaults in Portfolio Credit Risk : A Poisson Mixture Model Approach

    OpenAIRE

    Weißbach, Rafael; von Lieres und Wilkau, Carsten

    2005-01-01

    Most credit portfolio models exclusively calculate the loss distribution for a portfolio of performing counterparts. Conservative default definitions cause considerable insecurity about the loss for a long time after the default. We present three approaches to account for defaulted counterparts in the calculation of the economic capital. Two of the approaches are based on the Poisson mixture model CreditRisk+ and derive a loss distribution for an integrated portfolio. The third method treats ...

  1. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  2. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

    A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, the original parameters of the corresponding formulas were replaced by perceptual measures. These substitutions made it possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The results showed that the PDE model provides an easily interpretable way of relating individual components to their joint odor intensity. Moreover, both the predictive performance and the feasibility of the PDE model were verified through a series of odor-intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be carried out by instruments instead of human assessors, avoiding many drawbacks (e.g., the expense of maintaining a fixed panel of odor assessors). Thus, the PDE model is expected to aid the monitoring and management of odor pollution.

  3. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    Science.gov (United States)

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  4. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    Science.gov (United States)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  5. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate that the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
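    A Poisson N-mixture likelihood of the kind fitted in this study can be sketched in a few lines: the latent site abundances N_i are marginalized out by a truncated sum, and repeated-visit counts enter through a binomial detection model. This is a generic illustration, not the authors' code; the names and the truncation point K are choices of this sketch. Refitting it to data simulated with extra-Poisson heterogeneity in N_i is how the bias discussed above would show up.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    def nmix_negloglik(params, y, K=80):
        """Negative log-likelihood of a Poisson N-mixture model.

        y: (sites, visits) count matrix; K: truncation point of the sum over
        the latent abundances N_i; params = (log lambda, logit p).
        """
        lam = np.exp(params[0])
        p = 1.0 / (1.0 + np.exp(-params[1]))
        N = np.arange(K + 1)
        log_prior = poisson.logpmf(N, lam)                  # (K+1,)
        # log P(visit counts | N) for every site and every candidate N
        lp = binom.logpmf(y[:, :, None], N, p).sum(axis=1)  # (sites, K+1)
        joint = log_prior + lp
        m = joint.max(axis=1, keepdims=True)                # log-sum-exp trick
        site_ll = m[:, 0] + np.log(np.exp(joint - m).sum(axis=1))
        return -site_ll.sum()

    # Simulate a well-specified data set and recover the parameters by MLE
    rng = np.random.default_rng(1)
    sites, visits, lam_true, p_true = 150, 5, 5.0, 0.4
    N_i = rng.poisson(lam_true, sites)                      # latent abundances
    y = rng.binomial(N_i[:, None], p_true, (sites, visits))
    fit = minimize(nmix_negloglik, x0=[np.log(3.0), 0.0], args=(y,),
                   method="Nelder-Mead")
    lam_hat = np.exp(fit.x[0])
    p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
    ```

    Under the well-specified Poisson model the estimates land near the true λ = 5 and p = 0.4; replacing the Poisson draw of N_i with an overdispersed one reproduces the heterogeneity-induced bias the abstract describes.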

  6. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  7. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    Science.gov (United States)

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
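    Modeling response times as a mixture of Gaussians, as proposed above, can be illustrated with a plain EM fit of a two-component 1-D Gaussian mixture. This is a generic sketch, not the authors' strong-consensus model; the quantile-based initialization and all names are choices of this illustration.

    ```python
    import numpy as np

    def em_gmm_1d(x, iters=200):
        """EM for a two-component 1-D Gaussian mixture.

        Returns (weights, means, stds). Means are initialized at the 25th and
        75th percentiles, a deterministic choice made for this illustration.
        """
        mu = np.quantile(x, [0.25, 0.75])
        sigma = np.full(2, x.std())
        w = np.full(2, 0.5)
        for _ in range(iters):
            # E-step: responsibility of each component for each observation
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                   / (sigma * np.sqrt(2.0 * np.pi))
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and standard deviations
            nk = r.sum(axis=0)
            w = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return w, mu, sigma

    # Response times (ms): a fast "strong-knowledge" and a slow "weak" group
    rng = np.random.default_rng(2)
    rt = np.concatenate([rng.normal(600, 80, 300), rng.normal(1100, 150, 200)])
    w, mu, sigma = em_gmm_1d(rt)
    ```

    The recovered component means separate the fast and slow responders, which is the signal the cultural mixture adaptation exploits when overt answers do not differ.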

  8. Theory of synergistic effects: Hill-type response surfaces as 'null-interaction' models for mixtures.

    Science.gov (United States)

    Schindler, Michael

    2017-08-02

    The classification of effects caused by mixtures of agents as synergistic, antagonistic or additive depends critically on the reference model of 'null interaction'. Two main approaches are currently in use, the Additive Dose (ADM) or concentration addition (CA) and the Multiplicative Survival (MSM) or independent action (IA) models. We compare several response surface models to a newly developed Hill response surface, obtained by solving a logistic partial differential equation (PDE). Assuming that a mixture of chemicals with individual Hill-type dose-response curves can be described by an n-dimensional logistic function, Hill's differential equation for pure agents is replaced by a PDE for mixtures whose solution provides Hill surfaces as 'null-interaction' models and relies neither on Bliss independence nor Loewe additivity, nor uses Chou's unified general theory. An n-dimensional logistic PDE describing the Hill-type response of n-component mixtures is solved. Appropriate boundary conditions ensure the correct asymptotic behaviour. Mathematica 11 (Wolfram, Mathematica Version 11.0, 2016) is used for the mathematics and graphics presented in this article. The Hill response surface ansatz can be applied to mixtures of compounds with arbitrary Hill parameters. Restrictions which are required when deriving analytical expressions for response surfaces from other principles are unnecessary. Many approaches based on Loewe additivity turn out to be special cases of the Hill approach, whose increased flexibility permits a better description of 'null-effect' responses. Missing sham-compliance of Bliss IA, known as Colby's model in agrochemistry, leads to incompatibility with the Hill surface ansatz. Examples of binary and ternary mixtures illustrate the differences between the approaches. For Hill slopes close to one and doses below the half-maximum effect doses, MSM (Colby, Bliss, Finney, Abbott) predicts synergistic effects where the Hill model indicates 'null
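    The Hill machinery underlying this abstract can be illustrated for the special case where it admits a closed form: two agents sharing a common Hill slope under Loewe concentration addition. This sketch is not the PDE solution described above, but it exhibits the sham compliance the abstract emphasizes (mixing an agent with itself reproduces its own curve), the property Bliss IA lacks.

    ```python
    def hill(d, ec50, n):
        """Fractional Hill dose-response, E(d) = d^n / (EC50^n + d^n)."""
        return d ** n / (ec50 ** n + d ** n)

    def loewe_ca_effect(d1, d2, ec50_1, ec50_2, n):
        """Loewe concentration-addition 'null interaction' response for two
        agents that share the Hill slope n; with u = d1/EC50_1 + d2/EC50_2
        the isobole condition gives E = u^n / (1 + u^n)."""
        u = d1 / ec50_1 + d2 / ec50_2
        return u ** n / (1.0 + u ** n)

    # Sham compliance: combining an agent with itself reproduces its own curve
    e_pure = hill(2.0, ec50=1.0, n=1.5)                # 2 dose units of agent A
    e_sham = loewe_ca_effect(1.0, 1.0, 1.0, 1.0, 1.5)  # 1 + 1 units of agent A
    ```

    Deviations of measured mixture responses from this null surface are what get classified as synergy or antagonism.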

  9. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inheritably for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multi-steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
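    The baseline that the VLM models improve on, fitting a separate predictor per forecasting horizon on training segments of the corresponding length (the "direct" multi-step strategy), can be sketched with ordinary least-squares AR models. All names here are hypothetical and this is not the authors' self-organizing mixture.

    ```python
    import numpy as np

    def fit_direct_ar(y, order=2, horizon=3):
        """Direct multi-step strategy: fit one linear AR predictor per horizon h,
        regressing y[t+h] on (y[t], ..., y[t-order+1]) plus an intercept."""
        models = []
        for h in range(1, horizon + 1):
            rows = [y[t - order + 1:t + 1][::-1]
                    for t in range(order - 1, len(y) - h)]
            X = np.column_stack([np.ones(len(rows)), np.array(rows)])
            target = y[order - 1 + h:]
            coef, *_ = np.linalg.lstsq(X, target, rcond=None)
            models.append(coef)
        return models

    def forecast_direct(models, history, order=2):
        """Predict horizons 1..H in one shot from the end of the history."""
        x = np.concatenate([[1.0], history[-order:][::-1]])
        return np.array([c @ x for c in models])

    # Noise-free AR(1) series y[t] = 0.8 * y[t-1]; the exact h-step-ahead
    # forecasts are 0.8**h times the last observation.
    y = 0.8 ** np.arange(60)
    models = fit_direct_ar(y)
    fc = forecast_direct(models, y)
    ```

    Each horizon gets its own coefficient vector, which preserves within-horizon dependencies that recursive one-step forecasting loses; the VLM approach replaces these single linear models with a self-organized mixture of AR experts.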

  10. Data Requirements and Modeling for Gas Hydrate-Related Mixtures and a Comparison of Two Association Models

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Aloupis, Georgios; Kontogeorgis, Georgios M.

    2017-01-01

    the performance of the CPA and sPC-SAFT EOS for modeling the fluid-phase equilibria of gas hydrate-related systems and will try to explore how the models can help in suggesting experimental measurements. These systems contain water, hydrocarbon (alkane or aromatic), and either methanol or monoethylene glycol...... parameter sets have been chosen for the sPC-SAFT EOS for a fair comparison. The comparisons are made for pure fluid properties, vapor liquid-equilibria, and liquid liquid equilibria of binary and ternary mixtures as well as vapor liquid liquid equilibria of quaternary mixtures. The results show, from...

  11. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades, Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, few researchers consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, the kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is in part based on the sorbent's capacity for reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  12. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    International Nuclear Information System (INIS)

    Liu, Chang; Fiol, Núria; Villaescusa, Isabel; Poch, Jordi

    2016-01-01

    In the last decades, Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, few researchers consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, the kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is in part based on the sorbent's capacity for reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  13. Separation of a multicomponent mixture by gaseous diffusion: modelization of the enrichment in a capillary - application to a pilot cascade

    International Nuclear Information System (INIS)

    Doneddu, F.

    1982-01-01

    Starting from the modelization of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for an isotropic linear mixture, to a multicomponent mixture. A generalization is given of the notion of separation yields and characteristic pressure classically used for separations of isotropic linear mixtures. We present formulas for diagonalizing the diffusion operator, modelization of a multistage, gaseous diffusion cascade and comparison with the experimental results of a drain cascade (N 2 -SF 6 -UF 6 mixture). [fr

  14. Introducing formalism in economics: The growth model of John von Neumann

    Directory of Open Access Journals (Sweden)

    Gloria-Palermo Sandye

    2010-01-01

    Full Text Available The objective is to interpret John von Neumann's growth model as a decisive step of the forthcoming formalist revolution of the 1950s in economics. This model gave rise to an impressive variety of comments about its classical or neoclassical underpinnings. We go beyond this traditional criterion and interpret rather this model as the manifestation of von Neumann's involvement in the formalist programme of mathematician David Hilbert. We discuss the impact of Kurt Gödel's discoveries on this programme. We show that the growth model reflects the pragmatic turn of the formalist programme after Gödel and proposes the extension of modern axiomatisation to economics.

  15. Mathematical Modelling in Engineering: A Proposal to Introduce Linear Algebra Concepts

    Science.gov (United States)

    Cárcamo Bahamonde, Andrea; Gómez Urgelles, Joan; Fortuny Aymemí, Josep

    2016-01-01

    The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasise the development of mathematical abilities primarily associated with modelling and interpreting, which are not exclusively calculus abilities. Considering this, an instructional design was created based on mathematical modelling and…

  16. Teacher Emotion Research: Introducing a Conceptual Model to Guide Future Research

    Science.gov (United States)

    Fried, Leanne; Mansfield, Caroline; Dobozy, Eva

    2015-01-01

    This article reports on the development of a conceptual model of teacher emotion through a review of teacher emotion research published between 2003 and 2013. By examining 82 publications regarding teacher emotion, the main aim of the review was to identify how teacher emotion was conceptualised in the literature and develop a conceptual model to…

  17. Introducing labour productivity changes into models used for economic impact analysis in tourism

    NARCIS (Netherlands)

    Klijs, Jeroen; Peerlings, Jack; Heijman, Wim

    2017-01-01

    In tourism management, traditional input-output models are often applied to calculate economic impacts, including employment impacts. These models imply that increases in output are translated into proportional increases in labour, indicating constant labour productivity. In non-linear input-

  18. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  19. Introducing the Clean-Tech Adoption Model: A California Case Study

    NARCIS (Netherlands)

    Bijlveld, P.C. (Paul); Riezebos, P. (Peter); Wierstra, E. (Erik)

    2012-01-01

    Abstract. The Clean-Tech Adoption Model (C-TAM) explains the adoption process of clean technology. Based on the Unified Theory of Acceptance and Usage of Technology (UTAUT) combined with qualitative research and empirical data gathering, the model predicts adoption based on the perceived quality,

  20. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires a stage that identifies the most appropriate number of mixture components, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and is used by researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of blank components. The RJMCMC algorithm needs to be developed according to the case observed. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling of microarray data in Indonesia. The results show that the developed RJMCMC algorithm can properly identify the number of mixture components in the Bayesian normal mixture model, even though that number is not known in advance for the Indonesian microarray data.

  2. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.
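    The N-mixture structure compared across these packages is compact enough to sketch directly: repeated counts at a site are binomial draws given a latent abundance, the latent abundance is Poisson, and the latent variable is marginalized by summation. The following Python sketch (an illustration of the model family, not code from the study, which works in R) computes that marginal log-likelihood:

    ```python
    from math import comb, exp, factorial, log

    def nmix_loglik(counts, lam, p, n_max=100):
        """Marginal log-likelihood of a basic N-mixture model.

        counts: list of per-site lists of repeated counts y_it,
        lam: Poisson mean of the latent abundance N_i,
        p: per-visit detection probability.
        The latent N_i is marginalized by summing up to n_max.
        """
        ll = 0.0
        for site in counts:
            y_max = max(site)
            site_lik = 0.0
            for n in range(y_max, n_max + 1):
                pois = exp(-lam) * lam ** n / factorial(n)  # P(N = n)
                binom = 1.0
                for y in site:                              # P(y | N = n, p)
                    binom *= comb(n, y) * p ** y * (1 - p) ** (n - y)
                site_lik += pois * binom
            ll += log(site_lik)
        return ll
    ```

    Maximizing this function over lam and p is, in essence, what a frequentist fit does; JAGS and R-INLA instead place priors on the same quantities.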

  3. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt, used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
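    For reference, the Szyszkowski-Langmuir form that both tested approaches build on, together with a Henning-style mole-fraction weighting of the solute parameters, can be sketched as follows; the constants and parameter values below are illustrative placeholders, not fits from this study:

    ```python
    import math

    SIGMA_WATER = 0.0728  # N/m, pure water near 293 K (approximate)

    def szyszkowski(conc, a, b, temp=293.15, sigma_w=SIGMA_WATER):
        """Szyszkowski-Langmuir surface tension of a single-solute solution:
        sigma = sigma_w - a * T * ln(1 + b * c)."""
        return sigma_w - a * temp * math.log(1.0 + b * conc)

    def weighted_szyszkowski(concs, a_params, b_params,
                             temp=293.15, sigma_w=SIGMA_WATER):
        """Henning-style mixture rule: mole-fraction-weight the per-solute
        a and b parameters, then apply the Szyszkowski-Langmuir form to the
        total solute concentration."""
        total = sum(concs)
        if total == 0.0:
            return sigma_w
        x = [c / total for c in concs]
        a_mix = sum(xi * ai for xi, ai in zip(x, a_params))
        b_mix = sum(xi * bi for xi, bi in zip(x, b_params))
        return sigma_w - a_mix * temp * math.log(1.0 + b_mix * total)
    ```

    The implicit salt treatment recommended above amounts to using a and b values that were fitted to data measured in the presence of salt, with no extra term in the equation itself.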

  4. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and

  5. Introducing WISDEM: An Integrated System Modeling for Wind Turbines and Plant (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Dykes, K.; Graf, P.; Scott, G.; Ning, A.; King, R.; Guo, Y.; Parsons, T.; Damiani, R.; Felker, F.; Veers, P.

    2015-01-01

    The National Wind Technology Center wind energy systems engineering initiative has developed an analysis platform to leverage its research capabilities toward integrating wind energy engineering and cost models across wind plants. This Wind-Plant Integrated System Design & Engineering Model (WISDEM) platform captures the important interactions between various subsystems to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. This work illustrates a few case studies with WISDEM that focus on the design and analysis of wind turbines and plants at different system levels.

  6. Introducing Subgrid-scale Cloud Feedbacks to Radiation for Regional Meteorological and Climate Modeling

    Science.gov (United States)

    Convection systems and associated cloudiness directly influence regional and local radiation budgets, and dynamics and thermodynamics through feedbacks. However, most subgrid-scale convective parameterizations in regional weather and climate models do not consider cumulus cloud ...

  7. Introducing an integrated climate change perspective in POPs modelling, monitoring and regulation

    International Nuclear Information System (INIS)

    Lamon, L.; Dalla Valle, M.; Critto, A.; Marcomini, A.

    2009-01-01

    This paper presents a review of the implications of climate change for the monitoring, modelling and regulation of persistent organic pollutants (POPs). Current research gaps are also identified and discussed. Long-term data sets are essential to identify relationships between climate fluctuations and changes in chemical species distribution. Reconstructing the influence of climatic changes on POPs environmental behaviour is very challenging in some local studies, and some insights can be obtained from the few available dated sediment cores or by studying POPs response to inter-annual climate fluctuations. Knowledge gaps and future projections can be studied by developing and applying various modelling tools, identifying compounds' susceptibility to climate change and its local and global effects, and orienting international policies. Long-term monitoring strategies and modelling exercises that take climate change into account should be considered when devising new regulatory plans in chemicals management. - Climate change implications for POPs are addressed here with special attention to monitoring, modelling and regulation issues.

  8. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  9. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
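    The model itself is easy to state: a two-component scale mixture of zero-mean normals, from which quantities such as the local false discovery rate follow directly. A minimal sketch (generic symbols π1, σ0, σ1, not the paper's fitted values):

    ```python
    import math

    def normal_pdf(z, sd):
        """Density of a zero-mean normal with standard deviation sd."""
        return math.exp(-0.5 * (z / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def mixture_pdf(z, pi1, sd0, sd1):
        """Scale mixture of two zero-mean normals: null component with
        spread sd0, non-null with larger spread sd1, non-null weight pi1."""
        return (1 - pi1) * normal_pdf(z, sd0) + pi1 * normal_pdf(z, sd1)

    def local_fdr(z, pi1, sd0, sd1):
        """Posterior probability that a test statistic z came from the
        null component (the local false discovery rate)."""
        return (1 - pi1) * normal_pdf(z, sd0) / mixture_pdf(z, pi1, sd0, sd1)
    ```

    With a small non-null proportion, statistics near zero are almost certainly null while large statistics get a local fdr near zero, which is the behavior the estimation procedure exploits.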

  10. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    Directory of Open Access Journals (Sweden)

    M. F. Gayol

    2017-06-01

    Full Text Available A methodology is presented for predicting the thermodynamic and transport properties of a multi-component oily mixture in which the different mixture components are grouped into a small number of pseudo components. This prediction of properties is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of Transport Phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors of less than 10% with respect to the experimental data in a temperature range in which it is possible to apply the proposed method.

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: it is consistent as the sample size increases to infinity and hence asymptotically unbiased, and the parameter estimates it yields have the smallest variance compared with other statistical methods as the sample size increases. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
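    The abstract does not spell out the estimation procedure, but maximum likelihood estimates for a two-component Gaussian mixture of this kind are typically obtained with the EM algorithm; a minimal self-contained sketch:

    ```python
    import math

    def em_two_gaussians(data, iters=200):
        """Maximum likelihood fit of a two-component Gaussian mixture via
        the EM algorithm (a generic sketch, not the paper's code)."""
        m = sum(data) / len(data)
        v = sum((x - m) ** 2 for x in data) / len(data)
        mu = [min(data), max(data)]   # spread the initial means apart
        var = [v, v]
        w = [0.5, 0.5]
        for _ in range(iters):
            # E-step: posterior responsibility of each component per point
            resp = []
            for x in data:
                dens = [w[k] / math.sqrt(2 * math.pi * var[k]) *
                        math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                        for k in range(2)]
                s = dens[0] + dens[1]
                resp.append([d / s for d in dens])
            # M-step: re-estimate weights, means, and variances
            for k in range(2):
                nk = sum(r[k] for r in resp)
                w[k] = nk / len(data)
                mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
                var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk, 1e-6)
        return w, mu, var
    ```

    Each EM iteration is guaranteed not to decrease the likelihood, which is why it is the standard workhorse for finite mixture MLE.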

  12. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    International Nuclear Information System (INIS)

    Gayol, M.F.; Pramparo, M.C.; Miró Erdmann, S.M.

    2017-01-01

    A methodology is presented for predicting the thermodynamic and transport properties of a multi-component oily mixture in which the different mixture components are grouped into a small number of pseudo components. This prediction of properties is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of Transport Phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors of less than 10% with respect to the experimental data in a temperature range in which it is possible to apply the proposed method.

  13. Modeling the flow of activated H2 + CH4 mixture by deposition of diamond nanostructures

    Directory of Open Access Journals (Sweden)

    Plotnikov Mikhail

    2017-01-01

    Full Text Available An algorithm of the direct simulation Monte Carlo method for the flow of a hydrogen-methane mixture in a cylindrical channel is developed. Heterogeneous reactions on the tungsten channel surface are included in the model, and their effects on the flow are analyzed. A one-dimensional approach based on solving the equations of equilibrium chemical kinetics is used to analyze gas-phase methane decomposition. The results obtained may be useful for the optimization of gas-dynamic sources of activated gas for diamond synthesis.

  14. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U; Benz, P; Mantharas, I [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion and, in particular, on the mechanisms of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  15. C-Vine copula mixture model for clustering of residential electrical load pattern data

    OpenAIRE

    Sun, M; Konstantelos, I; Strbac, G

    2016-01-01

    The ongoing deployment of residential smart meters in numerous jurisdictions has led to an influx of electricity consumption data. This information presents a valuable opportunity to suppliers for better understanding their customer base and designing more effective tariff structures. In the past, various clustering methods have been proposed for meaningful customer partitioning. This paper presents a novel finite mixture modeling framework based on C-vine copulas (CVMM) for carrying out cons...

  16. Modeling of columnar and equiaxed solidification of binary mixtures; Modelisation de la solidification colonnaire et equiaxe de melanges binaires

    Energy Technology Data Exchange (ETDEWEB)

    Roux, P

    2005-12-15

    This work deals with the modelling of dendritic solidification in binary mixtures. Large-scale phenomena are represented by volume averaging of the local conservation equations. This method allows the partial differential equations for the averaged fields, and the closure problems associated with the deviations, to be derived rigorously. Such problems can be solved numerically on periodic cells representative of dendritic structures in order to give a precise evaluation of macroscopic transfer coefficients (drag coefficients, exchange coefficients, diffusion-dispersion tensors...). The method had already been applied to a model of a columnar dendritic mushy zone, and it is extended here to the case of equiaxed dendritic solidification, where solid grains can move. The two-phase flow is modelled with an Eulerian-Eulerian approach, and the novelty is to account for the dispersion of solid velocity through the kinetic agitation of the particles. A coupling of the two models is proposed thanks to an original adaptation of the columnar model allowing for undercooling calculation: a solid-liquid interfacial area density is introduced and calculated. Finally, direct numerical simulations of crystal growth are proposed, with a diffuse interface method for the representation of local phenomena. (author)

  17. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    Directory of Open Access Journals (Sweden)

    Andrea Dorila Cárcamo

    2016-03-01

    Full Text Available The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which aren't limited to calculus abilities alone. With this in mind, an instructional design was elaborated based on mathematical modelling and emerging heuristic models for the construction of specific linear algebra concepts: span and spanning set. This was applied to first-year engineering students. Results suggest that this type of instructional design contributes to the construction of these mathematical concepts, can favour first-year engineering students' understanding of key linear algebra concepts, and can promote the development of higher-order skills.

  18. Introducing the Tripartite Digitization Model for Engaging with the Intangible Cultural Heritage of the City

    DEFF Research Database (Denmark)

    Rehm, Matthias; Rodil, Kasper

    2016-01-01

    In this paper we investigate the notion of intangible cultural heritage as a driver for smart city learning applications. To this end, we shortly explore the notion of intangible heritage before presenting the tripartite digitization model, which was originally developed for indigenous cultural heritage but can equally be applied to the smart city context. We then discuss parts of the model making use of a specific case study aiming at re-creating places in the city.

  19. A Path Model of School Violence Perpetration: Introducing Online Game Addiction as a New Risk Factor.

    Science.gov (United States)

    Kim, Jae Yop; Lee, Jeen Suk; Oh, Sehun

    2015-08-10

    Drawing on the cognitive information-processing model of aggression and the general aggression model, we explored why adolescents become addicted to online games and how their immersion in online games affects school violence perpetration (SVP). For this purpose, we conducted statistical analyses on 1,775 elementary and middle school students who resided in northern districts of Seoul, South Korea. The results validated the proposed structural equation model and confirmed the statistical significance of the structural paths from the variables; that is, the paths from child abuse and self-esteem to SVP were significant. The levels of self-esteem and child abuse victimization affected SVP, and this effect was mediated by online game addiction (OGA). Furthermore, a multigroup path analysis showed significant gender differences in the path coefficients of the proposed model, indicating that gender exerted differential effects on adolescents' OGA and SVP. Based on these results, prevention and intervention methods to curb violence in schools have been proposed. © The Author(s) 2015.

  20. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  1. Exploring Use of New Media in Environmental Education Contexts: Introducing Visitors' Technology Use in Zoos Model

    Science.gov (United States)

    Yocco, Victor; Danter, Elizabeth H.; Heimlich, Joseph E.; Dunckel, Betty A.; Myers, Chris

    2011-01-01

    Modern zoological gardens have invested substantial resources in technology to deliver environmental education concepts to visitors. Investment in these media reflects a currently unsubstantiated belief that visitors will both use and learn from these media alongside more traditional and less costly displays. This paper proposes a model that…

  2. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    Science.gov (United States)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assessing the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The influence of the parameter ranges on meta-model accuracy has also been investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions in the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.

  3. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search.

    Science.gov (United States)

    Crupi, Vincenzo; Nelson, Jonathan D; Meder, Björn; Cevolani, Gustavo; Tentori, Katya

    2018-06-17

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism. Copyright © 2018 Cognitive Science Society, Inc.
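    The unifying claim can be made concrete: the Sharma-Mittal family has two order parameters, and fixing them or taking limits recovers the familiar entropies. A sketch with natural logarithms (the eps-based limit handling is a simplification):

    ```python
    import math

    def sharma_mittal(p, q, r, eps=1e-9):
        """Sharma-Mittal entropy of a probability vector p with order
        parameters q and r. Rényi (r -> 1), Tsallis (r = q), and Shannon
        (q, r -> 1) entropies arise as special or limiting cases."""
        if abs(q - 1) < eps:
            shannon = -sum(pi * math.log(pi) for pi in p if pi > 0)
            if abs(r - 1) < eps:
                return shannon
            # limit q -> 1 with r fixed (the "Gaussian" entropy family)
            return (math.exp((1 - r) * shannon) - 1) / (1 - r)
        a = sum(pi ** q for pi in p if pi > 0)
        if abs(r - 1) < eps:
            return math.log(a) / (1 - q)  # Rényi entropy of order q
        return (a ** ((1 - r) / (1 - q)) - 1) / (1 - r)
    ```

    For r = q the exponent collapses and the Tsallis form (sum(p^q) - 1)/(1 - q) falls out; letting r -> 1 turns the power into a logarithm, giving Rényi.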

  4. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.

    2005-01-01

    Viscosity measurements of well-defined mixtures are useful in order to evaluate existing viscosity models. Recently, an extensive experimental study of the viscosity at pressures up to 140 MPa has been carried out for the binary systems methane + n-decane and methane toluene, between 293.15 and 3...

  5. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

    The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model such that the interspecific interactions are more general than purely competitive. We estimated model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the polycultures experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified which are the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
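    A generalized Lotka-Volterra community of the kind described can be simulated with a few lines of explicit Euler integration; the parameter values used below are illustrative, not the estimates from the Iowa experiment:

    ```python
    def glv_step(n, r, k, alpha, dt=0.01):
        """One Euler step of a generalized Lotka-Volterra community.
        n: abundances, r: intrinsic growth rates, k: carrying capacities,
        alpha[i][j]: per-capita effect of species j on species i (signs are
        unconstrained, so competition, amensalism, and parasitism are all
        expressible)."""
        out = []
        for i in range(len(n)):
            interaction = sum(alpha[i][j] * n[j]
                              for j in range(len(n)) if j != i)
            dn = r[i] * n[i] * (1 - (n[i] + interaction) / k[i])
            out.append(max(n[i] + dt * dn, 0.0))  # abundances stay >= 0
        return out

    def simulate(n0, r, k, alpha, steps=20000, dt=0.01):
        """Integrate the community forward from initial abundances n0."""
        n = list(n0)
        for _ in range(steps):
            n = glv_step(n, r, k, alpha, dt)
        return n
    ```

    With the interaction matrix fitted from monoculture and biculture biomass, running simulate over all unsown species combinations is exactly the kind of screening for productive mixtures the study describes.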

  6. Fermi problems as tasks to introduce modelling: what we know and what else we should know

    Directory of Open Access Journals (Sweden)

    Lluís Albarracín

    2017-08-01

    Full Text Available Fermi problems have been widely used in Physics teaching at the university level in the United States. Multiple recommendations for their use in other educational areas can be found in the literature, as in the case of introducing mathematical modelling, but their presence in mathematics classrooms has not yet been achieved. We present these problems and discuss their definition and the characteristics that make them particularly interesting for using mathematics in real contexts. We also review the aspects that have been investigated from the perspective of mathematics education, especially the way in which students generate mathematical models to solve them, and we point out some directions that should be addressed in future research.

  7. Teaching Trauma: A Model for Introducing Traumatic Materials in the Classroom

    Directory of Open Access Journals (Sweden)

    Jessica D. Cless

    2017-09-01

    Full Text Available University courses in disciplines such as social work, family studies, the humanities, and other areas often use classroom materials that contain traumatic content (Barlow & Becker-Blease, 2012). While many recommendations based on trauma theory exist for instructors at the university level, these are often made in the context of clinical training programs rather than at the undergraduate level across disciplines. Furthermore, no organized model exists to aid instructors in developing a trauma-informed pedagogy for teaching courses on traumatic stress, violence, and other topics that may pose a risk of secondary traumatic stress in the classroom (Kostouros, 2008). This paper seeks to bridge the gap between trauma theory and the implementation of sensitive content in higher-education classrooms, and presents a model of trauma-informed teaching that was developed in the context of an undergraduate trauma studies program. Implications and future directions for research in the area of trauma-informed university classrooms are discussed.

  8. Introducing a rainfall compound distribution model based on weather patterns sub-sampling

    Directory of Open Access Journals (Sweden)

    F. Garavaglia

    2010-06-01

    Full Text Available This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasting synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology, and allow seasonal rainfall records to be split into sub-samples that are more homogeneous in terms of meteorological genesis.

    First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears to be a reasonable hypothesis. This first part is illustrated with two daily rainfall records from the south-east of France.

    The multi-exponential weather patterns (MEWP) distribution is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to the Exponential and Generalized Pareto distributions, showing good properties in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall series spread over the southern half of France, all covering the period 1953–2005.
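
    The MEWP construction above, a frequency-weighted composition of per-weather-pattern exponential marginals, can be written in a few lines. The eight weights and scale parameters below are illustrative placeholders, not values fitted to the French dataset.

```python
import numpy as np

def mewp_cdf(x, weights, scales):
    """MEWP CDF for one season: F(x) = sum_k p_k * (1 - exp(-x / lambda_k)),
    where p_k is the relative frequency of weather pattern k and lambda_k is
    the exponential scale of its rainfall sub-sample."""
    weights = np.asarray(weights, dtype=float)
    scales = np.asarray(scales, dtype=float)
    return float(np.sum(weights * (1.0 - np.exp(-x / scales))))

weights = [0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03]  # WP frequencies, sum to 1
scales = [4.0, 6.0, 8.0, 12.0, 15.0, 20.0, 30.0, 45.0]      # mm/day per WP (illustrative)

print("P(X <= 50 mm) =", round(mewp_cdf(50.0, weights, scales), 4))
```

    The rarer, heavier-tailed patterns (large scales with small weights) dominate the upper tail, which is why this stratified mixture behaves differently from a single fitted exponential.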

  9. Introducing a model of organizational envy management among university faculty members: A mixed research approach

    Directory of Open Access Journals (Sweden)

    Maris Zarin Daneshvar

    2016-01-01

    Full Text Available The present study aimed at offering a model of organizational envy management among faculty members of the Islamic Azad Universities of East Azerbaijan Province. A mixed method was used, involving qualitative and then quantitative data, with the emphasis on quantitative analysis. The population of the study was all faculty members holding an associate degree or higher in the 2014-2015 academic year. In the qualitative stage, 20 experts were selected to design the primary model and questionnaire, and 316 faculty members were selected to fit the model. The qualitative section established that the variables influencing envy management among faculty members are a healthy organizational climate, spiritual leadership, effective communication, job satisfaction, and the professional development of professors. The quantitative findings showed significant relationships among these variables: in the analysis of the indirect effect of organizational climate on envy management, spiritual leadership acting via effective communication had a smaller effect on envy management than professional development and job satisfaction. It is concluded that university managers should provide the conditions for envy management in universities, enabling professors to play more effective roles, free of envy, in the scientific climate of the university, and thereby to achieve educational, research, and service efficiency.

  10. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
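
    A minimal sketch of the finite-mixture idea in this abstract, under simplifying assumptions: a Gaussian outcome model and simulated data stand in for the paper's EHR setting, covariates, and GLM family. The unobserved exposure class enters the likelihood through its predicted probability, and EM recovers approximately unbiased coefficients.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: the true binary
# exposure S is unobserved; a prediction model supplies p_i = Pr(S_i = 1);
# the outcome follows the Gaussian GLM  y = b0 + b1*S + e.  The observed-data
# likelihood is the two-class mixture  p_i*f(y|S=1) + (1-p_i)*f(y|S=0),
# which EM maximizes. All data below are simulated.
rng = np.random.default_rng(1)
n = 2000
s = rng.binomial(1, 0.4, n)                                     # true exposure (unobserved)
p = np.clip(0.8*s + 0.1 + rng.normal(0, 0.05, n), 0.01, 0.99)   # predicted Pr(S=1)
y = 1.0 + 2.0*s + rng.normal(0, 1.0, n)                         # truth: b0 = 1, b1 = 2

b0, b1, sd = 0.0, 1.0, 1.0
for _ in range(100):
    # E-step: posterior Pr(S_i = 1 | y_i); the common Gaussian normalizing
    # constant cancels in the ratio, so only the exponents are needed.
    f1 = np.exp(-0.5 * ((y - b0 - b1) / sd) ** 2)
    f0 = np.exp(-0.5 * ((y - b0) / sd) ** 2)
    w = p * f1 / (p * f1 + (1 - p) * f0)
    # M-step: class means and pooled scale from the soft assignments.
    m1 = np.sum(w * y) / np.sum(w)
    m0 = np.sum((1 - w) * y) / np.sum(1 - w)
    b0, b1 = m0, m1 - m0
    sd = np.sqrt(np.mean(w * (y - m1) ** 2 + (1 - w) * (y - m0) ** 2))

print(f"b0 = {b0:.2f}, b1 = {b1:.2f}")  # should land near the true 1 and 2
```

    Replacing `p` with a hard 0/1 threshold (the dichotomization alternative the paper compares against) would bias `b1` toward zero whenever the prediction model is imperfect.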

  11. Generalization of two-phase model with topology microstructure of mixture to Lagrange-Euler methodology

    International Nuclear Information System (INIS)

    Vladimir V Chudanov; Alexei A Leonov

    2005-01-01

    Full text of publication follows: One of the mathematical models (hyperbolic type) for describing the evolution of compressible two-phase mixtures was offered in [1] to deal with the following applications: interfaces between compressible materials; shock waves in multiphase mixtures; evolution of homogeneous two-phase flows; cavitation in liquids. The basic difficulty of this model was connected with the discretization of the non-conservative equation terms. As a result, the class of problems concerned with the passage of shock waves through fields with a discontinuous volume-fraction profile was not described by this model. A class of schemes that are able to converge to the correct solution of such problems was obtained in [2] through a deeper analysis of the two-phase model. The technique offered in [2] was implemented on an Eulerian grid via the Godunov scheme. In the present paper an additional analysis of the two-phase model, taking into account the microstructure of the mixture topology, is carried out in Lagrangian mass coordinates. As a result, the equations averaged over the set of all possible realizations of the two-phase mixture are obtained. The numerical solution is carried out with the PPM method [3] in two steps: first, the equations averaged over the mass variable are solved; second, the solution found in the previous step is re-mapped to a fixed Eulerian grid. This approach allows the proposed technique to be extended to the two-dimensional (three-dimensional) case, since in Lagrangian variables the Euler equation system splits into two (three) identical subsystems, each of which describes the evolution of the considered medium in a given direction. The accuracy and robustness of the described procedure are demonstrated on a sequence of numerical problems. References: (1). R. Saurel, R. Abgrall, A multiphase Godunov method for compressible multi-fluid and multiphase flows, J. Comput. Phys. 150 (1999) 425-467; (2). R. Saurel, R. Abgrall, Discrete equations for physical and

  12. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger’s model is developed using the discrete element method (DEM) in YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger’s model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates of the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.

  13. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures.

    Science.gov (United States)

    Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P; Alamdari, Houshang

    2016-05-04

    Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates of the size range of 0.297-0.595 mm (-30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
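
    The Burger's model named in these records is a Maxwell element and a Kelvin-Voigt element in series, and its frequency response can be checked independently of any DEM code. The four parameter values below are arbitrary placeholders, not the DSR-calibrated values for pitch at 150 °C.

```python
import numpy as np

def burgers_complex_modulus(omega, G_m, eta_m, G_k, eta_k):
    """Complex shear modulus of the Burger (Maxwell + Kelvin-Voigt in series)
    model. Compliances add in series:
    J*(w) = 1/G_m + 1/(i*w*eta_m) + 1/(G_k + i*w*eta_k), and G* = 1/J*."""
    J = 1.0 / G_m + 1.0 / (1j * omega * eta_m) + 1.0 / (G_k + 1j * omega * eta_k)
    return 1.0 / J

omega = 2 * np.pi * 60.0  # 60 Hz, the frequency quoted in the abstract
G = burgers_complex_modulus(omega, G_m=5e3, eta_m=2.0, G_k=1e3, eta_k=0.5)

storage, loss = G.real, G.imag                      # G' and G''
phase_deg = np.degrees(np.arctan2(loss, storage))   # phase angle delta
print(f"|G*| = {abs(G):.1f} Pa, phase angle = {phase_deg:.1f} deg")
```

    Stiffening the elastic springs (G_m, G_k) raises the storage modulus more than the loss modulus, which is the single-element analogue of the abstract's observation that adding coke raises both moduli while barely moving the phase angle.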

  14. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  15. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  16. Phase behavior of mixtures of oppositely charged nanoparticles: Heterogeneous Poisson-Boltzmann cell model applied to lysozyme and succinylated lysozyme

    NARCIS (Netherlands)

    Biesheuvel, P.M.; Lindhoud, S.; Vries, de R.J.; Stuart, M.A.C.

    2006-01-01

    We study the phase behavior of mixtures of oppositely charged nanoparticles, both theoretically and experimentally. As an experimental model system we consider mixtures of lysozyme and lysozyme that has been chemically modified in such a way that its charge is nearly equal in magnitude but opposite

  17. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    Science.gov (United States)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain these effects. The improvement allows ReCau to display emotion, supporting the development of believable agents.

  18. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported to be mixotrophic. This modelling study investigates the implications for trophic dynamics of including three contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton), and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that experienced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N mol P-1 (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different

  19. Introducing supersymmetry

    International Nuclear Information System (INIS)

    Sohnius, M.F.; Imperial Coll. of Science and Technology, London

    1986-01-01

    A systematic and self-contained introduction to supersymmetric model field theories in flat Minkowskian space and to the techniques used in deriving them is given (including superspace). A general overview of supersymmetry and supergravity is provided in the form of an introduction to the main body of the report. (orig.)

  20. Introducing supersymmetry

    International Nuclear Information System (INIS)

    Sohnius, M.F.; Imperial Coll. of Science and Technology, London

    1985-01-01

    A systematic and self-contained introduction to supersymmetric model field theories in flat Minkowskian space and to the techniques used in deriving them is given (including superspace). A general overview of supersymmetry and supergravity is provided in the form of an introduction to the main body of the report. (orig.)

  1. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more common place there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.

  2. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.; Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing works done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  3. Introducing the Evaluation Tools for HSE Management System Performance Using Balanced Score Card Model

    Directory of Open Access Journals (Sweden)

    Ali Mohammadi

    2016-12-01

    Full Text Available Background: The performance of HSE units has various dimensions, leading to different performances; thus, any industry should be capable of evaluating these systems. The aim of this study was to design a standard questionnaire for evaluating the performance of an HSE management system using the Balanced Score Card model. Methods: We first determined the criteria to be evaluated within the framework of the Balanced Score Card model, based on the objectives and strategies of the HSE management system and on existing standards, and then designed questions on each criterion. We used content validity and Cronbach's alpha to determine the validity and reliability of the questionnaire. Results: The primary questionnaire comprised 126 questions, some of which were omitted on the basis of the CVR and CVI values obtained. The average content validity values obtained for the environmental dimension were 0.75 and 0.71. Conclusion: Given the results on the reliability and validity of this questionnaire, and its standardized design, we suggest using it for the evaluation of HSE management system performance in organizations and industries operating such a system.

  4. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    ...though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled based on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats where data about... exceeding 1 is expected to lead to effects in the rat, a total dose more than 62 times higher than human exposures should lead to responses. Considering the high uncertainty of this estimate, experience on lowest‐observed‐adverse‐effect‐level (LOAEL)/NOAEL ratios and statistical power of rat studies, we... expected that combined doses 150 times higher than high end human intake estimates should give no, or only borderline effects, whereas doses 450 times higher should produce significant responses. Experiments indeed showed clear developmental toxicity of the 450‐fold dose in terms of increased nipple...

  5. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics, and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years, a phenomenon that encourages illegal logging and destroys natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
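
    A compact sketch of fitting a two-component normal mixture of the kind described above. The abstract fits it by maximum likelihood via Newton-Raphson; the EM iteration below targets the same likelihood and is used here only because it is shorter to write. The data are synthetic, not the palm oil and exchange rate series.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-regime data: 300 points from N(0, 1), 200 from N(4, 1.5).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.5, 200)])

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Initialize: anchor the component means at the data extremes.
pi, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation.
    p1 = pi * norm_pdf(x, mu[0], sd[0])
    p2 = (1 - pi) * norm_pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)
    # M-step: weighted mixing proportion, means, and standard deviations.
    pi = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=r),
                           np.average((x - mu[1]) ** 2, weights=1 - r)]))

print("mixing:", round(pi, 2), "means:", mu.round(2), "sds:", sd.round(2))
```

    Allowing the two components to carry different means and variances is what lets the mixture absorb the asymmetry and platykurtosis that a single normal cannot.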

  6. Phase equilibria for mixtures containing nonionic surfactant systems: Modeling and experiments

    International Nuclear Information System (INIS)

    Shin, Moon Sam; Kim, Hwayong

    2008-01-01

    Surfactants are important materials with numerous applications in the cosmetic, pharmaceutical, and food industries, owing to their inter- and intra-molecular associating bonds. We present a lattice fluid equation-of-state that combines the quasi-chemical nonrandom lattice fluid model with Veytsman statistics for (intra + inter) molecular association to calculate the phase behavior of mixtures containing nonionic surfactants. We also measured binary (vapor + liquid) equilibrium data for the {2-butoxyethanol (C4E1) + n-hexane} and {2-butoxyethanol (C4E1) + n-heptane} systems at temperatures ranging from (303.15 to 323.15) K. A static apparatus was used in this study. The presented equation-of-state correlated well with the measured and published data for mixtures containing nonionic surfactant systems

  7. Modeling the long-term effects of introduced herbivores on the spread of an invasive tree

    Science.gov (United States)

    Zhang, Bo; DeAngelis, Donald L.; Rayamajhi, Min B.; Botkin, Daniel B.

    2017-01-01

    Context: Melaleuca quinquenervia (Cav.) Blake (hereafter melaleuca) is an invasive tree from Australia that has spread over the freshwater ecosystems of southern Florida, displacing native vegetation and thus threatening native biodiversity. Suppression of melaleuca appears to be progressing through the introduction of insect species: the weevil, Oxyops vitiosa, and the psyllid, Boreioglycaspis melaleucae. Objective: To improve understanding of the possible effects of herbivory on the landscape dynamics of melaleuca in native southern Florida plant communities. Methods: We projected likely future changes in plant communities using the individual-based modeling platform JABOWA-II, by simulating successional processes occurring in two types of southern Florida habitat, cypress swamp and bay swamp, occupied by native species and melaleuca, under the impact of insect herbivores. Results: Computer simulations show that melaleuca invasion leads to decreases in the density and basal area of native species, but herbivory would effectively control melaleuca to low levels, resulting in a recovery of native species. When herbivory was modeled on pure melaleuca stands, it was more effective in stands with initially larger-sized melaleuca. Although the simulated herbivory did not eliminate melaleuca, it decreased its presence dramatically in all cases, supporting the long-term effectiveness of herbivory in controlling melaleuca invasion. Conclusions: The results provide three conclusions relevant to management: (1) the introduction of insect herbivory that has been applied to melaleuca appears sufficient to suppress melaleuca over the long term, (2) dominant native species may recover in about 50 years, and (3) regrowth of native species will further suppress melaleuca through competition.

  8. Introducing a Model for Suspicious Behaviors Detection in Electronic Banking by Using Decision Tree Algorithms

    Directory of Open Access Journals (Sweden)

    Rohulla Kosari Langari

    2014-02-01

    Full Text Available The transformation of the world through information technology and the development of the Internet have created competitive knowledge in the field of electronic commerce and increased the competitive potential among organizations. Under these conditions, the growing volume of commercial transactions, delivered with speed and quality, depends on dynamic electronic banking systems that use modern technology to facilitate electronic business processes. Internet banking is a fundamental pillar of e-banking, yet in cyberspace it faces various obstacles and threats. One of these challenges is the uncertainty in guaranteeing the security of financial transactions, together with the existence of suspicious and unusual behavior such as mail fraud for financial abuse. Various systems based on intelligent methods and data mining techniques have been designed for detecting fraud in user behavior and applied in industries such as insurance, medicine, and banking. The main aim of this article is to recognize unusual user behavior in e-banking systems: detecting user behavior and categorizing the emerging patterns prepares the ground for predicting unauthorized penetration and detecting suspicious behavior. Since user behavior in Internet systems is uncertain, transaction records can be useful for understanding these movements; among machine methods, the decision tree technique is a common tool for classification and prediction. In this research, we first determined the effective banking variables and the weight of each in the production of Internet behaviors, and then, by combining various behavior patterns, derived a model of inductive rules providing the ability to recognize different behaviors. Finally, four algorithms, CHAID, exhaustive CHAID, C4.5, and C5.0, were compared and evaluated for classification and detection of exist

  9. Introducing a multivariate model for predicting driving performance: the role of driving anger and personal characteristics.

    Science.gov (United States)

    Roidl, Ernst; Siebert, Felix Wilhelm; Oehl, Michael; Höger, Rainer

    2013-12-01

    Maladaptive driving is an important source of self-inflicted accidents; this driving style can include high speeds, speeding violations, and poor lateral control of the vehicle. The literature suggests that certain groups of drivers, such as novice drivers, males, highly motivated drivers, and those who frequently experience anger in traffic, tend to exhibit more maladaptive driving patterns than other drivers. Remarkably, no coherent framework is currently available to describe the relationships and distinct influences of these factors. We conducted two studies with the aim of creating a multivariate model that combines the aforementioned factors, describes their relationships, and predicts driving performance more precisely. The studies employed different techniques to elicit emotion and different tracks designed to explore the driving behaviors of participants in potentially anger-provoking situations. Study 1 induced emotions with short film clips. Study 2 confronted the participants with potentially anger-inducing traffic situations during the simulated drive. In both studies, participants who experienced high levels of anger drove faster and exhibited greater longitudinal and lateral acceleration. Furthermore, multiple linear regressions and path models revealed that highly motivated male drivers displayed the same behavior independent of their emotional state. The results indicate that anger and specific risk characteristics lead to maladaptive changes in important driving parameters and that drivers with these specific risk factors are prone to experience more anger while driving, which further worsens their driving performance. Driver training and anger management courses will profit from these findings because they help to improve the validity of assessments of anger-related driving behavior. © 2013.

  10. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.
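    The equivalence to an infinite mixture model means cluster assignments follow a Chinese restaurant process (CRP) prior: each observation joins an existing cluster with probability proportional to its size, or opens a new cluster with probability proportional to a concentration parameter. A minimal, dependency-free simulation of that prior (not the authors' network-aware model; `n` and `alpha` below are arbitrary illustration values):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to the
    cluster's size, or opens a new cluster with probability proportional
    to the concentration parameter alpha.
    """
    rng = random.Random(seed)
    sizes = []          # sizes[k] = number of items currently in cluster k
    assignments = []
    for i in range(n):
        # Unnormalized weights: existing clusters by size, a new one by alpha.
        weights = sizes + [alpha]
        r = rng.random() * (i + alpha)
        cum = 0.0
        for k, w in enumerate(weights):
            cum += w
            if r < cum:
                break
        if k == len(sizes):
            sizes.append(1)      # open a new cluster
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments

parts = crp_partition(500, alpha=2.0)
print(len(set(parts)))  # number of occupied clusters
```

With `alpha = 2.0` the number of occupied clusters grows only logarithmically in `n`, which is what lets such a model choose its own number of components rather than fixing it in advance.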

  11. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
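    The generative decision rule behind such a model can be sketched in one dimension: represent the spike-triggered and non-spike-triggered stimulus distributions as Gaussian mixtures and convert them into a spike probability with Bayes' rule. All mixture parameters below are hypothetical illustration values, simply posited rather than fitted:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    return sum(w * gauss_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

def p_spike_given_stimulus(x, prior_spike, spike_mix, no_spike_mix):
    """Bayes' rule over the two class-conditional Gaussian mixtures."""
    ps = prior_spike * mixture_pdf(x, *spike_mix)
    pn = (1 - prior_spike) * mixture_pdf(x, *no_spike_mix)
    return ps / (ps + pn)

# Hypothetical parameters: spike-triggered stimuli cluster around +1 and +3,
# non-spike-triggered stimuli around 0.
spike_mix = ([0.5, 0.5], [1.0, 3.0], [0.5, 0.5])
no_spike_mix = ([1.0], [0.0], [1.0])
print(p_spike_given_stimulus(3.0, 0.2, spike_mix, no_spike_mix))  # high: x sits on a spike-triggered mode
```

The same construction with a single Gaussian per class recovers the quadratic special case the abstract mentions; adding components is what buys the extra flexibility.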

  12. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  13. An Odor Interaction Model of Binary Odorant Mixtures by a Partial Differential Equation Method

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2014-07-01

    Full Text Available A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, original parameters of the corresponding formulas were reasonably replaced by perceptual measures. By these substitutions, it was possible to relate a mixture’s odor intensity to the individual odorant’s relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The obtained results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. In addition, both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, and many disadvantages (e.g., the expense of maintaining a fixed panel of odor assessors) will be avoided. Thus, the PDE model is expected to be helpful in the monitoring and management of odor pollution.

  14. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    Science.gov (United States)

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing joint estimation of detection probability, recruitment, and survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than their unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.
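    The N-mixture half of such an integrated model rests on marginalizing a latent abundance out of repeated counts: N ~ Poisson(λ) at a site, and each count is Binomial(N, p) with detection probability p. A single-site sketch with a crude grid search standing in for the authors' joint estimation (the counts are made up, and this is not their BUGS/R implementation):

```python
import math

def nmix_loglik(counts, lam, p, n_max=150):
    """Log-likelihood of repeated counts at one site under an N-mixture model:
    latent abundance N ~ Poisson(lam), each count ~ Binomial(N, detection p)."""
    total = 0.0
    for n in range(max(counts), n_max):
        # Poisson pmf computed in log space to avoid overflow for large n
        log_pois = n * math.log(lam) - lam - math.lgamma(n + 1)
        binom = 1.0
        for c in counts:
            binom *= math.comb(n, c) * p ** c * (1 - p) ** (n - c)
        total += math.exp(log_pois) * binom
    return math.log(total)

# Crude grid-search MLE for (lam, p); illustration only.
counts = [12, 15, 11, 14]
best = max(
    ((lam, p) for lam in range(5, 40) for p in [i / 20 for i in range(1, 20)]),
    key=lambda th: nmix_loglik(counts, th[0], th[1]),
)
print(best)  # estimated (abundance rate, detection probability)
```

Note the characteristic N-mixture ambiguity: many (λ, p) pairs with similar λ·p fit almost equally well, which is exactly the weakness that adding known-fate data on marked individuals helps resolve.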

  15. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    International Nuclear Information System (INIS)

    Ellefsen, Karl J.; Smith, David B.

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples. - Highlights: • We evaluate a clustering procedure by applying it to geochemical data. • The procedure generates a hierarchy of clusters. • Different levels of the hierarchy show geochemical processes at different spatial scales. • The clustering method is Bayesian finite mixture modeling. • Model parameters are estimated with Hamiltonian Monte Carlo sampling.
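    The hierarchy-building step can be mimicked without the Bayesian machinery: split the samples into two clusters, then split each cluster again. The sketch below substitutes a plain 1-D 2-means split for the paper's HMC-fitted two-component mixture, on made-up concentration values:

```python
def two_means(values, iters=20):
    """Split a list of numbers into two clusters with 1-D 2-means."""
    c = [min(values), max(values)]        # initial centers at the extremes
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # boolean index: False -> cluster 0, True -> cluster 1
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[k] for k, g in enumerate(groups)]
    return groups

def hierarchy(values, depth):
    """Recursively split, mimicking the survey's hierarchy of clusters."""
    if depth == 0 or len(values) < 2:
        return values
    left, right = two_means(values)
    return [hierarchy(left, depth - 1), hierarchy(right, depth - 1)]

# hypothetical element concentrations with three natural groupings
data = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8, 9.7, 10.2, 10.0]
tree = hierarchy(data, depth=2)
print(tree)
```

In the paper each split is instead a two-component Bayesian mixture checked against independent geologic knowledge; the recursion structure, and the idea that deeper levels resolve finer spatial scales, is the same.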

  16. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    OpenAIRE

    Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P.; Alamdari, Houshang

    2016-01-01

    Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger’s model is developed using the discrete element method (DEM) in YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained is then use...

  17. Whole-Volume Clustering of Time Series Data from Zebrafish Brain Calcium Images via Mixture Modeling.

    Science.gov (United States)

    Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L

    2018-02-01

    Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for clustering data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.

  18. Probabilistic Modeling of Landfill Subsidence Introduced by Buried Structure Collapse - 13229

    International Nuclear Information System (INIS)

    Foye, Kevin; Soong, Te-Yang

    2013-01-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass and buried structure placement. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties, especially discontinuous inclusions, which control differential settlement. An alternative is to use a probabilistic model to capture the non-uniform collapse of cover soils and buried structures and the subsequent effect of that collapse on the final cover system. Both techniques are applied to the problem of two side-by-side waste trenches with collapsible voids. The results show how this analytical technique can be used to connect a metric of final cover performance (inundation area) to the susceptibility of the sub-grade to collapse and the effective thickness of the cover soils. This approach allows designers to specify cover thickness, reinforcement, and slope to meet the demands imposed by the settlement of the underlying waste trenches. (authors)

  19. A framework to prevent and control tobacco among adolescents and children: introducing the IMPACT model.

    Science.gov (United States)

    Arora, Monika; Mathur, Manu Raj; Singh, Neha

    2013-03-01

    The objective of this paper is to provide a comprehensive, evidence-based model aimed at addressing the multi-level risk factors influencing tobacco use among children and adolescents with multi-level policy and programmatic approaches in India. Evidence on the effectiveness of policy and program interventions from developed and developing countries was reviewed using the Pubmed, Scopus, Google Scholar and Ovid databases. This evidence was then categorized under three broad approaches: policy-level approaches (increased taxation on tobacco products, smoke-free laws in public places and workplaces, effective health warnings, prohibiting tobacco advertising, promotions and sponsorships, and restricting access to minors); community-level approaches (school health programs, mass media campaigns, community-based interventions, promoting tobacco-free norms); and individual-level approaches (promoting cessation in various settings). This review of the literature around determinants and interventions was organized into the IMPACT framework. The paper further presents a comparative analysis of tobacco control interventions in India vis-à-vis the proposed approaches. Mixed results were found for prevention and control efforts targeting youth. However, this article suggests a number of intervention strategies that have been shown to be effective. Implementing these interventions in a coordinated way will provide potential synergies across interventions. Pediatricians have a prominent role in advocating and implementing the IMPACT framework in countries aiming to prevent and control tobacco use among adolescents and children.

  20. Introducing a Virtual Lesion Model of Dysphagia Resulting from Pharyngeal Sensory Impairment

    Directory of Open Access Journals (Sweden)

    Paul Muhle

    2018-01-01

    Full Text Available Background/Aims: Performing neurophysiological and functional imaging studies in severely affected patients to investigate novel neurostimulation techniques for the treatment of neurogenic dysphagia is difficult. Therefore, basic research needs to be conducted in healthy subjects. Swallowing is a motor function highly dependent on sensory afferent input. Here we propose a virtual peripheral sensory lesion model to mimic pharyngeal sensory impairment, which is known as a major contributor to dysphagia in neurological disease. Methods: In this randomized crossover study of 11 healthy volunteers, cortical activation during pneumatic pharyngeal stimulation was measured applying magnetoencephalography in two separate sessions, with and without pharyngeal surface anesthesia. Results: Stimulation evoked bilateral event-related desynchronization (ERD) mainly in the caudolateral pericentral cortex. In comparison to the no-anesthesia condition, topical anesthesia led to a reduction of ERD in the beta (13-30 Hz) and low gamma (30-60 Hz) frequency ranges (p<0.05) in sensory but also motor cortical areas. Conclusions: Withdrawal of sensory afferent information by topical anesthesia leads to a reduced response to pneumatic pharyngeal stimulation in a distributed cortical sensorimotor network in healthy subjects. The proposed paradigm may serve to investigate the effect of neuromodulatory treatments specifically on pharyngeal sensory impairment as a relevant cause of neurogenic dysphagia.

  1. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    International Nuclear Information System (INIS)

    Finne, E.F.; Cooper, G.A.; Koop, B.F.; Hylland, K.; Tollefsen, K.E.

    2007-01-01

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as well as

  2. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    Energy Technology Data Exchange (ETDEWEB)

    Finne, E.F. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway) and University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway)]. E-mail: eivind.finne@niva.no; Cooper, G.A. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Koop, B.F. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Hylland, K. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway); University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway); Tollefsen, K.E. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway)

    2007-03-10

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as

  3. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment

  4. Introducing Toxics

    Directory of Open Access Journals (Sweden)

    David C. Bellinger

    2013-04-01

    Full Text Available With this inaugural issue, Toxics begins its life as a peer-reviewed, open access journal focusing on all aspects of toxic chemicals. We are interested in publishing papers that present a wide range of perspectives on toxicants and naturally occurring toxins, including exposure, biomarkers, kinetics, biological effects, fate and transport, treatment, and remediation. Toxics differs from many other journals in the absence of a page or word limit on contributions, permitting authors to present their work in as much detail as they wish. Toxics will publish original research papers, conventional reviews, meta-analyses, short communications, theoretical papers, case reports, commentaries and policy perspectives, and book reviews (book reviews will be solicited and should not be submitted without invitation). Toxins and toxicants concern individuals from a wide range of disciplines, and Toxics is interested in receiving papers that represent the full range of approaches applied to their study, including in vitro studies, studies that use experimental animal or non-animal models, studies of humans or other biological populations, and mathematical modeling. We are excited to get underway and look forward to working with authors in the scientific and medical communities and providing them with a novel venue for sharing their work. [...

  5. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    Science.gov (United States)

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions for an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation.
The described approach employs the recent progress in the field of
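    The training criterion of a mixture density subnet is the negative log-likelihood of the observed value under a Gaussian mixture whose parameters are read off the network's raw outputs: a softmax turns logits into mixing weights, and an exponential keeps the scales positive. A NumPy sketch of that loss for a scalar target (the shapes and example values are hypothetical, not the paper's architecture):

```python
import numpy as np

def mdn_nll(raw, target):
    """Negative log-likelihood of `target` under a Gaussian mixture whose
    parameters are produced from a network's raw output vector.

    raw: shape (3*K,) -- K weight logits, K means, K log-sigmas.
    """
    k = raw.shape[0] // 3
    logits, mu, log_sigma = raw[:k], raw[k:2 * k], raw[2 * k:]
    # log-softmax of the weight logits (numerically stable)
    log_w = logits - np.log(np.sum(np.exp(logits - logits.max()))) - logits.max()
    sigma = np.exp(log_sigma)
    # per-component Gaussian log-density of the target
    log_comp = (-0.5 * ((target - mu) / sigma) ** 2
                - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    # log-sum-exp over components
    m = np.max(log_w + log_comp)
    return -(m + np.log(np.sum(np.exp(log_w + log_comp - m))))

raw = np.array([0.0, 0.0, -1.0, 1.0, 0.0, 0.0])  # K=2: equal weights, means -1 and 1
print(mdn_nll(raw, target=-1.0), mdn_nll(raw, target=5.0))
```

Averaging this quantity over observed sequences gives exactly the mean log-likelihood metric the abstract uses to score a subject's performance against the reference motions.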

  6. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    Science.gov (United States)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. The reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, the predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared, against collected literature data, regarding their performance in calculating phase equilibria of CO2-impurity binary mixtures. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.

  7. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, in order to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs, and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Hypertension Society testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower, with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates.
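    The NPB half of the hybrid is straightforward to sketch: resample the small set of characteristic-ratio readings with replacement and take percentiles of the recomputed statistic. The readings below are invented, and the plain mean stands in for the paper's GMM-based ratio estimate:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic of a small sample."""
    rng = random.Random(seed)
    # resample with replacement, recompute the statistic, sort the replicates
    reps = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# hypothetical characteristic-ratio readings from a few subjects
ratios = [0.52, 0.58, 0.49, 0.61, 0.55, 0.57, 0.50]
lo, hi = bootstrap_ci(ratios)
print(lo, hi)
```

Because only the observed sample is resampled, no large-sample distributional assumption is needed, which is the point of using NPB with the small per-subject CR sets described in the abstract.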

  8. Use of finite mixture distribution models in the analysis of wind energy in the Canarian Archipelago

    International Nuclear Information System (INIS)

    Carta, Jose Antonio; Ramirez, Penelope

    2007-01-01

    The statistical characteristics of hourly mean wind speed data recorded at 16 weather stations located in the Canarian Archipelago are analyzed in this paper. As a result of this analysis we see that the typical two-parameter Weibull wind speed distribution (W-pdf) does not accurately represent all wind regimes observed in that region. However, a Singly Truncated from below Normal Weibull mixture distribution (TNW-pdf) and a two-component Weibull mixture distribution (WW-pdf) developed here do provide very good fits for both unimodal and bimodal wind speed frequency distributions observed in that region and yield smaller relative errors in determining the annual mean wind power density. The parameters of the distributions are estimated using the least squares method, which is resolved in this paper using the Levenberg-Marquardt algorithm. The suitability of the distributions is judged from the probability plot correlation coefficient R², adjusted for degrees of freedom. Based on the results obtained, we conclude that the two mixture distributions proposed here provide very flexible models for wind speed studies and can be applied in a widespread manner to represent the wind regimes in the Canarian archipelago and in other regions with similar characteristics. The TNW-pdf takes into account the frequency of null winds, whereas the WW-pdf and W-pdf do not. It can, therefore, better represent wind regimes with high percentages of null wind speeds. However, calculation of the TNW-pdf is markedly slower.
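    The WW-pdf is simply a convex combination of two Weibull densities. The sketch below simulates a bimodal wind-speed sample and recovers the mixing weight by least squares against a histogram; a 1-D grid search stands in for the paper's Levenberg-Marquardt fit of all five parameters, and every numeric value is an illustration, not Canarian data:

```python
import math
import random

def weibull_pdf(v, k, lam):
    return (k / lam) * (v / lam) ** (k - 1) * math.exp(-((v / lam) ** k))

def ww_pdf(v, w, k1, lam1, k2, lam2):
    """Two-component Weibull mixture (the WW-pdf of the paper)."""
    return w * weibull_pdf(v, k1, lam1) + (1 - w) * weibull_pdf(v, k2, lam2)

# Simulate a bimodal wind-speed sample from a known mixture
# (random.weibullvariate takes scale first, then shape).
rng = random.Random(7)
sample = [rng.weibullvariate(4.0, 2.0) if rng.random() < 0.7
          else rng.weibullvariate(11.0, 3.5) for _ in range(20000)]

# Empirical density on 0.5 m/s bins over [0, 19.5).
counts = [0] * 39
for v in sample:
    j = int(v / 0.5)
    if 0 <= j < len(counts):
        counts[j] += 1
density = [c / (len(sample) * 0.5) for c in counts]
centers = [0.25 + 0.5 * i for i in range(len(counts))]

def sse(w):
    # squared error between histogram and mixture with the true shapes/scales
    return sum((density[i] - ww_pdf(centers[i], w, 2.0, 4.0, 3.5, 11.0)) ** 2
               for i in range(len(centers)))

w_hat = min((i / 100 for i in range(101)), key=sse)
print(w_hat)  # close to the true mixing weight 0.7
```

The full least-squares problem the paper solves optimizes all parameters (w, k1, λ1, k2, λ2) simultaneously, which is why a damped Gauss-Newton scheme such as Levenberg-Marquardt is used instead of a grid.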

  9. Examining the cost efficiency of Chinese hydroelectric companies using a finite mixture model

    International Nuclear Information System (INIS)

    Barros, Carlos Pestana; Chen, Zhongfei; Managi, Shunsuke; Antunes, Olinda Sequeira

    2013-01-01

    This paper evaluates the operational activities of Chinese hydroelectric power companies over the period 2000–2010 using a finite mixture model that controls for unobserved heterogeneity. In so doing, a stochastic frontier latent class model, which allows for the existence of different technologies, is adopted to estimate cost frontiers. This procedure not only enables us to identify different groups among the hydro-power companies analysed, but also permits the analysis of their cost efficiency. The main result is that three groups are identified in the sample, each equipped with different technologies, suggesting that distinct business strategies need to be adapted to the characteristics of China's hydro-power companies. Some managerial implications are developed. - Highlights: ► This paper evaluates the operational activities of Chinese hydroelectric power companies. ► The study uses data from 2000 to 2010 and a finite mixture model. ► The procedure identifies different groups among the Chinese hydro-power companies analysed. ► Three groups are identified in the sample, each equipped with completely different “technologies”. ► This suggests that distinct business strategies need to be adapted to the characteristics of the hydro-power companies

  10. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences (outliers) in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), obtains good estimates quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which achieves a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as on a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
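The inlier/outlier decision at the heart of such mixture formulations can be sketched as a single E-step with a Gaussian residual model for inliers and a uniform model for outliers. This is a generic simplification, not the paper's RKHS formulation, and all numbers below are hypothetical:

```python
import math

def inlier_responsibility(sq_residuals, gamma, sigma2, area):
    """E-step of a Gaussian-inlier / uniform-outlier mixture: posterior
    probability that each putative correspondence is an inlier."""
    out = []
    for r2 in sq_residuals:
        p_in = gamma * math.exp(-r2 / (2.0 * sigma2)) / (2.0 * math.pi * sigma2)
        p_out = (1.0 - gamma) / area          # uniform over the image area
        out.append(p_in / (p_in + p_out))
    return out

# Squared residuals (px^2) of invented putative matches: three consistent
# correspondences and one gross outlier.
r2 = [1.0, 2.5, 0.5, 900.0]
resp = inlier_responsibility(r2, gamma=0.9, sigma2=4.0, area=640.0 * 480.0)
print([round(p, 3) for p in resp])
```

In a full EM loop, `sigma2` (and the transformation) would be re-estimated in the M-step from these responsibilities, which is how the algorithm anneals from a large initial variance toward a tight inlier consensus.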

  11. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    Science.gov (United States)

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists now recognize the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model that associates habitat characteristics with the abundance of riverine salmonids while simultaneously estimating detection probability. Our formulation has the added benefits of accounting for demographic variation and generating probabilistic statements regarding intensity of habitat use. In addition to these conceptual benefits, applying the model to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
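A minimal version of the binomial N-mixture likelihood, with the latent abundance summed out, might look like this. The counts are invented, and the paper's covariate and surveyor effects are omitted:

```python
import math

def n_mixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts at one site under a binomial
    N-mixture model: N ~ Poisson(lam), y_t | N ~ Binomial(N, p).
    The latent abundance N is marginalized by summing up to n_max."""
    log_terms = []
    for n in range(max(counts), n_max + 1):
        lp = -lam + n * math.log(lam) - math.lgamma(n + 1)   # log Poisson(n)
        for y in counts:
            lp += (math.log(math.comb(n, y))
                   + y * math.log(p) + (n - y) * math.log(1.0 - p))
        log_terms.append(lp)
    m = max(log_terms)                                       # log-sum-exp
    return m + math.log(sum(math.exp(v - m) for v in log_terms))

# Invented repeat snorkel counts at one habitat unit.
counts = [12, 9, 14]
ll_good = n_mixture_loglik(counts, lam=20.0, p=0.6)
ll_bad = n_mixture_loglik(counts, lam=5.0, p=0.6)
print(round(ll_good, 2), round(ll_bad, 2))
```

Maximizing this likelihood over `lam` and `p` jointly is what lets the model separate true abundance from imperfect detection, provided sites are visited repeatedly.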

  12. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the speckle noise inherent in SAR images. To address this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are placed randomly on the image and the image domain is divided into sub-regions by the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which all pixels share the same cluster label. Then, the intensity of each pixel is assumed to follow a Gamma mixture model whose parameters correspond to the cluster to which the pixel belongs. The negative logarithm of the probability serves as the dissimilarity measure between a pixel and a cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation is obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results for simulated and real SAR images.
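The Voronoi step can be sketched as a nearest-seed labelling of the pixel grid; the seed positions below are arbitrary, and the Gamma-mixture dissimilarity that drives the actual clustering is omitted:

```python
def voronoi_labels(height, width, seeds):
    """Discrete Voronoi tessellation: each pixel takes the index of its
    nearest generating point; each cell then shares one cluster label."""
    labels = []
    for i in range(height):
        row = []
        for j in range(width):
            d2 = [(i - si) ** 2 + (j - sj) ** 2 for si, sj in seeds]
            row.append(d2.index(min(d2)))
        labels.append(row)
    return labels

seeds = [(2, 2), (2, 12), (10, 7)]   # arbitrary generating points
labels = voronoi_labels(12, 16, seeds)
print(labels[2][2], labels[2][12], labels[10][7])   # each seed labels itself
```

Working on whole cells rather than single pixels is what gives the method its robustness to speckle: one noisy pixel cannot change the label of its sub-region.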

  13. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the

  14. Flame acceleration of hydrogen - air - diluent mixtures at middle scale using ENACCEF: experiments and modelling

    International Nuclear Information System (INIS)

    Fabrice Malet; Nathalie Lamoureux; Nabiha Djebaili-Chaumeix; Claude-Etienne Paillard; Pierre Pailhories; Jean-Pierre L'heriteau; Bernard Chaumont; Ahmed Bentaib

    2005-01-01

    Full text of publication follows: In the case of a hypothetical severe accident in a light water nuclear reactor, hydrogen would be produced during core degradation and released into the reactor building, where it could raise a combustion hazard. Local ignition of the combustible mixture would initially produce a slow flame, which can be accelerated by turbulence. Depending on the geometry and the composition of the premixed combustible mixture, the flame can accelerate and, under some conditions, transition to detonation or be quenched after a certain distance. Flame acceleration is responsible for the generation of high pressure loads that could damage the reactor building; the geometrical configuration is a major factor leading to flame acceleration. Experimental data, notably from mid-size installations, are therefore required for the validation of numerical simulations before realistic scales are modelled. The ENACCEF vertical facility is a 6 m high acceleration tube representing a steam generator room leading to the containment dome. This setup can be equipped with obstacles of different blockage ratios and shapes in order to accelerate the flame; depending on the geometrical characteristics of these obstacles, different flame propagation regimes can be achieved. The influence of mixture composition on flame velocity and acceleration has been investigated, as has the influence of dilution, using a steam-like diluent (40% He - 60% CO2). The flame front has also been recorded with ultra-fast shadowgraphy, both in the tube and at the entrance to the dome. The flame propagation is computed using the TONUS code. Based on an Euler-equation solver using structured finite volumes, it includes the CREBCOM flame model and simulates hydrogen/air turbulent flame propagation, taking into account complex 3D geometry and reactant concentration gradients.

  15. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang

    2014-02-01

    Full Text Available X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high-resolution continuous cross-section images of its meso-structure. Based on the theory of three-dimensional (3D) reconstruction, a 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of voxel positions and of the relationship between pixel elements and nodes. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distribution in the asphalt mixture and verify the validity of the 3D model.

  16. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available Urban arterials in China carry a mixed traffic flow owing to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate, from a flow perspective, the platoon dispersion process along the road section between two adjacent intersections. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution of the mixed platoon, and the expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison using virtual flow data was performed between the Robertson model and the MPFDM, and the results confirmed the validity of the proposed model.
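A speed-truncated Gaussian mixture density of the kind adopted here can be sketched as follows, with invented bus/car components and a renormalized truncation to a plausible speed range:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trunc_norm_pdf(v, mu, sigma, lo, hi):
    """Gaussian density truncated to [lo, hi] and renormalized."""
    if not lo <= v <= hi:
        return 0.0
    z = norm_cdf((hi - mu) / sigma) - norm_cdf((lo - mu) / sigma)
    return norm_pdf((v - mu) / sigma) / (sigma * z)

def platoon_speed_pdf(v, comps, lo=0.0, hi=80.0):
    """Weighted sum of truncated Gaussians, e.g. a bus mode and a car mode."""
    return sum(w * trunc_norm_pdf(v, mu, s, lo, hi) for w, mu, s in comps)

comps = [(0.3, 25.0, 5.0), (0.7, 45.0, 8.0)]   # (weight, mean, sd) in km/h
dv = 0.01
mass = sum(platoon_speed_pdf(i * dv, comps) * dv for i in range(8001))
print(f"pdf integrates to ~ {mass:.3f}")
```

Truncation matters because vehicle speeds cannot be negative; the renormalization keeps the mixture a proper density, which the EM fitting step relies on.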

  17. Multiphase flow modeling of molten material-vapor-liquid mixtures in thermal nonequilibrium

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Park, Goon Cherl; Bang, Kwang Hyun

    2000-01-01

    This paper presents a numerical model of multiphase flow of molten material-liquid-vapor mixtures, particularly in thermal nonequilibrium. It is a two-dimensional, transient, three-fluid model in Eulerian coordinates. The equations are solved numerically using a finite difference method that implicitly couples the rates of phase change, momentum, and energy exchange to determine the pressure, density, and velocity fields. To examine the model's ability to reproduce experimental data, calculations have been performed for tests of pouring hot particles and molten material into a water pool. The predictions show good agreement with the experimental data. It appears, however, that the interfacial heat transfer and the breakup of molten material need improved models applicable to such high-temperature, high-pressure, multiphase flow conditions

  18. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectories, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences in math IRT scores. When holding socio-economic…

  19. Irruptive dynamics of introduced caribou on Adak Island, Alaska: an evaluation of Riney-Caughley model predictions

    Science.gov (United States)

    Ricca, Mark A.; Van Vuren, Dirk H.; Weckerly, Floyd W.; Williams, Jeffrey C.; Miles, A. Keith

    2014-01-01

    Large mammalian herbivores introduced to islands without predators are predicted to undergo irruptive population and spatial dynamics, but only a few well-documented case studies support this paradigm. We used the Riney-Caughley model as a framework to test predictions of irruptive population growth and spatial expansion of caribou (Rangifer tarandus granti) introduced to Adak Island in the Aleutian archipelago of Alaska in 1958 and 1959. We utilized a time series of spatially explicit counts conducted on this population intermittently over a 54-year period. Population size increased from 23 released animals to approximately 2900 animals in 2012. Population dynamics were characterized by two distinct periods of irruptive growth separated by a long time period of relative stability, and the catalyst for the initial irruption was more likely related to annual variation in hunting pressure than weather conditions. An unexpected pattern resembling logistic population growth occurred between the peak of the second irruption in 2005 and the next survey conducted seven years later in 2012. Model simulations indicated that an increase in reported harvest alone could not explain the deceleration in population growth, yet high levels of unreported harvest combined with increasing density-dependent feedbacks on fecundity and survival were the most plausible explanation for the observed population trend. No studies of introduced island Rangifer have measured a time series of spatial use to the extent described in this study. Spatial use patterns during the post-calving season strongly supported Riney-Caughley model predictions, whereby high-density core areas expanded outwardly as population size increased. During the calving season, caribou displayed marked site fidelity across the full range of population densities despite availability of other suitable habitats for calving. 
Finally, dispersal and reproduction on neighboring Kagalaska Island represented a new dispersal front

  20. The potential energy landscape in the Lennard-Jones binary mixture model

    International Nuclear Information System (INIS)

    Sampoli, M; Benassi, P; Eramo, R; Angelani, L; Ruocco, G

    2003-01-01

    The potential energy landscape in the Kob-Andersen Lennard-Jones binary mixture model has been studied carefully from the liquid down to the supercooled regime, from T = 2 down to T = 0.46. One thousand independent configurations along the time-evolution locus have been examined at each temperature investigated. From each starting configuration, we searched for the nearest saddle (or quasi-saddle) and minimum of the potential energy. The vibrational densities of states for the starting and the two derived configurations have been evaluated. Besides the number of negative eigenvalues of the saddles, other quantities show some signature of the approach of the dynamical arrest temperature

  1. Mixtures of beta distributions in models of the duration of a project affected by risk

    Science.gov (United States)

    Gładysz, Barbara; Kuchta, Dorota

    2017-07-01

    This article presents a method for timetabling a project affected by risk. The times required to carry out tasks are modelled using mixtures of beta distributions. The parameters of these beta distributions are given by experts: one corresponding to the duration of a task in stable conditions, with no risks materializing, and the other corresponding to the duration of a task in the case when risks do occur. Finally, a case study will be presented and analysed: the project of constructing a shopping mall in Poland.
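The two-regime beta-mixture duration model can be sketched with a small Monte Carlo. The expert parameters below are invented, and the PERT-style (min, max) rescaling of the beta is our assumption rather than the article's exact parameterization:

```python
import random

def sample_duration(p_risk, stable, risky, rng):
    """Task duration from a two-component beta mixture: with probability
    p_risk the risk materializes and the 'risky' beta applies."""
    a, b, lo, hi = risky if rng.random() < p_risk else stable
    return lo + (hi - lo) * rng.betavariate(a, b)

rng = random.Random(7)
# Hypothetical expert inputs (alpha, beta, min days, max days) per regime.
stable = (2.0, 3.0, 10.0, 16.0)   # no risks materialize
risky = (3.0, 2.0, 14.0, 30.0)    # risks occur
sims = [sample_duration(0.25, stable, risky, rng) for _ in range(20000)]
mean = sum(sims) / len(sims)
p_late = sum(d > 20 for d in sims) / len(sims)  # chance of exceeding 20 days
print(f"expected duration ~ {mean:.1f} days, P(late) ~ {p_late:.2f}")
```

Summing such mixtures along the critical path yields a project-level duration distribution that reflects both the optimistic and the risk-affected scenarios.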

  2. Introducing Geoscience Students to Numerical Modeling of Volcanic Hazards: The example of Tephra2 on VHub.org

    Directory of Open Access Journals (Sweden)

    Leah M. Courtland

    2012-07-01

    Full Text Available The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.

  3. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    Science.gov (United States)

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  4. Permeability of EVOH Barrier Material Used in Automotive Applications: Metrology Development for Model Fuel Mixtures

    Directory of Open Access Journals (Sweden)

    Zhao Jing

    2015-02-01

    Full Text Available EVOH (ethylene-vinyl alcohol) materials are widely used in automotive applications in multi-layer fuel lines and tanks owing to their excellent barrier properties to aromatic and aliphatic hydrocarbons. These barrier materials are essential to limit environmental fuel emissions and comply with the challenging requirements of fast-changing international regulations. Nevertheless, the measurement of EVOH permeability to model fuel mixtures or to their individual components is particularly difficult due to the complexity of these systems and their very low permeability, which can vary by several orders of magnitude depending on the permeating species and their relative concentrations. This paper describes the development of a new automated permeameter capable of taking up the challenge of measuring minute quantities, as low as 1 mg/(m²·day) for partial fluxes, for model fuel mixtures containing ethanol, i-octane and toluene at 50°C. The permeability results are discussed as a function of the model fuel composition, and the importance of EVOH preconditioning is emphasized for accurate permeability measurements. The last part focuses on the influence of EVOH conditioning on its mechanical properties and its microstructure, and further illustrates the specific behavior of EVOH in the presence of ethanol-oxygenated fuels. The new metrology developed in this work offers new insight into the permeability properties of a leading barrier material and will help prevent the consequences of (bio)ethanol addition in fuels on environmental emissions through fuel lines and tanks.

  5. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs.
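The idea of a regression coefficient that is a smooth function of time can be sketched with a polynomial basis standing in for the penalized splines used in TVEM; the data and coefficient function below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)           # observation times (scaled)
x = rng.normal(size=t.size)              # time-varying covariate
beta_true = 1.0 - 2.0 * t                # effect weakens and reverses sign
y = beta_true * x + rng.normal(scale=0.1, size=t.size)

# beta(t) = b0 + b1*t + b2*t^2  =>  regress y on x, x*t, x*t^2.
X = np.column_stack([x, x * t, x * t ** 2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = b[0] + b[1] * t + b[2] * t ** 2
print(np.round(b, 2))
```

MixTVEM adds a layer on top of this: each latent class gets its own smooth coefficient function, and class memberships are estimated jointly with the curves.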

  6. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    International Nuclear Information System (INIS)

    Stroev, N E; Iosilevskiy, I L

    2016-01-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equations of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions 0 < X < 1. A strong “distillation” effect was found for the NCPT in the present BIM(∼) model; a similar distillation was obtained in the variant of NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications. (paper)

  7. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    Science.gov (United States)

    Stroev, N. E.; Iosilevskiy, I. L.

    2016-11-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equations of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions 0 < X < 1. A strong “distillation” effect was found for the NCPT in the present BIM(∼) model; a similar distillation was obtained in the variant of NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  8. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo

    2016-02-03

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes’ rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
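The "specified mixture component size" strategy can be sketched by clustering forecast particles with k-means and reading off weights, means, and variances. This 1-D toy stands in for the paper's 2-D contaminant and conductivity fields, and all function names and data are ours:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain 1-D k-means used to cluster ensemble members."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: (p - centers[j]) ** 2)].append(p)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

def gm_from_ensemble(points, k):
    """Fit a GM to forecast particles: each cluster contributes a weight,
    a mean, and a variance."""
    centers, groups = kmeans(points, k)
    n = len(points)
    return [(len(g) / n, c, sum((p - c) ** 2 for p in g) / max(len(g), 1))
            for c, g in zip(centers, groups)]

rng = random.Random(1)
# Hypothetical bimodal forecast ensemble of a contaminant concentration.
particles = ([rng.gauss(2.0, 0.3) for _ in range(150)]
             + [rng.gauss(5.0, 0.4) for _ in range(150)])
gm = gm_from_ensemble(particles, k=2)
print([(round(w, 2), round(m, 2)) for w, m, _ in gm])
```

Each Gaussian component would then be updated with a Kalman-type step, and the component weights with a particle-filter-type step, exactly as the record describes.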

  9. Improved predictive model for n-decane kinetics across species, as a component of hydrocarbon mixtures.

    Science.gov (United States)

    Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J

    2008-07-01

    n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched-chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel JP-8, or to industrial solvents in vapor, aerosol, and liquid forms, all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PCs) fitted to 4-h exposure kinetic data were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs, because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver; flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that the parameters describing fat uptake were, after blood:air partitioning and pulmonary ventilation, the most critical in determining overall systemic circulation and uptake in other tissues.
In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post
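The slow approach of fat to steady state under diffusion limitation can be illustrated with a one-compartment sketch; all parameter values are invented, chosen only so that the compartment's time constant is on the order of days, and the arterial concentration is held constant as if exposure were continuous:

```python
def simulate_fat_uptake(pa=10.0, v_fat=2.0, pc_fat=300.0, c_art=1.0,
                        hours=120, dt=0.01):
    """Euler integration of a diffusion-limited fat compartment:
    dC/dt = (PA / V_fat) * (C_art - C / PC_fat).
    Returns the concentration at the end of each hour."""
    c = 0.0
    per_hour = round(1 / dt)
    traj = []
    for i in range(round(hours / dt)):
        c += dt * (pa / v_fat) * (c_art - c / pc_fat)
        if (i + 1) % per_hour == 0:
            traj.append(c)
    return traj

traj = simulate_fat_uptake()
steady = 1.0 * 300.0   # C_art * PC_fat at steady state
print(f"4 h: {100 * traj[3] / steady:.1f}% of steady state; "
      f"120 h: {100 * traj[-1] / steady:.1f}%")
```

With a high fat:blood partition coefficient, the effective rate constant PA/(V_fat·PC_fat) is small, so a 4-h experiment samples only the earliest part of the uptake curve; this is why fitting PCs to 4-h data alone can badly underestimate fat partitioning.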

  10. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  11. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  12. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population.
The benefits of increased precision should be weighed
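
For readers unfamiliar with the underlying machinery, a minimal single-site N-mixture likelihood can be written out directly. The counts and the grid search below are invented; the paper's joint model adds a second binomial detection component with its own detection probability.

```python
import math

# Minimal single-site N-mixture likelihood (a sketch, not the paper's
# joint two-method model): repeat counts y_1..y_T at one site, latent
# abundance N ~ Poisson(lam), detections y_t ~ Binomial(N, p), with N
# marginalised out by truncated summation.

def binom_pmf(k, n, p):
    if k > n:
        return 0.0
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(n, lam):
    # evaluated in log space to avoid overflow for large n
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def nmix_loglik(counts, lam, p, n_max=200):
    like = 0.0
    for n in range(max(counts), n_max + 1):
        term = poisson_pmf(n, lam)
        for y in counts:
            term *= binom_pmf(y, n, p)
        like += term
    return math.log(like)

counts = [3, 2, 4]  # made-up repeat counts at one site
best = max((nmix_loglik(counts, lam, p), lam, p)
           for lam in range(1, 31) for p in (0.1, 0.3, 0.5, 0.7, 0.9))
print("max log-likelihood %.3f at lam=%d, p=%.1f" % best)
```

Extending to two methods multiplies in a second binomial term with its own detection probability; sharing the latent N across both terms is what couples the data sets in the joint analysis.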

  13. New Alcohol and Onyx Mixture for Embolization: Feasibility and Proof of Concept in Both In Vitro and In Vivo Models

    Energy Technology Data Exchange (ETDEWEB)

    Saeed Kilani, Mohammad, E-mail: msaeedkilani@gmail.com, E-mail: mohammadalikilani@yahoo.com [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Zehtabi, Fatemeh, E-mail: fatemeh.zehtabi@gmail.com; Lerouge, Sophie, E-mail: Sophie.Lerouge@etsmtl.ca [école de technologie supérieure (ETS) & CHUM Research center (CRCHUM), Department of Mechanical Engineering (Canada); Soulez, Gilles, E-mail: gilles.soulez.chum@ssss.gouv.qc.ca [Centre Hospitalier de l’Université de Montréal, Department of Radiology (Canada); Bartoli, Jean Michel, E-mail: jean-michel.bartoli@ap-hm.fr [University Hospital Timone, Department of Medical Imaging (France); Vidal, Vincent, E-mail: Vincent.VIDAL@ap-hm.fr [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Badran, Mohammad F., E-mail: mfbadran@hotmail.com [King Faisal Specialist Hospital and Research Center, Radiology Department (Saudi Arabia)

    2017-05-15

    Introduction: Onyx and ethanol are well-known embolic and sclerotic agents that are frequently used in embolization. These agents present advantages and disadvantages regarding visibility, injection control and penetration depth. Mixing both products might yield a new product with different characteristics. The aim of this study is to evaluate the injectability, radiopacity, and mechanical and occlusive properties of different mixtures of Onyx 18 and ethanol in vitro and in vivo (in a swine model). Materials and Methods: Various Onyx 18 and ethanol formulations were prepared and tested in vitro for their injectability, solidification rate and shrinkage, cohesion and occlusive properties. In vivo tests were performed using 3 swine. Ease of injection, radiopacity, cohesiveness and penetration were analyzed using fluoroscopy and high-resolution CT. Results: All mixtures were easy to inject through a microcatheter with no resistance or blockage in vitro and in vivo. The 50%-ethanol mixture showed delayed copolymerization with fragmentation and proximal occlusion. The 75%-ethanol mixture showed poor radiopacity in vivo and was not tested in vitro. The 25%-ethanol mixture showed good occlusive properties and acceptable penetration and radiopacity. Conclusion: Mixing Onyx and ethanol is feasible. The mixture of 25% ethanol and 75% Onyx 18 could be a new sclero-embolic agent. Further research is needed to study the chemical changes of the mixture, to confirm the significance of the added sclerotic effect and to find out the ideal mixture percentages.

  14. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  15. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    International Nuclear Information System (INIS)

    Liu, Yen; Vinokur, Marcel; Panesi, Marco; Sahai, Amal

    2015-01-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  16. Bayesian Non-Parametric Mixtures of GARCH(1,1) Models

    Directory of Open Access Journals (Sweden)

    John W. Lau

    2012-01-01

    Full Text Available Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. The process, a generalisation of the Dirichlet process typically used in nonparametric models for time-dependent data, provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.
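
The building block being mixed over is the GARCH(1,1) recursion, sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}; a regime switch corresponds to a change in (omega, alpha, beta). A minimal simulation of two hypothetical regimes:

```python
import random

# GARCH(1,1) volatility recursion (illustrative parameter values):
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# The unconditional variance is omega / (1 - alpha - beta).

def simulate_garch(T, omega, alpha, beta, seed=0):
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)  # start at unconditional variance
    returns = []
    for _ in range(T):
        r = rng.gauss(0, sigma2 ** 0.5)
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns

calm = simulate_garch(2000, omega=0.05, alpha=0.05, beta=0.90)
wild = simulate_garch(2000, omega=0.50, alpha=0.10, beta=0.85)

var = lambda xs: sum(x * x for x in xs) / len(xs)
print(f"calm sample variance: {var(calm):.2f}  (unconditional variance 1.0)")
print(f"wild sample variance: {var(wild):.2f}  (unconditional variance 10.0)")
```

An abrupt jump in sample volatility like the one between these two regimes is exactly the kind of structure a single smoothly-evolving GARCH regime cannot capture.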

  17. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.
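
The kind of information such a display condenses can be sketched numerically: for each candidate class count k, several criteria are computed from the model's log-likelihood and their minima compared. The log-likelihoods and parameter counts below are invented (the actual tool is an R program).

```python
import math

# Sketch of what a fit-criteria summary condenses: AIC and BIC across
# candidate class counts k. Log-likelihoods below are made up.
loglik = {1: -1450.2, 2: -1398.7, 3: -1381.0, 4: -1377.9, 5: -1376.5}
n_obs = 500
params_per_class = 3  # e.g. intercept, slope, variance (illustrative)

rows = []
for k, ll in loglik.items():
    p = k * params_per_class + (k - 1)  # plus k-1 mixing proportions
    aic = 2 * p - 2 * ll
    bic = p * math.log(n_obs) - 2 * ll
    rows.append((k, aic, bic))
    print(f"k={k}  AIC={aic:8.1f}  BIC={bic:8.1f}")

print("AIC prefers k =", min(rows, key=lambda r: r[1])[0])
print("BIC prefers k =", min(rows, key=lambda r: r[2])[0])
```

In practice the criteria often disagree; putting them side by side on one page, rather than deciding from a single index, is the point of the plot.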

  18. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  19. Mapping behavioral landscapes for animal movement: a finite mixture modeling approach

    Science.gov (United States)

    Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.

    2013-01-01

    Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. 
This approach can provide guidance for conservation planning based on analysis of animal movement data using
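
The estimation machinery, EM over component densities and mixing proportions, can be illustrated with the simplest possible case: a two-component one-dimensional Gaussian mixture with a shared, fixed standard deviation. The movement-response densities of the paper are replaced by plain Gaussians here, and the data are synthetic.

```python
import math
import random

# Generic EM for a two-component 1-D Gaussian mixture -- the same
# machinery a finite mixture movement model uses, with the component
# densities simplified to plain Gaussians (synthetic data).

random.seed(42)
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(5, 1) for _ in range(300)]

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

pi, mu1, mu2, sd = 0.5, -1.0, 6.0, 1.5  # crude starting values; sd held fixed
for _ in range(50):
    # E-step: responsibility of component 1 for each point
    r = [pi * normal_pdf(x, mu1, sd) /
         (pi * normal_pdf(x, mu1, sd) + (1 - pi) * normal_pdf(x, mu2, sd))
         for x in data]
    # M-step: re-estimate mixing proportion and the two means
    pi = sum(r) / len(data)
    mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))

print(f"pi={pi:.2f}  mu1={mu1:.2f}  mu2={mu2:.2f}")
```

The paper layers two extras on this skeleton: component densities tailored to movement responses, and mixing proportions that depend on covariates such as the surrounding urban land cover; the particle swarm step only supplies better starting values for the EM iterations.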

  20. Characterization of the pharmacokinetics of gasoline using PBPK modeling with a complex mixtures chemical lumping approach.

    Science.gov (United States)

    Dennison, James E; Andersen, Melvin E; Yang, Raymond S H

    2003-09-01

    Gasoline consists of a few toxicologically significant components and a large number of other hydrocarbons in a complex mixture. By using an integrated, physiologically based pharmacokinetic (PBPK) modeling and lumping approach, we have developed a method for characterizing the pharmacokinetics (PKs) of gasoline in rats. The PBPK model tracks selected target components (benzene, toluene, ethylbenzene, o-xylene [BTEX], and n-hexane) and a lumped chemical group representing all nontarget components, with competitive metabolic inhibition between all target compounds and the lumped chemical. PK data was acquired by performing gas uptake PK studies with male F344 rats in a closed chamber. Chamber air samples were analyzed every 10-20 min by gas chromatography/flame ionization detection and all nontarget chemicals were co-integrated. A four-compartment PBPK model with metabolic interactions was constructed using the BTEX, n-hexane, and lumped chemical data. Target chemical kinetic parameters were refined by studies with either the single chemical alone or with all five chemicals together. o-Xylene, at high concentrations, decreased alveolar ventilation, consistent with respiratory irritation. A six-chemical interaction model with the lumped chemical group was used to estimate lumped chemical partitioning and metabolic parameters for a winter blend of gasoline with methyl t-butyl ether and a summer blend without any oxygenate. Computer simulation results from this model matched well with experimental data from single chemical, five-chemical mixture, and the two blends of gasoline. The PBPK model analysis indicated that metabolism of individual components was inhibited up to 27% during the 6-h gas uptake experiments of gasoline exposures.
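
The competitive metabolic inhibition between target compounds and the lumped chemical has a standard Michaelis-Menten form; a sketch with placeholder parameters (not the fitted rat constants):

```python
# Competitive metabolic inhibition among co-exposed chemicals, the
# interaction structure used in such mixture PBPK models:
#   v_i = Vmax_i * C_i / ( Km_i * (1 + sum_{j != i} C_j / Km_j) + C_i )
# All parameter values are arbitrary placeholders, not fitted constants.

def metabolism_rates(conc, vmax, km):
    rates = {}
    for i in conc:
        inhibition = sum(conc[j] / km[j] for j in conc if j != i)
        rates[i] = vmax[i] * conc[i] / (km[i] * (1 + inhibition) + conc[i])
    return rates

vmax = {"benzene": 1.0, "toluene": 1.2, "lump": 5.0}  # "lump" = nontarget pool
km   = {"benzene": 0.3, "toluene": 0.4, "lump": 2.0}

alone = metabolism_rates({"benzene": 0.5, "toluene": 0.0, "lump": 0.0}, vmax, km)
mix   = metabolism_rates({"benzene": 0.5, "toluene": 0.5, "lump": 3.0}, vmax, km)
drop = 100 * (1 - mix["benzene"] / alone["benzene"])
print(f"benzene metabolism inhibited by {drop:.0f}% in the mixture")
```

The lumped pool matters precisely because it appears in every compound's inhibition sum: even chemicals that are individually minor can collectively depress the clearance of the target components.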

  1. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    Science.gov (United States)

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
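
The substitution logic can be made concrete with a toy industry cost curve: producers are stacked in order of unit cost, demand fixes the supplying set, and a cheaper new technology displaces the highest-cost producer(s). All capacities, costs, and impact factors below are invented.

```python
# Toy industry-cost-curve (ICC) substitution: sort producers by unit
# cost, fill demand from the cheapest up, and recompute after a new
# technology enters the market. All numbers are invented.

def supplying_set(producers, demand):
    """producers: list of (name, capacity, unit_cost, impact_per_unit)."""
    served, chosen = 0.0, []
    for name, cap, cost, impact in sorted(producers, key=lambda p: p[2]):
        if served >= demand:
            break
        used = min(cap, demand - served)
        served += used
        chosen.append((name, used, impact))
    return chosen

def total_impact(chosen):
    return sum(used * impact for _, used, impact in chosen)

market = [("plant_A", 40, 10.0, 1.0),
          ("plant_B", 30, 12.0, 1.5),
          ("plant_C", 30, 15.0, 2.0)]
demand = 90

before = supplying_set(market, demand)
after = supplying_set(market + [("new_tech", 30, 11.0, 0.8)], demand)
print("impact before:", total_impact(before))
print("impact after :", total_impact(after))
print("supplying set after:", [(n, u) for n, u, _ in after])
```

Here the new technology undercuts plants B and C, pushing the highest-cost producer out of the supplying set entirely; the market-wide impact change, not the new plant's own footprint, is what the approach attributes to the technology's introduction.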

  2. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    Science.gov (United States)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling, in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having their own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP 238U/206Pb zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic 238U/206Pb age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
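
A distinguishing feature of mixture modeling in geochronology is that each analysis carries its own analytical error. A minimal EM sketch in that spirit, with synthetic ages loosely echoing a two-component case (invented values, number of components fixed at two rather than estimated):

```python
import math
import random

# Mixture modeling with per-analysis errors: each analysis x_i has its
# own analytical sigma_i, and component c is a point age t_c smeared by
# that error, p(x_i | c) = N(x_i; t_c, sigma_i). Synthetic data.

random.seed(7)
data = []
for true_age, n in ((530.0, 30), (545.0, 15)):
    for _ in range(n):
        sigma = random.uniform(2.0, 4.0)  # per-analysis 1-sigma error, Ma
        data.append((random.gauss(true_age, sigma), sigma))

def npdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

pi, t1, t2 = 0.5, 525.0, 550.0  # starting values
for _ in range(100):
    # E-step: responsibility of the younger component for each analysis
    r = [pi * npdf(x, t1, s) / (pi * npdf(x, t1, s) + (1 - pi) * npdf(x, t2, s))
         for x, s in data]
    pi = sum(r) / len(data)
    # M-step: inverse-variance weighted means, weighted by responsibility
    w1 = sum(ri / s ** 2 for ri, (x, s) in zip(r, data))
    t1 = sum(ri * x / s ** 2 for ri, (x, s) in zip(r, data)) / w1
    w2 = sum((1 - ri) / s ** 2 for ri, (x, s) in zip(r, data))
    t2 = sum((1 - ri) * x / s ** 2 for ri, (x, s) in zip(r, data)) / w2

print(f"proportions {pi:.2f}/{1 - pi:.2f}, ages {t1:.1f} and {t2:.1f} Ma")
```

The procedure in the record additionally estimates the number of components and the standard errors of the fitted ages and proportions, which this sketch omits.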

  3. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach overcomes the results of a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach of Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.

  4. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Firing rate estimation using infinite mixture models and its application to neural decoding.

    Science.gov (United States)

    Shibue, Ryohei; Komaki, Fumiyasu

    2017-11-01

    Neural decoding is a framework for reconstructing external stimuli from spike trains recorded by various neural recordings. Kloosterman et al. proposed a new decoding method using marked point processes (Kloosterman F, Layton SP, Chen Z, Wilson MA. J Neurophysiol 111: 217-227, 2014). This method does not require spike sorting and thereby improves decoding accuracy dramatically. In this method, they used kernel density estimation to estimate intensity functions of marked point processes. However, the use of kernel density estimation causes problems such as low decoding accuracy and high computational costs. To overcome these problems, we propose a new decoding method using infinite mixture models to estimate intensity. The proposed method improves decoding performance in terms of accuracy and computational speed. We apply the proposed method to simulation and experimental data to verify its performance. NEW & NOTEWORTHY We propose a new neural decoding method using infinite mixture models and nonparametric Bayesian statistics. The proposed method improves decoding performance in terms of accuracy and computation speed. We have successfully applied the proposed method to position decoding from spike trains recorded in a rat hippocampus. Copyright © 2017 the American Physiological Society.

  6. On a model of mixtures with internal variables: Extended Liu procedure for the exploitation of the entropy principle

    Directory of Open Access Journals (Sweden)

    Francesco Oliveri

    2016-01-01

    Full Text Available The exploitation of the second law of thermodynamics for a mixture of two fluids with a scalar internal variable and a first-order nonlocal state space is achieved by using the extended Liu approach. This method requires inserting as constraints in the entropy inequality both the field equations and their gradient extensions. Consequently, the thermodynamic restrictions imposed by the entropy principle are derived without introducing extra terms either in the energy balance equation or in the entropy inequality.

  7. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A. E.; Caco, Ana I.

    2003-01-01

    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane ...

  8. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A.E; Caco, Ana I.

    2003-01-01

    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane...

  9. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  10. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    International Nuclear Information System (INIS)

    Chang, Chong

    2016-01-01

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  11. Cosmological models described by a mixture of van der Waals fluid and dark energy

    International Nuclear Information System (INIS)

    Kremer, G.M.

    2003-01-01

    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton, (b) an accelerated period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays, (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure, and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid.

  12. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase); the estimation of the wavelet can then be transformed into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by Higher-order Statistics (HOS, fourth-order cumulant) matching. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulating the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method can work well for short time series.

  13. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Full Text Available Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. Speaker voice is a robust biometric modality that has a strong impact in several application areas. In this study, a new combination learning scheme has been proposed based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning Vector Quantization (LVQ) for automatic text-independent speaker identification. Feature vectors, constituted by the Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used to train models on the New England subset of the TIMIT database. The best results obtained were 90% for gender-independent speaker identification, 97% for male speakers, and 93% for female speakers on test data using 36 MFCC features.
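
The scoring step of such a system reduces to evaluating each speaker's Gaussian mixture on the test frames and picking the best average log-likelihood. A toy one-dimensional sketch, with invented two-component models standing in for 36-dimensional MFCC GMMs:

```python
import math
import random

# Toy GMM scoring step of a GMM-based identifier: each speaker is a
# Gaussian mixture over feature vectors, and a test utterance goes to
# the speaker whose model gives the highest average log-likelihood.
# One-dimensional "features" stand in for 36-dimensional MFCCs.

def gmm_avg_loglik(frames, gmm):
    total = 0.0
    for x in frames:
        total += math.log(sum(
            w * math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
            for w, mu, sd in gmm))
    return total / len(frames)

speakers = {  # (weight, mean, sd) per component; invented models
    "spk_f": [(0.6, 1.0, 0.5), (0.4, 2.0, 0.7)],
    "spk_m": [(0.5, -1.0, 0.6), (0.5, 0.2, 0.5)],
}

random.seed(3)
utterance = [random.gauss(1.0, 0.5) if random.random() < 0.6
             else random.gauss(2.0, 0.7)
             for _ in range(200)]  # frames drawn from spk_f's model

scores = {name: gmm_avg_loglik(utterance, gmm) for name, gmm in speakers.items()}
print(max(scores, key=scores.get))
```

In the GMM-UBM setting the per-speaker mixtures are adapted from a shared universal background model rather than trained from scratch, and the LVQ stage of the record refines the decision; both refinements sit on top of this scoring step.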

  14. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and living above the poverty line were more likely to transition into the higher ability group.

  15. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    Science.gov (United States)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  16. Introducing a system of wind speed distributions for modeling properties of wind speed regimes around the world

    International Nuclear Information System (INIS)

    Jung, Christopher; Schindler, Dirk; Laible, Jessica; Buchholz, Alexander

    2017-01-01

Highlights: • Evaluation of statistical properties of 10,016 empirical wind speed distributions. • Analysis of the shape of empirical wind speed distributions by L-moment ratios. • Introduction of a new system of wind speed distributions (Swd). • Random forests classification of the most appropriate distribution. • Comprehensive goodness of Swd fit evaluation on a global scale. - Abstract: Accurate modeling of empirical wind speed distributions is a crucial step in the estimation of average wind turbine power output. For this purpose, the Weibull distribution has often been fitted to empirical wind speed distributions. However, the Weibull distribution has been found to be insufficient to reproduce many wind speed regimes existing around the world. Results from previous studies demonstrate that numerous one-component distributions as well as mixture distributions provide a better goodness-of-fit to empirical wind speed distributions than the Weibull distribution. Moreover, there is considerable interest in applying a single system of distributions that can be utilized to reproduce the large majority of near-surface wind speed regimes existing around the world. Therefore, a system of wind speed distributions was developed that is capable of reproducing the main characteristics of existing wind speed regimes. The proposed system consists of two one-component distributions (Kappa and Wakeby) and one mixture distribution (Burr-Generalized Extreme Value). A random forests classifier was trained in order to select the most appropriate of these three distributions for each of 10,016 globally distributed empirical wind speed distributions. The shape of the empirical wind speed distributions was described by L-moment ratios. The L-moment ratios were used as predictor variables for the random forests classifier. The goodness-of-fit of the system of wind speed distributions was evaluated according to eleven goodness-of-fit metrics, which were merged into one
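The L-moment ratios used above as predictor variables can be computed directly from a sorted sample. A minimal sketch of the standard sample L-moment estimators (not the authors' code; requires at least four observations):

```python
def sample_l_moments(data):
    """First two sample L-moments and the ratios t3 (L-skewness), t4 (L-kurtosis)."""
    x = sorted(data)
    n = len(x)  # needs n >= 4
    b = [0.0] * 4
    for i, xi in enumerate(x, start=1):
        # Probability-weighted moments b0..b3
        b[0] += xi
        b[1] += xi * (i - 1) / (n - 1)
        b[2] += xi * (i - 1) * (i - 2) / ((n - 1) * (n - 2))
        b[3] += xi * (i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3))
    b = [bi / n for bi in b]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2

print(sample_l_moments([1, 2, 3, 4, 5]))  # (3.0, 1.0, 0.0, 0.0)
```

For a symmetric sample such as [1..5], the L-skewness and L-kurtosis ratios vanish; skewed wind speed samples yield positive t3, which is what lets a classifier discriminate candidate distributions.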

  17. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne eSchoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  18. Does Dynamical Downscaling Introduce Novel Information in Climate Model Simulations of Precipitation Change over a Complex Topography Region?

    Science.gov (United States)

    Tselioudis, George; Douvis, Costas; Zerefos, Christos

    2012-01-01

Current climate and future climate-warming runs with the RegCM Regional Climate Model (RCM) at 50- and 11-km resolution forced by the ECHAM GCM are used to examine whether the increased resolution of the RCM introduces novel information in the precipitation field when the models are run for the mountainous region of the Hellenic peninsula. The model results are intercompared with the RCM output degraded in resolution to match that of the GCM, and it is found that in both the present and future climate runs the regional models produce more precipitation than the forcing GCM. At the same time, the RCM runs produce increases in precipitation with climate warming even though they are forced with a GCM that shows no precipitation change in the region. The additional precipitation is mostly concentrated over the mountain ranges, where orographic precipitation formation is expected to be a dominant mechanism. It is found that, when examined at the same resolution, the elevation heights of the GCM are lower than those of the averaged RCM in the areas of the main mountain ranges. It is also found that the majority of the difference in precipitation between the RCM and the GCM can be explained by their difference in topographic height. The study results indicate that, in complex topography regions, GCM predictions of precipitation change with climate warming may be dry-biased due to the GCM smoothing of the regional topography.

  19. How to introduce medical ethics at the bedside - Factors influencing the implementation of an ethical decision-making model.

    Science.gov (United States)

    Meyer-Zehnder, Barbara; Albisser Schleger, Heidi; Tanner, Sabine; Schnurrer, Valentin; Vogt, Deborah R; Reiter-Theil, Stella; Pargger, Hans

    2017-02-23

As the implementation of new approaches and procedures of medical ethics is as complex and resource-consuming as in other fields, strategies and activities must be carefully planned to use the available means and funds responsibly. Which facilitators and barriers influence the implementation of a medical ethics decision-making model in daily routine? Up to now, there has been little examination of these factors in this field. A medical ethics decision-making model called METAP was introduced in three intensive care units and two geriatric wards. An evaluation study was performed from 7 months to two and a half years after deployment of the project. Quantitative and qualitative methods including a questionnaire and semi-structured face-to-face and group interviews were used. Sixty-three participants from different professional groups took part in 33 face-to-face and 9 group interviews, and 122 questionnaires could be analysed. The facilitating factors most frequently mentioned were: acceptance and presence of the model, support given by the medical and nursing management, an existing or developing (explicit) ethics culture, perception of a need for a medical ethics decision-making model, and engaged staff members. Lack of presence and acceptance, insufficient time and staff resources, poor inter-professional collaboration, absence of ethical competence, and failure to recognize ethical problems were identified as inhibiting the implementation of the METAP model. However, the results of the questionnaire as well as of explicit inquiry showed that the respondents reported having had enough time and staff available to use METAP if necessary. Facilitators of and barriers to the implementation of a medical ethics decision-making model are quite similar to those of medical guidelines. The planning for implementing an ethics model or guideline can, therefore, benefit from the extensive literature and experience concerning the implementation of medical guidelines. Lack of time and

  20. Introducing a Fresh Cadaver Model for Ultrasound-guided Central Venous Access Training in Undergraduate Medical Education.

    Science.gov (United States)

    Miller, Ryan; Ho, Hang; Ng, Vivienne; Tran, Melissa; Rappaport, Douglas; Rappaport, William J A; Dandorf, Stewart J; Dunleavy, James; Viscusi, Rebecca; Amini, Richard

    2016-05-01

Over the past decade, medical students have witnessed a decline in the opportunities to perform technical skills during their clinical years. Ultrasound-guided central venous access (USG-CVA) is a critical procedure commonly performed by emergency medicine, anesthesia, and general surgery residents, often during their first month of residency. However, the acquisition of skills required to safely perform this procedure is often deficient upon graduation from medical school. To ameliorate this lack of technical proficiency, ultrasound simulation models have been introduced into undergraduate medical education to train venous access skills. Criticisms of simulation models include the innate lack of realistic tactile qualities as well as the lack of anatomical variance when compared to living patients. The purpose of our investigation was to design and evaluate a lifelike and reproducible training model for USG-CVA using a fresh cadaver. This was a cross-sectional study at an urban academic medical center. An 18-point procedural knowledge tool and an 18-point procedural skill evaluation tool were administered during a cadaver lab at the beginning and end of the surgical clerkship. During the fresh cadaver lab, procedure-naïve third-year medical students were trained on how to perform ultrasound-guided central venous access of the femoral and internal jugular vessels. Preparation of the fresh cadaver model involved placement of thin-walled latex tubing in the anatomic locations of the femoral and internal jugular veins, respectively. Fifty-six third-year medical students participated in this study during their surgical clerkship. The fresh cadaver model provided high-quality and lifelike ultrasound images despite numerous cannulation attempts. Technical skill scores improved from an average score of 3 to 12. The fresh cadaver model prevented extravasation of fluid, maintained ultrasound-imaging quality, and proved to be an effective educational model allowing third-year medical

  1. Introducing technology learning for energy technologies in a national CGE model through soft links to global and national energy models

    International Nuclear Information System (INIS)

    Martinsen, Thomas

    2011-01-01

This paper describes a method to model the influence of global policy scenarios, particularly spillover of technology learning, on the energy service demand of the non-energy sectors of the national economy. It is exemplified by Norway. Spillover is obtained from the technology-rich global Energy Technology Perspective model operated by the International Energy Agency. It is provided to a national hybrid model in which a national bottom-up Markal model carries forward spillover into a national top-down CGE model at a disaggregated demand category level. Spillover of technology learning from the global energy technology market will reduce national generation costs of energy carriers. This may in turn increase demand in the non-energy sectors of the economy because of the rebound effect. The influence of spillover on the Norwegian economy is most pronounced for the production level of industrial chemicals and for the demand for electricity for residential energy services. The influence is modest, however, because all existing electricity generating capacity is hydroelectric and thus compatible with the low-emission policy scenario. In countries where most of the existing generating capacity must be replaced by nascent energy technologies or carbon capture and storage, the influence on demand is expected to be more significant. - Highlights: → Spillover of global technology learning may be forwarded into a macroeconomic model. → The national electricity price differs significantly between the different global scenarios. → Soft-linking global and national models facilitates transparency in the technology learning effect chain.

  2. Solvable Model of a Generic Trapped Mixture of Interacting Bosons: Many-Body and Mean-Field Properties

    Science.gov (United States)

    Klaiman, S.; Streltsov, A. I.; Alon, O. E.

    2018-04-01

A solvable model of a generic trapped bosonic mixture, N1 bosons of mass m1 and N2 bosons of mass m2 trapped in a harmonic potential of frequency ω and interacting by harmonic inter-particle interactions of strengths λ1, λ2, and λ12, is discussed. It has recently been shown for the ground state [J. Phys. A 50, 295002 (2017)] that in the infinite-particle limit, when the interaction parameters λ1(N1 − 1), λ2(N2 − 1), λ12N1, λ12N2 are held fixed, each of the species is 100% condensed and its density per particle as well as the total energy per particle are given by the solution of the coupled Gross-Pitaevskii equations of the mixture. In the present work we investigate properties of the trapped generic mixture in the infinite-particle limit, and find differences between the many-body and mean-field descriptions of the mixture, despite each species being 100% condensed. We compute analytically and analyze, both for the mixture and for each species, the center-of-mass position and momentum variances, their uncertainty product, the angular-momentum variance, as well as the overlap of the exact and Gross-Pitaevskii wavefunctions of the mixture. The results obtained in this work can be considered a step forward in characterizing how important many-body effects are in a fully condensed trapped bosonic mixture at the infinite-particle limit.

  3. Gasification under CO2–Steam Mixture: Kinetic Model Study Based on Shared Active Sites

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2017-11-01

Full Text Available In this work, char gasification of two coals (i.e., Shenfu bituminous coal and Zunyi anthracite) and a petroleum coke under a steam and CO2 mixture (steam/CO2 partial pressures, 0.025–0.075 MPa; total pressure, 0.100 MPa) and CO2/steam chemisorption of char samples were conducted in a Thermogravimetric Analyzer (TGA). Two conventional kinetic models exhibited difficulties in exactly fitting the experimental data of char–steam–CO2 gasification. Hence, a modified model, based on the Langmuir–Hinshelwood model and assuming that the char–CO2 and char–steam reactions partially share active sites, was proposed and showed high accuracy in estimating the interactions in the char–steam–CO2 reaction. Moreover, it was found that the two new model parameters (characterized, respectively, as the ratio of shared active sites to total active sites in the char–CO2 and char–steam reactions) in the modified model hardly varied with gasification conditions, and the results of chemisorption indicate that these two new model parameters mainly depended on the carbon active sites in the char samples.
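The shared-active-site idea can be sketched as a Langmuir-Hinshelwood-type rate in which a fraction of sites sees competitive adsorption of both reactants while the rest react as in single-gas gasification. The functional form, the sharing fraction `f_shared`, and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
def gasification_rate(p_co2, p_h2o, k1=1.0, k2=2.0, K1=5.0, K2=8.0, f_shared=0.4):
    """Illustrative Langmuir-Hinshelwood-type char gasification rate under a
    CO2/steam mixture, with a fraction f_shared of active sites competed for
    by both reactants (parameter values are arbitrary, not fitted)."""
    # On shared sites, CO2 and H2O compete for adsorption (joint denominator).
    shared = (k1 * K1 * p_co2 + k2 * K2 * p_h2o) / (1 + K1 * p_co2 + K2 * p_h2o)
    # On exclusive sites, each reaction proceeds as if the other gas were absent.
    exclusive = (1 - f_shared) * (
        k1 * K1 * p_co2 / (1 + K1 * p_co2) + k2 * K2 * p_h2o / (1 + K2 * p_h2o))
    return f_shared * shared + exclusive
```

With this form, the single-gas limits reduce to the classical Langmuir-Hinshelwood rate, and the mixture rate lies below the sum of the independent rates, reproducing the competitive-inhibition behaviour that conventional fully-shared or fully-separate models cannot interpolate.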

  4. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly in a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, so experimental data are often incomplete. As a result, some work that depends on IoT data cannot be carried out normally, which reduces the accuracy and reliability of data analysis results. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on the Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
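The abstract does not spell out the three models, but the context-mean idea behind MCL-style imputation can be sketched as follows. This is a loose illustration under the assumption that "context" means nearby observed readings of the same sensor; the window size and fallback rule are my own choices, not the paper's.

```python
def impute_context_mean(series, window=2):
    """Fill missing entries (None) with the mean of up to `window` observed
    neighbours on each side; fall back to the global mean when no neighbour
    is observed (a rough sketch of context-based imputation, not the exact
    MCL model of the paper)."""
    observed = [v for v in series if v is not None]
    global_mean = sum(observed) / len(observed)
    out = []
    for i, v in enumerate(series):
        if v is not None:
            out.append(v)
            continue
        left = [x for x in series[max(0, i - window):i] if x is not None]
        right = [x for x in series[i + 1:i + 1 + window] if x is not None]
        ctx = left + right
        out.append(sum(ctx) / len(ctx) if ctx else global_mean)
    return out

print(impute_context_mean([1.0, None, 3.0, 4.0, None, None]))
```

The MGI variant would instead fit a Gaussian mixture to the observed readings and impute from the mixture's conditional expectation.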

  5. ADAPTIVE BACKGROUND DENGAN METODE GAUSSIAN MIXTURE MODELS UNTUK REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. A method to separate the background from the tracked object is required for motion tracking. Developing such an application is not hard if the tracking is performed against a static background, but it is more difficult if the tracked object is in a place with a non-static background, because changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be built that separates the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data using pixel color values as its basis. After the clusters are formed, the dominant distributions are chosen as background distributions. The application was implemented in Microsoft Visual C 6.0. The results of this research show that the GMM algorithm can produce an adaptive background satisfactorily, as demonstrated by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background generation process.
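The adaptive-background idea can be illustrated with a single Gaussian per pixel, a simplified stand-in for the full per-pixel mixture described above: a value far from the model is foreground, and the model is continuously updated so that persistent changes are absorbed into the background. The learning rate and threshold below are arbitrary illustrative choices.

```python
import math

class AdaptivePixelModel:
    """Single-Gaussian-per-pixel background model (a one-component
    simplification of the Gaussian mixture background described above)."""
    def __init__(self, first_value, alpha=0.05, k=2.5):
        self.mu = float(first_value)
        self.var = 25.0          # initial variance guess
        self.alpha = alpha       # learning rate: how fast the background adapts
        self.k = k               # match threshold in standard deviations

    def update(self, value):
        """Classify the new pixel value, then adapt the model.
        Returns True if the pixel is foreground."""
        foreground = abs(value - self.mu) > self.k * math.sqrt(self.var)
        d = value - self.mu
        self.mu += self.alpha * d
        self.var += self.alpha * (d * d - self.var)
        self.var = max(self.var, 1.0)  # variance floor
        return foreground

pixel = AdaptivePixelModel(100)
for _ in range(50):
    pixel.update(100)           # stable background
print(pixel.update(200))        # sudden change: foreground (True)
for _ in range(200):
    pixel.update(200)           # the change persists ...
print(pixel.update(200))        # ... and is absorbed into the background (False)
```

A real GMM background subtractor keeps several such Gaussians per pixel with weights, so a flickering background (swaying trees, rippling water) can be represented by more than one dominant mode.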

  6. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked

  7. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse; De Donato, Renato; Lensink, Marc F.; Petta, Andrea; Serra, Luigi; Scarano, Vittorio; Cavallo, Luigi; Oliva, Romina

    2016-01-01

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
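The consensus principle behind CONSRANK can be sketched compactly: represent each docking model as a set of inter-residue contacts, tally how often each contact occurs across the ensemble, and score each model by the mean conservation of its own contacts. The model names and contact labels below are made up for illustration, and the sketch omits the clustering step and CONSRANK's exact normalization.

```python
def consrank_scores(models):
    """CONSRANK-style consensus scoring sketch: each docking model is a set of
    inter-residue contacts; a model's score is the mean frequency, across the
    whole ensemble, of the contacts it contains."""
    freq = {}
    for contacts in models.values():
        for c in contacts:
            freq[c] = freq.get(c, 0) + 1
    n = len(models)
    return {name: sum(freq[c] for c in contacts) / (len(contacts) * n)
            for name, contacts in models.items()}

models = {
    "m1": {("A10", "B33"), ("A11", "B34"), ("A12", "B40")},
    "m2": {("A10", "B33"), ("A11", "B34"), ("A13", "B41")},
    "m3": {("A10", "B33"), ("A50", "B70"), ("A51", "B71")},
    "m4": {("A90", "B12"), ("A91", "B13"), ("A92", "B14")},
}
ranking = sorted(consrank_scores(models).items(), key=lambda kv: -kv[1])
print(ranking[0])  # m1 and m2 tie at the top: they share the most-conserved contacts
```

Clust-CONSRANK would first group `m1`-`m3` versus `m4` by their shared contacts, so that an outlier interface like `m4` cannot dilute the consensus of the dominant cluster.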

  8. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    Science.gov (United States)

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
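The GPCM category probabilities follow from cumulative sums of step logits, with the partial-credit model recovered when the discrimination is fixed at 1 and the 2PL recovered for two categories. A minimal sketch (parameter values are illustrative):

```python
import math

def gpcm_probs(theta, a, deltas):
    """Generalized partial-credit model: probability of each score category
    0..m for ability theta, discrimination a, and step parameters deltas.
    The empty cumulative sum for category 0 is defined as 0."""
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + a * (theta - d))
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [v / total for v in z]

probs = gpcm_probs(theta=0.5, a=1.2, deltas=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # four category probabilities summing to 1
```

Setting `a=1` yields the partial-credit (Rasch-family) special case; a single step parameter yields a 2PL item.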

  9. Introducing a New Experimental Islet Transplantation Model using Biomimetic Hydrogel and a Simple High Yield Islet Isolation Technique.

    Science.gov (United States)

    Mohammadi Ayenehdeh, Jamal; Niknam, Bahareh; Hashemi, Seyed Mahmoud; Rahavi, Hossein; Rezaei, Nima; Soleimani, Masoud; Tajik, Nader

    2017-07-01

Islet transplantation could be an ideal alternative to insulin therapy for type 1 diabetes mellitus (T1DM). This clinical and experimental field requires a model that addresses problems such as the need for a large number of functional and viable islets, the optimal transplantation site, and the prevention of islet dispersion. Hence, the methods chosen for islet isolation and transplantation are crucial. The present study introduces an experimental model that overcomes some critical issues in islet transplantation, including in situ pancreas perfusion with digestive enzymes through the common bile duct. In comparison with conventional methods, we inflated the pancreas in Petri dishes with only 1 ml of collagenase type XI solution, followed by hand-picking isolation or Ficoll gradient separation to purify the islets. We then used a hydrogel composite in which the islets were embedded and transplanted into the peritoneal cavity of streptozotocin-induced diabetic C57BL/6 mice. Compared with the yield of the classical methods, our modified technique gave a mean isolation yield of about 130-200 viable islets per mouse pancreas. An in vitro glucose-mediated insulin secretion assay indicated an appropriate response in isolated islets. In addition, data from in vivo experiments revealed that the allograft remarkably maintained blood glucose levels under 400 mg/dl and that the hydrogel composite prevents the passage of immune cells. In the model presented here, the rapid islet isolation technique and the application of biomimetic hydrogel wrapping of islets could facilitate islet transplantation procedures.

  10. Approximation of the breast height diameter distribution of two-cohort stands by mixture models III Kernel density estimators vs mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  11. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  12. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Directory of Open Access Journals (Sweden)

    Ian C McDowell

    2018-01-01

    Full Text Available Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP, which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.
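The Dirichlet-process half of DPGP can be illustrated by its prior over partitions, the Chinese restaurant process: the number of clusters is not fixed in advance but grows with the data and the concentration parameter. The sketch below draws only from that prior; DPGP's Gaussian-process likelihood over gene time courses is omitted.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a partition of n items from the Chinese restaurant process, the
    prior over clusterings underlying Dirichlet-process mixtures like DPGP."""
    rng = random.Random(seed)
    assignments, sizes = [], []
    for i in range(n):
        # Item i starts a new cluster with probability alpha/(i + alpha),
        # otherwise joins an existing cluster proportionally to its size.
        weights = sizes + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(sizes):
            sizes.append(1)
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments

labels = crp_partition(100, alpha=2.0)
print(len(set(labels)))  # number of clusters induced by the prior alone
```

In DPGP, the same seating probabilities are combined with each cluster's GP marginal likelihood during Gibbs sampling, so cluster number and temporal smoothness are inferred jointly.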

  13. Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.

    Science.gov (United States)

    Petric, Ivan; Mustafić, Nesib

    2015-09-15

Due to lack of understanding of the complex nature of the composting process, there is a need for a valuable tool that can help to improve the prediction of the process performance as well as its optimization. Therefore, the main objective of this study is to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparison of the model and experimental data obtained from the composting process of the mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model has very good predictions of the process performance. According to simulation results, the optimum values for air flow rate and ambient air temperature are 0.43 L min⁻¹ kg⁻¹ OM and 28 °C, respectively. On the basis of sensitivity analysis, the maximum organic matter conversion is the most sensitive among the three objective functions. Among the twelve examined parameters, μmax,1 is the most influential parameter and X1 is the least influential. Copyright © 2015 Elsevier Ltd. All rights reserved.
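The two-population structure described above can be sketched as a pair of Monod growth laws integrated with forward Euler. All rate constants, yields, and initial values below are illustrative placeholders, not the paper's fitted parameters, and the sketch omits the temperature, oxygen, and moisture balances of the full model.

```python
def simulate_composting(hours=200, dt=0.01):
    """Forward-Euler sketch of two microbial populations degrading two
    substrates with Monod kinetics (illustrative parameters only)."""
    mu_max = [0.2, 0.1]   # 1/h, maximum specific growth rates
    Ks     = [5.0, 8.0]   # half-saturation constants
    Y      = [0.5, 0.4]   # biomass yield per unit substrate consumed
    X = [0.1, 0.1]        # biomass of the two populations
    S = [40.0, 30.0]      # the two degradable substrates
    for _ in range(int(hours / dt)):
        for j in range(2):
            mu = mu_max[j] * S[j] / (Ks[j] + S[j])  # Monod specific growth rate
            dX = mu * X[j]
            dS = -dX / Y[j]
            X[j] += dt * dX
            S[j] = max(0.0, S[j] + dt * dS)
    return X, S

X, S = simulate_composting()
print([round(s, 2) for s in S])  # substrates nearly exhausted after 200 h
```

The sensitivity of the outcome to μmax,1 noted in the abstract is visible here too: small changes to `mu_max[0]` shift the time at which the first substrate is depleted far more than comparable changes to the initial biomass.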

  14. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is evaluated against the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
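    The pipeline can be sketched as follows. The Hu-moment computation (truncated to the first two invariants), the synthetic "ship" silhouettes, and the one-component-per-class GMMs are all illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def hu_moments(img):
        """First two Hu invariant moments of a 2-D intensity image (numpy only)."""
        y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
        m00 = img.sum()
        xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
        def eta(p, q):   # scale-normalized central moment
            return ((x - xc) ** p * (y - yc) ** q * img).sum() / m00 ** (1 + (p + q) / 2)
        h1 = eta(2, 0) + eta(0, 2)
        h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
        return np.array([h1, h2])

    rng = np.random.default_rng(1)
    def blob(h, w):
        """Synthetic silhouette: a 2h-by-2w bright rectangle plus faint noise."""
        img = 0.01 * rng.random((32, 32))
        img[16 - h:16 + h, 16 - w:16 + w] += 1.0
        return img

    # Training: one GMM per ship class, fitted on that class's moment features.
    class_a = np.array([hu_moments(blob(2, int(rng.integers(10, 14)))) for _ in range(30)])
    class_b = np.array([hu_moments(blob(6, int(rng.integers(5, 7)))) for _ in range(30)])
    gmm_a = GaussianMixture(n_components=1, random_state=0).fit(class_a)
    gmm_b = GaussianMixture(n_components=1, random_state=0).fit(class_b)

    # Recognition: assign an unknown shape to the class with the higher likelihood.
    unknown = hu_moments(blob(2, 12))            # an elongated, class-A-like shape
    pred = "A" if gmm_a.score([unknown]) > gmm_b.score([unknown]) else "B"
    print(pred)
    ```

    Because the Hu invariants are unchanged under translation, scaling, and rotation of the silhouette, the per-class GMMs only have to model shape, not pose.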

  15. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    Science.gov (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of orders of magnitude more cells than microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors caused by overlapping spots, this study proposes a Gaussian mixture model (GMM)-based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for classification. Using a random forest classifier, the results show that the proposed method can detect closely overlapping spots that existing segmentation-based spot detection methods cannot separate. The experimental results show that the proposed method yields a significant improvement in spot-counting accuracy.
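    The core idea of using GMM fit criteria to resolve overlapping spots can be sketched on synthetic data. This assumed workflow (fit 1..k components to the signal coordinates and take the BIC-minimizing fit as the spot count) is an illustration, not the FISH-IS pipeline:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Signal coordinates from two closely overlapping fluorescent spots.
    spot1 = rng.normal(loc=(10.0, 10.0), scale=1.0, size=(300, 2))
    spot2 = rng.normal(loc=(13.0, 10.0), scale=1.0, size=(300, 2))
    coords = np.vstack([spot1, spot2])

    # Fit GMMs with 1..4 components; the BIC-minimizing fit gives the spot count.
    bics = []
    for k in range(1, 5):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(coords)
        bics.append(gmm.bic(coords))
    spot_count = int(np.argmin(bics)) + 1
    print(spot_count)  # → 2
    ```

    A watershed or thresholding segmenter would merge these two spots (their centers are only 3 pixels apart), while the BIC correctly prefers a two-component mixture over one broad component.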

  16. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
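    A useful fact when reasoning about this prior is the standard identity for the prior expected number of clusters under a Dirichlet process, E[K_n] = Σ_{i=1}^{n} α/(α+i−1), which is roughly α·log(1+n/α) for large n. A quick computation shows how strongly α controls the expected level of clustering:

    ```python
    import math

    def expected_clusters(alpha, n):
        # E[K_n] = sum_{i=1}^{n} alpha / (alpha + i - 1)
        return sum(alpha / (alpha + i) for i in range(n))

    for alpha in (0.5, 1.0, 5.0):
        exact = expected_clusters(alpha, 200)
        approx = alpha * math.log(1 + 200 / alpha)
        print(f"alpha={alpha}: E[K] = {exact:.1f} (approx {approx:.1f})")
    ```

    Inverting this relationship is one way to translate a prior belief about the number of clusters in n observations into a prior on α itself.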

  17. Genomic outlier profile analysis: mixture models, null hypotheses, and nonparametric estimation.

    Science.gov (United States)

    Ghosh, Debashis; Chinnaiyan, Arul M

    2009-01-01

    In most analyses of large-scale genomic data sets, differential expression is typically assessed by testing for differences in the mean of the distributions between 2 groups. Tomlins and others (2005) recently described a different pattern of differential expression, in which a fraction of samples in one group show overexpression relative to samples in the other group. In this work, we describe a general mixture model framework for the assessment of this type of expression, called outlier profile analysis. We start by considering the single-gene situation and establishing results on identifiability. We propose 2 nonparametric estimation procedures that have natural links to familiar multiple testing procedures. We then develop multivariate extensions of this methodology to handle genome-wide measurements. The proposed methodologies are compared using simulation studies as well as data from a prostate cancer gene expression study.

  18. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical, population-based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood, we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest, including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. It also allows for individual, cluster-specific variable selection. We demonstrate improved inference on a number of canonical problems.

  19. Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images

    Science.gov (United States)

    Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María

    2009-02-01

    Diabetic retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique that permits a prompt diagnosis. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database of the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.

  20. A Grasp-Pose Generation Method Based on Gaussian Mixture Models

    Directory of Open Access Journals (Sweden)

    Wenjia Wu

    2015-11-01

    Full Text Available A Gaussian Mixture Model (GMM)-based grasp-pose generation method is proposed in this paper. Through offline training, the GMM is set up and used to depict the distribution of the robot's reachable orientations. By dividing the robot's workspace into small 3D voxels and training a GMM for each voxel, a look-up table covering the whole workspace is built with the x, y and z positions as the index and the GMM as the entry. Through the definition of Task Space Regions (TSR), an object's feasible grasp poses are expressed as a continuous region. With the GMM, grasp poses can be preferentially sampled from regions with high reachability probabilities in the online grasp-planning stage. The GMM can also be used as a preliminary judgement of a grasp pose's reachability. Experiments on both a simulated and a real robot show the superiority of our method over the existing method.
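    The lookup-table idea can be sketched as follows, with fabricated "reachable orientation" data standing in for the offline training set; the voxel keys, cluster shapes, and component count are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    table = {}
    for voxel in [(0, 0, 0), (0, 0, 1)]:
        # Offline stage: reachable (roll, pitch, yaw) samples for this voxel,
        # faked here as a noisy cluster whose mean depends on the voxel height.
        center = np.array([0.0, 0.0, 0.5 * voxel[2]])
        reachable = center + 0.1 * rng.standard_normal((200, 3))
        table[voxel] = GaussianMixture(n_components=2, random_state=0).fit(reachable)

    # Online stage: draw candidate orientations for one voxel and rank them by
    # reachability (log-likelihood under that voxel's GMM).
    gmm = table[(0, 0, 1)]
    candidates, _ = gmm.sample(5)
    best = candidates[np.argmax(gmm.score_samples(candidates))]
    print(best.shape)  # (3,)
    ```

    Sampling from the voxel's GMM biases candidates toward high-reachability orientations, and `score_samples` doubles as the cheap reachability check mentioned above.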

  1. MATHEMATICAL MODEL OF FORECASTING FOR OUTCOMES IN VICTIMS OF METHANE-COAL MIXTURE EXPLOSION

    Directory of Open Access Journals (Sweden)

    E. Y. Fistal

    2016-01-01

    Full Text Available BACKGROUND. The severity of the victims' state in the early period after combined trauma (with a prevalent thermal injury) is associated with the development of numerous changes in all organs and systems, which makes proper diagnosis of complications and estimation of the probability of a lethal outcome extremely difficult. MATERIAL AND METHODS. The article presents a mathematical model for predicting lethal outcomes in victims of methane-coal mixture explosions, based on the case histories of 220 miners treated at the Donetsk Burn Center in 1994–2012. RESULTS. The probability of a lethal outcome in victims of methane-coal mixture explosion was statistically significantly affected by the area of deep burns (p<0.001) and severe traumatic brain injury (p<0.001). The tactics of surgical treatment of burn wounds in the early hours after injury was also statistically significant (p=0.003); it involves primary debridement of burn wounds during the period of burn shock with simultaneous closure of the affected surfaces with a temporary biological covering. CONCLUSION. These neural network models are easy to use in practice and may be created for the most common pathological conditions encountered in clinical practice.

  2. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g., genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry belonging to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry belonging to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language, as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.

  3. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional, and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing, and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling.
The results of this research project

  4. Introducing ZBrush 4

    CERN Document Server

    Keller, Eric

    2011-01-01

    Introducing ZBrush 4 launches readers head-on into fulfilling their artistic potential for sculpting realistic creature, cartoon, and hard-surface models in ZBrush. ZBrush's innovative technology and interface can be intimidating both to digital-art beginners and to veterans who are used to a more conventional modeling environment. This book dispels myths about the difficulty of ZBrush with a thorough tour and exploration of the program's interface. Engaging projects also allow the reader to become comfortable with digital sculpting in a relaxed and fun atmosphere.

  5. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

    Science.gov (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad

    2015-02-01

    Many biological systems, such as neurons or the heart, can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model of the distribution of the real system's attractor in state space. This cost function is defined using the likelihood score of a Gaussian mixture model (GMM) fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined as the computed likelihood score of the model time series. We have applied the proposed method to parameter estimation for two important biological systems that show chaotic behavior, a neuron and a cardiac pacemaker. Simulated experiments verify the usefulness of the proposed approach under clean and noisy conditions. The results show the adequacy of the proposed cost function.
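    The cost-function idea can be sketched on a toy chaotic system (the logistic map, not the paper's neuron or pacemaker models): fit a GMM to the delay-embedded "observed" attractor, then score candidate parameters by the likelihood the GMM assigns to the series those parameters generate.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def logistic_series(r, n=2000, x0=0.3):
        x = np.empty(n)
        x[0] = x0
        for i in range(n - 1):
            x[i + 1] = r * x[i] * (1 - x[i])
        return x

    def embed(x):
        return np.column_stack([x[:-1], x[1:]])   # 2-D delay embedding

    observed = logistic_series(3.9)               # "real" system, r = 3.9
    gmm = GaussianMixture(n_components=8, random_state=0).fit(embed(observed))

    def cost(r_candidate):
        # Negative average log-likelihood of the candidate's attractor under
        # the GMM learned from the observed attractor.
        return -gmm.score(embed(logistic_series(r_candidate)))

    print(cost(3.9) < cost(3.7))  # the true parameter scores better → True
    ```

    Because the cost compares attractor geometry rather than matching trajectories point by point, it is insensitive to the initial-condition divergence that defeats direct error-based fitting of chaotic systems.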

  6. Estimating the Prevalence of Atrial Fibrillation from A Three-Class Mixture Model for Repeated Diagnoses

    Science.gov (United States)

    Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H.

    2016-01-01

    Atrial fibrillation (AF) is an abnormal heart rhythm characterized by a rapid and irregular heartbeat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for diagnosis of AF. Since AF often occurs as recurrent episodes of varying frequency and duration, and only episodes present at the time of the ECG can be detected, AF is often underdiagnosed when a limited number of repeated ECGs is used. In studies evaluating the efficacy of AF ablation surgery, each patient undergoes multiple ECGs and the AF status at the time of each ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient's probability of having no AF, paroxysmal AF, or permanent AF is modeled with auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the EM algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters, and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. PMID:27983754
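    A stripped-down version of the three-class idea (no covariates or nested logistic regression, and invented parameters) can be fit with a few lines of EM: each patient's count of AF-positive ECGs is 0 for the "never" class, Binomial(n, p) for the paroxysmal class, and n for the permanent class.

    ```python
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(6)
    n_ecg = 6
    true_mix, true_p = np.array([0.5, 0.3, 0.2]), 0.4
    classes = rng.choice(3, size=1000, p=true_mix)
    k = np.where(classes == 0, 0,
                 np.where(classes == 2, n_ecg, rng.binomial(n_ecg, true_p, 1000)))

    mix, p = np.ones(3) / 3, 0.5
    for _ in range(200):
        # E-step: responsibilities of the three classes for each observed count k.
        lik = np.column_stack([(k == 0).astype(float),        # never: k = 0 a.s.
                               binom.pmf(k, n_ecg, p),        # paroxysmal
                               (k == n_ecg).astype(float)])   # permanent: k = n a.s.
        lik *= mix
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions and the paroxysmal positivity rate.
        mix = resp.mean(axis=0)
        p = (resp[:, 1] * k).sum() / (resp[:, 1] * n_ecg).sum()

    print(np.round(mix, 2), round(p, 2))   # near the generating (0.5, 0.3, 0.2), 0.4
    ```

    The model is identifiable because a paroxysmal patient only rarely shows all-negative or all-positive ECGs, so the EM can apportion the boundary counts probabilistically rather than misclassifying them outright.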

  7. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.

  8. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. Relatively little published work deals with these features of longitudinal data simultaneously. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and values from a skew-normal distribution for an outcome with possible left censoring and skewness, together with covariates measured with substantial error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
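    The left-censoring ingredient can be sketched with a plain normal Tobit likelihood (not the paper's skew-normal mixture or Bayesian machinery): observations below the detection limit contribute the CDF at the limit, while detected observations contribute the density.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    LIMIT = 1.0
    latent = rng.normal(2.0, 1.5, 400)           # true outcomes
    observed = np.maximum(latent, LIMIT)         # left-censored at the limit
    censored = latent < LIMIT                    # censoring indicator

    def neg_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)                # log-parameterized to stay positive
        ll = norm.logcdf(LIMIT, mu, sigma) * censored.sum()     # censored part
        ll += norm.logpdf(observed[~censored], mu, sigma).sum() # detected part
        return -ll

    fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(round(mu_hat, 2), round(sigma_hat, 2))  # close to the true (2.0, 1.5)
    ```

    Naively averaging the observed values (with censored values set to the limit) would overestimate the mean; the Tobit likelihood recovers the latent parameters despite roughly a quarter of the sample being censored here.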

  9. Calculation of radiative opacity of plasma mixtures using a relativistic screened hydrogenic model

    International Nuclear Information System (INIS)

    Mendoza, M.A.; Rubiano, J.G.; Gil, J.M.; Rodríguez, R.; Florido, R.; Espinosa, G.; Martel, P.; Mínguez, E.

    2014-01-01

    We present the code ATMED, which is based on an average atom model and designed for fast computation of the population distribution and radiative properties of hot, dense single- and multicomponent plasmas under LTE conditions. A relativistic screened hydrogenic model (RSHM), built on a new set of universal constants that account for j-splitting, is used to calculate the required atomic data. The opacity model includes radiative bound–bound, bound–free, free–free, and scattering processes. The bound–bound line-shape function has contributions from natural, Doppler, and electron-impact broadening. An additional dielectronic broadening that accounts for fluctuations in the average level populations has been included, which substantially improves the Rosseland mean opacity results. To illustrate the main features of the code and its capabilities, calculations of several fundamental quantities of one-component plasmas and mixtures are presented and compared with previously published data. The results compare satisfactorily with those predicted by more elaborate codes. - Highlights: • A new opacity code, ATMED, based on the average atom approximation is presented. • Atomic data are computed by means of a relativistic screened hydrogenic model. • An effective bound-level degeneracy is included to account for pressure ionization. • A new dielectronic line broadening is included to improve the mean opacities. • ATMED can handle single-element and multicomponent plasmas

  10. An initiative towards an energy and environment scheme for Iran: Introducing RAISE (Richest Alternatives for Implementation to Supply Energy) model

    International Nuclear Information System (INIS)

    Eshraghi, Hadi; Ahadi, Mohammad Sadegh

    2016-01-01

    Decision making on Iran's energy- and environment-related issues has always involved complexities. Addressing these complexities and the need to deal with them, this paper offers the country a tool by introducing Richest Alternatives for Implementation to Supply Energy (RAISE), a mixed-integer linear programming model developed in the GNU MathProg mathematical programming language. The paper elaborates the modeling approach and formulations on which RAISE is built, and verifies its structure by running the widely known sample case “UTOPIA” and comparing the results with other models, including OSeMOSYS and Temoa. The paper then applies RAISE to the Iranian energy sector to elicit optimal policy without and with a CO_2 emission cap. The results suggest promoting energy efficiency through investment in combined-cycle power plants as the key to optimal policy in the power generation sector. Regarding the oil refining sector, investment in condensate refineries and in advanced refineries equipped with Residual Fluid Catalytic Cracking (RFCC) units is suggested. The results also undermine the prevailing supposition that climate change mitigation deteriorates the economic efficiency of the energy system, suggesting instead a strong synergy between the two. When a CO_2 cap is imposed that maintains CO_2 emissions from electricity production at 2012 levels, a shift to renewable energies occurs. - Highlights: • Combined cycle power plants are the best option to meet base load requirements. • There is synergy between climate change mitigation and economic affordability. • The power sector reacts to an emission cap by moving towards renewable energies. • Instead of being exported, condensates should be refined by condensate refineries. • Iran's refining sector should be advanced by shifting to RFCC-equipped refineries.

  11. Developing religiously-tailored health messages for behavioral change: Introducing the reframe, reprioritize, and reform ("3R") model.

    Science.gov (United States)

    Padela, Aasim I; Malik, Sana; Vu, Milkie; Quinn, Michael; Peek, Monica

    2018-05-01

    As community health interventions advance from being faith-placed to authentically faith-based, greater discussion is needed about the theory, practice, and ethics of delivering health messages embedded within a religious worldview. While there is much potential to leverage religion to promote health behaviors and improve health outcomes, there is also a risk of co-opting religious teachings for strictly biomedical ends. To describe the development, implementation, and ethical dimensions of a conceptual model for religiously-tailoring health messages. We used data from 6 focus groups and 19 interviews with women aged 40 and older sampled from diverse Muslim community organizations to map out how religious beliefs and values impact mammography-related behavioral, normative and control beliefs. These beliefs were further grouped into those that enhance mammography intention (facilitators) and those that impede intention (barriers). In concert with a multi-disciplinary advisory board, and by drawing upon leading theories of health behavior change, we developed the "3R" model for crafting religiously-tailored health messages. The 3R model addresses barrier beliefs, which are beliefs that negatively impact adopting a health behavior, by (i) reframing the belief within a relevant religious worldview, (ii) reprioritizing the belief by introducing another religious belief that has greater resonance with participants, and (iii) reforming the belief by uncovering logical flaws and/or theological misinterpretations. These approaches were used to create messages for a peer-led, mosque-based, educational intervention designed to improve mammography intention among Muslim women. There are benefits and potential ethical challenges to using religiously tailored messages to promote health behaviors. Our theoretically driven 3R model aids interventionists in crafting messages that address beliefs that hinder healthy behaviors. It is particularly useful in the context of faith

  12. Mixture Item Response Theory-MIMIC Model: Simultaneous Estimation of Differential Item Functioning for Manifest Groups and Latent Classes

    Science.gov (United States)

    Bilir, Mustafa Kuzey

    2009-01-01

    This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…

  13. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can displace errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to integrate them adaptively based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are identified through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.
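    The gating idea behind a mixture of experts can be sketched on synthetic data (an invented rainfall-runoff threshold, not the MOPEX catchments or the HBV model): two regime-specific experts blended by a soft gate on an indicator variable outperform one global model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    rain = rng.uniform(0, 100, 500)                  # indicator variable (mm)
    # Runoff responds differently below/above a 50 mm threshold (lines meet at 5).
    runoff = np.where(rain < 50, 0.1 * rain, 0.8 * rain - 35) + rng.normal(0, 1, 500)

    def fit_line(x, y):
        a, b = np.polyfit(x, y, 1)                   # least-squares line
        return lambda z: a * z + b

    dry = rain < 50
    expert_dry = fit_line(rain[dry], runoff[dry])    # "dry regime" expert
    expert_wet = fit_line(rain[~dry], runoff[~dry])  # "wet regime" expert
    single = fit_line(rain, runoff)                  # one global model

    gate = 1.0 / (1.0 + np.exp(-(rain - 50.0)))      # soft responsibility for "wet"
    hme_pred = (1.0 - gate) * expert_dry(rain) + gate * expert_wet(rain)

    def rmse(pred):
        return float(np.sqrt(np.mean((pred - runoff) ** 2)))

    print(rmse(hme_pred) < rmse(single(rain)))       # experts beat the single model
    ```

    In the full HME framework the gate itself is learned and the hierarchy can nest gates, but the mechanism is the same: the indicator variable softly routes each input to the expert whose structure matches the active regime.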

  14. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    Science.gov (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often

  15. Assessing the effect, on animal model, of mixture of food additives, on the water balance.

    Science.gov (United States)

    Friedrich, Mariola; Kuchlewska, Magdalena

    2013-01-01

    The purpose of this study was to determine, on the animal model, the effect of modification of diet composition and administration of selected food additives on water balance in the body. The study was conducted with 48 male and 48 female Wistar rats (analyzed separately for each sex) divided into four groups. For drinking, the animals from groups I and III received water, whereas the animals from groups II and IV were administered 5 ml of a solution of selected food additives (potassium nitrate - E 252, sodium nitrite - E 250, benzoic acid - E 210, sorbic acid - E 200, and monosodium glutamate - E 621). Doses of the administered food additives were computed taking into account the average intake by humans, expressed per body mass unit. After drinking the solution, the animals were provided water. The mixture of selected food additives applied in the experiment was found to facilitate water retention in the body of both male and female rats, and the differences observed between the volume of ingested fluids and the volume of excreted urine were statistically significant in the animals fed the basal diet. The type of feed mixture provided to the animals affected the site of water retention: in the animals receiving the basal diet, analyses demonstrated a significant increase in water content in the liver tissue, whereas in the animals fed the modified diet water accumulated in the vascular bed. Given this retention of water in the vascular bed, the effects of food additive intake may be more adverse in females.

  16. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analyses in medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm maximizing a penalized likelihood. For inferential purposes, we apply the Louis formula to obtain point-wise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.
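The population split behind a two-component mixture cure model, a cured fraction plus a susceptible subpopulation, shows up as a plateau in the Kaplan-Meier survival curve. A crude nonparametric illustration of that cure-fraction idea (not the authors' penalized EM estimator; the 30% cure rate and exponential event times are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
cured = rng.random(n) < 0.3          # latent cure indicator (30% cured)
t_event = rng.exponential(1.0, n)    # event times for susceptible subjects
follow = 5.0                         # end of follow-up (administrative censoring)
time = np.where(cured, follow, np.minimum(t_event, follow))
event = (~cured) & (t_event <= follow)

def km_curve(time, event):
    """Kaplan-Meier survival estimate (events processed one at a time)."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    at_risk, s, surv = len(t), 1.0, []
    for i in range(len(t)):
        if e[i]:
            s *= (at_risk - 1) / at_risk
        surv.append(s)
        at_risk -= 1
    return t, np.array(surv)

t, s = km_curve(time, event)
cure_frac = s[-1]   # plateau of the survival curve
```

`s[-1]` approaches P(cured) plus the small probability that a susceptible subject survives the follow-up window, which is why long, stable follow-up matters for cure models.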

  17. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    Science.gov (United States)

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-05-22

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. The cubic models were best at describing the cooking loss, color, and TPA parameters, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (54 and <62), and greater hardness, cohesiveness and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could meet the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Augmenting Scheffe Linear Mixture Models With Squared and/or Crossproduct Terms

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Szychowski, Jeffrey M.; Loeppky, Jason L.

    2001-01-01

    A glass composition variation study (CVS) for high-level waste (HLW) stored at the Idaho National Engineering and Environmental Laboratory (INEEL) is being statistically designed and performed in phases over several years. The purpose of the CVS is to investigate and model how HLW-glass properties depend on glass composition within a glass composition region compatible with the expected range of INEEL HLW. The resulting glass property-composition models will be used to develop desirable glass formulations and for other purposes. Phases 1 and 2 of the CVS have been completed so far, and are briefly described. The main focus of this paper is the CVS Phase 3 experimental design (test matrix). The Phase 3 experimental design was chosen to augment the Phase 1 and 2 data with additional data points, as well as to account for additional glass components of interest not studied in Phases 1 and/or 2. The paper describes how these Phase 3 experimental design augmentation challenges were addressed using the previous data, preliminary property-composition models, and statistical mixture experiment and optimal experimental design methods and software. The resulting Phase 3 experimental design of 30 simulated HLW glasses is presented and discussed.
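A minimal sketch of the model family in the title, a Scheffé linear mixture model augmented with squared and/or crossproduct terms, fit by ordinary least squares (the three-component design and coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random 3-component mixtures (each row sums to 1), e.g. glass component fractions
X = rng.dirichlet(np.ones(3), size=60)

def scheffe_matrix(X, squared=False, crossprod=False):
    """Scheffé linear blending terms x_i (no intercept), optionally augmented
    with squared terms x_i^2 and/or crossproducts x_i*x_j.  Note: on the
    simplex x1+x2+x3 = 1, including *all* squared and crossproduct terms at
    once is collinear with the linear terms, so augment with subsets."""
    cols = [X]
    if squared:
        cols.append(X ** 2)
    if crossprod:
        q = X.shape[1]
        cols.append(np.column_stack([X[:, i] * X[:, j]
                                     for i in range(q) for j in range(i + 1, q)]))
    return np.hstack(cols)

# A known blending surface, then recover its coefficients by least squares
beta = np.array([2.0, 1.0, 3.0, 0.5, -0.5, 1.5])
A = scheffe_matrix(X, squared=True)
y = A @ beta
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Choosing which augmentation terms are estimable, given the simplex constraint and the candidate glass compositions, is exactly the kind of question the optimal-design software mentioned above resolves.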

  19. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.

  20. A Dedicated Mixture Model for Clustering Smart Meter Data: Identification and Analysis of Electricity Consumption Behaviors

    Directory of Open Access Journals (Sweden)

    Fateh Nassim Melzi

    2017-09-01

    Full Text Available The large amount of data collected by smart meters is a valuable resource that can be used to better understand consumer behavior and optimize electricity consumption in cities. This paper presents an unsupervised classification approach for extracting typical consumption patterns from data generated by smart electric meters. The proposed approach is based on a constrained Gaussian mixture model whose parameters vary according to the day type (weekday, Saturday or Sunday). The proposed methodology is applied to a real dataset of Irish households collected by smart meters over one year. For each cluster, the model provides three consumption profiles that depend on the day type. In the first instance, the model is applied to the electricity consumption of users during one month to extract groups of consumers who exhibit similar consumption behaviors. The clustering results are then crossed with contextual variables available for the households to show the close links between electricity consumption and household socio-economic characteristics. In the second instance, the evolution of consumer behavior from one month to another is assessed through variations of cluster sizes over time. The results show that consumer behavior evolves over time depending on contextual variables such as temperature fluctuations and calendar events.
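The clustering step can be sketched with a much simpler stand-in, plain k-means on synthetic 24-hour load profiles, rather than the paper's day-type-constrained Gaussian mixture (profile shapes, peak hours and group sizes below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)

def profiles(peak_hour, n):
    """Synthetic daily load curves: baseline plus a Gaussian consumption peak."""
    base = 0.5 + 2.0 * np.exp(-0.5 * ((hours - peak_hour) / 2.0) ** 2)
    return base + rng.normal(0.0, 0.1, size=(n, 24))

# A morning-peak group and an evening-peak group of households
X = np.vstack([profiles(8, 40), profiles(19, 40)])

def kmeans(X, k, n_iter=20):
    """Plain k-means (a simpler stand-in for the constrained GMM)."""
    centers = X[[0, 40]].copy()   # one seed profile from each group, for the sketch
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, 2)
```

The recovered centroids are the "typical consumption patterns"; a GMM additionally yields soft memberships and per-cluster variances, which is what the constrained model in the paper exploits per day type.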

  1. Experimental and modeling study on effects of N2 and CO2 on ignition characteristics of methane/air mixture

    Directory of Open Access Journals (Sweden)

    Wen Zeng

    2015-03-01

    Full Text Available The ignition delay times of methane/air mixture diluted by N2 and CO2 were experimentally measured in a chemical shock tube. The experiments were performed over the temperature range of 1300–2100 K, pressure range of 0.1–1.0 MPa, equivalence ratio range of 0.5–2.0 and for the dilution coefficients of 0%, 20% and 50%. The results suggest that a linear relationship exists between the reciprocal of temperature and the logarithm of the ignition delay times. Meanwhile, the measured ignition delay times of the methane/air mixture decrease with increasing ignition temperature and pressure. Furthermore, an increase in the dilution coefficient of N2 or CO2 results in longer ignition delays, and the inhibition effect of CO2 on methane/air mixture ignition is stronger than that of N2. Simulated ignition delays of the methane/air mixture using three kinetic models were compared to the experimental data. The results show that the GRI_3.0 mechanism gives the best prediction of ignition delays of the methane/air mixture, and it was selected to identify the effects of N2 and CO2 on ignition delays and the key elementary reactions in the ignition chemistry of the methane/air mixture. Comparisons of the calculated ignition delays with the experimental data of methane/air mixture diluted by N2 and CO2 show excellent agreement, and the sensitivity coefficients of the chain branching reactions that promote mixture ignition decrease with increasing dilution coefficient of N2 or CO2.
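The reported linear relationship between the logarithm of ignition delay and the reciprocal of temperature is an Arrhenius-type correlation; a straight-line fit recovers the apparent activation energy (the Ea and A values below are assumed for illustration, not taken from the study):

```python
import numpy as np

# Synthetic ignition delays following tau = A * exp(Ea / (R * T))
R = 8.314        # gas constant, J/(mol K)
Ea = 180e3       # assumed apparent activation energy, J/mol
A = 2.0e-10      # assumed pre-exponential factor, s
T = np.linspace(1300.0, 2100.0, 9)   # temperature range used in the study, K
tau = A * np.exp(Ea / (R * T))       # ignition delay, s (decreases with T)

# ln(tau) vs 1/T is linear; the slope gives Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea_fit = slope * R
```

The same regression applied at each pressure and dilution coefficient is how global Arrhenius-type correlations of shock-tube data are usually summarized.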

  2. Soot modeling of counterflow diffusion flames of ethylene-based binary mixture fuels

    KAUST Repository

    Wang, Yu

    2015-03-01

    A soot model was developed based on the recently proposed PAH growth mechanism for C1-C4 gaseous fuels (KAUST PAH Mechanism 2, KM2) that included molecular growth up to coronene (A7) to simulate soot formation in counterflow diffusion flames of ethylene and its binary mixtures with methane, ethane and propane based on the method of moments. The soot model has 36 soot nucleation reactions from 8 PAH molecules including pyrene and larger PAHs. Soot surface growth reactions were based on a modified hydrogen-abstraction-acetylene-addition (HACA) mechanism in which CH3, C3H3 and C2H radicals were included in the hydrogen abstraction reactions in addition to H atoms. PAH condensation on soot particles was also considered. The experimentally measured profiles of soot volume fraction, number density, and particle size were well captured by the model for the baseline case of ethylene along with the cases involving mixtures of fuels. The simulation results, which were in qualitative agreement with the experimental data in the effects of binary fuel mixing on the sooting structures of the measured flames, showed in particular that 5% addition of propane (ethane) led to an increase in the soot volume fraction of the ethylene flame by 32% (6%), despite the fact that propane and ethane are less sooting fuels than is ethylene, which is in reasonable agreement with experiments of 37% (14%). The model revealed that with 5% addition of methane, there was an increase of 6% in the soot volume fraction. The average soot particle sizes were only minimally influenced while the soot number densities were increased by the fuel mixing. Further analysis of the numerical data indicated that the chemical cross-linking effect between ethylene and the dopant fuels resulted in an increase in PAH formation, which led to higher soot nucleation rates and therefore higher soot number densities. 
On the other hand, the rates of soot surface growth per unit surface area through the HACA mechanism were

  3. DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.

    Science.gov (United States)

    Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei

    2018-01-01

    Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author
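The model family can be illustrated with a plain multinomial mixture fit by EM, a simplified stand-in for DIMM-SC that drops the Dirichlet prior and uses point estimates (the UMI counts below are synthetic, from two invented cell types):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic UMI count matrix: 200 cells x 50 genes drawn from two cell types
p1, p2 = rng.dirichlet(np.ones(50)), rng.dirichlet(np.ones(50))
counts = np.vstack([rng.multinomial(1000, p1, size=100),
                    rng.multinomial(1000, p2, size=100)])

def multinomial_mixture_em(X, k=2, n_iter=50):
    """EM for a multinomial mixture over UMI count rows -- a simplified
    stand-in for DIMM-SC (no Dirichlet prior, point estimates only)."""
    # deterministic init for the sketch: gene profiles of the first/last cell
    theta = X[[0, -1]].astype(float) + 1.0
    theta /= theta.sum(axis=1, keepdims=True)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        log_r = np.log(w) + X @ np.log(theta).T       # up to a per-row constant
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)             # responsibilities
        w = r.mean(axis=0)
        theta = r.T @ X + 1e-9                        # per-cluster gene profiles
        theta /= theta.sum(axis=1, keepdims=True)
    return r.argmax(axis=1)

labels = multinomial_mixture_em(counts)
```

The responsibilities `r` are what gives a model-based method its clustering-uncertainty quantification; DIMM-SC additionally regularizes `theta` through the Dirichlet prior.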

  4. Introducing a model incorporating early integration of specialist palliative care: A qualitative research study of staff's perspectives.

    Science.gov (United States)

    Michael, Natasha; O'Callaghan, Clare; Brooker, Joanne E; Walker, Helen; Hiscock, Richard; Phillips, David

    2016-03-01

    Palliative care has evolved to encompass early integration, with evaluation of patient and organisational outcomes. However, little is known of staff's experiences and adaptations when change occurs within palliative care services. To explore staff experiences of a transition from a service predominantly focused on end-of-life care to a specialist service encompassing early integration. Qualitative research incorporating interviews, focus groups and anonymous semi-structured questionnaires. Data were analysed using a comparative approach. Service activity data were also aggregated. A total of 32 medical, nursing, allied health and administrative staff serving a 22-bed palliative care unit and community palliative service, within a large health service. Patients cared for within the new model were significantly more likely to be discharged home (7.9% increase, p = 0.003) and less likely to die in the inpatient unit (10.4% decrease, p management was considered valuable, nurses particularly found additional skill expectations challenging, and perceived patients' acute care needs as detracting from emotional and end-of-life care demands. Staff views varied on whether they regarded the new model's faster-paced work-life as consistent with fundamental palliative care principles. Less certainty about care goals, needing to prioritise care tasks, reduced shared support rituals and other losses could intensify stress, leading staff to develop personalised coping strategies. Services introducing and researching innovative models of palliative care need to ensure adequate preparation, maintenance of holistic care principles in faster work-paced contexts and assist staff dealing with demands associated with caring for patients at different stages of illness trajectories. © The Author(s) 2015.

  5. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    Science.gov (United States)

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application of the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
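The underlying miscible-mixture model sets the mixture flash point as the temperature at which the components' weighted vapor-pressure contributions sum to one. A minimal ideal-solution sketch with invented Antoine constants and pure-component flash points (salt effects would enter through non-unit activity coefficients, which this sketch does not model):

```python
def psat(T, A, B, C):
    """Antoine vapor pressure: log10(P) = A - B / (C + T), T in deg C."""
    return 10.0 ** (A - B / (C + T))

def mixture_flash_point(x, antoine, tfp, gamma=None, lo=-50.0, hi=150.0):
    """Liaw-type flash-point model: find the T solving
    sum_i x_i * gamma_i * Psat_i(T) / Psat_i(Tfp_i) = 1, by bisection.
    gamma=None means an ideal solution (all activity coefficients = 1)."""
    gamma = gamma or [1.0] * len(x)
    def f(T):
        return sum(xi * gi * psat(T, *ab) / psat(ti, *ab)
                   for xi, gi, ab, ti in zip(x, gamma, antoine, tfp)) - 1.0
    for _ in range(100):            # f increases with T, so bisection converges
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative (invented) Antoine constants and pure flash points, deg C
antoine = [(7.2, 1600.0, 230.0), (6.9, 1400.0, 220.0)]
tfp = [13.0, 40.0]
```

As a sanity check, a pure component (x = [1, 0]) recovers its own flash point, and an ideal binary mixture's flash point falls between the two pure values.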

  6. Introducing litter quality to the ecosystem model LPJ-GUESS: Effects on short- and long-term soil carbon dynamics

    Science.gov (United States)

    Portner, Hanspeter; Wolf, Annett; Rühr, Nadine; Bugmann, Harald

    2010-05-01

    Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually gained more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors like soil temperature, soil moisture and litter quality. We have introduced dependence on litter substrate quality to heterotrophic soil respiration in the ecosystem model LPJ-GUESS [Smith et al.(2001)]. We were interested in differences in model projections before and after the inclusion of the dependency, with respect to both short- and long-term soil carbon dynamics. The standard implementation of heterotrophic soil respiration in LPJ-GUESS is a simple carbon three-pool model whose decay rates are dependent on soil temperature and soil moisture. We have added dependence on litter quality by coupling LPJ-GUESS to the soil carbon model Yasso07 [Tuomi et al.(2008)]. The Yasso07 model is based on an extensive number of measurements of litter decomposition of forest soils. Apart from the dependence on soil temperature and soil moisture, the Yasso07 model uses carbon soil pools representing different substrate qualities: acid hydrolyzable, water soluble, ethanol soluble, lignin compounds and humus. Additionally Yasso07 differentiates between woody and non-woody litter. In contrast to the reference implementation of LPJ-GUESS, in the new model implementation the litter is now divided according to its specific quality and added to the corresponding soil carbon pool. The litter quality thereby differs between litter source (leaves, roots, stems) and plant functional type (broadleaved, needleleaved, grass). The two contrasting model implementations were compared and validated at one specific CarboEuropeIP site (Lägern, Switzerland) and on a broader scale all over Switzerland. Our focus lay on the soil respiration for the years 2006

  7. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

    Directory of Open Access Journals (Sweden)

    Schaller Melinda S

    2008-09-01

    Full Text Available Background: The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single-solid systems; few studies have extended the SCM approach to systems containing multiple solids. Results: Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources) and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed solid systems, using a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems were used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion: Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples.

  8. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Full Text Available Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and better describe vector distribution. Mosquito collections using the Human Landing Catches as well as collection of environmental and climatic data were carried out from January to December 2009 in 28 villages in Southern Benin. A NPMP regression model with "village" as random effect is used to test statistical correlations between malaria vectors density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed with an important proportion of zeros (75%). The NPMP model showed a good ability to predict the observed values and showed that: (i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; (ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment on all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
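The modeling idea, capturing overdispersed and zero-heavy vector counts with a mixture of Poisson components, can be sketched with a k-component EM (a parametric stand-in for the nonparametric mixture in the study; the counts below are simulated, not the Benin data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Overdispersed counts: most collection nights near zero, some high-density nights
counts = np.concatenate([rng.poisson(0.2, 750), rng.poisson(20.0, 250)])

def poisson_mixture_em(y, k=2, n_iter=200):
    """EM for a k-component Poisson mixture (parametric stand-in for the
    nonparametric mixture of Poissons used in the study)."""
    lam = np.quantile(y, np.linspace(0.25, 0.95, k)) + 0.1   # spread initial rates
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        log_r = np.log(w) + y[:, None] * np.log(lam) - lam   # up to a row constant
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)                    # responsibilities
        w = r.mean(axis=0)
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)   # weighted means
    return w, lam

w, lam = poisson_mixture_em(counts)
```

A single Poisson forces variance = mean; the mixture absorbs the excess variance and zeros, and the responsibilities play the role of the latent classes used to rank villages.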

  9. An evaluation of three-dimensional modeling of compaction cycles by analyzing the densification behavior of binary and ternary mixtures.

    Science.gov (United States)

    Picker, K M; Bikane, F

    2001-08-01

    The aim of the study is to use the 3D modeling technique of compaction cycles for the analysis of binary and ternary mixtures. Three materials with very different deformation and densification characteristics [cellulose acetate (CAC), dicalcium phosphate dihydrate (EM) and theophylline monohydrate (TM)] were tableted at graded maximum relative densities (ρrel,max) on an eccentric tableting machine. Following that, graded binary mixtures of CAC and EM were compacted. Finally, the same ratios of CAC and EM were tableted in a ternary mixture with 20 vol% TM. All compaction cycles were analyzed using different data analysis methods: three-dimensional modeling, conventional determination of the slope of the Heckel function, determination of the elastic recovery during decompression, and calculations according to the pressure-time function. The results show that the 3D modeling technique is able to obtain the information in one step instead of three different approaches, which is an advantage for formulation development. The results also show that this model enables one to better distinguish the compaction properties of mixtures and the interaction of the components in the tablet than 2D models. Furthermore, the information from 3D modeling is more precise, since the slope K of the (in-die) Heckel plot includes elasticity, and the pressure-time function parameters β and γ include plastic deformation due to pressure. The influence of time and pressure on the displacement can now be differentiated.
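One of the conventional analyses mentioned, the Heckel function, is a straight-line fit of ln(1/(1-ρrel)) against compaction pressure; its slope K (and the derived mean yield pressure 1/K) summarizes densification (the material constants below are invented for illustration):

```python
import numpy as np

# Synthetic in-die compaction data following the Heckel equation
K_true, A_true = 0.004, 0.4                      # assumed constants (1/MPa, -)
P = np.linspace(20.0, 300.0, 15)                 # compaction pressure, MPa
rho_rel = 1.0 - np.exp(-(K_true * P + A_true))   # relative density

# Heckel plot: ln(1/(1 - rho_rel)) vs P is linear with slope K, intercept A
K, A = np.polyfit(P, np.log(1.0 / (1.0 - rho_rel)), 1)
Py = 1.0 / K   # mean yield pressure, a standard densification measure
```

The 3D technique discussed above generalizes this 2D fit by adding time as a third axis, which is how elastic and plastic contributions become separable.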

  10. A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography

    International Nuclear Information System (INIS)

    Aristophanous, Michalis; Penney, Bill C.; Martel, Mary K.; Pelizzari, Charles A.

    2007-01-01

    The increased interest in 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in radiation treatment planning in the past five years necessitated the independent and accurate segmentation of gross tumor volume (GTV) from FDG-PET scans. In some studies the radiation oncologist contours the GTV based on a computed tomography scan, while incorporating pertinent data from the PET images. Alternatively, a simple threshold, typically 40% of the maximum intensity, has been employed to differentiate tumor from normal tissue, while other researchers have developed algorithms to aid the PET based GTV definition. None of these methods, however, results in reliable PET tumor segmentation that can be used for more sophisticated treatment plans. For this reason, we developed a Gaussian mixture model (GMM) based segmentation technique on selected PET tumor regions from non-small cell lung cancer patients. The purpose of this study was to investigate the feasibility of using a GMM-based tumor volume definition in a robust, reliable and reproducible way. A GMM relies on the idea that any distribution, in our case a distribution of image intensities, can be expressed as a mixture of Gaussian densities representing different classes. According to our implementation, each class belongs to one of three regions in the image: the background (B), the uncertain (U) and the target (T), and from these regions we can obtain the tumor volume. User interaction in the implementation is required, but is limited to the initialization of the model parameters and the selection of an "analysis region" to which the modeling is restricted. The segmentation was developed on three and tested on another four clinical cases to ensure robustness against differences observed in the clinic. It also compared favorably with thresholding at 40% of the maximum intensity and a threshold determination function based on tumor to background image intensities proposed in a recent paper.
The parts of the
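The three-region idea (background B, uncertain U, target T) can be sketched as a three-component 1-D Gaussian-mixture EM on voxel intensities, taking the highest-mean class as the target; quantile-based initialization stands in for the user interaction described above (the intensity distributions are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic intensities in an "analysis region": background (B),
# uncertain boundary (U), and target (T) voxels
inten = np.concatenate([rng.normal(1.0, 0.3, 500),
                        rng.normal(3.0, 0.5, 150),
                        rng.normal(6.0, 0.6, 100)])

def gmm_segment(x, k=3, n_iter=300):
    """1-D Gaussian-mixture EM; each voxel is assigned to the class with the
    highest responsibility, and the highest-mean class is taken as target."""
    mu = np.quantile(x, [0.3, 0.8, 0.95])   # rough B/U/T starting means
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return r.argmax(axis=1), mu

labels, mu = gmm_segment(inten)
tumor_mask = labels == mu.argmax()   # voxels in the target class
```

Unlike a fixed 40%-of-maximum threshold, the decision boundary here adapts to the fitted class means and variances, which is the main appeal of the GMM approach.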

  11. Quantitative Structure-Relative Volatility Relationship Model for Extractive Distillation of Ethylbenzene/p-Xylene Mixtures: Application to Binary and Ternary Mixtures as Extractive Agents

    International Nuclear Information System (INIS)

    Kang, Young-Mook; Oh, Kyunghwan; You, Hwan; No, Kyoung Tai; Jeon, Yukwon; Shul, Yong-Gun; Hwang, Sung Bo; Shin, Hyun Kil; Kim, Min Sung; Kim, Namseok; Son, Hyoungjun; Chu, Young Hwan; Cho, Kwang-Hwi

    2016-01-01

    Ethylbenzene (EB) and p-xylene (PX) are important chemicals for the production of industrial materials; accordingly, their efficient separation is desired, even though the difference in their boiling points is very small. This paper describes the efforts toward the identification of high-performance extractive agents for EB and PX separation by distillation. Most high-performance extractive agents contain halogen atoms, which present health hazards and are corrosive to distillation plates. To avoid this disadvantage of extractive agents, we developed a quantitative structure-relative volatility relationship (QSRVR) model for designing safe extractive agents. We have previously developed and reported QSRVR models for single extractive agents. In this study, we introduce extended QSRVR models for binary and ternary extractive agents. The QSRVR models accurately predict the relative volatilities of binary and ternary extractive agents. The service to predict the relative volatility for binary and ternary extractive agents is freely available from the Internet at http://qsrvr.opengsi.org/.

  12. Quantitative Structure-Relative Volatility Relationship Model for Extractive Distillation of Ethylbenzene/p-Xylene Mixtures: Application to Binary and Ternary Mixtures as Extractive Agents

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Young-Mook; Oh, Kyunghwan; You, Hwan; No, Kyoung Tai [Bioinformatics and Molecular Design Research Center, Seoul (Korea, Republic of); Jeon, Yukwon; Shul, Yong-Gun; Hwang, Sung Bo; Shin, Hyun Kil; Kim, Min Sung; Kim, Namseok; Son, Hyoungjun [Yonsei University, Seoul (Korea, Republic of); Chu, Young Hwan [Sangji University, Wonju (Korea, Republic of); Cho, Kwang-Hwi [Soongsil University, Seoul (Korea, Republic of)

    2016-04-15

    Ethylbenzene (EB) and p-xylene (PX) are important chemicals for the production of industrial materials; accordingly, their efficient separation is desired, even though the difference in their boiling points is very small. This paper describes the efforts toward the identification of high-performance extractive agents for EB and PX separation by distillation. Most high-performance extractive agents contain halogen atoms, which present health hazards and are corrosive to distillation plates. To avoid this disadvantage of extractive agents, we developed a quantitative structure-relative volatility relationship (QSRVR) model for designing safe extractive agents. We have previously developed and reported QSRVR models for single extractive agents. In this study, we introduce extended QSRVR models for binary and ternary extractive agents. The QSRVR models accurately predict the relative volatilities of binary and ternary extractive agents. The service to predict the relative volatility for binary and ternary extractive agents is freely available from the Internet at http://qsrvr.opengsi.org/.

  13. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  14. Segmentation of Concealed Objects in Passive Millimeter-Wave Images Based on the Gaussian Mixture Model

    Science.gov (United States)

    Yu, Wangyang; Chen, Xiangguang; Wu, Lei

    2015-04-01

    Passive millimeter wave (PMMW) imaging has become one of the most effective means to detect objects concealed under clothing. Due to the limitations of the available hardware and the inherent physical properties of PMMW imaging systems, images often exhibit poor contrast and low signal-to-noise ratios. Thus, it is difficult to achieve ideal results using a general segmentation algorithm. In this paper, an advanced Gaussian Mixture Model (GMM) algorithm for the segmentation of concealed objects in PMMW images is presented. Our work builds on the fact that the GMM is a parametric statistical model often used to characterize the statistical behavior of images. Our approach is three-fold: First, we remove the noise from the image using both a notch reject filter and a total variation filter. Next, we use an adaptive parameter initialization GMM algorithm (APIGMM) to model the histogram of the images; the APIGMM provides an initial number of Gaussian components and starts with more appropriate parameters. A Bayesian decision rule is employed to separate the pixels of concealed objects from other areas. Finally, the confidence interval (CI) method, alongside local gradient information, is used to extract the concealed objects. The proposed hybrid segmentation approach detects the concealed objects more accurately, even compared to two other state-of-the-art segmentation methods.
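A minimal sketch of the core of this approach: fit a two-component GMM to one-dimensional pixel intensities and apply a Bayesian (maximum-posterior) decision to label object pixels. The synthetic intensity values, fixed component count, and lower-mean-is-object rule are illustrative assumptions, not the paper's APIGMM or its denoising steps.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for PMMW pixel intensities: a cold concealed object
# against a warmer background (values and counts are illustrative only).
object_px = rng.normal(60.0, 8.0, 500)
background_px = rng.normal(140.0, 15.0, 4500)
pixels = np.concatenate([object_px, background_px]).reshape(-1, 1)

# Fit a two-component GMM to the intensity distribution (the paper's APIGMM
# additionally chooses the number of components adaptively).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# Bayesian decision: assign each pixel to the component with the highest
# posterior responsibility.
labels = gmm.predict(pixels)

# Take the "object" component to be the one with the lower mean intensity.
object_comp = int(np.argmin(gmm.means_.ravel()))
mask = labels == object_comp
```

With well-separated intensity modes, the posterior decision boundary falls between the two component means and the object mask recovers almost all object pixels.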

  15. GGRaSP: A R-package for selecting representative genomes using Gaussian mixture models.

    Science.gov (United States)

    Clarke, Thomas H; Brinkac, Lauren M; Sutton, Granger; Fouts, Derrick E

    2018-04-14

    The vast number of available sequenced bacterial genomes occasionally exceeds the capabilities of comparative genomic methods or is dominated by a single outbreak strain, so a diverse and representative subset is required. Generation of the reduced subset currently requires a priori supervised clustering and sequence-only selection of medoid genomic sequences, independent of any additional genome metrics or strain attributes. The GGRaSP R-package described below generates a reduced subset of genomes that prioritizes maintaining genomes of interest to the user as well as minimizing the loss of genetic variation. The package also allows for unsupervised clustering by modeling the genomic relationships using a Gaussian mixture model to select an appropriate cluster threshold. We demonstrate the capabilities of GGRaSP by generating a reduced list of 315 genomes from a genomic dataset of 4600 Escherichia coli genomes, prioritizing selection by type strain and by genome completeness. GGRaSP is available at https://github.com/JCVenterInstitute/ggrasp/. tclarke@jcvi.org. Supplementary data are available at the GitHub site.

  16. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    Science.gov (United States)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
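The GMR step above can be sketched as follows: fit a joint GMM over (covariate, predictand) pairs, then form the conditional mean of the predictand given a covariate value by standard Gaussian conditioning within each component. The two-regime synthetic data, single covariate, and component count are assumptions for illustration; the study conditions on antecedent flows at three stations and couples GMR with an HMM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic "antecedent flow -> next-month flow" pairs with two regimes
# (dry and wet); values are illustrative, not the study's station data.
x = np.concatenate([rng.normal(2.0, 0.3, 300), rng.normal(6.0, 0.5, 300)])
y = 1.5 * x + rng.normal(0.0, 0.2, 600)
joint = np.column_stack([x, y])

# Fit a joint GMM over (covariate, predictand).
gmm = GaussianMixture(n_components=2, random_state=0).fit(joint)

def gmr_predict(x0):
    """Conditional mean E[y | x = x0] under the joint GMM (the GMR step)."""
    means, covs, pis = gmm.means_, gmm.covariances_, gmm.weights_
    # Mixing weights: each component's marginal likelihood of x0
    # (normalizing constants common to all components cancel).
    w = np.array([p * np.exp(-0.5 * (x0 - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(pis, means, covs)])
    w /= w.sum()
    # Per-component conditional mean of y given x0 (Gaussian conditioning).
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (x0 - m[0])
                     for m, c in zip(means, covs)])
    return float(w @ cond)
```

Because the full conditional is itself a mixture, the same quantities also yield a predictive distribution rather than only a point forecast.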

  17. Optical measurements for the gaseous phase speciation of HIx mixtures: experiments and modelling

    International Nuclear Information System (INIS)

    Denis Doizi; Vincent Dauvois; Vincent Delanne; Jean Luc Roujou; Bruno Larousse; Olivier Hercher; Christophe Moulin; Pierre Fauvet; P Carles; Jean Michel Hartmann

    2006-01-01

    To design and optimize the efficiency of the reactive distillation column of HI that we have proposed for the HI section of the I-S cycle, analytical optical 'online' techniques have been proposed to measure the partial and total pressures of the liquid-vapour equilibrium of the ternary HI/I₂/H₂O mixtures: - FTIR spectrometry for the measurement of hydrogen iodide and water vapours, - visible spectrometry for the measurement of iodine vapour. The use of these optical techniques has been validated in an experimental device around 130 °C and 2 bars. This device is composed of a glass cell equipped with two optical path lengths and placed in a thermo-regulated oven to allow the optical measurements of the concentrations of the three species in the vapour phase. Using an experimental design analysis, the infrared spectra of hydrogen iodide and water have been measured in a selected wavelength range versus temperature and for different HIx compositions. The spectra are then analyzed using a model especially developed for this objective. This model relies on the fitting of the experimental infrared data using a root mean square method and an appropriate spectroscopic database. The visible spectrum of iodine has also been measured. (authors)

  18. Modeling photopolarimetric characteristics of comet dust as a polydisperse mixture of polyshaped rough spheroids

    Science.gov (United States)

    Kolokolova, L.; Das, H.; Dubovik, O.; Lapyonok, T.

    2013-12-01

    It is widely recognized now that the main component of comet dust is aggregated particles that consist of submicron grains. It is also well known that cometary dust obeys a rather wide size distribution, with abundant particles whose size reaches dozens of microns. However, numerous attempts at computer simulation of light scattering by comet dust using aggregated particles have not succeeded in considering particles larger than a couple of microns due to limitations in the memory and speed of available computers. Attempts to substitute aggregates with polydisperse solid particles (spheres, spheroids, cylinders) could not consistently reproduce observed angular and spectral characteristics of comet brightness and polarization, even in such a general case as a polyshaped (i.e. containing particles of a variety of aspect ratios) mixture of spheroids (Kolokolova et al., In: Photopolarimetry in Remote Sensing, Kluwer Acad. Publ., 431, 2004). In this study we check how well cometary dust can be modeled using modeling tools for rough spheroids. For this purpose we use the software package described in Dubovik et al. (J. Geophys. Res., 111, D11208, doi:10.1029/2005JD006619, 2006), which allows for a substantial reduction of computer time in calculating scattering properties of spheroid mixtures by using pre-calculated kernels - quadrature coefficients employed in the numerical integration of spheroid optical properties over size and shape. The kernels were pre-calculated for spheroids of 25 axis ratios, ranging from 0.3 to 3, and 42 size bins within the size parameter range 0.01 - 625. This software package has recently been expanded with the possibility of simulating not only smooth but also rough spheroids, which is used in the present study. We consider refractive indexes of the materials typical for comet dust: silicate, carbon, organics, and their mixtures. We also consider porous particles, accounting for voids in the spheroids through an effective medium approach.

  19. Mixture modeling methods for the assessment of normal and abnormal personality, part I: cross-sectional models.

    Science.gov (United States)

    Hallquist, Michael N; Wright, Aidan G C

    2014-01-01

    Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.

  20. Fitting a mixture of von Mises distributions in order to model data on wind direction in Peninsular Malaysia

    International Nuclear Information System (INIS)

    Masseran, N.; Razali, A.M.; Ibrahim, K.; Latif, M.T.

    2013-01-01

    Highlights: • We suggest a simple way of modeling wind direction using a mixture of von Mises distributions. • We determine the most suitable probability model for the wind direction regime in Malaysia. • We provide circular density plots to show the most prominent wind directions. - Abstract: A statistical distribution for describing wind direction provides information about the wind regime at a particular location. In addition, this information complements knowledge of wind speed, which allows researchers to draw some conclusions about the energy potential of wind and aids the development of efficient wind energy generation. This study focuses on modeling the frequency distribution of wind direction, including some characteristics of wind regimes that cannot be represented by a unimodal distribution. To identify the most suitable model, a finite mixture of von Mises distributions was fitted to the average hourly wind direction data for nine wind stations located in Peninsular Malaysia. The data used were from the years 2000 to 2009. The suitability of each mixture distribution was judged based on the R² coefficient and the histogram plot with a density line. The results showed that the finite mixture of von Mises distributions with H components was the best distribution for describing the wind direction distributions in Malaysia. In addition, the circular density plots of the suitable model clearly showed the most prominent wind directions relative to the other directions.
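Fitting such a mixture is typically done by expectation-maximization over circular densities. The sketch below runs EM for a hypothetical two-component von Mises mixture on synthetic angles (not the Malaysian station data); the Fisher-style concentration approximation in the M-step is one common choice, and the number of components, seeds, and parameters are assumptions.

```python
import numpy as np
from scipy.stats import vonmises

def kappa_approx(R):
    """Fisher's piecewise approximation to the ML concentration
    for a given mean resultant length R."""
    if R < 0.53:
        return 2 * R + R**3 + 5 * R**5 / 6
    if R < 0.85:
        return -0.4 + 1.39 * R + 0.43 / (1 - R)
    return 1.0 / (R**3 - 4 * R**2 + 3 * R)

def fit_vonmises_mixture(theta, k=2, n_iter=100):
    """EM for a k-component von Mises mixture on angles theta (radians)."""
    mus = np.linspace(-np.pi, np.pi, k, endpoint=False)  # spread initial means
    kappas = np.ones(k)
    pis = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each angle.
        dens = np.stack([p * vonmises.pdf(theta, kap, loc=mu)
                         for p, kap, mu in zip(pis, kappas, mus)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted circular mean direction and concentration.
        for j in range(k):
            w = resp[j]
            C, S = (w * np.cos(theta)).sum(), (w * np.sin(theta)).sum()
            mus[j] = np.arctan2(S, C)
            kappas[j] = kappa_approx(np.hypot(C, S) / w.sum())
            pis[j] = w.mean()
    return mus, kappas, pis

# Synthetic angles with two prominent wind directions (radians).
theta = np.concatenate([vonmises.rvs(8, loc=0.5, size=400, random_state=3),
                        vonmises.rvs(8, loc=-2.0, size=600, random_state=4)])
mus, kappas, pis = fit_vonmises_mixture(theta)
```

The fitted mean directions recover the two simulated modes, and the mixing weights recover their relative frequencies.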

  1. A Bivariate Mixture Model for Natural Antibody Levels to Human Papillomavirus Types 16 and 18: Baseline Estimates for Monitoring the Herd Effects of Immunization.

    Directory of Open Access Journals (Sweden)

    Margaretha A Vink

    Full Text Available Post-vaccine monitoring programs for human papillomavirus (HPV) have been introduced in many countries, but HPV serology is still an underutilized tool, partly owing to the weak antibody response to HPV infection. Changes in antibody levels among non-vaccinated individuals could be employed to monitor herd effects of immunization against HPV vaccine types 16 and 18, but inference requires an appropriate statistical model. The authors developed a four-component bivariate mixture model for jointly estimating vaccine-type seroprevalence from correlated antibody responses against HPV16 and -18 infections. This model takes account of the correlation between HPV16 and -18 antibody concentrations within subjects, caused e.g. by heterogeneity in exposure level and immune response. The model was fitted to HPV16 and -18 antibody concentrations as measured by a multiplex immunoassay in a large serological survey (3,875 females) carried out in the Netherlands in 2006/2007, before the introduction of mass immunization. Parameters were estimated by Bayesian analysis. We used the deviance information criterion for model selection; performance of the preferred model was assessed through simulation. Our analysis uncovered elevated antibody concentrations in doubly as compared to singly seropositive individuals, and a strong clustering of HPV16 and -18 seropositivity, particularly around the age of sexual debut. The bivariate model resulted in a more reliable classification of singly and doubly seropositive individuals than achieved by a combination of two univariate models, and suggested a higher pre-vaccine HPV16 seroprevalence than previously estimated. The bivariate mixture model provides valuable baseline estimates of vaccine-type seroprevalence and may prove useful in seroepidemiologic assessment of the herd effects of HPV vaccination.

  2. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R² between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates, with a relative increase in R² between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression, with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites, with a relative increase in R² between 8% and 44% and reduced standard deviation.

  3. Determination and correlation thermodynamic models for solid–liquid equilibrium of the Nifedipine in pure and mixture organic solvents

    International Nuclear Information System (INIS)

    Wu, Gang; Hu, Yonghong; Gu, Pengfei; Yang, Wenge; Wang, Chunxiao; Ding, Zhiwen; Deng, Renlun; Li, Tao; Hong, Housheng

    2016-01-01

    Highlights: • The solubility increased with increasing temperature. • The data were fitted using the modified Apelblat equation in pure solvents. • The data were fitted using the CNIBS/R-K model in the binary solvent mixture. - Abstract: Knowledge of thermodynamic parameters for the corresponding solid-liquid equilibrium of nifedipine in different solvents is essential for a preliminary study of pharmaceutical engineering and industrial applications. In this paper, a gravimetric method was used to determine the solid-liquid equilibrium of nifedipine in methanol, ethanol, 1-butanol, acetone, acetonitrile, ethyl acetate and tetrahydrofuran pure solvents, as well as in (tetrahydrofuran + acetonitrile) mixture solvents, at temperatures from 278.15 K to 328.15 K under 0.1 MPa. Over the temperature range investigated, the solubility of nifedipine in the solvents increased with increasing temperature. The solubility of nifedipine in tetrahydrofuran is superior to that in the other selected pure solvents. The modified Apelblat model, the Buchowski-Ksiazaczak λh model, and the ideal model were adopted to describe and predict the trend of the solubility. Computational results showed that the modified Apelblat model was the more suitable, with higher accuracy. The solubility values were fitted using the modified Apelblat model, a variant of the combined nearly ideal binary solvent/Redlich-Kister (CNIBS/R-K) model and the Jouyban-Acree model in the (tetrahydrofuran + acetonitrile) binary solvent mixture. Computational results showed that the CNIBS/R-K model had more advantages than the other models.
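The modified Apelblat equation has the standard form ln x = A + B/T + C ln T, with x the mole-fraction solubility and T the absolute temperature. The sketch below fits these three parameters by least squares on hypothetical solubility data over the reported temperature range; the numbers are illustrative only, not the paper's nifedipine measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Apelblat equation: ln(x) = A + B/T + C*ln(T).
def apelblat(T, A, B, C):
    return A + B / T + C * np.log(T)

# Hypothetical solubility data for illustration (not the paper's values),
# over the reported 278.15-328.15 K range at 0.1 MPa.
T = np.linspace(278.15, 328.15, 11)
ln_x = apelblat(T, -38.0, 600.0, 6.0) \
       + np.random.default_rng(5).normal(0, 0.01, T.size)

# The model is linear in (A, B, C), so least squares converges reliably.
params, _ = curve_fit(apelblat, T, ln_x)
fitted = apelblat(T, *params)
```

The same fitting pattern applies to the λh and CNIBS/R-K models, with only the model function changing.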

  4. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    Science.gov (United States)

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
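The authors fit their model in a Bayesian framework; as a rough maximum-likelihood sketch of the same mixture idea, the EM loop below separates significant p values (rescaled to u = p/0.05) into a uniform H₀ component and an alternative concentrated near zero. The Beta(a, 1) alternative, the synthetic 30/70 split, and the EM estimator are simplifying assumptions, not the paper's parametric model or its prior.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic significant p values rescaled to u = p / 0.05 in (0, 1):
# 30% from H0 (uniform) and 70% from H1 (Beta(a, 1), a < 1, mass near 0).
u = np.concatenate([rng.uniform(0, 1, 300), rng.beta(0.15, 1, 700)])

pi0, a = 0.5, 0.5  # initial guesses for P(H0) and the H1 shape parameter
for _ in range(200):
    # E-step: posterior probability that each p value came from H0.
    f0 = 1.0                      # uniform null density on (0, 1)
    f1 = a * u ** (a - 1)         # Beta(a, 1) alternative density
    r0 = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
    # M-step: closed-form updates for the mixing weight and shape.
    pi0 = r0.mean()
    w1 = 1 - r0
    a = -w1.sum() / (w1 * np.log(u)).sum()
```

Here `pi0` plays the role of the estimated proportion of significant results stemming from H₀, and `r0` gives the per-p-value posterior probability of H₀ that the abstract describes.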

  5. Mathematic Model of Technical Process of Heavy Mixtures Classifying on the Basis of Dispersion of Particles Flight Path

    OpenAIRE

    Normahmad Ravshanov; Bozorboy Palvanov; Gulnora Shermatova

    2014-01-01

    The article presents a mathematical model and results of computer calculations for classifying heavy mixtures and selecting full seeds of farm crops. They enable the determination of the major process parameters and their variation range, providing maximum dispersion of particle flight paths depending on feedstock modules.

  6. Mathematic Model of Technical Process of Heavy Mixtures Classifying on the Basis of Dispersion of Particles Flight Path

    Directory of Open Access Journals (Sweden)

    Normahmad Ravshanov

    2014-05-01

    Full Text Available The article presents a mathematical model and results of computer calculations for classifying heavy mixtures and selecting full seeds of farm crops. They enable the determination of the major process parameters and their variation range, providing maximum dispersion of particle flight paths depending on feedstock modules.

  7. Trajectories of Heroin Addiction: Growth Mixture Modeling Results Based on a 33-Year Follow-Up Study

    Science.gov (United States)

    Hser, Yih-Ing; Huang, David; Chou, Chih-Ping; Anglin, M. Douglas

    2007-01-01

    This study investigates trajectories of heroin use and subsequent consequences in a sample of 471 male heroin addicts who were admitted to the California Civil Addict Program in 1964-1965 and followed over 33 years. Applying a two-part growth mixture modeling strategy to heroin use level during the first 16 years of the addiction careers since…

  8. Methods and Measures: Growth Mixture Modeling--A Method for Identifying Differences in Longitudinal Change among Unobserved Groups

    Science.gov (United States)

    Ram, Nilam; Grimm, Kevin J.

    2009-01-01

    Growth mixture modeling (GMM) is a method for identifying multiple unobserved sub-populations, describing longitudinal change within each unobserved sub-population, and examining differences in change among unobserved sub-populations. We provide a practical primer that may be useful for researchers beginning to incorporate GMM analysis into their…

  9. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  10. Low Reynolds number steady state flow through a branching network of rigid vessels: II. A finite element mixture model

    NARCIS (Netherlands)

    Huyghe, J.M.R.J.; Oomens, C.W.J.; Campen, van D.H.; Heethaar, R.M.

    1989-01-01

    This research aims at formulating and verifying a finite element mixture formulation for blood perfusion. The equations derived in a companion paper [3] are discretized according to the Galerkin method. A flow experiment in a rigid model of a vascular tree of about 500 vessels is performed in order

  11. Differential Item Functioning Analysis Using a Mixture 3-Parameter Logistic Model with a Covariate on the TIMSS 2007 Mathematics Test

    Science.gov (United States)

    Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.

    2015-01-01

    The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…

  12. Has growth mixture modeling improved our understanding of how early change predicts psychotherapy outcome?

    Science.gov (United States)

    Koffmann, Andrew

    2017-03-02

    Early change in psychotherapy predicts outcome. Seven studies have used growth mixture modeling [GMM; Muthén, B. (2001). Second-generation structural equation modeling with a combination of categorical and continuous latent variables: New opportunities for latent class-latent growth modeling. In L. M. Collins & A. G. Sawyers (Eds.), New methods for the analysis of change (pp. 291-322). Washington, DC: American Psychological Association] to identify patient classes based on early change but have yielded conflicting results. Here, we review the earlier studies and apply GMM to a new data set. In a university-based training clinic, 251 patients were administered the Outcome Questionnaire-45 [Lambert, M. J., Hansen, N. B., Umphress, V., Lunnen, K., Okiishi, J., Burlingame, G., … Reisinger, C. W. (1996). Administration and scoring manual for the Outcome Questionnaire (OQ 45.2). Wilmington, DE: American Professional Credentialing Services] at each psychotherapy session. We used GMM to identify class structure based on change in the first six sessions and examined trajectories as predictors of outcome. The sample was best described as a single class. There was no evidence of autoregressive trends in the data. We achieved better fit to the data by permitting latent variables some degree of kurtosis, rather than to assume multivariate normality. Treatment outcome was predicted by the amount of early improvement, regardless of initial level of distress. The presence of sudden early gains or losses did not further improve outcome prediction. Early improvement is an easily computed, powerful predictor of psychotherapy outcome. The use of GMM to investigate the relationship between change and outcome is technically complex and computationally intensive. To date, it has not been particularly informative.

  13. Multi-temperature mixture of fluids

    Directory of Open Access Journals (Sweden)

    Ruggeri Tommaso

    2009-01-01

    Full Text Available We present a survey of some recent results concerning different models of a mixture of compressible fluids. In particular, we discuss the most realistic case of a mixture in which each constituent has its own temperature (MT), and we first compare the solutions of this model with those of the model with a unique common temperature (ST). In the case of Eulerian fluids, it will be shown that the corresponding (ST) differential system is a principal subsystem of the (MT) one. Global behavior of smooth solutions for large time for both systems will also be discussed through the application of the Shizuta-Kawashima condition. Then we introduce the concept of the average temperature of the mixture, based upon the consideration that the internal energy of the mixture is the same as in the case of a single-temperature mixture. As a consequence, it is shown that the entropy of the mixture reaches a local maximum in equilibrium. Through the procedure of Maxwellian iteration, a new constitutive equation for the non-equilibrium temperatures of the constituents is obtained in a classical limit, together with Fick's law for the diffusion flux. Finally, to justify the Maxwellian iteration, we present for dissipative fluids a possible approach to a classical theory of mixtures with multi-temperature, and we prove that the differences in temperature between the constituents imply the existence of a new dynamical pressure, even if the fluids have zero bulk viscosity.
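In a common notation for mixtures (ρ_α the partial densities and ε_α the constituent internal energies; the paper's own symbols may differ), the average temperature T described above is defined implicitly by requiring the mixture internal energy to equal its single-temperature form:

```latex
\sum_{\alpha} \rho_{\alpha}\, \varepsilon_{\alpha}(T)
  \;=\;
\sum_{\alpha} \rho_{\alpha}\, \varepsilon_{\alpha}(T_{\alpha}),
```

so that T reduces to the common temperature when all the constituent temperatures T_α coincide.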

  14. Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar

    Directory of Open Access Journals (Sweden)

    Graham V. Weinberg

    2012-01-01

    Full Text Available The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.
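A minimal sketch of a two-component Pareto intensity mixture: a weighted sum of classical Pareto densities, with samples drawn by first picking a component. The weights and shape/scale parameters are illustrative assumptions, not the Ingara-derived clutter estimates.

```python
import numpy as np
from scipy.stats import pareto

# Two-component Pareto intensity mixture (illustrative parameters): each
# component has density f_i(x) = a_i * b_i**a_i / x**(a_i + 1) for x >= b_i.
w, (a1, b1), (a2, b2) = 0.6, (4.0, 1.0), (2.5, 1.0)

def mixture_pdf(x):
    return w * pareto.pdf(x, a1, scale=b1) + (1 - w) * pareto.pdf(x, a2, scale=b2)

# Draw clutter-intensity samples: pick a component, then sample from it.
n = 100_000
rng = np.random.default_rng(7)
pick = rng.random(n) < w
x = np.where(pick,
             pareto.rvs(a1, scale=b1, size=n, random_state=8),
             pareto.rvs(a2, scale=b2, size=n, random_state=9))

# The mixture mean follows from the component means a*b/(a - 1) (for a > 1).
mean_theory = w * a1 * b1 / (a1 - 1) + (1 - w) * a2 * b2 / (a2 - 1)
```

Detector performance studies then evaluate threshold exceedance probabilities under this mixture density rather than under a single Pareto law.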

  15. Damage Detection of Refractory Based on Principle Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Full Text Available Acoustic emission (AE) is a common technique for identifying damage in refractories; however, this is a complex problem, since as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method was developed that merges similar parameters in the description of the damage and reduces the dimension. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two parameters. The parameters were linear combinations of the fifteen original parameters and were taken as the indices for damage classification. Based on the proposed approach, the Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electron microscope scanning of the refractories verified the two types of damage.
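The PCA-then-GMM pipeline can be sketched as below: project 15 correlated features onto two principal components, select the number of clusters by BIC, and label each signal. The synthetic two-mode data stand in for the AE parameter vectors; dimensions, cluster count, and seeds are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# Synthetic stand-in for 15 correlated AE parameters from two damage modes:
# a 2-D latent signal mapped through a random mixing matrix plus noise.
latent = np.concatenate([rng.normal(0, 1, (300, 2)), rng.normal(6, 1, (300, 2))])
mixing = rng.normal(0, 1, (2, 15))
features = latent @ mixing + rng.normal(0, 0.1, (600, 15))

# Step 1: PCA reduces the 15 parameters to 2 principal components.
scores = PCA(n_components=2).fit_transform(features)

# Step 2: pick the number of damage classes by BIC, then cluster with a GMM.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(scores)
```

With well-separated damage modes, BIC selects two components and the cluster labels align with the two simulated groups.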

  16. The role of bullying in depressive symptoms from adolescence to emerging adulthood: A growth mixture model.

    Science.gov (United States)

    Hill, Ryan M; Mellick, William; Temple, Jeff R; Sharp, Carla

    2017-01-01

    The present study sought to identify trajectories of depressive symptoms in adolescence and emerging adulthood using a school-based sample of adolescents assessed over a five-year period. The study also examined whether bully and cyberbully victimization and perpetration significantly predicted depressive symptom trajectories. Data from a sample of 1042 high school students were examined. The sample had a mean age of 15.09 years (SD=.79), was 56.0% female, and was racially diverse: 31.4% Hispanic, 29.4% White, and 27.9% African American. Data were examined using growth mixture modeling. Four depressive symptoms trajectories were identified, including those with a mild trajectory of depressive symptoms, an increasing trajectory of depressive symptoms, an elevated trajectory of depressive symptoms, and a decreasing trajectory of depressive symptoms. Results indicated that bully victimization and cyberbully victimization differentially predicted depressive symptoms trajectories across adolescence, though bully and cyberbully perpetration did not. Limitations include reliance on self-reports of bully perpetration and a limited consideration of external factors that may impact the course of depression. These findings may inform school personnel in identifying students' likely trajectory of depressive symptoms and determining where depression prevention and treatment services may be needed. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. A Latent Growth Mixture Modeling Approach to PTSD Symptoms in Rape Victims.

    Science.gov (United States)

    Armour, Cherie; Shevlin, Mark; Elklit, Ask; Mroczek, Dan

    2012-03-01

    The research literature has suggested that longitudinal changes in posttraumatic stress disorder (PTSD) could be adequately described in terms of one universal trajectory, with individual differences in baseline levels (intercept) and rate of change (slope) being negligible. However, not everyone who has experienced a trauma is diagnosed with PTSD, and symptom severity levels differ between individuals exposed to similar traumas. The current study employed the latent growth mixture modeling technique to test for multiple trajectories using data from a sample of Danish rape victims (N = 255). In addition, the analysis aimed to determine whether a number of explanatory variables could differentiate between the trajectories (age, acute stress disorder [ASD], and perceived social support). Results concluded the existence of two PTSD trajectories. ASD was found to be the only significant predictor of one trajectory characterized by high initial levels of PTSD symptomatology. The present findings confirmed the existence of multiple trajectories with regard to PTSD symptomatology in a way that may be useful to clinicians working with this population.

  18. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    Science.gov (United States)

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary across search areas due to environmental factors such as varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms rely on the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, producing efficient paths with payoffs near the optimum.
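One plausible sketch of the prioritization idea (not the paper's exact mode goodness ratio): fit a Gaussian mixture to points sampled from the target-probability map, then rank components by their peak density so that sharp, heavily weighted modes are searched first. The data and the ranking criterion below are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated probability map: one strong, tight mode and one weak, diffuse one.
strong = rng.normal([2.0, 2.0], 0.5, (400, 2))
weak = rng.normal([8.0, 8.0], 1.5, (100, 2))
samples = np.vstack([strong, weak])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)

def peak_density(k):
    """Mixture density at the mode of component k (2-D case)."""
    cov = gmm.covariances_[k]
    norm = (2 * np.pi) * np.sqrt(np.linalg.det(cov))
    return gmm.weights_[k] / norm

# Visit subregions in order of decreasing peak density.
priority = sorted(range(2), key=peak_density, reverse=True)
```

A planner would then allocate flight time to the subregion around `gmm.means_[priority[0]]` first, revisiting the ranking as sensor observations update the map.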

  19. Modification of Gaussian mixture models for data classification in high energy physics

    Science.gov (United States)

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role in the subsequent combination of final discriminants to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb⁻¹).
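The training/testing scheme described here can be sketched as follows: fit one Gaussian mixture per class via EM (the training phase), then classify events with Bayes' rule on the class likelihoods (the testing phase). The two-dimensional simulated features and equal priors are assumptions, not the analysis's actual inputs:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Simulated discriminating variables for labeled training events.
signal = rng.normal(1.0, 0.5, (500, 2))
background = rng.normal(-1.0, 0.8, (500, 2))

# Training phase: one mixture density per class, fitted by EM.
gmm_sig = GaussianMixture(n_components=2, random_state=0).fit(signal)
gmm_bkg = GaussianMixture(n_components=2, random_state=0).fit(background)

def p_signal(x, prior_sig=0.5):
    """Testing phase, Bayes' rule: P(sig|x) ∝ p(x|sig) P(sig)."""
    ls = np.exp(gmm_sig.score_samples(x)) * prior_sig
    lb = np.exp(gmm_bkg.score_samples(x)) * (1 - prior_sig)
    return ls / (ls + lb)

# Classify held-out signal-like events.
test_sig = rng.normal(1.0, 0.5, (200, 2))
accuracy = (p_signal(test_sig) > 0.5).mean()
```

In practice the priors would reflect the expected signal fraction, and the posterior `p_signal` would serve as the final discriminant to be cut on or combined with other discriminants.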

  20. Testing job typologies and identifying at-risk subpopulations using factor mixture models.

    Science.gov (United States)

    Keller, Anita C; Igic, Ivana; Meier, Laurenz L; Semmer, Norbert K; Schaubroeck, John M; Brunner, Beatrice; Elfering, Achim

    2017-10-01

    Research in occupational health psychology has tended to focus on the effects of single job characteristics or various job characteristics combined into 1 factor. However, such a variable-centered approach does not account for the clustering of job attributes among groups of employees. We addressed this issue by using a person-centered approach to (a) investigate the occurrence of different empirical constellations of perceived job stressors and resources and (b) validate the meaningfulness of profiles by analyzing their association with employee well-being and performance. We applied factor mixture modeling to identify profiles in 4 large samples consisting of employees in Switzerland (Studies 1 and 2) and the United States (Studies 3 and 4). We identified 2 profiles that spanned the 4 samples, with 1 reflecting a combination of relatively low stressors and high resources (P1) and the other relatively high stressors and low resources (P3). The profiles differed mainly in terms of their organizational and social aspects. Employees in P1 reported significantly higher mean levels of job satisfaction, performance, and general health, and lower means in exhaustion compared with P3. Additional analyses showed differential relationships between job attributes and outcomes depending on profile membership. These findings may benefit organizational interventions as they show that perceived work stressors and resources more strongly influence satisfaction and well-being in particular profiles. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
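As a loose sketch of the person-centered approach (the study fits factor mixture models; a plain Gaussian mixture on simulated stressor/resource scores stands in for them here), one can recover profiles and then validate them by comparing an outcome across profile membership:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Profile A: low stressors, high resources; Profile B: the reverse.
a = np.column_stack([rng.normal(2, 0.5, 300), rng.normal(4, 0.5, 300)])
b = np.column_stack([rng.normal(4, 0.5, 200), rng.normal(2, 0.5, 200)])
scores = np.vstack([a, b])  # columns: stressors, resources

# Person-centered step: cluster employees rather than pooling variables.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
profile = gmm.predict(scores)

# Validation step: compare a simulated well-being outcome across profiles.
satisfaction = (5 - 0.8 * scores[:, 0] + 0.8 * scores[:, 1]
                + rng.normal(0, 0.3, 500))
means = [satisfaction[profile == k].mean() for k in range(2)]
```

A factor mixture model additionally places a latent factor structure within each class; the plain mixture above only conveys the profile-then-outcome logic.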