Sample records for factor mixture model

  1. Multilevel Mixture Factor Models (United States)

    Varriale, Roberta; Vermunt, Jeroen K.


    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  2. Detecting Social Desirability Bias Using Factor Mixture Models (United States)

    Leite, Walter L.; Cooper, Lou Ann


    Based on the conceptualization that social desirability bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  3. Hidden Markov Models with Factored Gaussian Mixture Densities

    Institute of Scientific and Technical Information of China (English)

    LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua


    We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models (HMMs), which uses factorial learning in the HMM framework. We derive the reestimation formulas for estimating the factorized parameters by the Expectation-Maximization (EM) algorithm. We conduct several experiments to compare the performance of this model structure with Factorial Hidden Markov Models (FHMMs) and HMMs; some conclusions and promising empirical results are presented.
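
    The EM reestimation referenced above can be illustrated on a plain (non-factorial) Gaussian mixture; the following NumPy sketch, using invented synthetic data, shows the E-step (responsibilities) and M-step (parameter updates):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two Gaussians (illustrative only).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])

# Initial guesses for weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: responsibilities gamma[n, k] proportional to w_k * N(x_n | mu_k, var_k).
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # M-step: reestimate the parameters from the responsibilities.
    Nk = gamma.sum(axis=0)
    w = Nk / len(x)
    mu = (gamma * x[:, None]).sum(axis=0) / Nk
    var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print(sorted(mu))
```

    The factorial variant in the record additionally factorizes the component densities across feature subspaces; the update structure, however, follows the same E/M pattern.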

  4. Advances in Behavioral Genetics Modeling Using Mplus: Applications of Factor Mixture Modeling to Twin Data

    National Research Council Canada - National Science Library

    Muthen, Bengt; Asparouhov, Tihomir; Rebollo, Irene


    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder...

  5. Advances in behavioral genetics modeling using Mplus: applications of factor mixture modeling to twin data. (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Rebollo, Irene


    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder. In this model, heritability is simultaneously studied with respect to latent class membership and within-class severity dimensions. Different latent classes of individuals are allowed to have different heritability for the severity dimensions. The factor mixture approach appears to have great potential for the genetic analyses of heterogeneous populations. Generalizations for longitudinal data are also outlined.

  6. Using a factor mixture modeling approach in alcohol dependence in a general population sample. (United States)

    Kuo, Po-Hsiu; Aggen, Steven H; Prescott, Carol A; Kendler, Kenneth S; Neale, Michael C


    Alcohol dependence (AD) is a complex and heterogeneous disorder. The identification of more homogeneous subgroups of individuals with drinking problems and the refinement of the diagnostic criteria are inter-related research goals. They have the potential to improve our knowledge of etiology and treatment effects, and to assist in the identification of risk factors or specific genetic factors. Mixture modeling has advantages over traditional modeling that focuses on either the dimensional or categorical latent structure: it combines both latent class and latent trait models, but has not been widely applied in substance use research. The goal of the present study is to assess whether the AD criteria in the population could be better characterized by a continuous dimension, a few discrete subgroups, or a combination of the two. More than seven thousand participants were recruited from the population-based Virginia Twin Registry and were interviewed to obtain DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) symptoms and diagnosis of AD. We applied factor analysis, latent class analysis, and factor mixture models to symptom items based on the DSM-IV criteria. Our results showed that a mixture model with 1 factor and 3 classes fit well for both genders. The 3 classes were a non-problem drinking group and severe and moderate drinking problem groups. By contrast, models constrained to conform to the DSM-IV diagnostic criteria were rejected by model fitting indices, providing empirical evidence for heterogeneity in the AD diagnosis. Classification analysis showed different characteristics across subgroups, including alcohol-caused behavioral problems, comorbid disorders, age at onset for alcohol-related milestones, and personality. Clinically, the expanded classification of AD may aid in identifying suitable treatments, interventions and additional sources of comorbidity based on these more homogeneous subgroups of alcohol use…
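
    Full factor mixture models are typically fit in specialized software such as Mplus, but the class-enumeration step described above (comparing models with different numbers of latent classes by fit indices) can be sketched with scikit-learn. All data, loadings, and class counts below are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical "symptom" data: two latent classes, each with a continuous
# severity factor loading on 5 indicators (all values invented).
def simulate(n, shift):
    factor = rng.normal(size=(n, 1))          # within-class severity
    loadings = np.full((1, 5), 0.8)
    return shift + factor @ loadings + rng.normal(scale=0.5, size=(n, 5))

X = np.vstack([simulate(400, 0.0), simulate(200, 3.0)])

# Class enumeration by BIC: lower is better.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in (1, 2, 3)}
best = min(bics, key=bics.get)
print(best, {k: round(v) for k, v in bics.items()})
```

    In the study above, the preferred model additionally includes a within-class severity factor; this sketch only illustrates how information criteria adjudicate between competing class counts.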

  7. Models and Strategies for Factor Mixture Analysis: An Example Concerning the Structure Underlying Psychological Disorders (United States)

    Clark, Shaunna L.; Muthén, Bengt; Kaprio, Jaakko; D’Onofrio, Brian M.; Viken, Richard; Rose, Richard J.


    The factor mixture model (FMM) uses a hybrid of both categorical and continuous latent variables. The FMM is a good model for the underlying structure of psychopathology because the use of both categorical and continuous latent variables allows the structure to be simultaneously categorical and dimensional. This is useful because both diagnostic class membership and the range of severity within and across diagnostic classes can be modeled concurrently. While the conceptualization of the FMM has been explained in the literature, the use of the FMM is still not prevalent. One reason is that there is little research about how such models should be applied in practice and, once a well-fitting model is obtained, how it should be interpreted. In this paper, the FMM will be explored by studying a real data example on conduct disorder. By exploring this example, this paper aims to explain the different formulations of the FMM, the various steps in building an FMM, as well as how to decide between an FMM and alternative models. PMID:24302849

  8. Image Retrieval Based on Multiview Constrained Nonnegative Matrix Factorization and Gaussian Mixture Model Spectral Clustering Method

    Directory of Open Access Journals (Sweden)

    Qunyi Xie


    Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repertories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper has developed a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
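
    A minimal sketch of the retrieval pipeline described above (NMF codes, GMM clustering, nearest-neighbour search restricted to the query's cluster), using scikit-learn on an invented toy feature matrix rather than real multiview image descriptors:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Toy nonnegative feature matrix standing in for fused image descriptors:
# rows 0-49 activate the first 10 columns, rows 50-99 the last 10.
X = rng.random((100, 20)) * 0.05
X[:50, :10] += rng.gamma(2.0, 1.0, (50, 10))
X[50:, 10:] += rng.gamma(2.0, 1.0, (50, 10))

nmf = NMF(n_components=4, random_state=0, max_iter=500)
H = nmf.fit_transform(X)                  # sparse code per "image"

gm = GaussianMixture(n_components=2, random_state=0).fit(H)
labels = gm.predict(H)

# Retrieval: nearest-neighbour search only within the query's cluster.
query = 0
candidates = np.where(labels == labels[query])[0]
candidates = candidates[candidates != query]
dists = np.linalg.norm(H[candidates] - H[query], axis=1)
print("nearest neighbour:", candidates[dists.argmin()])
```

    The paper's actual method adds multiview fusion, manifold/orthogonality constraints on the NMF, and a spectral regrouping of the Gaussian components, none of which are reproduced in this sketch.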

  9. Factor mixture modeling of the Penn State Worry Questionnaire: Evidence for distinct classes of worry. (United States)

    Korte, Kristina J; Allan, Nicholas P; Schmidt, Norman B


    Worry, the anticipation of future threat, is a common feature of anxiety and mood psychopathology. Considerable research has examined the latent structure of worry to determine whether this construct reflects a dimensional or taxonic structure. Recent taxometric investigations have provided support for a unidimensional structure of worry; however, the results of these studies are limited in that taxometric approaches are unable to assess for the presence of more than two classes of a given construct. Given the complex nature of worry, it is possible that worry may actually reflect a latent structure comprising multiple classes that cannot be assessed through taxometric approaches. Thus, it is important to utilize newer statistical techniques, such as factor-mixture modeling (FMM), which allow for a more nuanced assessment of the latent structure of a given psychological construct. The aim of the current study was to examine the latent structure of worry using FMM. It was predicted that worry would reflect a three-class structure comprising (1) a class of low, normative levels of worry, (2) a class of moderate, subclinical worry, and (3) a class of high, pervasive worry. The latent class structure of worry was assessed using FMM in a sample of 1337 participants recruited from the community through a research clinic. Results revealed a three-class structure of the PSWQ comprising low, moderate-high, and high classes of worry. We also provide evidence for the convergent and discriminant validity of the worry classes by demonstrating that the high worry class was most associated with GAD and that the low worry class was the least associated with GAD. The clinical utility of the worry classes, including the creation of empirically based cut-scores, and the implications for future research are discussed.

  10. Pointer Sentinel Mixture Models


    Merity, Stephen; Xiong, Caiming; Bradbury, James; Socher, Richard


    Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. O...
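
    The mixture described above can be sketched in NumPy: a gate g taken from the sentinel slot of the pointer attention interpolates between the vocabulary softmax and a pointer distribution that copies probability mass onto words from the recent context (the vocabulary, context, and logits below are invented):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

vocab = ["the", "cat", "sat", "Fred", "<unk>"]
context = ["Fred", "sat"]            # recent words the pointer may copy
rng = np.random.default_rng(3)

logits_vocab = rng.normal(size=len(vocab))       # standard softmax classifier
# Pointer attention over context positions plus one sentinel slot;
# the sentinel's mass g is handed to the vocabulary softmax.
attn = softmax(rng.normal(size=len(context) + 1))
g = attn[-1]
p_vocab = softmax(logits_vocab)

p = g * p_vocab
for pos, word in enumerate(context):
    p[vocab.index(word)] += attn[pos]            # copy mass from the pointer

print(p)
```

    Because the gate and the pointer weights come from one softmax over positions-plus-sentinel, the combined distribution sums to one by construction.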

  11. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among…

  12. Confirmatory factor analysis, latent profile analysis, and factor mixture modeling of the syndromes of the Child Behavior Checklist and Teacher Report Form. (United States)

    Gomez, Rapson; Vance, Alasdair


    The current study used confirmatory factor analysis (CFA), latent profile analysis (LPA), and factor mixture modeling (FMM) to examine the co-occurrence of childhood syndromes using the Child Behavior Checklist (CBCL) and Teacher Report Form (TRF). Parents and teachers completed the CBCL and TRF, respectively, for a clinic-referred sample of 720 children, ages 7-12 years. For the CBCL, the analyses indicated most support for a 2-class 2-factor FMM, and for the TRF, there was most support for a 2-class 3-factor model. The two classes corresponded to all syndromes at average levels and all syndromes at high levels. The findings indicate high syndrome co-occurrence. The implications of the findings for understanding syndrome co-occurrence in the CBCL and TRF, theories of syndrome co-occurrence, and the clinical use of the CBCL and TRF are discussed. (c) 2014 APA, all rights reserved.

  13. Constrained Fisher Scoring for a Mixture of Factor Analyzers (United States)


    ARL-TR-7836, September 2016, US Army Research Laboratory. Constrained Fisher Scoring for a Mixture of Factor Analyzers, by Gene T Whipps, Emre Ertin, and… (Sensors and Electron…). Keywords: global appearance model across the entire sensor network; constrained maximum likelihood estimation; mixture of factor analyzers; Newton's method.

  14. Testing Latent Mean Differences between Observed and Unobserved Groups Using Multilevel Factor Mixture Models (United States)

    Allua, Shane; Stapleton, Laura M.; Beretvas, S. Natasha


    When assessing latent mean differences, researchers frequently do not explore possible heterogeneity within their data sets. Sources of differences may be functions of a nested data structure or heterogeneity in the form of unobserved classes of observations defined by a difference in factor means. In this study, the use of multilevel structural…

  15. Mixtures of Common Skew-t Factor Analyzers


    Murray, Paula M.; McNicholas, Paul D.; Browne, Ryan P.


    A mixture of common skew-t factor analyzers model is introduced for model-based clustering of high-dimensional data. By assuming common component factor loadings, this model allows clustering to be performed in the presence of a large number of mixture components or when the number of dimensions is too large to be well-modelled by the mixtures of factor analyzers model or a variant thereof. Furthermore, assuming that the component densities follow a skew-t distribution allows robust clusterin...

  16. Trajectory Pathways for Depressive Symptoms and Their Associated Factors in a Chinese Primary Care Cohort by Growth Mixture Modelling.

    Directory of Open Access Journals (Sweden)

    Weng Yee Chin

    The naturalistic course for patients suffering from depressive disorders can be quite varied. Whilst some remit with little or no intervention, others may suffer a more prolonged course of symptoms. The aim of this study was to identify trajectory patterns for depressive symptoms in a Chinese primary care cohort and their associated factors. A 12-month cohort study was conducted. Patients recruited from 59 primary care clinics across Hong Kong were screened for depressive symptoms using the Centre for Epidemiologic Studies Depression Scale (CES-D) and monitored over 12 months using the Patient Health Questionnaire-9 items (PHQ-9) administered at 12, 26 and 52 weeks. 721 subjects were included for growth mixture modelling analysis. Using the Akaike Information Criterion, Bayesian Information Criterion, Entropy and Lo-Mendell-Rubin adjusted likelihood ratio test, a seven-class trajectory path model was identified. Over 12 months, three trajectory groups showed improvement in depressive symptoms, three remained static, whilst one deteriorated. A mild severity of depressive symptoms with gradual improvement was the most prevalent trajectory identified. Multivariate, multinomial regression analysis was used to identify factors associated with each trajectory. Risk factors associated with chronicity included: female gender; not married; not in active employment; presence of multiple chronic disease co-morbidities; poor self-rated general health; and infrequent health service use. Whilst many primary care patients may initially present with a similar severity of depressive symptoms, their course over 12 months can be quite heterogeneous. Although most primary care patients improve naturalistically over 12 months, many do not remit and it is important for doctors to be able to identify those who are at risk of chronicity. Regular follow-up and greater treatment attention is recommended for patients at risk of chronicity.

  17. Multiple Deprivation, Severity and Latent Sub-Groups: Advantages of Factor Mixture Modelling for Analysing Material Deprivation. (United States)

    Najera Catalan, Hector E


    Material deprivation is represented in different forms and manifestations. Two individuals with the same deprivation score (i.e. number of deprivations), for instance, are likely to be unable to afford or access entirely or partially different sets of goods and services: one individual may fail to purchase clothes and consumer durables while another may lack access to healthcare and be deprived of adequate housing. As such, the number of possible patterns or combinations of multiple deprivation becomes increasingly complex for a higher number of indicators. Given this difficulty, there is interest in poverty research in understanding multiple deprivation, as this analysis might lead to the identification of meaningful population sub-groups that could be the subjects of specific policies. This article applies a factor mixture model (FMM) to a real dataset and discusses its conceptual and empirical advantages and disadvantages with respect to other methods that have been used in poverty research. The exercise suggests that FMM is based on more sensible assumptions (i.e. deprivations covary within each class), provides valuable information with which to understand multiple deprivation, and is useful for understanding the severity of deprivation and the additive properties of deprivation indicators.

  18. Essays on Finite Mixture Models

    NARCIS (Netherlands)

    A. van Dijk (Bram)


    Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the…
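
    The definition above, a multinomial draw of the component followed by a draw from that component, can be sketched directly (all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(4)
weights = np.array([0.2, 0.5, 0.3])      # mixing proportions (sum to 1)
means = np.array([-3.0, 0.0, 4.0])
sds = np.array([0.5, 1.0, 1.5])

# Draw component labels from a categorical (multinomial) distribution,
# then sample each observation from its selected component.
n = 10_000
labels = rng.choice(len(weights), size=n, p=weights)
x = rng.normal(means[labels], sds[labels])

# The mixture density is the weighted average of the component densities.
def mixture_pdf(t):
    comp = np.exp(-0.5 * ((t - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return float(weights @ comp)

print(round(x.mean(), 2))    # theoretical mean: 0.2*(-3) + 0.5*0 + 0.3*4 = 0.6
```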

  20. Bayesian mixture models for spectral density estimation


    Cadonna, Annalisa


    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...

  1. Mixture Modeling: Applications in Educational Psychology (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.


    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  2. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai


    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  4. On the mixture model for multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Manninen, M.; Taivassalo, V. [VTT Energy, Espoo (Finland). Nuclear Energy; Kallio, S. [Aabo Akademi, Turku (Finland)


    Numerical flow simulation utilising a full multiphase model is impractical for a suspension possessing wide distributions in the particle size or density. Various approximations are usually made to simplify the computational task. In the simplest approach, the suspension is represented by a homogeneous single-phase system and the influence of the particles is taken into account in the values of the physical properties. This study concentrates on the derivation and closing of the model equations. The validity of the mixture model is also carefully analysed. Starting from the continuity and momentum equations written for each phase in a multiphase system, the field equations for the mixture are derived. The mixture equations largely resemble those for a single-phase flow but are represented in terms of the mixture density and velocity. The volume fraction for each dispersed phase is solved from a phase continuity equation. Various approaches applied in closing the mixture model equations are reviewed. An algebraic equation is derived for the velocity of a dispersed phase relative to the continuous phase. Simplifications made in calculating the relative velocity restrict the applicability of the mixture model to cases in which the particles reach the terminal velocity in a short time period compared to the characteristic time scale of the flow of the mixture. (75 refs.)
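
    The validity criterion above can be illustrated with Stokes' law: the mixture model applies when the particle relaxation time is small compared to the characteristic time scale of the flow. All material values below are assumed for illustration:

```python
# Stokes settling of a small particle in a liquid (assumed values).
rho_p, rho_f = 2500.0, 1000.0    # particle / fluid density [kg/m^3]
mu = 1.0e-3                      # dynamic viscosity [Pa*s]
d = 50e-6                        # particle diameter [m]
g = 9.81                         # gravity [m/s^2]

v_t = (rho_p - rho_f) * g * d**2 / (18 * mu)   # terminal (relative) velocity
tau_p = rho_p * d**2 / (18 * mu)               # particle relaxation time
t_flow = 0.1                                   # flow time scale [s], assumed

print(f"v_t = {v_t*1e3:.2f} mm/s, tau_p = {tau_p*1e3:.2f} ms")
# Mixture-model assumption: tau_p << t_flow, i.e. particles reach the
# terminal velocity quickly relative to changes in the carrier flow.
print("mixture model applicable:", tau_p < 0.01 * t_flow)
```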

  5. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín


    Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and antioxidant response were determined. Metal mixture treatment induces an increase in damage markers and the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while in the promotion phase a total block of the transforming capacity was observed. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  6. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas


    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  7. Identifiability of large phylogenetic mixture models. (United States)

    Rhodes, John A; Sullivant, Seth


    Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  8. Flexible Rasch Mixture Models with Package psychomix

    Directory of Open Access Journals (Sweden)

    Hannah Frick


    Measurement invariance is an important assumption in the Rasch model, and mixture models constitute a flexible way of checking for a violation of this assumption by detecting unobserved heterogeneity in item response data. Here, a general class of Rasch mixture models is established and implemented in R, using conditional maximum likelihood estimation of the item parameters (given the raw scores), along with flexible specification of two model building blocks: (1) mixture weights for the unobserved classes can be treated as model parameters or based on covariates in a concomitant variable model; (2) the distribution of raw score probabilities can be parametrized in two possible ways, either using a saturated model or a specification through mean and variance. The function raschmix() in the R package psychomix provides these models, leveraging the general infrastructure for fitting mixture models in the flexmix package. Usage of the function and its associated methods is illustrated on artificial data as well as empirical data from a study of verbally aggressive behavior.

  9. Lattice Model for water-solute mixtures


    Furlan, A. P.; Almarza, N. G.; Barbosa, M. C.


    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out and the behavior of pure components and the excess proper...

  10. A Skew-Normal Mixture Regression Model (United States)

    Liu, Min; Lin, Tsung-I


    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  11. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM


    We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample desi…

  12. The Supervised Learning Gaussian Mixture Model

    Institute of Scientific and Technical Information of China (English)

    马继涌; 高文


    The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters in the model are derived only from the training samples in one class without taking into account the effect of sample distributions of other classes; hence, its recognition accuracy is not ideal sometimes. This paper introduces an approach for estimating the parameters in GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM. An experimental example has shown its effectiveness. The experimental results have shown that the recognition accuracy derived by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of the Multilayer Perceptron (MLP).
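
    The abstract does not give the SLGMM equations; a common supervised use of Gaussian mixtures in the same spirit, one mixture per class combined with Bayes' rule at classification time, can be sketched with scikit-learn (all data invented):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Two classes, each itself a 2-component mixture (invented toy data).
centers = [(-4, 0), (0, 4), (4, 0), (0, -4)]
X, blob = make_blobs(n_samples=600, centers=centers,
                     cluster_std=1.0, random_state=0)
y = blob % 2                      # class 0: blobs 0 and 2; class 1: blobs 1 and 3

# Fit one Gaussian mixture per class on that class's samples only.
models, log_priors = [], []
for c in (0, 1):
    Xc = X[y == c]
    models.append(GaussianMixture(n_components=2, random_state=0).fit(Xc))
    log_priors.append(np.log(len(Xc) / len(X)))

# Classify by the largest class-conditional log-likelihood plus log-prior.
scores = np.column_stack([m.score_samples(X) + lp
                          for m, lp in zip(models, log_priors)])
pred = scores.argmax(axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

    Note this is a generative classifier built from per-class mixtures, not the discriminative reestimation scheme of the paper itself.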

  13. Population mixture model for nonlinear telomere dynamics (United States)

    Itzkovitz, Shalev; Shlush, Liran I.; Gluck, Dan; Skorecki, Karl


    Telomeres are DNA repeats protecting chromosomal ends which shorten with each cell division, eventually leading to cessation of cell growth. We present a population mixture model that predicts an exponential decrease in telomere length with time. We analytically solve the dynamics of the telomere length distribution. The model provides an excellent fit to available telomere data and accounts for the previously unexplained observation of telomere elongation following stress and bone marrow transplantation, thereby providing insight into the nature of the telomere clock.

  14. Self-assembly models for lipid mixtures (United States)

    Singh, Divya; Porcar, Lionel; Butler, Paul; Perez-Salas, Ursula


    Solutions of mixed long and short (detergent-like) phospholipids, referred to as "bicelle" mixtures in the literature, are known to form a variety of different morphologies based on their total lipid composition and temperature in a complex phase diagram. Some of these morphologies have been found to orient in a magnetic field, and consequently bicelle mixtures are widely used to study the structure of soluble as well as membrane-embedded proteins using NMR. In this work, we report on the low-temperature phase of the DMPC and DHPC bicelle mixture, where there is agreement on the discoid structures but where molecular packing models are still being contested. The most widely accepted packing arrangement, first proposed by Vold and Prosser, had the lipids completely segregated in the disk: DHPC in the rim and DMPC in the disk. Using data from small angle neutron scattering (SANS) experiments, we show how the radius of the planar domain of the disks is governed by the effective molar ratio qeff of lipids in the aggregate and not the molar ratio q (q = [DMPC]/[DHPC]) as has been understood previously. We propose a new quantitative (packing) model and show that in this self-assembly scheme, qeff is the real determinant of disk sizes. Based on qeff, a master equation can then scale the radii of disks from mixtures with varying q and total lipid concentration.

  15. Hierarchical mixture models for assessing fingerprint individuality


    Dass, Sarat C.; Li, Mingfei


    The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...

  16. Strength Mechanism and Influence Factors for Cold Recycled Asphalt Mixture


    Tao Ma; Hao Wang; Yongli Zhao; Xiaoming Huang; Yuhui Pi


    This study focused on the key factors affecting the tensile strength of cold recycled asphalt mixture with cement and emulsified asphalt. The specific surface areas and strength of RAP were analyzed. The interaction between the emulsified asphalt and cement was observed. Comprehensive laboratory testing was conducted to evaluate the influences of RAP, emulsified asphalt, and cement on the tensile strength of cold recycled asphalt mixture. It is found that although RAP is used as aggregates, i...

  17. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
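
    As an illustration of modelling a signal as a linear combination of Gaussians, the sketch below fits a three-component 1-D Gaussian mixture with a plain EM loop (numpy only). The data are synthetic stand-ins, not HRV recordings from the paper.

```python
import numpy as np

def fit_gmm_1d(x, k=3, iters=200, seed=0):
    """Fit a 1-D Gaussian mixture by expectation-maximization."""
    rng = np.random.default_rng(seed)
    n = x.size
    w = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.choice(x, size=k, replace=False)  # initial means: random data points
    var = np.full(k, x.var())                  # initial variances
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

# Synthetic data drawn from three bands (a stand-in for real HRV samples)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.04, 0.01, 500),
                    rng.normal(0.15, 0.03, 500),
                    rng.normal(0.35, 0.05, 500)])
w, mu, var = fit_gmm_1d(x, k=3)
```

    In the paper's interpretation, each fitted (weight, mean, variance) triple would correspond to one band of the HRV power spectrum.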

  18. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose;


    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection... for some individuals, in order to minimize this loss in the discriminatory power. The distribution of the continuous antibody response against MAP has been obtained for healthy, MAP-infected and MAP-infectious cows of different age groups. The overall power of the milk-ELISA to discriminate between healthy...

  19. Video compressive sensing using Gaussian mixture models. (United States)

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  20. [Comparison of two spectral mixture analysis models]. (United States)

    Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wang, Li-Ming


    A spectral mixture analysis experiment was designed to compare the spectral unmixing effects of linear spectral mixture analysis (LSMA) and constrained linear spectral mixture analysis (CLSMA). In the experiment, red, green, blue and yellow colors were printed on a coarse album as four end members. Thirty-nine mixed samples were made according to each end member's percentage in one pixel. A field spectrometer was then positioned over the center of each mixed sample to measure its spectrum, one sample at a time. The inversion percentage of each end member in the pixel was extracted using the LSMA and CLSMA models. Finally, the normalized mean squared error between the inversion and real percentages was calculated to compare the two models' spectral unmixing performance. Results showed that the total error of LSMA was 0.30087 and that of CLSMA was 0.37552 when using all bands in the spectrum; thus the error of LSMA was 0.075 less than that of CLSMA when the whole bands of the four end members' spectra were used. On the other hand, the total error of LSMA was 0.28095 and that of CLSMA was 0.29805 after band selection, so the error of LSMA was 0.017 less than that of CLSMA when band selection was performed. Therefore, whether all or selected bands were used, the accuracy of LSMA was better than that of CLSMA: during spectrum measurement, errors caused by the instrument or the operator were introduced into the model, so the measured data could not meet the strict requirements of CLSMA, which reduced its accuracy. Furthermore, the total error of LSMA using selected bands was 0.02 less than that using the whole bands, and the total error of CLSMA using selected bands was 0.077 less than that using the whole bands. Hence, within the same model, spectral unmixing using selected bands to reduce the correlation of the end members' spectra was superior to using the whole bands.
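
    The difference between the two models can be sketched in a few lines: LSMA solves an ordinary least-squares problem for the abundances, while the sum-to-one constrained version (one common reading of CLSMA) adds a Lagrange-multiplier correction. The endmember spectra and abundances below are invented for illustration, not the paper's measurements.

```python
import numpy as np

def lsma(E, y):
    """Unconstrained linear spectral mixture analysis: least-squares abundances."""
    a, *_ = np.linalg.lstsq(E, y, rcond=None)
    return a

def clsma(E, y):
    """Sum-to-one constrained LSMA via a Lagrange-multiplier correction."""
    EtE_inv = np.linalg.inv(E.T @ E)
    a_ls = EtE_inv @ E.T @ y
    ones = np.ones(E.shape[1])
    lam = (a_ls @ ones - 1.0) / (ones @ EtE_inv @ ones)
    return a_ls - lam * (EtE_inv @ ones)

# Hypothetical 4-endmember spectra over 10 bands (stand-ins for the
# red/green/blue/yellow endmembers in the experiment)
rng = np.random.default_rng(0)
E = rng.uniform(0.1, 1.0, size=(10, 4))
a_true = np.array([0.4, 0.3, 0.2, 0.1])        # true abundances sum to 1
y = E @ a_true + rng.normal(0, 0.002, 10)      # mixed pixel with small noise

a1 = lsma(E, y)   # unconstrained estimate
a2 = clsma(E, y)  # estimate forced to sum to exactly 1
```

    With noisy measured spectra, forcing the constraint can hurt rather than help, which is consistent with the paper's finding that LSMA unmixed more accurately than CLSMA.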

  1. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

    The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.
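
    A minimal sketch of fitting a Gamma model to amplifier-output data, using method-of-moments estimates on simulated values (the shape and scale parameters are hypothetical, not from the STR study):

```python
import numpy as np

# Simulated "amplifier output" peak areas; the Gamma parameters are invented,
# chosen only to exercise the fit.
rng = np.random.default_rng(42)
shape_true, scale_true = 4.0, 250.0
peaks = rng.gamma(shape_true, scale_true, size=2000)

# Method-of-moments estimators for the Gamma distribution:
# shape = mean^2 / variance, scale = variance / mean
m, v = peaks.mean(), peaks.var()
shape_hat = m**2 / v
scale_hat = v / m
```

    Residual analysis, as in the paper, would then compare the empirical quantiles of `peaks` against quantiles of the fitted Gamma distribution.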

  2. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures (United States)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise produced realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered

  3. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    ...accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be due to the fact that CO2 has a large quadrupole moment which the classical models do not explicitly account for. In this thesis, in an attempt to obtain a physically more consistent model, the cubic-plus-association (CPA) EoS is extended to include quadrupolar interactions. The new quadrupolar CPA (qCPA) can be used with the experimental value of the quadrupole moment... Both models performed satisfactorily and predicted the general behavior of the systems, but qCPA used fewer adjustable parameters to achieve similar predictions. It has been demonstrated that qCPA is a promising model which, compared to CPA, systematically improves the predictions of the experimentally determined phase...

  4. Strength Mechanism and Influence Factors for Cold Recycled Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Tao Ma


    Full Text Available This study focused on the key factors affecting the tensile strength of cold recycled asphalt mixture with cement and emulsified asphalt. The specific surface areas and strength of RAP were analyzed. The interaction between the emulsified asphalt and cement was observed. Comprehensive laboratory testing was conducted to evaluate the influences of RAP, emulsified asphalt, and cement on the tensile strength of cold recycled asphalt mixture. It is found that although RAP is used as aggregate, its inner structure and strength are much different from those of real aggregates. The strength of RAP has a decisive effect on the strength of cold recycled asphalt mixture. New aggregates and fine gradation design can help improve the bonding between RAP and binder. For emulsified asphalt, slow setting gives the cement sufficient time to hydrate, which is helpful for strength formation in the cold recycled asphalt mixture. High asphalt viscosity can improve the early strength of cold recycled asphalt mixture, which is important for opening to traffic in the field. Cement is an efficient additive to improve the strength of cold recycled asphalt mixtures by promoting demulsification of the emulsified asphalt and producing cement hydrates. However, the cement content is limited by the RAP.

  5. Mixture Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.


    A mixture experiment involves combining two or more components in various proportions or amounts and then measuring one or more responses for the resulting end products. Other factors that affect the response(s), such as process variables and/or the total amount of the mixture, may also be studied in the experiment. A mixture experiment design specifies the combinations of mixture components and other experimental factors (if any) to be studied and the response variable(s) to be measured. Mixture experiment data analyses are then used to achieve the desired goals, which may include (i) understanding the effects of components and other factors on the response(s), (ii) identifying components and other factors with significant and nonsignificant effects on the response(s), (iii) developing models for predicting the response(s) as functions of the mixture components and any other factors, and (iv) developing end-products with desired values and uncertainties of the response(s). Given a mixture experiment problem, a practitioner must consider the possible approaches for designing the experiment and analyzing the data, and then select the approach best suited to the problem. Eight possible approaches include 1) component proportions, 2) mathematically independent variables, 3) slack variable, 4) mixture amount, 5) component amounts, 6) mixture process variable, 7) mixture of mixtures, and 8) multi-factor mixture. The article provides an overview of the mixture experiment designs, models, and data analyses for these approaches.
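
    For the component-proportions approach (item 1), responses are commonly modelled with a Scheffé polynomial, which has no intercept because the proportions sum to one. A minimal sketch for q = 3 components, with invented coefficients:

```python
import numpy as np

def scheffe_quadratic_design(X):
    """Scheffe quadratic design matrix: columns x1..xq, then all pairwise
    products xi*xj (i < j); no intercept because sum(xi) = 1."""
    q = X.shape[1]
    cols = [X[:, i] for i in range(q)]
    cols += [X[:, i] * X[:, j] for i in range(q) for j in range(i + 1, q)]
    return np.column_stack(cols)

# Simplex-centroid design for q = 3 components (each row of proportions sums to 1)
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]], dtype=float)
beta_true = np.array([1.0, 2.0, 3.0, 4.0, -2.0, 1.5])  # hypothetical effects
y = scheffe_quadratic_design(X) @ beta_true             # noiseless responses

# Fit the six Scheffe coefficients by least squares
beta_hat, *_ = np.linalg.lstsq(scheffe_quadratic_design(X), y, rcond=None)
```

    The seven-run simplex-centroid design supports the full quadratic model, so the six coefficients are recovered exactly from noiseless responses.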

  6. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah


    Full Text Available We present the properties of a bathtub-curve reliability model, introduced by Idée and Pierrat (2010), having both sufficient adaptability and a minimal number of parameters. This model is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model with a squared-error loss function and a non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). We illustrate the derived results using a statistical sample of 60 failure data points relative to a technical device. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  7. Partial structure factors in star polymer/colloid mixtures

    CERN Document Server

    Stellbrink, J; Richter, D; Moussaid, A; Schofield, A B; Poon, W C K; Pusey, P N; Lindner, P; Dzubiella, J; Likos, C N; Löwen, H


    Addition of polymer to colloidal suspensions induces an attractive part to the colloid pair potential, which is of purely entropic origin (''depletion interaction''). We investigated the influence of polymer branching on depletion forces by studying mixtures of hard-sphere colloids and star polymers with increasing arm number f = 2-32, but constant radius of gyration Rg ≈ 500 Å. We found a pronounced effect of branching on the position of the gas/liquid demixing transition. Using small angle neutron scattering (SANS) we were able to measure partial structure factors in star polymer/colloid mixtures. The relative distance to the demixing transition is reflected in our scattering data. (orig.)

  8. Mixture latent autoregressive models for longitudinal data

    CERN Document Server

    Bartolucci, Francesco; Pennoni, Fulvia


    Many relevant statistical and econometric models for the analysis of longitudinal data include a latent process to account for the unobserved heterogeneity between subjects in a dynamic fashion. Such a process may be continuous (typically an AR(1)) or discrete (typically a Markov chain). In this paper, we propose a model for longitudinal data which is based on a mixture of AR(1) processes with different means and correlation coefficients, but with equal variances. This model belongs to the class of models based on a continuous latent process, and then it has a natural interpretation in many contexts of application, but it is more flexible than other models in this class, reaching a goodness-of-fit similar to that of a discrete latent process model, with a reduced number of parameters. We show how to perform maximum likelihood estimation of the proposed model by the joint use of an Expectation-Maximisation algorithm and a Newton-Raphson algorithm, implemented by means of recursions developed in the hidden Mark...
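
    The generative side of such a model is easy to sketch: each subject draws a latent component that fixes the mean and autocorrelation of its AR(1) process, while the marginal variance is held equal across components, as in the paper. All numbers below are invented for illustration.

```python
import numpy as np

def simulate_mixture_ar1(n_subj, T, weights, means, rhos, sigma=1.0, seed=0):
    """Simulate longitudinal data from a mixture of AR(1) latent processes.
    Each subject draws one component (mean, rho); the innovation variance is
    scaled so every component has the same marginal variance sigma^2."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(weights), size=n_subj, p=weights)  # latent class labels
    Y = np.empty((n_subj, T))
    for i in range(n_subj):
        mu, rho = means[z[i]], rhos[z[i]]
        y = rng.normal(0.0, sigma)  # stationary start
        for t in range(T):
            Y[i, t] = mu + y
            y = rho * y + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return Y, z

# Two components with different means and correlations but equal variance
Y, z = simulate_mixture_ar1(200, 10, [0.5, 0.5], [-2.0, 2.0], [0.3, 0.8])
```

    Maximum likelihood estimation for the real model would combine an EM step over the latent component with Newton-Raphson updates, as the abstract describes.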

  9. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard


    ...i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and number of dynamic components and includes the static one-component model as a special case. We exploit that the WMM...

  10. Maximum likelihood estimation of finite mixture model for economic data (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir


    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.

  11. Mixture Model and MDSDCA for Textual Data (United States)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  12. Mixtures of multiplicative cascade models in geochemistry

    Directory of Open Access Journals (Sweden)

    F. P. Agterberg


    Full Text Available Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, smaller blocks resulting from dividing larger blocks into parts have concentration values with constant ratios that are scale-independent. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, which is another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of the background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or the lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.
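
    The model of de Wijs described above can be simulated in a few lines: each block splits into two children whose concentrations are fixed scale-independent multiples (1+d) and (1-d) of the parent's, yielding a discrete logbinomial distribution. The dispersion value d below is arbitrary.

```python
import numpy as np

def de_wijs_cascade(n_levels, d=0.4):
    """Model of de Wijs: repeatedly split each block into two parts whose
    concentrations are (1+d) and (1-d) times the parent's value."""
    conc = np.array([1.0])
    for _ in range(n_levels):
        conc = np.concatenate([(1 + d) * conc, (1 - d) * conc])
    return conc

# 12 levels of subdivision -> 2^12 = 4096 blocks; mean concentration stays 1
c = de_wijs_cascade(12, d=0.4)
```

    After n levels the log-concentrations take n+1 distinct values with binomial frequencies, the logbinomial distribution that closely approximates the lognormal, as noted above.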

  13. Empirical profile mixture models for phylogenetic reconstruction

    National Research Council Canada - National Science Library

    Si Quang, Le; Gascuel, Olivier; Lartillot, Nicolas


    Motivation: Previous studies have shown that accounting for site-specific amino acid replacement patterns using mixtures of stationary probability profiles offers a promising approach for improving...

  14. Modeling methods for mixture-of-mixtures experiments applied to a tablet formulation problem. (United States)

    Piepel, G F


    During the past few years, statistical methods for the experimental design, modeling, and optimization of mixture experiments have been widely applied to drug formulation problems. Different methods are required for mixture-of-mixtures (MoM) experiments in which a formulation is a mixture of two or more "major" components, each of which is a mixture of one or more "minor" components. Two types of MoM experiments are briefly described. A tablet formulation optimization example from a 1997 article in this journal is used to illustrate one type of MoM experiment and corresponding empirical modeling methods. Literature references that discuss other methods for MoM experiments are also provided.

  15. Determining of migraine prognosis using latent growth mixture models

    Institute of Scientific and Technical Information of China (English)

    Bahar Tasdelen; Aynur Ozge; Hakan Kaleagasi; Semra Erdogan; Tufan Mengi


    Background This paper presents a retrospective study to classify patients into subtypes of treatment according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, since it considers population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. Methods The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. Results While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Conclusions Nausea, vomiting and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  16. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions (United States)

    Steinley, Douglas; Brusco, Michael J.


    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…

  17. Species Tree Inference Using a Mixture Model. (United States)

    Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens


    Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic

  18. Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    CERN Document Server

    Garrido, M C; Ruiz, A; 10.1613/jair.533


    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard prob...

  19. Learning High-Dimensional Mixtures of Graphical Models

    CERN Document Server

    Anandkumar, A; Kakade, S M


    We consider the problem of learning mixtures of discrete graphical models in high dimensions and propose a novel method for estimating the mixture components with provable guarantees. The method proceeds mainly in three stages. In the first stage, it estimates the union of the Markov graphs of the mixture components (referred to as the union graph) via a series of rank tests. It then uses this estimated union graph to compute the mixture components via a spectral decomposition method. The spectral decomposition method was originally proposed for latent class models, and we adapt this method for learning the more general class of graphical model mixtures. In the end, the method produces tree approximations of the mixture components via the Chow-Liu algorithm. Our output is thus a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. When the union graph has sparse node separators, we prove that our method has sample and computational complexities scaling as poly(p, ...

  20. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.


    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, a number which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
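
    The q(q+1)/2 term count, and the ill-conditioning it brings, can be checked directly. The design points below are random Dirichlet draws for illustration, not the waste glass data.

```python
import numpy as np

def n_quadratic_terms(q):
    """Full second-order Scheffe mixture model: q linear + q(q-1)/2 cross terms."""
    return q * (q + 1) // 2

# Term count grows quadratically with the number of components
counts = {q: n_quadratic_terms(q) for q in (3, 5, 10)}  # {3: 6, 5: 15, 10: 55}

# Ill-conditioning demo: a full quadratic design matrix for q = 10 components
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(10), size=60)   # 60 random mixtures (rows sum to 1)
q = X.shape[1]
D = np.column_stack([X[:, i] for i in range(q)] +
                    [X[:, i] * X[:, j] for i in range(q) for j in range(i + 1, q)])
cond = np.linalg.cond(D)                  # large value signals ill-conditioning
```

    For q = 10, as in the waste glass study, the full model already needs 55 coefficients, and the near-collinear cross-product columns drive the condition number up.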

  1. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.


    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters, then calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to the appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five different methods presented, one method, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the others. This method iteratively reduces the size of a GMM by successively merging pairs of component densities, with pairs selected for merger using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
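
    Runnalls-style reduction rests on a moment-preserving merge of two weighted components plus a KL-based cost for choosing which pair to merge next. A 1-D sketch (the full method operates on multivariate densities):

```python
import numpy as np

def merge_gaussians(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted 1-D Gaussian components:
    the result matches the pair's total weight, mean, and variance."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    m = a1 * m1 + a2 * m2
    v = a1 * v1 + a2 * v2 + a1 * a2 * (m1 - m2) ** 2
    return w, m, v

def merge_cost(w1, m1, v1, w2, m2, v2):
    """Runnalls-style upper bound on the KL discrepancy a merge introduces
    (1-D form); the cheapest pair is merged first."""
    w, m, v = merge_gaussians(w1, m1, v1, w2, m2, v2)
    return 0.5 * (w * np.log(v) - w1 * np.log(v1) - w2 * np.log(v2))

# Merging two equal-weight unit-variance components centered at 0 and 2
w, m, v = merge_gaussians(0.5, 0.0, 1.0, 0.5, 2.0, 1.0)
```

    Repeatedly merging the cheapest pair is what shrinks the GMM-UBM down to the lower-resolution hash GMM described above.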

  2. Maximum likelihood estimation in constrained parameter spaces for mixtures of factor analyzers


    Greselin, Francesca; Ingrassia, Salvatore


    Mixtures of factor analyzers are becoming more and more popular in the area of model-based clustering of high-dimensional data. Under the likelihood approach to data modeling, it is well known that the unconstrained log-likelihood function may present spurious maxima and singularities, due to specific patterns of the estimated covariance structures when their determinants approach 0. To reduce such drawbacks, in this paper we introduce a procedure for the parameter estimati...

  3. A stochastic evolutionary model generating a mixture of exponential distributions

    CERN Document Server

    Fenner, Trevor; Loizou, George


    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in \\cite{FENN15} so that it can generate mixture models, in particular a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.

  4. Detection of unobserved heterogeneity with growth mixture models


    Jost Reinecke; Luca Mariotti


    Latent growth curve models as structural equation models are extensively discussed in various research fields (Duncan et al., 2006). Recent methodological and statistical extensions focus on the consideration of unobserved heterogeneity in empirical data. Muthén extended the classical structural equation approach by mixture components, i.e. categorical latent classes (Muthén 2002, 2004, 2007). The paper will discuss applications of growth mixture models with data from one of the first panel...

  5. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.


    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for asparta...

  6. Modeling and interpreting biological effects of mixtures in the environment: introduction to the metal mixture modeling evaluation project. (United States)

    Van Genderen, Eric; Adams, William; Dwyer, Robert; Garman, Emily; Gorsuch, Joseph


    The fate and biological effects of chemical mixtures in the environment are receiving increased attention from the scientific and regulatory communities. Understanding the behavior and toxicity of metal mixtures poses unique challenges for incorporating metal-specific concepts and approaches, such as bioavailability and metal speciation, in multiple-metal exposures. To avoid the use of oversimplified approaches to assess the toxicity of metal mixtures, a collaborative 2-yr research project and multistakeholder group workshop were conducted to examine and evaluate available higher-tiered chemical speciation-based metal mixtures modeling approaches. The Metal Mixture Modeling Evaluation project and workshop achieved 3 important objectives related to modeling and interpretation of biological effects of metal mixtures: 1) bioavailability models calibrated for single-metal exposures can be integrated to assess mixture scenarios; 2) the available modeling approaches perform consistently well for various metal combinations, organisms, and endpoints; and 3) several technical advancements have been identified that should be incorporated into speciation models and environmental risk assessments for metals.

  7. Simulation of rheological behavior of asphalt mixture with lattice model

    Institute of Scientific and Technical Information of China (English)

    杨圣枫; 杨新华; 陈传尧


    A three-dimensional (3D) lattice model for predicting the rheological behavior of asphalt mixtures was presented. In this model, asphalt mixtures were described as a two-phase composite material consisting of asphalt sand and coarse aggregates distributed randomly. Asphalt sand was regarded as a viscoelastic material and the aggregates as an elastic material. The rheological response of asphalt mixtures subjected to different constant stresses was simulated. The calibrated overall creep strain shows a good approximation to experimental results.

  8. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars


    2011, and compute dollar losses and implied standard deviation losses. We compare our results to those of existing mixture models and other benchmarks like component models and jump models. Using the model confidence set test, the overall dollar root mean squared error of the best performing benchmark...

  9. Factors of Piecewise Growth Mixture Model: Distance and Pattern (多阶段混合增长模型的影响因素:距离与形态)

    Institute of Scientific and Technical Information of China (English)

    刘源; 骆方; 刘红云


    The piecewise growth mixture model (PGMM) has been a very popular analytical approach in recent studies of longitudinal data. PGMM builds on the piecewise growth model (PGM) and the growth mixture model (GMM). It is used to locate the turning point of a growth trajectory as well as to identify the latent classes of the population. It is particularly useful in detecting a non-continuous growing trend in a heterogeneous population. A simplified version of the model, latent class growth analysis (LCGA), has also often been used, with a restriction on the variance of PGMM. Understandably, factors affecting PGM and GMM will affect the estimates and performance of PGMM. These factors may include the change of the slope, the distance between latent classes, and the sample size. PGMM, being developed from the two growth-related models (PGM, GMM), also attempts to analyze the growth pattern in the latent growth trajectory as a special and newly emerging issue. Even for models with the same distance, different slopes can be combined to form different patterns. This issue has not been fully explored in previous literature. Yet in empirical studies, factors such as the distance between the latent classes, the growth pattern, the existing criteria for model fit indices, and the precision of parameter estimates are issues well worth examining. In the present simulation study, a two-class, two-period model was adopted. The three simulation conditions considered were: the sample size, the distance between latent classes, and the pattern of the growth trajectory. The sample size was set to 100, 200 and 500. The distance between the latent classes was defined as the squared Mahalanobis distance (SMD), with 1.5, 3 and 5 used to represent small, medium and large distances respectively. Four different types of growth pattern were selected to represent one parallel and three non-parallel patterns. Finally, LCGA was selected as the reference model to see whether PGMM

  10. Proper Versus Improper Mixtures in the ESR Model

    CERN Document Server

    Garola, Claudio


    The interpretation of mixtures is problematic in quantum mechanics (QM) because of the nonobjectivity of properties. The ESR model restores objectivity by reinterpreting quantum probabilities as conditional on detection and by embedding the mathematical formalism of QM into a broader noncontextual (hence local) framework. We have recently provided a Hilbert space representation of the generalized observables that appear in the ESR model. We show here that each proper mixture is represented by a family of density operators parametrized by the macroscopic properties characterizing the physical system $\\Omega$ that is considered, and that each improper mixture is represented by a single density operator which coincides with the operator that represents it in QM. The new representations avoid the problems mentioned above and entail some predictions that differ from the predictions of QM. One can thus contrive experiments for distinguishing empirically proper from improper mixtures, hence for confirming or disproving the ESR...

  11. Mixture modeling approach to flow cytometry data. (United States)

    Boedigheimer, Michael J; Ferbas, John


    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.

  12. Stochastic downscaling of precipitation with neural network conditional mixture models (United States)

    Carreau, Julie; Vrac, Mathieu


    We present a new class of stochastic downscaling models, the conditional mixture models (CMMs), which builds on neural network models. CMMs are mixture models whose parameters are functions of predictor variables. These functions are implemented with a one-layer feed-forward neural network. By combining the approximation capabilities of mixtures and neural networks, CMMs can, in principle, represent arbitrary conditional distributions. We evaluate the CMMs at downscaling precipitation data at three stations in the French Mediterranean region. A discrete (Dirac) component is included in the mixture to handle the "no-rain" events. Positive rainfall is modeled with a mixture of continuous densities, which can be either Gaussian, log-normal, or hybrid Pareto (an extension of the generalized Pareto). CMMs are stochastic weather generators in the sense that they provide a model for the conditional density of local variables given large-scale information. In this study, we did not look for the most appropriate set of predictors, and we settled for a decent set as the basis to compare the downscaling models. The set of predictors includes the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses sea level pressure fields on a 6 × 6 grid cell region surrounding the stations plus three date variables. We compare the three distribution families of CMMs with a simpler benchmark model, which is more common in the downscaling community. The difference between the benchmark model and CMMs is that positive rainfall is modeled with a single Gamma distribution. The results show that CMM with hybrid Pareto components outperforms both the CMM with Gaussian components and the benchmark model in terms of log-likelihood. However, there is no significant difference with the log-normal CMM. In general, the additional flexibility of mixture models, as opposed to using a single distribution, allows us to better represent the
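The Dirac-plus-continuous construction described above, a point mass for "no-rain" days plus a mixture of continuous densities for positive rainfall, can be sketched in a few lines. The log-normal positive part and all parameter names below are illustrative assumptions, not the authors' fitted model:

```python
import math

def rain_log_density(y, p_dry, mix):
    """Log-density of a Dirac-at-zero + log-normal mixture for daily rainfall.
    `mix` is a list of (weight, mu, sigma) for the positive-rain components;
    the weights are assumed to sum to 1."""
    if y == 0.0:
        return math.log(p_dry)            # "no-rain" point mass
    dens = 0.0
    for w, mu, sigma in mix:
        z = (math.log(y) - mu) / sigma    # log-normal pdf term
        dens += w * math.exp(-0.5 * z * z) / (y * sigma * math.sqrt(2.0 * math.pi))
    return math.log((1.0 - p_dry) * dens)
```

In a neural network conditional mixture model, `p_dry` and the component parameters would themselves be outputs of the network given the large-scale predictors; here they are fixed for clarity.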

  13. Count data modeling and classification using finite mixtures of distributions. (United States)

    Bouguila, Nizar


    In this paper, we consider the problem of constructing accurate and flexible statistical representations for count data, which we often confront in many areas such as data mining, computer vision, and information retrieval. In particular, we analyze and compare several generative approaches widely used for count data clustering, namely multinomial, multinomial Dirichlet, and multinomial generalized Dirichlet mixture models. Moreover, we propose a clustering approach via a mixture model based on a composition of the Liouville family of distributions, from which we select the Beta-Liouville distribution, and the multinomial. The novel proposed model, which we call multinomial Beta-Liouville mixture, is optimized by deterministic annealing expectation-maximization and minimum description length, and strives to achieve a high accuracy of count data clustering and model selection. An important feature of the multinomial Beta-Liouville mixture is that it has fewer parameters than the recently proposed multinomial generalized Dirichlet mixture. The performance evaluation is conducted through a set of extensive empirical experiments, which concern text and image texture modeling and classification and shape modeling, and highlights the merits of the proposed models and approaches.

  14. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index...

  15. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai


    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol mixtures, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of the artificial neural network. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these gives a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  16. Novel mixture model for the representation of potential energy surfaces (United States)

    Pham, Tien Lam; Kino, Hiori; Terakura, Kiyoyuki; Miyake, Takashi; Dam, Hieu Chi


    We demonstrate that knowledge of chemical physics on a materials system can be automatically extracted from first-principles calculations using a data mining technique; this information can then be utilized to construct a simple empirical atomic potential model. By using unsupervised learning of the generative Gaussian mixture model, physically meaningful patterns of atomic local chemical environments can be detected automatically. Based on the obtained information regarding these atomic patterns, we propose a chemical-structure-dependent linear mixture model for estimating the atomic potential energy. Our experiments show that the proposed mixture model significantly improves the accuracy of the prediction of the potential energy surface for complex systems that possess a large diversity in their local structures.
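The unsupervised learning step mentioned above amounts to fitting a Gaussian mixture by expectation-maximization. A minimal one-dimensional sketch on synthetic data (not the authors' atomic descriptors; initial means and iteration count are arbitrary choices):

```python
import math, random

def em_gmm_1d(xs, means, n_iter=50):
    """Fit a 1-D Gaussian mixture by EM, starting from the given means;
    weights and variances are initialized uniformly."""
    k = len(means)
    w = [1.0 / k] * k
    var = [1.0] * k
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[j] * math.exp(-0.5 * (x - means[j]) ** 2 / var[j])
                 / math.sqrt(2.0 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, variances from responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, xs)) / nj
    return w, means, var
```

On two well-separated synthetic clusters the fitted means land close to the generating means, which is the pattern-detection behavior the abstract relies on.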

  17. Finite mixture varying coefficient models for analyzing longitudinal heterogenous data. (United States)

    Lu, Zhaohua; Song, Xinyuan


    This paper aims to develop a mixture model to study heterogeneous longitudinal data on the treatment effect of heroin use from a California Civil Addict Program. Each component of the mixture is characterized by a varying coefficient mixed effect model. We use the Bayesian P-splines approach to approximate the varying coefficient functions. We develop Markov chain Monte Carlo algorithms to estimate the smooth functions, unknown parameters, and latent variables in the model. We use modified deviance information criterion to determine the number of components in the mixture. A simulation study demonstrates that the modified deviance information criterion selects the correct number of components and the estimation of unknown quantities is accurate. We apply the proposed model to the heroin treatment study. Furthermore, we identify heterogeneous longitudinal patterns.

  18. Phylogenetic mixture models can reduce node-density artifacts. (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark


    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the

  19. Community Detection Using Multilayer Edge Mixture Model

    CERN Document Server

    Zhang, Han; Lai, Jian-Huang; Yu, Philip S


    A wide range of complex systems can be modeled as networks with corresponding constraints on the edges and nodes, which have been extensively studied in recent years. Nowadays, with the progress of information technology, systems that contain the information collected from multiple perspectives have been generated. The conventional models designed for single perspective networks fail to depict the diverse topological properties of such systems, so multilayer network models aiming at describing the structure of these networks emerge. As a major concern in network science, decomposing the networks into communities, which usually refers to closely interconnected node groups, extracts valuable information about the structure and interactions of the network. Unlike the contention of dozens of models and methods in conventional single-layer networks, methods aiming at discovering the communities in the multilayer networks are still limited. In order to help explore the community structure in multilayer networks, we...

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward


    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. Modeling Biodegradation Kinetics on Benzene and Toluene and Their Mixture

    Directory of Open Access Journals (Sweden)

    Aparecido N. Módenes


    The objective of this work was to model the biodegradation kinetics of the toxic compounds toluene and benzene as pure substrates and in a mixture. As a control, the Monod and Andrews models were used. To predict substrate interactions, more sophisticated inhibition and competition models, as well as the SKIP (sum kinetics with interaction parameters) model, were applied. The evaluation of the models was based on experimental data from Pseudomonas putida F1 activities published in the literature. In the parameter identification procedure, the global method of particle swarm optimization (PSO) was applied. The simulation results show that the biodegradation process of a pure toxic substrate is best described by Andrews' model. The biodegradation process of a mixture of toxic substrates is modeled best when the modified competitive inhibition and SKIP models are used. The developed software can be used as a toolbox of a kinetics model catalogue for industrial wastewater treatment process design and optimization.

  2. Statistical Compressed Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen


    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models (where N is the signal dimension), and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...
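For a single Gaussian signal model, the "optimal decoder implemented via linear filtering" mentioned above is simply the conditional mean of the signal given the measurements. A small sketch with a toy low-rank-plus-noise covariance (the covariance, dimensions, and sensing matrix below are our illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 16, 4                              # signal dimension, number of measurements

# Toy Gaussian signal model: zero mean, approximately rank-3 covariance
U = rng.standard_normal((N, 3))
Sigma = U @ U.T + 0.01 * np.eye(N)
mu = np.zeros(N)

x = rng.multivariate_normal(mu, Sigma)    # draw a signal from the model
A = rng.standard_normal((k, N))           # Gaussian sensing matrix, O(k) rows
y = A @ x                                 # noiseless measurements

# Linear (Wiener) decoder: conditional mean of x given y = A x
x_hat = mu + Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, y - A @ mu)
```

Because the signal energy is concentrated in a 3-dimensional subspace and k = 4, the linear decoder reproduces the measurements exactly and recovers most of the signal, in line with the average-case guarantee the abstract describes.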

  3. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung


    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  4. Multi-resolution image segmentation based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    Tang Yinggan; Liu Dong; Guan Xinping


    Mixture-model-based image segmentation, which assumes that image pixels are independent and does not consider the positional relationship between pixels, is not robust to noise and usually leads to misclassification. A new segmentation method, called the multi-resolution Gaussian mixture model method, is proposed. First, an image pyramid is constructed and a son-father link relationship is built between the levels of the pyramid. Then the mixture model segmentation method is applied to the top level. The segmentation result on the top level is passed top-down to the bottom level according to the son-father link relationship between levels. The proposed method considers not only local but also global information of the image; it overcomes the effect of noise and can obtain a better segmentation result. Experimental results demonstrate its effectiveness.

  5. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA-profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent actual mixing proportions can be rediscovered in the amplifier output, thereby addressing the question of confidence in separate DNA-profiles suggested by an output.

  6. Modeling of Complex Mixtures: JP-8 Toxicokinetics (United States)


    diffusion, including metabolic loss via the cytochrome P-450 system, described by non-linear Michaelis-Menten kinetics as shown in the following...point. Inhalation and iv were the dose routes for the rat study. The modelers used saturable (Michaelis-Menten) kinetics as well as a second... Michaelis-Menten liver metabolic constants for n-decane have been measured (Km = 1.5 mg/L and Vmax = 0.4 mg/hour) using rat liver slices in a vial

  7. Spatial mixture multiscale modeling for aggregated health data. (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin


    One of the main goals in spatial epidemiology is to study the geographical pattern of disease risks. For such purpose, the convolution model composed of correlated and uncorrelated components is often used. However, one of the two components could be predominant in some regions. To investigate the predominance of the correlated or uncorrelated component for multiple scale data, we propose four different spatial mixture multiscale models by mixing spatially varying probability weights of correlated (CH) and uncorrelated heterogeneities (UH). The first model assumes that there is no linkage between the different scales and, hence, we consider independent mixture convolution models at each scale. The second model introduces linkage between finer and coarser scales via a shared uncorrelated component of the mixture convolution model. The third model is similar to the second model but the linkage between the scales is introduced through the correlated component. Finally, the fourth model accommodates for a scale effect by sharing both CH and UH simultaneously. We applied these models to real and simulated data, and found that the fourth model is the best model followed by the second model.

  8. A stochastic evolutionary model generating a mixture of exponential distributions (United States)

    Fenner, Trevor; Levene, Mark; Loizou, George


    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
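The exponential-mixture survival function that the abstract matches to the query data, together with a bare-bones EM fitting loop, can be sketched as follows (a hedged illustration; the paper's urn-based generative model itself is not reproduced here, and all initial values are arbitrary):

```python
import math, random

def em_exp_mixture(xs, rates, n_iter=200):
    """Fit a mixture of exponential distributions by EM,
    starting from the given component rates."""
    k = len(rates)
    w = [1.0 / k] * k
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in xs:
            p = [w[j] * rates[j] * math.exp(-rates[j] * x) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: closed-form updates for weights and rates
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            rates[j] = nj / sum(r[j] * x for r, x in zip(resp, xs))
    return w, rates

def survival(t, w, rates):
    """Mixture survival function S(t) = sum_j w_j * exp(-rate_j * t)."""
    return sum(wj * math.exp(-rj * t) for wj, rj in zip(w, rates))
```

With data drawn from two exponentials with well-separated rates, EM recovers two distinct rate estimates, and `survival` gives the decreasing S(t) curve that is compared against the empirical query survival function.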

  9. Multinomial mixture model with heterogeneous classification probabilities (United States)

    Holland, M.D.; Gray, B.R.


    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  10. Hard-sphere kinetic models for inert and reactive mixtures (United States)

    Polewczak, Jacek


    I consider stochastic variants of a simple reacting sphere (SRS) kinetic model (Xystris and Dahler 1978 J. Chem. Phys. 68 387-401, Qin and Dahler 1995 J. Chem. Phys. 103 725-50, Dahler and Qin 2003 J. Chem. Phys. 118 8396-404) for dense reacting mixtures. In contrast to the line-of-center models of chemical reactions, in the SRS kinetic model the microscopic reversibility (detailed balance) can easily be shown to be satisfied, and thus all mathematical aspects of the model can be fully justified. In the SRS model, the molecules behave as if they were single mass points with two internal states. Collisions may alter the internal states of the molecules, and this occurs when the kinetic energy associated with the reactive motion exceeds the activation energy. Reactive and non-reactive collision events are considered to be hard-sphere-like. I consider a four-component mixture A, B, A*, B*, in which the chemical reactions are of the type $A + B \rightleftharpoons A^* + B^*$, with A* and B* being distinct species from A and B. This work extends the joint work with George Stell to kinetic models of dense inert and reactive mixtures. The idea of introducing a smearing-type effect in the collisional process results in a new class of stochastic kinetic models for both inert and reactive mixtures. In this paper the important new mathematical properties of such systems of kinetic equations are proven. New results for the stochastic revised Enskog system for inert mixtures are also provided.

  11. Robust estimation of unbalanced mixture models on samples with outliers. (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga


    Mixture models are often used to compactly represent samples from heterogeneous sources. In the real world, however, the samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be produced by high-density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator incorporating trimming of the outliers based on component-wise confidence-level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, the other consisting of structural magnetic resonance images of the brain with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method is capable of robustly estimating unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples in which the outlier fraction cannot be estimated in advance.
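A minimal sketch of the trimmed-likelihood idea on which such estimators build (assuming a toy 1-D Gaussian mixture and plain density-based trimming; this is not the authors' component-wise confidence-level ordering, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def trimmed_em_1d(x, k=2, trim=0.1, n_iter=50):
    """Toy 1-D Gaussian-mixture EM with trimming: at each iteration the
    fraction `trim` of observations with the lowest mixture density is
    excluded from the M-step, so gross outliers cannot drag the fit."""
    mu = np.quantile(x, np.linspace(0.1, 0.95, k))
    sigma = np.full(k, x.std())
    w = np.full(k, 1 / k)
    for _ in range(n_iter):
        # E-step: component densities and responsibilities
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        total = dens.sum(axis=1)
        keep = total >= np.quantile(total, trim)  # trim lowest-density points
        r = dens[keep] / total[keep, None]
        # M-step on the retained observations only
        nk = r.sum(axis=0)
        w = nk / nk.sum()
        mu = (r * x[keep, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[keep, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

# unbalanced mixture (roughly 90/10) plus ~5% uniform outliers
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(6, 1, 100),
                    rng.uniform(-20, 30, 50)])
w, mu, sigma = trimmed_em_1d(x, trim=0.1)
print(np.sort(mu))
```

Despite the imbalance and contamination, the trimmed fit recovers component locations near the true means of 0 and 6.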

  12. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    …varying higher-order moments of the risk-neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best-performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis, this difference increases to 69%.

  13. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek


    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we also equipped the package with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model fitting in all modeling variants. The package can also be applied to selection of the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.
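bgmm itself is an R package; as a language-neutral sketch of the partially supervised idea (a toy 1-D Gaussian mixture in which a handful of known labels clamp the E-step responsibilities; this is not bgmm's actual soft-label or belief-based algorithm, and all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(6)

def partially_supervised_em(x, labels, k=2, n_iter=50):
    """1-D Gaussian-mixture EM where some observations carry known labels
    (labels[i] in {0..k-1}, or -1 if unknown): labelled points get hard
    responsibilities, unlabelled points get the usual soft ones."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    sigma = np.full(k, x.std())
    w = np.full(k, 1 / k)
    for _ in range(n_iter):
        # normalizing constant cancels in the responsibilities
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        known = labels >= 0
        r[known] = np.eye(k)[labels[known]]  # clamp labelled observations
        nk = r.sum(axis=0)
        w, mu = nk / nk.sum(), (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

# two overlapping components; only 20 of 600 observations are labelled
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
labels = np.full(600, -1)
labels[:10], labels[300:310] = 0, 1
w, mu, sigma = partially_supervised_em(x, labels)
print(mu)
```

Even a small set of trusted labels anchors the components and resolves the label-switching ambiguity of the fully unsupervised fit.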

  14. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars


    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a non-parametric family. The main results are consistency of the non-parametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  15. Comparing State SAT Scores Using a Mixture Modeling Approach (United States)

    Kim, YoungKoung Rachel


    Presented at the national conference of AERA (American Educational Research Association) in April 2009. The large variability of the SAT-taker population across states makes state-by-state comparisons of SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  16. An integral equation model for warm and hot dense mixtures

    CERN Document Server

    Starrett, C E; Daligault, J; Hamel, S


    In Starrett and Saumon [Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one "atom" in a plasma is determined using a density functional theory based average-atom (AA) model, and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e. mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  17. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander


    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict…

  18. A general mixture model for sediment laden flows (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián


    A mixture model for the general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly obtained expression for the slip velocity between phases allows for the computation of the sediment motion without the need to solve the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence, while an algebraic model is adopted for the turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and, importantly, this is done at a fraction of the computational effort required by the complete two-fluid model.

  19. Adaptive mixture observation models for multiple object tracking

    Institute of Scientific and Technical Information of China (English)

    CUI Peng; SUN LiFeng; YANG ShiQiang


    Multiple object tracking (MOT) poses many difficulties for conventional, well-studied single object tracking (SOT) algorithms, such as severe expansion of the configuration space, high complexity of motion conditions, and visual ambiguities among nearby targets, among which the visual ambiguity problem is the central challenge. In this paper, we address this problem by embedding adaptive mixture observation models (AMOM) into a mixture tracker implemented in the Particle Filter framework. In AMOM, the extracted multiple features for appearance description are combined according to their discriminative power between ambiguity-prone objects, where the discriminability of features is evaluated by online entropy-based feature selection techniques. The introduction of AMOM helps overcome the inability of the conventional mixture tracker to handle object occlusions, while retaining its merits of flexibility and high efficiency. Experiments show significant improvement in MOT scenarios compared with other methods.

  20. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.


    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  1. Comparison of criteria for choosing the number of classes in Bayesian finite mixture models

    NARCIS (Netherlands)

    K. Nasserinejad (Kazem); J.M. van Rosmalen (Joost); W. de Kort (Wim); E.M.E.H. Lesaffre (Emmanuel)


    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown th

  2. Phylogenetic mixtures and linear invariants for equal input models. (United States)

    Casanellas, Marta; Steel, Mike


    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed: the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result first established by James Lake (Mol Biol Evol 4:167-191, 1987).
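For reference (standard background, not stated in the abstract): the equal input model fixes a stationary distribution π and lets every substitution rate depend only on the target state. A common parameterization of the rate matrix is

```latex
Q_{ij} = \mu\,\pi_j \quad (i \neq j), \qquad
Q_{ii} = -\mu \sum_{k \neq i} \pi_k ,
```

so that with four states and arbitrary π it reduces to the Felsenstein 1981 model, and with uniform π to the Jukes-Cantor model.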

  3. The physical model for research of behavior of grouting mixtures (United States)

    Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav


    The paper describes a physical model designed for verification of the behavior of grouting mixtures applied below the underground water level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extension of grouting in this environment is based on measurement of humidity and temperature with combined sensors located within preinstalled special measurement probes around the grouting needle. Humidity was measured by a combined capacitive sensor DTH-1010; temperature was gathered by an NTC thermistor. The humidity sensors recorded the time when the grouting mixture reached each sensor location, and the NTC thermistors measured temperature changes over time from the start of injection. This made it possible to develop a 3D map showing the distribution of the grouting mixture through the environment. The measurement itself was carried out by a purpose-designed primary measurement module capable of connecting four humidity and temperature sensors. This module also converts these physical signals into unified analogue signals brought to the analogue input terminals of a programmable automation controller (PAC), the WinPAC-8441. This controller handles the measurement itself, archiving, and visualization of all data. A detailed description of the complete measurement system and its evaluation, in the form of 3D animations and graphs, is given in the full paper.

  4. Landmine detection using mixture of discrete hidden Markov models (United States)

    Frigui, Hichem; Hamdi, Anis; Missaoui, Oualid; Gader, Paul


    We propose a landmine detection algorithm that uses a mixture of discrete hidden Markov models. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification could be achieved through clustering in the parameter space or in the feature space. However, this approach is inappropriate, as it is not trivial to define a meaningful distance metric for model parameters or sequence comparison. Our proposed approach is based on clustering in the log-likelihood space, and has two main steps. First, one HMM is fit to each of the R individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an R×R log-likelihood distance matrix that is partitioned into K groups using a hierarchical clustering algorithm. In the second step, we pool the sequences into K groups according to the cluster to which they belong, and we fit one HMM to each group. The mixture of these K HMMs is used to build a descriptive model of the data. An artificial neural network is then used to fuse the outputs of the K models. Results on large and diverse Ground Penetrating Radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mine and one model for the background.
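The two-step log-likelihood-space clustering can be sketched as follows (a toy version that fits a first-order Markov chain per sequence instead of a full HMM, on made-up binary sequences; the structure of the R×R log-likelihood matrix and the hierarchical partition mirrors the paper's first step):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

def fit_markov(seq, n_sym=2, alpha=1.0):
    """Add-alpha-smoothed MLE of a first-order Markov transition matrix;
    stands in for the per-sequence HMM fit of the first step."""
    T = np.full((n_sym, n_sym), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def loglik(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def sample(T, n=200):
    s = [0]
    for _ in range(n - 1):
        s.append(rng.choice(2, p=T[s[-1]]))
    return np.array(s)

# two groups of synthetic sequences with clearly different dynamics
T_sticky = np.array([[0.9, 0.1], [0.1, 0.9]])
T_flippy = np.array([[0.2, 0.8], [0.8, 0.2]])
seqs = [sample(T_sticky) for _ in range(5)] + [sample(T_flippy) for _ in range(5)]

# step 1: fit one model per sequence, build the R x R log-likelihood matrix
models = [fit_markov(s) for s in seqs]
L = np.array([[loglik(s, m) for m in models] for s in seqs])

# step 2: hierarchical clustering of the rows into K = 2 groups
labels = fcluster(linkage(L, method="ward"), t=2, criterion="maxclust")
print(labels)
```

Each sequence scores high under models fit to dynamically similar sequences, so the rows of L separate cleanly into the two generating groups.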

  5. Gaussian mixture models as flux prediction method for central receivers (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie


    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.

  6. 40 CFR Table 2c to Subpart E of... - Reactivity Factors for Aromatic Hydrocarbon Solvent Mixtures (United States)


    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Reactivity Factors for Aromatic Hydrocarbon Solvent Mixtures 2C Table 2C to Subpart E of Part 59 Protection of Environment ENVIRONMENTAL... Hydrocarbon Solvent Mixtures Bin Boiling range (degrees F) Criteria Reactivity factor 21 280-290...

  7. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Gonzalo Vegas-Sanchez-Ferrero


    Full Text Available Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing nature. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
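The ML difficulty the authors mention is easy to see first-hand. A hedged sketch (synthetic amplitudes and scipy's generic numerical fitter, not the authors' robust methodology): `scipy.stats.gengamma` includes the Nakagami family as the special case c = 2, and its likelihood surface is flat enough that individual parameters can be poorly determined even when the fitted density matches the data well.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic "speckle" amplitudes from a generalized Gamma distribution;
# c = 2 corresponds to the Nakagami special case.
samples = stats.gengamma.rvs(a=2.0, c=2.0, scale=1.5, size=5000,
                             random_state=rng)

# Generic numerical ML fit (scipy has no closed form either); fixing
# loc = 0 keeps the fit on the three shape/scale parameters.
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(samples, floc=0)
fitted_mean = stats.gengamma.mean(a_hat, c_hat, loc=0, scale=scale_hat)
print(a_hat, c_hat, scale_hat)
print(fitted_mean, samples.mean())
```

The fitted mean tracks the sample mean closely even when the recovered (a, c, scale) triple drifts from the generating values, illustrating why robust parameter estimation for the GG family is a contribution in its own right.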

  8. Modeling, clustering, and segmenting video with mixtures of dynamic textures. (United States)

    Chan, Antoni B; Vasconcelos, Nuno


    A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous work in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g. fire, steam, water, vehicle and pedestrian traffic). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g. optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.

  9. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll


    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although the normalized L2 distance was slightly inferior to the Kullback-Leibler distance with respect to classification performance, it has the advantage of obeying the triangle inequality, which allows for efficient searching.
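For single Gaussians both distances have closed forms, which makes the triangle-inequality point easy to check numerically (a sketch with made-up parameters; for full mixtures the KL distance has no closed form and must be approximated):

```python
import numpy as np

def gauss_overlap(m1, s1, m2, s2):
    # closed form for the integral of N(x; m1, s1^2) * N(x; m2, s2^2) over x
    v = s1**2 + s2**2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2 * np.pi * v)

def l2_distance(m1, s1, m2, s2):
    # L2 distance between the two Gaussian densities (a true metric)
    return np.sqrt(gauss_overlap(m1, s1, m1, s1)
                   + gauss_overlap(m2, s2, m2, s2)
                   - 2 * gauss_overlap(m1, s1, m2, s2))

def kl(m1, s1, m2, s2):
    # KL(N1 || N2) for univariate Gaussians; asymmetric, so not a metric
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

a, b, c = (0.0, 1.0), (1.0, 1.0), (3.0, 0.5)
d_ab, d_bc, d_ac = l2_distance(*a, *b), l2_distance(*b, *c), l2_distance(*a, *c)
print(d_ac <= d_ab + d_bc)     # triangle inequality holds for L2
print(kl(*a, *c), kl(*c, *a))  # KL is direction-dependent
```

The triangle inequality is exactly what metric-tree and other pruning-based search structures rely on, which is why the L2 distance "allows for efficient searching" despite its slightly lower classification accuracy.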

  10. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models. (United States)

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P


    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detecting clusters is the maximum separation method, a heuristic that relies heavily upon parameters chosen manually by the user. In this work, a new clustering algorithm, the Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.

  11. Translated Poisson Mixture Model for Stratification Learning (PREPRINT) (United States)


    Gloria Haro, Dept. Teoria … Figure 1 shows, for each algorithm, the point cloud with each point colored and marked differently according to its classification (clustering of a spiral and a plane; results differ across algorithms; this is a color figure). Due to the statistical nature of the R-TPMM

  12. Analysis of Forest Foliage Using a Multivariate Mixture Model (United States)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.


    Data consisting of wet-chemistry measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths that have been associated with nitrogen bonds.
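A hedged sketch of the idea (synthetic numbers standing in for the composition fractions and spectra; the study's data and any constraints it imposes are not reproduced): with known composition fractions, the unknown component spectra fall out of a multivariate least-squares fit of the linear mixture model for absorbance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: the absorbance spectrum of each ground-leaf sample
# is a linear mixture of pure component spectra weighted by fractions.
n_samples, n_bands, n_comp = 60, 40, 3
fractions = rng.dirichlet(np.ones(n_comp), size=n_samples)  # "wet chemistry"
pure = np.abs(rng.normal(1.0, 0.3, (n_comp, n_bands)))      # unknown spectra
absorbance = fractions @ pure + rng.normal(0, 0.01, (n_samples, n_bands))

# "Unmixing": multivariate least squares for the component spectra,
# one regression per wavelength band, solved in a single lstsq call
est, *_ = np.linalg.lstsq(fractions, absorbance, rcond=None)
print(np.abs(est - pure).max())  # worst-case band error across components
```

With modest noise the recovered component spectra agree with the true ones to within a few hundredths of an absorbance unit at every band.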

  13. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.


    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.


    Directory of Open Access Journals (Sweden)

    N Dwidayati


    Full Text Available A mixture model can estimate the proportion of cured patients and the survival function of uncured patients. In this study, a mixture model is developed for cure-rate analysis based on missing data. Several methods can be used to analyze missing data; one of them is the EM algorithm. This method is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values in four steps: (1) choose an initial set of parameters for the model, (2) determine the expectation values for the missing data, (3) induce new model parameters from the combination of the expectation values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after each iteration; hence, under the EM algorithm, the likelihood sequence converges whenever the likelihood is bounded above.
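The monotonicity property can be checked directly. A minimal sketch with a two-component Gaussian mixture (the unobserved component labels play the role of the missing data; all numbers are synthetic): the observed-data log-likelihood never decreases across EM iterations.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic two-component data, deliberately poor starting parameters
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])
mu, sigma, w = np.array([-0.5, 0.5]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

loglik = []
for _ in range(30):
    # observed-data log-likelihood under the current parameters
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    loglik.append(np.log(dens.sum(axis=1)).sum())
    # E-step: responsibilities; M-step: weighted parameter updates
    r = dens / dens.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)
    w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(loglik[0], loglik[-1])  # strictly improved, never decreasing
```

The sequence is nondecreasing at every step, and since it is bounded above it must converge, which is precisely the convergence argument sketched in the abstract.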

  15. Induced polarization of clay-sand mixtures. Experiments and modelling. (United States)

    Okay, G.; Leroy, P.


    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-montmorillonite), in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100% by volume of the sand-clay mixture) and salinities (distilled water, and 0.1 g/L, 1 g/L, and 10 g/L NaCl solutions). Induced polarization measurements were performed with a cylindrical four-electrode sample holder associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the CEC of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the frequency dependence of the quadrature conductivity of sand-kaolinite mixtures is stronger than that of sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents. The experimental data show good agreement with values predicted by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model also includes the effect of the grain size distribution upon the complex conductivity spectra.

  16. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models


    Hiroki Yoshioka; Kenta Obata


    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...
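A minimal two-endmember sketch of LMM-based FVC retrieval and its noise sensitivity (the endmember spectra are made-up red/NIR reflectances, not values from the study):

```python
import numpy as np

# Two-endmember linear mixture model: rho = f * veg + (1 - f) * soil (+ noise),
# where f is the fraction of vegetation cover (FVC).
veg = np.array([0.05, 0.50])   # hypothetical vegetation endmember (red, NIR)
soil = np.array([0.20, 0.25])  # hypothetical bare-soil endmember (red, NIR)

def fvc(rho):
    """Least-squares FVC estimate under the two-endmember LMM."""
    d = veg - soil
    return float(d @ (rho - soil) / (d @ d))

rho = 0.3 * veg + 0.7 * soil  # noise-free pixel with FVC = 0.3
print(fvc(rho))               # recovers 0.3 (up to floating point)
# measurement error in the reflectance spectrum propagates linearly into FVC
print(fvc(rho + np.array([0.01, -0.01])))
```

Because the estimator is linear in the measured spectrum, reflectance errors map linearly into FVC errors, which is why the robustness of LMM-based retrievals depends directly on the geometry of the chosen endmembers.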

  17. A hybrid finite mixture model for exploring heterogeneous ordering patterns of driver injury severity. (United States)

    Ma, Lu; Wang, Guan; Yan, Xuedong; Weng, Jinxian


    Debates on the ordering patterns of crash injury severity are ongoing in the literature. Models without proper econometrical structures for accommodating the complex ordering patterns of injury severity could result in biased estimations and misinterpretations of factors. This study proposes a hybrid finite mixture (HFM) model aiming to capture heterogeneous ordering patterns of driver injury severity while enhancing modeling flexibility. It attempts to probabilistically partition samples into two groups in which one group represents an unordered/nominal data-generating process while the other represents an ordered data-generating process. Conceptually, the newly developed model offers flexible coefficient settings for mining additional information from crash data, and more importantly it allows the coexistence of multiple ordering patterns for the dependent variable. A thorough modeling performance comparison is conducted between the HFM model, and the multinomial logit (MNL), ordered logit (OL), finite mixture multinomial logit (FMMNL) and finite mixture ordered logit (FMOL) models. According to the empirical results, the HFM model presents a strong ability to extract information from the data, and more importantly to uncover heterogeneous ordering relationships between factors and driver injury severity. In addition, the estimated weight parameter associated with the MNL component in the HFM model is greater than the one associated with the OL component, which indicates a larger likelihood of the unordered pattern than the ordered pattern for driver injury severity.

  18. Sand - rubber mixtures submitted to isotropic loading: a minimal model (United States)

    Platzer, Auriane; Rouhanifar, Salman; Richard, Patrick; Cazacliu, Bogdan; Ibraim, Erdin


    The volume of scrap tyres, an undesired urban waste, is increasing rapidly in every country. Mixing sand and rubber particles as a lightweight backfill is one of the possible alternatives to stockpiling them in the environment. This paper presents a minimal model aiming to capture the evolution of the void ratio of sand-rubber mixtures undergoing isotropic compression loading. It is based on the idea that, under pressure, the rubber chips deform and partially fill the porous space of the system, leading to a decrease of the void ratio with increasing pressure. Our simple approach is capable of reproducing experimental data for two types of sand (a rounded one and a sub-angular one) and for mixtures composed of up to 50% rubber.

  19. Dirichlet multinomial mixtures: generative models for microbial metagenomics. (United States)

    Holmes, Ian; Harris, Keith; Quince, Christopher


    We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. These data can be represented as a frequency matrix giving the number of times each taxon is observed in each sample. The samples have different sizes, and the matrix is sparse, as communities are diverse and skewed towards rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components, each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities', and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework'. This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence, four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogeneous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high-variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle' (AKP) applied to microbial communities: disturbed states have many more configurations than undisturbed ones. We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with a more variable
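The generative side of the DMM is compact enough to sketch directly (hypothetical hyperparameters and sequencing depths; the paper's evidence-framework fitting is not reproduced): pick a Dirichlet component per sample, draw taxa probabilities, then draw multinomial counts.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_dmm(n_samples, depths, alphas, weights):
    """Generative DMM: component -> Dirichlet taxa probabilities ->
    multinomial counts. Small alphas give the sparse, skewed count
    matrices typical of metagenomic data."""
    comps = rng.choice(len(alphas), size=n_samples, p=weights)
    counts = np.empty((n_samples, alphas.shape[1]), dtype=int)
    for i, k in enumerate(comps):
        p = rng.dirichlet(alphas[k])
        counts[i] = rng.multinomial(depths[i], p)
    return comps, counts

alphas = np.array([[5.0, 1.0, 0.1, 0.1],    # "metacommunity" A: taxon 1 dominant
                   [0.1, 0.1, 1.0, 5.0]])   # "metacommunity" B: taxon 4 dominant
depths = rng.integers(500, 2000, 200)       # uneven sample sizes
comps, counts = sample_dmm(200, depths, alphas, [0.5, 0.5])
print(counts[comps == 0].sum(axis=0))
print(counts[comps == 1].sum(axis=0))
```

The pooled counts within each component show the distinct compositions that inference then works backwards to recover when clustering real communities into metacommunities.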

  20. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Directory of Open Access Journals (Sweden)

    Ian Holmes

    Full Text Available We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. These data can be represented as a frequency matrix giving the number of times each taxon is observed in each sample. The samples have different sizes, and the matrix is sparse, as communities are diverse and skewed towards rare taxa. Most methods previously used to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components, each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities' and, hence, determine envirotypes or enterotypes: groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework', which includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence, four clusters fit these data best. Two clusters were dominated by Bacteroides and were homogeneous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high-variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle' (AKP) applied to microbial communities: disturbed states have many more configurations than undisturbed ones. We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with
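    The generative process just described — pick a Dirichlet mixture component, draw a vector of taxa probabilities, then draw observed counts by multinomial sampling — can be sketched in a few lines. The hyperparameters, weights and read count below are illustrative placeholders, not values fitted by the authors' software.

    ```python
    import random

    def sample_dmm(alphas, weights, n_reads, seed=0):
        """Draw one sample from a Dirichlet multinomial mixture (DMM).

        alphas  : per-component Dirichlet hyperparameters (one list per
                  metacommunity); illustrative, not fitted values
        weights : mixing proportions of the components
        n_reads : sample size (number of observed sequence reads)
        """
        rng = random.Random(seed)
        # choose a metacommunity (mixture component)
        k = rng.choices(range(len(weights)), weights=weights)[0]
        # Dirichlet draw via normalised Gamma variates
        g = [rng.gammavariate(a, 1.0) for a in alphas[k]]
        total = sum(g)
        p = [x / total for x in g]
        # multinomial sampling: n_reads draws over the taxa
        counts = [0] * len(p)
        for taxon in rng.choices(range(len(p)), weights=p, k=n_reads):
            counts[taxon] += 1
        return k, p, counts
    ```

    Fitting the mixture (estimating the hyperparameters and weights from a frequency matrix) is the hard part, which the paper handles with the evidence framework and a Laplace approximation of the model evidence.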

  1. The Spectral Mixture Models: A Minimum Information Divergence Approach (United States)


    Various information criteria have been proposed, such as the Akaike and Bayesian Information Criteria; if an inappropriate model is chosen, then the results are questionable or perhaps wrong. Developing a metric that measures the fitness of different models is beyond the scope of our discussion.

  2. Background Subtraction with Dirichlet Process Mixture Models. (United States)

    Haines, Tom S F; Tao Xiang


    Video analysis often begins with background subtraction. This problem is often approached in two steps: a background model followed by a regularisation scheme. A model of the background allows it to be distinguished on a per-pixel basis from the foreground, whilst the regularisation combines information from adjacent pixels. We present a new method based on Dirichlet process Gaussian mixture models, which are used to estimate per-pixel background distributions, followed by probabilistic regularisation. Using a non-parametric Bayesian method allows per-pixel mode counts to be automatically inferred, avoiding over-/under-fitting. We also develop novel model-learning algorithms for continuous update of the model in a principled fashion as the scene changes. These key advantages enable us to outperform the state-of-the-art alternatives on four benchmarks.
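    As a rough illustration of per-pixel background modelling, here is a toy online model with a single Gaussian per pixel. The paper's Dirichlet process mixture generalises this by inferring an appropriate number of modes per pixel; the class name, learning rate and thresholding constant below are assumptions made for the sketch.

    ```python
    import math

    class PixelBackground:
        """Toy per-pixel background model: one online Gaussian per pixel
        (a drastic simplification of the Dirichlet process mixture in the
        paper). A pixel is foreground when it lies far from the learned
        background mode."""

        def __init__(self, mean=0.0, var=25.0, lr=0.05):
            self.mean, self.var, self.lr = mean, var, lr

        def update(self, value, k=2.5):
            """Classify one observation, then adapt the background."""
            d = value - self.mean
            is_fg = abs(d) > k * math.sqrt(self.var)
            if not is_fg:  # adapt only to pixels judged as background
                self.mean += self.lr * d
                self.var += self.lr * (d * d - self.var)
            return is_fg
    ```

    A real system would run one such model per pixel and then regularise the resulting foreground mask using information from adjacent pixels, as the abstract describes.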

  3. Molecular Code Division Multiple Access: Gaussian Mixture Modeling (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
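    The threshold-detector idea can be illustrated with plain Gaussians. The thesis models the received signal as a Gaussian mixture and finds the threshold iteratively; the single-Gaussian, equiprobable-symbol setting below is a simplifying assumption, not the thesis's receiver.

    ```python
    import math

    def q(x):
        """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    def bit_error_prob(threshold, mu0, mu1, sigma):
        """Error probability of a threshold detector for equiprobable binary
        symbols with Gaussian-distributed received signals; a toy stand-in
        for the thesis's Gaussian-mixture receiver."""
        p_fa = q((threshold - mu0) / sigma)      # decide 1 when 0 was sent
        p_md = 1 - q((threshold - mu1) / sigma)  # decide 0 when 1 was sent
        return 0.5 * (p_fa + p_md)
    ```

    For this symmetric toy case the midpoint threshold is optimal; the thesis instead iterates on a linear approximation of the error probability because the mixture densities make the exact minimisation harder.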

  4. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht


    In Part I of this series of articles, the study of H2S mixtures with CPA was presented. In this study the phase behavior of CO2-containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both vapor–liquid and liquid–liquid phase equilibria are considered and, when polar compounds (water, alcohols and glycols) are involved, the importance of cross-association is investigated. The cross-association is accounted for either via combining rules or using a cross-solvation energy obtained from experimental spectroscopic or calorimetric data or from ab initio calculations. In both cases two...

  5. Modeling human mortality using mixtures of bathtub shaped failure distributions. (United States)

    Bebbington, Mark; Lai, Chin-Diew; Zitikis, Ricardas


    Aging and mortality are usually modeled by the Gompertz-Makeham distribution, where the mortality rate accelerates with age in adult humans. The resulting parameters are interpreted as the frailty and the decrease in vitality with age. This fits life data from 'westernized' societies well, where the data are accurate, of high resolution, and show the effects of high-quality post-natal care. We show, however, that when the data are of lower resolution and contain considerable structure in the infant mortality, the fit can be poor. Moreover, the Gompertz-Makeham distribution is consistent with neither the force of natural selection nor the recently identified 'late-life mortality deceleration'. Although actuarial models such as the Heligman-Pollard distribution can, in theory, achieve an improved fit, the lack of a closed form for the survival function makes fitting extremely arduous, and the biological interpretation can be lacking. We show that a mixture assigning mortality to exogenous or endogenous causes, using the reduced additive and flexible Weibull distributions, models human mortality well over the entire life span. The components of the mixture are asymptotically consistent with the reliability and biological theories of aging. The relative simplicity of the mixture distribution makes feasible a technique where the curvature functions of the corresponding survival and hazard rate functions are used to identify the beginning and the end of various life phases, such as infant mortality, the end of the force of natural selection, and late-life mortality deceleration. We illustrate our results with a comparative analysis of Canadian and Indonesian mortality data.
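    The two-component idea — overall survival as a weighted sum of an exogenous and an endogenous component — can be sketched as follows. Standard Weibull components are substituted for the paper's reduced additive and flexible Weibull distributions purely for illustration, and all parameter values are made up.

    ```python
    import math

    def mixture_survival(t, p, shape1, scale1, shape2, scale2):
        """Two-component mixture survival function assigning deaths to
        exogenous vs endogenous causes. Plain Weibull components stand in
        for the paper's reduced additive and flexible Weibull forms."""
        s1 = math.exp(-((t / scale1) ** shape1))  # e.g. infant/exogenous
        s2 = math.exp(-((t / scale2) ** shape2))  # e.g. senescent/endogenous
        return p * s1 + (1 - p) * s2
    ```

    Because both components have closed-form survival functions, the mixture remains easy to fit and to differentiate, which is exactly the advantage the abstract claims over the Heligman-Pollard distribution.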

  6. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A


    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  7. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua


    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm, which models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic elements to the baseline algorithm: the learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated...

  8. A computer graphical user interface for survival mixture modelling of recurrent infections. (United States)

    Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K


    Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded using Microsoft Visual C++. Application to modelling recurrent urinary tract infections in elderly women is illustrated, where significant individual variations are evident at both acute and stable phases. The survival mixture methodology developed enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions inferred from these correlated and heterogeneous survival data.

  9. Thermodynamics of mixtures containing amines. IX. Application of the concentration-concentration structure factor to the study of binary mixtures containing pyridines

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Juan Antonio, E-mail: [G.E.T.E.F. Dpto Termodinamica y Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, Valladolid 47071 (Spain); Cobos, Jose Carlos; Garcia de la Fuente, Isaias; Mozo, Ismael [G.E.T.E.F. Dpto Termodinamica y Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, Valladolid 47071 (Spain)


    Binary mixtures formed by a pyridine base and an alkane, or an aromatic hydrocarbon, or a 1-alkanol have been studied in the framework of the concentration-concentration structure factor, S_CC(0), formalism. Deviations between experimental data and those provided by the DISQUAC model are discussed. Systems containing alkanes are characterized by homocoordination. In pyridine + alkane mixtures, S_CC(0) decreases with the chain length of the longer alkanes, due to size effects. For a given alkane, S_CC(0) also decreases with the number of CH3- groups in the pyridine base. This has been interpreted assuming that the number of amine-amine interactions available to be broken upon mixing also decreases similarly, probably as steric hindrances exerted by the methyl groups of the aromatic amine increase with the number of these groups. Homocoordination is higher in mixtures with 3,5-dimethylpyridine than in those with 2,6-dimethylpyridine. That is, steric effects exerted by methyl groups in positions 3 and 5 are stronger than when they are in positions 2 and 6. Similarly, from the application of the DISQUAC (dispersive-quasichemical) model, it is possible to conclude that homocoordination is higher in systems with 3- or 4-methylpyridine than in those involving 2-methylpyridine. Systems including aromatic hydrocarbons are nearly ideal, which seems to indicate that there is no specific interaction in such solutions. Mixtures with 1-alkanols show heterocoordination. This reveals the existence of interactions between unlike molecules, characteristic of alkanol + amine mixtures. Methanol systems show the lowest S_CC(0) values due, partially, to size effects. This explains the observed decrease of homocoordination in such solutions in the order: pyridine > 2-methylpyridine > 2,6-dimethylpyridine. Moreover, as the energies of the OH-N hydrogen bonds are practically independent of the pyridine base considered when mixed with methanol, it suggests that
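    For reference, the concentration-concentration structure factor used throughout this record is the long-wavelength limit of the Bhatia-Thornton formalism; its standard definition (not quoted in the abstract) is

    \[
    S_{CC}(0) = \frac{N k_B T}{\left(\partial^2 G / \partial x_1^2\right)_{T,P,N}},
    \qquad S_{CC}^{\mathrm{id}}(0) = x_1 x_2 ,
    \]

    so that \(S_{CC}(0) > x_1 x_2\) signals homocoordination (like molecules preferring like) and \(S_{CC}(0) < x_1 x_2\) signals heterocoordination, the convention the abstract uses.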

  10. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang


    Full Text Available Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  11. A polynomial hyperelastic model for the mixture of fat and glandular tissue in female breast. (United States)

    Calvo-Gallego, Jose L; Martínez-Reina, Javier; Domínguez, Jaime


    In the breast of adult women, glandular and fat tissues are intermingled and cannot be clearly distinguished. This work studies if this mixture can be treated as a homogenized tissue. A mechanical model is proposed for the mixture of tissues as a function of the fat content. Different distributions of individual tissues and geometries have been tried to verify the validity of the mixture model. A multiscale modelling approach was applied in a finite element model of a representative volume element (RVE) of tissue, formed by randomly assigning fat or glandular elements to the mesh. Both types of tissues have been assumed as isotropic, quasi-incompressible hyperelastic materials, modelled with a polynomial strain energy function, like the homogenized model. The RVE was subjected to several load cases from which the constants of the polynomial function of the homogenized tissue were fitted in the least squares sense. The results confirm that the fat volume ratio is a key factor in determining the properties of the homogenized tissue, but the spatial distribution of fat is not so important. Finally, a simplified model of a breast was developed to check the validity of the homogenized model in a geometry similar to the actual one.
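    The polynomial strain-energy function mentioned above has, in its standard generalised-Rivlin form (the paper's fitted constants for the homogenized tissue are not reproduced here),

    \[
    W = \sum_{i+j=1}^{N} C_{ij}\,(\bar{I}_1 - 3)^i\,(\bar{I}_2 - 3)^j
      + \sum_{k=1}^{N} \frac{1}{D_k}\,(J - 1)^{2k},
    \]

    where \(\bar{I}_1, \bar{I}_2\) are the deviatoric strain invariants, \(J\) is the volume ratio (near 1 for a quasi-incompressible material), and \(C_{ij}, D_k\) are the material constants fitted in the least squares sense from the RVE load cases.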

  12. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models (United States)

    Sharifi, Reza; Langari, Reza


    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.

  13. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation (United States)

    Shi, X.; Zhao, Q. H.


    The image segmentation method based on the Gaussian Mixture Model (GMM) has two problems: 1) the number of components is usually fixed, i.e., a fixed number of classes, and 2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, GMM is designed to model the distribution of pixel intensity in the RS image, and the number of components is assumed to be a random variable. A prior distribution is built for each parameter. To improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm can converge to the optimal number of classes and produce an ideal segmentation result.
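    For contrast with the RJMCMC approach, a fixed two-component one-dimensional Gaussian mixture can be fitted with plain EM in a few lines. This sketch deliberately ignores the paper's variable component count and Gibbs prior on the weights; the initialisation scheme is an assumption.

    ```python
    import math

    def em_gmm_1d(data, iters=50):
        """Two-component 1-D Gaussian mixture fitted by EM; a minimal
        stand-in for the pixel-intensity model (the paper's RJMCMC
        machinery, which also infers the number of components, is far
        more involved)."""
        # crude initialisation from the data spread
        mu = [min(data), max(data)]
        var = [1.0, 1.0]
        w = [0.5, 0.5]
        for _ in range(iters):
            # E-step: per-point component responsibilities
            resp = []
            for x in data:
                dens = [w[j] / math.sqrt(2 * math.pi * var[j])
                        * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                        for j in range(2)]
                s = sum(dens)
                resp.append([d / s for d in dens])
            # M-step: update weights, means, variances
            for j in range(2):
                nj = sum(r[j] for r in resp)
                w[j] = nj / len(data)
                mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
                var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                       for r, x in zip(resp, data)) / nj)
        return w, mu, var
    ```

    RJMCMC replaces this fixed-dimension EM loop with birth/death and split/merge moves so that the number of components itself is sampled rather than chosen in advance.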

  14. Refining personality disorder subtypes and classification using finite mixture modeling. (United States)

    Yun, Rebecca J; Stern, Barry L; Lenzenweger, Mark F; Tiersky, Lana A


    The current Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic system for Axis II disorders continues to be characterized by considerable heterogeneity and poor discriminant validity. Such problems impede accurate personality disorder (PD) diagnosis. As a result, alternative assessment tools are often used in conjunction with the DSM. One popular framework is the object relational model developed by Kernberg and his colleagues (J. F. Clarkin, M. F. Lenzenweger, F. Yeomans, K. N. Levy, & O. F. Kernberg, 2007, An object relations model of borderline pathology, Journal of Personality Disorders, Vol. 21, pp. 474-499; O. F. Kernberg, 1984, Severe Personality Disorders, New Haven, CT: Yale University Press; O. F. Kernberg & E. Caligor, 2005, A psychoanalytic theory of personality disorders, in M. F. Lenzenweger & J. F. Clarkin, Eds., Major Theories of Personality Disorder, New York, NY: Guilford Press). Drawing on this model and empirical studies thereof, the current study attempted to clarify Kernberg's (1984) PD taxonomy and identify subtypes within a sample with varying levels of personality pathology using finite mixture modeling. Subjects (N = 141) were recruited to represent a wide range of pathology. The finite mixture modeling results indicated that 3 components were harbored within the variables analyzed. Group 1 was characterized by low levels of antisocial, paranoid, and aggressive features, and Group 2 was characterized by elevated paranoid features. Group 3 revealed the highest levels across the 3 variables. The validity of the obtained solution was then evaluated by reference to a variety of external measures that supported the validity of the identified grouping structure. Findings generally appear congruent with previous research, which argued that a PD taxonomy based on paranoid, aggressive, and antisocial features is a viable supplement to current diagnostic systems. 
Our study suggests that Kernberg's object relational model offers a

  15. A model for steady flows of magma-volatile mixtures

    CERN Document Server

    Belan, Marco


    A general one-dimensional model for the steady adiabatic motion of liquid-volatile mixtures in vertical ducts with varying cross-section is presented. The liquid contains a dissolved part of the volatile and is assumed to be incompressible and in thermomechanical equilibrium with a perfect gas phase, which is generated by the exsolution of the same volatile. An inverse-problem approach is used: the pressure along the duct is set as an input datum, and the other physical quantities are obtained as output. This fluid-dynamic model is intended as an approximate description of magma-volatile mixture flows of interest to geophysics and planetary sciences. It is implemented as a symbolic code, where each line stands for an analytic expression, whether algebraic or differential, which is managed by the software kernel independently of the numerical value of each variable. The code is versatile and user-friendly and makes it possible to check the consequences of different hypotheses even through its early steps. Only the las...

  16. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W


    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families, in Buchnera aphidicola) to large (around 43000 gene families, in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
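    The core likelihood of such a model is simple: the probability that a gene family is detected in k of G genomes is a weighted sum of binomials with component-specific detection probabilities. The weights and probabilities below are illustrative, not the paper's estimates.

    ```python
    import math

    def binom_mixture_pmf(k, G, weights, probs):
        """Mixture-of-binomials probability that a gene family is detected
        in k of G genomes, with component-specific detection probabilities
        (heterogeneous detection, as in capture-recapture models)."""
        comb = math.comb(G, k)
        return sum(w * comb * p ** k * (1 - p) ** (G - k)
                   for w, p in zip(weights, probs))
    ```

    A high-probability component captures core-like gene families seen in nearly every genome, while a low-probability component captures the rarely occurring genes that inflate pan-genome estimates as more genomes are sequenced.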

  17. Ultrasonic study on organic liquid and binary organic liquid mixtures by using Schaaffs' collision factor theory

    Institute of Scientific and Technical Information of China (English)

    Lu Yi-Gang; Dong Yan-Wu


    Based on Schaaffs' collision factor theory (CFT) for liquids, equations for nonlinear ultrasonic parameters in both organic liquids and binary organic liquid mixtures are deduced. The nonlinear ultrasonic parameters, including the pressure and temperature coefficients of ultrasonic velocity and the nonlinear acoustic parameter B/A, are evaluated for comparison with measured results and data from other sources. The equations show that the coefficients of ultrasonic velocity and the nonlinear acoustic parameter B/A are closely related to molecular interactions; these nonlinear ultrasonic parameters reflect information about the internal structure and external state of the medium or mixture. From the exponent of the repulsive forces of the molecules, several thermodynamic parameters, and the pressure and temperature of the medium, the nonlinear ultrasonic parameters and the ultrasonic nature of the medium can be evaluated. When evaluating the nonlinear acoustic parameter B/A of binary organic liquid mixtures, there is no need to know the B/A of the components. The equations thus reveal the connection between the nonlinear ultrasonic nature and the internal structure and external state of the mixtures more directly and distinctly than traditional mixture laws for B/A, e.g. Apfel's and Sehgal's laws for binary liquid mixtures.

  18. Mixture of a seismicity model based on the rate-and-state friction and ETAS model (United States)

    Iwata, T.


    Currently the ETAS model [Ogata, 1988, JASA] is considered a standard model of seismicity. Because the ETAS model is purely statistical, however, the physics-based seismicity model derived from rate-and-state friction (hereafter referred to as the Dieterich model) [Dieterich, 1994, JGR] is frequently examined as well. The original version of the Dieterich model has several problems in its application to real earthquake sequences, and modifications have therefore been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved by including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks). Still, the performance of the ETAS model is superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models, whose weights are given as k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan, 1965-1970, is also analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by means of the maximum likelihood method, and the model performances are assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that the mixture of the two models yields a significant improvement of the seismicity model.


    Directory of Open Access Journals (Sweden)



    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  20. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young


    Using the Gaussian Mixture Model (GMM) and the Expectation Maximization algorithm, we perform an analysis of time duration (T90) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.

  1. Classifying gamma-ray bursts with Gaussian Mixture Model (United States)

    Zhang, Zhi-Bin; Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young


    Using Gaussian Mixture Model (GMM) and expectation-maximization algorithm, we perform an analysis of time duration (T90) for Compton Gamma Ray Observatory (CGRO)/BATSE, Swift/BAT and Fermi/GBM gamma-ray bursts (GRBs). The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. Bayesian information criterion has been used to compare between different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
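    The model-comparison step in these T90 studies reduces to computing the Bayesian information criterion for each candidate mixture and picking the smallest. A minimal helper is sketched below; the parameter count assumes a one-dimensional GMM with free weights, means and variances, which is an assumption of this sketch rather than a detail stated in the abstract.

    ```python
    import math

    def bic(log_likelihood, n_params, n_samples):
        """Bayesian information criterion: lower is better. Penalises the
        fit by the number of free parameters and the sample size."""
        return n_params * math.log(n_samples) - 2.0 * log_likelihood

    def gmm_n_params(k):
        """Free parameters of a K-component 1-D GMM:
        K-1 weights + K means + K variances."""
        return 3 * k - 1
    ```

    With the fitted log-likelihoods in hand, one would evaluate `bic(ll_k, gmm_n_params(k), n)` for each candidate K (e.g. 1, 2, 3 components) and keep the minimiser.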

  2. Mixtures of Polya trees for flexible spatial frailty survival modelling. (United States)

    Zhao, Luping; Hanson, Timothy E; Carlin, Bradley P


    Mixtures of Polya trees offer a very flexible nonparametric approach for modelling time-to-event data. Many such settings also feature spatial association that requires further sophistication, either at the point level or at the lattice level. In this paper, we combine these two aspects within three competing survival models, obtaining a data analytic approach that remains computationally feasible in a fully hierarchical Bayesian framework using Markov chain Monte Carlo methods. We illustrate our proposed methods with an analysis of spatially oriented breast cancer survival data from the Surveillance, Epidemiology and End Results program of the National Cancer Institute. Our results indicate appreciable advantages for our approach over competing methods that impose unrealistic parametric assumptions, ignore spatial association or both.

  3. Learning Trajectories for Robot Programing by Demonstration Using a Coordinated Mixture of Factor Analyzers. (United States)

    Field, Matthew; Stirling, David; Pan, Zengxi; Naghdy, Fazel


    This paper presents an approach for learning robust models of humanoid robot trajectories from demonstration. In this formulation, a model of the joint space trajectory is represented as a sequence of motion primitives where a nonlinear dynamical system is learned by constructing a hidden Markov model (HMM) predicting the probability of residing in each motion primitive. With a coordinated mixture of factor analyzers as the emission probability density of the HMM, we are able to synthesize motion from a dynamic system acting along a manifold shared by both demonstrator and robot. This provides significant advantages in model complexity for kinematically redundant robots and can reduce the number of corresponding observations required for further learning. A stability analysis shows that the system is robust to deviations from the expected trajectory as well as transitional motion between manifolds. This approach is demonstrated experimentally by recording human motion with inertial sensors, learning a motion primitive model and correspondence map between the human and robot, and synthesizing motion from the manifold to control a 19 degree-of-freedom humanoid robot.

  4. Bayesian nonparametric meta-analysis using Polya tree mixture models. (United States)

    Branscum, Adam J; Hanson, Timothy E


    A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model and present methodology for testing the PTM versus a normal random effects model. These methods provide researchers with a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which, together with results from simulated data, highlights the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  5. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)


    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of chemicals that disrupt thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised serious concern about potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals that impair thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five of the six tested mixtures. The exception was the mixture of MMI (a TPO inhibitor) and KClO4 (an NIS inhibitor) dosed at a fixed ratio of EC10, for which the CA and RA predictions were similar, making a conclusive result difficult to obtain. These results support the phenomenological similarity criterion, which states that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of chemical mixtures on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  6. Performance of growth mixture models in the presence of time-varying covariates. (United States)

    Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong


    Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.
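
    The class-enumeration step evaluated above can be sketched briefly: fit mixtures with an increasing number of classes and keep the one that minimizes an information criterion. The sketch below uses scikit-learn's GaussianMixture with BIC on simulated two-class data; the CAIC, SCAIC, and ICL-BIC variants examined in the study would have to be computed by hand from the log-likelihood, and all data and settings here are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate a simple two-class outcome (hypothetical data, not the study's).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 300),
                       rng.normal(5.0, 1.0, 300)]).reshape(-1, 1)

# Fit mixtures with 1-4 classes and compare BIC; lower is better.
bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    bics[k] = gm.bic(data)

best_k = min(bics, key=bics.get)
print(best_k)
```

    With classes this well separated, BIC recovers the true two-class structure; the study's point is that such reliability degrades under harder design conditions.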

  7. A smooth mixture of Tobits model for healthcare expenditure. (United States)

    Keane, Michael; Stavrunova, Olena


    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females.

  8. Comparison of Criteria for Choosing the Number of Classes in Bayesian Finite Mixture Models. (United States)

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Lesaffre, Emmanuel


    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that in overfitted mixture models, the overfitted latent classes will asymptotically become empty under specific conditions for the prior of the class proportions. This result may be used to construct a criterion for finding the true number of latent classes, based on the removal of latent classes that have negligible proportions. Unlike some alternative criteria, this criterion can easily be implemented in complex statistical models such as latent class mixed-effects models and multivariate mixture models using standard Bayesian software. We performed an extensive simulation study to develop practical guidelines to determine the appropriate number of latent classes based on the posterior distribution of the class proportions, and to compare this criterion with alternative criteria. The performance of the proposed criterion is illustrated using a data set of repeatedly measured hemoglobin values of blood donors.
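
    The pruning criterion described above can be sketched with scikit-learn's variational BayesianGaussianMixture, whose sparse Dirichlet-process weight prior drives surplus classes toward zero proportion. The data, the prior value, and the 0.05 pruning threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated classes, but the mixture is deliberately overfitted
# with ten components; the sparse weight prior should empty the extras.
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-4, 1, 400),
                       rng.normal(4, 1, 400)]).reshape(-1, 1)

bgm = BayesianGaussianMixture(
    n_components=10,                  # far more classes than the data need
    weight_concentration_prior=0.01,  # sparse prior shrinks surplus weights
    max_iter=500, random_state=0,
).fit(data)

# Keep only the classes with non-negligible posterior proportions.
n_classes = int(np.sum(bgm.weights_ > 0.05))
print(n_classes)
```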

  9. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T


    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  10. Mixture models versus free energy of hydration models for waste glass durability

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T.; Masuga, P.


    Two approaches for modeling high-level waste glass durability as a function of glass composition are compared. The mixture approach utilizes first-order mixture (FOM) or second-order mixture (SOM) polynomials in composition, whereas the free energy of hydration (FEH) approach assumes durability is linearly related to the FEH of glass. Both approaches fit their models to data using least squares regression. The mixture and FEH approaches are used to model glass durability as a function of glass composition for several simulated waste glass data sets. The resulting FEH and FOM model coefficients and goodness-of-fit statistics are compared, both within and across data sets. The goodness-of-fit statistics show that the FOM model fits/predicts durability in each data set better (sometimes much better) than the FEH model. Considerable differences also exist between some FEH and FOM model component coefficients for each of the data sets. These differences are due to the mixture approach having a greater flexibility to account for the effect of a glass component depending on the level and range of the component and on the levels of other glass components. The mixture approach can also account for higher-order (e.g., curvilinear or interactive) effects of components, whereas the FEH approach cannot. SOM models were developed for three of the data sets, and are shown to improve on the corresponding FOM models. Thus, the mixture approach has much more flexibility than the FEH approach for approximating the relationship between glass composition and durability for various glass composition regions.
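
    As a rough illustration of the mixture approach, a first-order mixture (Scheffé) polynomial regresses durability on the component proportions with no separate intercept, because the proportions sum to one. The composition data and coefficients below are invented for the sketch and have nothing to do with actual waste-glass chemistry.

```python
import numpy as np

# Hypothetical glass compositions: four components whose proportions sum
# to one, plus a durability response generated from assumed effects.
rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(4), size=50)    # 50 glasses, 4 proportions
b_true = np.array([2.0, -1.0, 0.5, 3.0])  # assumed component effects
y = X @ b_true + rng.normal(0, 0.01, 50)  # durability with small noise

# First-order mixture model: least squares with no intercept column.
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b_hat, 2))
```

    A second-order mixture model would add the pairwise products x_i * x_j as extra regressors, capturing the curvilinear and interactive component effects mentioned above.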

  11. Improved model for mixtures of polymers and hard spheres (United States)

    D'Adamo, Giuseppe; Pelissetto, Andrea


    Extensive Monte Carlo simulations are used to investigate how model systems of mixtures of polymers and hard spheres approach the scaling limit. We represent polymers as lattice random walks of length L with an energy penalty w for each intersection (Domb-Joyce model), interacting with hard spheres of radius R_c via a hard-core pair potential of range R_mon + R_c, where R_mon is identified as the monomer radius. We show that the mixed polymer-colloid interaction gives rise to new confluent corrections. The leading ones scale as L^(-ν), where ν ≈ 0.588 is the usual Flory exponent. Finally, we determine optimal values of the model parameters w and R_mon that guarantee the absence of the two leading confluent corrections. This improved model shows a significantly faster convergence to the asymptotic limit L → ∞ and is amenable to extensive and accurate numerical simulations at finite density with only a limited computational effort.

  12. Compressive sensing by learning a Gaussian mixture model from measurements. (United States)

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
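
    The closed-form reconstruction follows from standard Gaussian conditioning: for each component of the prior, the posterior mean of the signal given y = Ax + noise is available analytically, and the MMSE estimate mixes these conditional means with the components' posterior weights. A minimal numpy sketch with a toy two-component prior (all values assumed for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy GMM prior for a 4-dimensional signal observed through 2 measurements.
rng = np.random.default_rng(2)
d, m, sigma = 4, 2, 0.05
mus = [np.zeros(d), np.full(d, 3.0)]
Sigmas = [0.1 * np.eye(d), 0.1 * np.eye(d)]
pis = [0.5, 0.5]

x_true = mus[1] + rng.normal(0, 0.1, d)   # signal drawn near component 2
A = rng.normal(size=(m, d))               # random sensing matrix
y = A @ x_true + rng.normal(0, sigma, m)

# Per-component posterior weights and conditional means, then mix.
weights, means = [], []
for pi_k, mu, S in zip(pis, mus, Sigmas):
    C = A @ S @ A.T + sigma**2 * np.eye(m)        # marginal cov of y
    weights.append(pi_k * multivariate_normal.pdf(y, A @ mu, C))
    means.append(mu + S @ A.T @ np.linalg.solve(C, y - A @ mu))
weights = np.array(weights) / np.sum(weights)
x_hat = sum(w * mk for w, mk in zip(weights, means))
print(np.round(x_hat, 2))
```

    The paper's contribution is learning the prior (the mus, Sigmas, and pis) in situ from the compressive measurements themselves, rather than assuming it known as done here.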

  13. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)


    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture proportion

  15. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht


    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...... (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary...

  16. Fully Bayesian mixture model for differential gene expression: simulations and model checks. (United States)

    Lewin, Alex; Bochkina, Natalia; Richardson, Sylvia


    We present a Bayesian hierarchical model for detecting differentially expressed genes using a mixture prior on the parameters representing differential effects. We formulate an easily interpretable 3-component mixture to classify genes as over-expressed, under-expressed and non-differentially expressed, and model gene variances as exchangeable to allow for variability between genes. We show how the proportion of differentially expressed genes, and the mixture parameters, can be estimated in a fully Bayesian way, extending previous approaches where this proportion was fixed and empirically estimated. Good estimates of the false discovery rates are also obtained. Different parametric families for the mixture components can lead to quite different classifications of genes for a given data set. Using Affymetrix data from a knock out and wildtype mice experiment, we show how predictive model checks can be used to guide the choice between possible mixture priors. These checks show that extending the mixture model to allow extra variability around zero instead of the usual point mass null fits the data better. A software package for R is available.

  17. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias


    Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, the environmental risk was assessed, addressing the need to study the relationship between the organism and xenobiotics. Five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results showed that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  18. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  19. A study of finite mixture model: Bayesian approach on financial time series data (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir


    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also exhibits consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion; identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then used to fit a k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.

  20. A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation. (United States)

    Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui


    Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothness of segmentations, and lack of flexibility to fit the data. To address these issues, in this paper we propose a rough-set-bounded asymmetric Gaussian mixture model with a spatial constraint for image segmentation. First, based on our previous work, where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and the posterior probability that a pixel belongs to each sub-region are estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothness of segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels; they are constructed from the prior and posterior probabilities of the within- and between-clusters and take the spatial direction into account. We compare our algorithm with state-of-the-art segmentation approaches on both synthetic and real images to demonstrate its superior performance.

  1. Distributed Density Estimation Based on a Mixture of Factor Analyzers in a Sensor Network

    Directory of Open Access Journals (Sweden)

    Xin Wei


    Distributed density estimation in sensor networks has received much attention due to its broad applicability. When encountering high-dimensional observations, a mixture of factor analyzers (MFA) can replace a mixture of Gaussians for describing the distribution of the observations. In this paper, we study distributed density estimation based on a mixture of factor analyzers. Existing estimation algorithms for the MFA address the centralized case and are not suitable for distributed processing in sensor networks. We present distributed density estimation algorithms for the MFA and its extension, the mixture of Student's t-factor analyzers (MtFA). We first define an objective function as a linear combination of local log-likelihoods. Then, we derive the distributed estimation algorithms for the MFA and MtFA in detail. In these algorithms, the local sufficient statistics (LSS) are first calculated and diffused. Each node then performs a linear combination of the LSS received from nodes in its neighborhood to obtain the combined sufficient statistics (CSS), from which the parameters of the MFA and the MtFA can be obtained. Finally, we evaluate the performance of these algorithms with numerical simulations and an application example. Experimental results validate the promising performance of the proposed algorithms.
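
    The diffusion step can be shown in miniature: each node computes local sufficient statistics, which are combined linearly before the parameter update. The sketch below replaces the MFA with a single univariate Gaussian (so the LSS reduce to counts, sums, and sums of squares) and uses equal combination weights over a fully connected neighborhood; both simplifications are for illustration only.

```python
import numpy as np

# Five sensor nodes, each holding 100 local observations (synthetic data).
rng = np.random.default_rng(3)
true_mu, true_sd = 2.0, 1.0
nodes = [rng.normal(true_mu, true_sd, 100) for _ in range(5)]

# Local sufficient statistics per node: (count, sum, sum of squares).
lss = [(len(x), x.sum(), (x ** 2).sum()) for x in nodes]

# Linear combination of the received statistics (equal weights here).
n, s1, s2 = (sum(t[i] for t in lss) for i in range(3))

# Parameter update from the combined sufficient statistics.
mu_hat = s1 / n
var_hat = s2 / n - mu_hat ** 2          # maximum-likelihood variance
print(round(mu_hat, 2), round(var_hat, 2))
```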

  2. Subjective symptoms due to solvent mixtures, dioxin, and toluene: impact of exposure versus personality factors. (United States)

    Seeber, A; Demes, P; Golka, K; Kiesswetter, E; Schäper, M; van Thriel, C; Zupanic, M


    In this study, we analyse the impact of personality factors on the frequency of self-reported symptoms for workers under different exposure conditions. Reported symptoms may depend on the level and type of exposure, as well as on personality factors such as the worker's trait anxiety or general sensitivity to the environment. The data employed stem from three studies. The first contains information on 60 workers suspected of having been exposed to polychlorinated dibenzodioxins and dibenzofurans (Lifetime Weighted Average Exposure, LWAE, as an index of contact with the substances). The second concerns 40 workers exposed to different concentrations of solvent mixtures in paint manufacturing (LWAE of total hydrocarbons about 10 ppm). The third includes repeated measurements on two subgroups of workers from rotogravure printing plants exposed to different concentrations of toluene: a "high" exposure group (n = 129, LWAE about 46 ppm, current exposure 25 ppm) and a "low" exposure group (n = 96, LWAE for toluene about 9 ppm, current exposure 3 ppm). Trait anxiety, environmental sensitivity, and self-reported symptoms are measured by validated questionnaires, and age as well as verbal intelligence are controlled for. To determine the effect of individual characteristics and the different exposures on self-reported symptoms, frequency and variance analyses are conducted and linear models are fitted. In all analyses, trait anxiety explains the largest share of the variance. Where the exposure has no effect on reported symptoms (the dioxin and low-level toluene studies), trait anxiety seems to have larger explanatory power than in the studies where the exposure does have an effect (the solvent-mixture and high-level toluene studies). Neurotoxicological risk analysis has to account for the detected dependence of self-reported symptoms on personality traits: assessments for elevated

  3. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    Directory of Open Access Journals (Sweden)

    Hiroki Yoshioka


    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. LMM-based algorithms produce results that can vary to a certain degree depending on the model assumptions; for example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms, and depends on the target spectra, the endmember spectra, and the choice of spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and the usefulness of the technique in terms of robustness against measurement noise.
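
    Under the two-endmember assumption, the LMM reduces to a one-line inversion: a pixel's vegetation index v is modeled as v = f * v_veg + (1 - f) * v_soil, so the cover fraction f is recovered by solving for it. The endmember NDVI values below are assumed purely for illustration.

```python
import numpy as np

# Assumed endmember NDVI values for bare soil and full vegetation cover.
v_soil, v_veg = 0.15, 0.85

def fvc(v):
    """Invert the two-endmember LMM, clipping to the physical range [0, 1]."""
    f = (v - v_soil) / (v_veg - v_soil)
    return float(np.clip(f, 0.0, 1.0))

print(round(fvc(0.50), 3))  # a pixel halfway between the endmembers
```

    This inversion also shows why robustness depends on the endmembers: an error dv in the measured index maps to an FVC error of dv / (v_veg - v_soil), so widely separated endmembers make the retrieval less sensitive to noise.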

  4. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data. (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong


    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  5. Simulation and reference interaction site model theory of methanol and carbon tetrachloride mixtures. (United States)

    Munaò, G; Costa, D; Saija, F; Caccamo, C


    We report molecular dynamics and reference interaction site model (RISM) theory of methanol and carbon tetrachloride mixtures. Our study encompasses the whole concentration range, including the pure-component limits. We focus mainly on an analysis of partial, total, and concentration-concentration structure factors, and examine in detail the k → 0 limits of these functions. Simulation results confirm the tendency of methanol to self-associate, with the formation of ring structures in the high-dilution regime of this species, in agreement with experimental studies and with previous simulations by other authors. This behavior emerges as strongly related to the high nonideality of the mixture, a quantitative estimate of which is provided in terms of concentration fluctuation correlations, through the structure factors examined. The interaggregate correlation distance is also thereby estimated. Finally, the compressibility of the mixture is found to be in good agreement with experimental data. The RISM predictions are assessed against simulation throughout; the theory describes the apolar solvent better than the alcohol properties. Self-association of methanol is qualitatively reproduced, though this trend is much less marked than in the simulation results.

  6. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.


    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  7. A person-fit index for polytomous Rasch models, latent class models, and their mixture generalizations

    NARCIS (Netherlands)

    von Davier, M; Molenaar, IW


    A normally distributed person-fit index is proposed for detecting aberrant response patterns in latent class models and mixture distribution IRT models for dichotomous and polytomous data. This article extends previous work on the null distribution of person-fit indices for the dichotomous Rasch model.

  8. Strained and unconstrained multivariate normal finite mixture modeling of Piagetian data.

    NARCIS (Netherlands)

    Dolan, C.V.; Jansen, B.R.J.; van der Maas, H.L.J.


    We present the results of multivariate normal mixture modeling of Piagetian data. The sample consists of 101 children, who carried out a (pseudo-)conservation computer task on four occasions. We fitted both cross-sectional mixture models, and longitudinal models based on a Markovian transition

  9. Global cross-calibration of Landsat spectral mixture models

    CERN Document Server

    Sousa, Daniel


    Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from previous Landsat sensors. The objective of this analysis is to quantify the impact of these changes on linear spectral mixture models that use imagery collected by different Landsat sensors. The 2013 underflight of Landsat 7 and 8 provides an opportunity to cross calibrate the spectral mixing spaces of the ETM+ and OLI sensors using near-simultaneous acquisitions from a wide variety of land cover types worldwide. We use 80,910,343 pairs of OLI and ETM+ spectra to characterize the OLI spectral mixing space and perform a cross-...

  10. Fuzzy local Gaussian mixture model for brain MR image segmentation. (United States)

    Ji, Zexuan; Xia, Yong; Sun, Quansen; Chen, Qiang; Xia, Deshen; Feng, David Dagan


    Accurate brain tissue segmentation from magnetic resonance (MR) images is an essential step in quantitative brain image analysis. However, due to the existence of noise and intensity inhomogeneity in brain MR images, many segmentation algorithms suffer from limited accuracy. In this paper, we assume that the local image data within each voxel's neighborhood satisfy the Gaussian mixture model (GMM), and thus propose the fuzzy local GMM (FLGMM) algorithm for automated brain MR image segmentation. This algorithm estimates the segmentation result that maximizes the posterior probability by minimizing an objective energy function, in which a truncated Gaussian kernel function is used to impose the spatial constraint and fuzzy memberships are employed to balance the contribution of each GMM. We compared our algorithm to state-of-the-art segmentation approaches in both synthetic and clinical data. Our results show that the proposed algorithm can largely overcome the difficulties raised by noise, low contrast, and bias field, and substantially improve the accuracy of brain MR image segmentation.
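    The expectation-maximization machinery underlying the GMM assumption above can be sketched in a few lines. The sketch below is a plain two-component 1-D GMM fit, not the paper's FLGMM (which adds a truncated Gaussian spatial kernel and fuzzy memberships on top of it); the quartile-based initialization is our own choice.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    Plain GMM building block only; the paper's FLGMM adds a truncated
    Gaussian spatial kernel and fuzzy memberships on top of this.
    """
    # deterministic initialization from the sample quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

    On well-separated data the fit recovers the component means and weights; segmentation then assigns each voxel to the component with the largest responsibility.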

  11. Accuracy assessment of linear spectral mixture model due to terrain undulation (United States)

    Wang, Tianxing; Chen, Songlin; Ma, Ya


    Mixture spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simple and general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but research on LSMM has mostly focused on appraisal of nonlinear effects and on techniques used to select endmembers; unfortunately, environmental conditions of the study area that can sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper probes the accuracy uncertainty of LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. On this basis, fractional abundances for different cover types were extracted from both pre- and post-C terrain illumination corrected ASTER imagery using LSMM. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of LSMM in mountainous areas. Specifically, for vegetation abundances, an improvement in unmixing accuracy (R2) of 17.6% (regression against NDVI) and 18.6% (regression against MVI) was achieved by removing terrain undulation. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying LSMM, and it provides a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be
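    The LSMM inversion itself reduces to a small least-squares problem per pixel. The sketch below imposes the sum-to-one abundance constraint softly via a heavily weighted row of ones, a common textbook device; the weighting scheme and the omission of a nonnegativity constraint are simplifying assumptions, not the procedure of the study above.

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Per-pixel LSMM abundances by least squares.

    `endmembers` has shape (n_bands, n_endmembers).  The sum-to-one
    abundance constraint is imposed softly by appending a heavily
    weighted row of ones (a textbook device; the study's exact scheme
    and any nonnegativity constraint are not reproduced here).
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```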

  12. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes


    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in a simpler form not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  13. Thermal diffusion factor for carbon tetrachloride-cyclohexane and benzene-n-heptane mixtures from thermogravitational column separation

    Energy Technology Data Exchange (ETDEWEB)

    Navarro, J.L.; Madariaga, J.A.; Santamaria, C.M.; Saviron, J.M.; Carrion, J.A.


    Measurements of the separation of liquid mixtures of n-heptane/benzene and carbon tetrachloride/cyclohexane in a thermogravitational column are reported. The results show that thermal diffusion columns of little mechanical precision can furnish suitable thermal diffusion factors when the diffusion coefficient, viscosity, density, and compressibility factor for the mixture are known. 23 references, 3 figures, 1 table.

  14. Granular mixtures modeled as elastic hard spheres subject to a drag force. (United States)

    Vega Reyes, Francisco; Garzó, Vicente; Santos, Andrés


    Granular gaseous mixtures under rapid flow conditions are usually modeled as a multicomponent system of smooth inelastic hard disks (two dimensions) or spheres (three dimensions) with constant coefficients of normal restitution α_ij. In the low-density regime an adequate framework is provided by the set of coupled inelastic Boltzmann equations. Due to the intricacy of the inelastic Boltzmann collision operator, in this paper we propose a simpler model of elastic hard disks or spheres subject to the action of an effective drag force, which mimics the effect of dissipation present in the original granular gas. For each collision term ij, the model has two parameters: a dimensionless factor β_ij modifying the collision rate of the elastic hard spheres, and the drag coefficient ζ_ij. Both parameters are determined by requiring that the model reproduce the collisional transfers of momentum and energy of the true inelastic Boltzmann operator, yielding β_ij = (1 + α_ij)/2 and ζ_ij ∝ 1 − α_ij², where the proportionality constant is a function of the partial densities, velocities, and temperatures of species i and j. The Navier-Stokes transport coefficients for a binary mixture are obtained from the model by application of the Chapman-Enskog method. The three coefficients associated with the mass flux are the same as those obtained from the inelastic Boltzmann equation, while the remaining four transport coefficients show generally good agreement, especially in the case of the thermal conductivity. The discrepancies between the two descriptions are similar to those found for monocomponent gases. Finally, the approximate decomposition of the inelastic Boltzmann collision operator is exploited to construct a model kinetic equation for granular mixtures as a direct extension of a known kinetic model for elastic collisions.
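    The mapping from restitution coefficients to the two model parameters can be written down directly. In the sketch below the proportionality constant in ζ_ij, which in the paper depends on partial densities, velocities, and temperatures, is set to 1 purely for illustration.

```python
def drag_model_params(alpha):
    """Map a coefficient of normal restitution alpha_ij onto the
    elastic-hard-sphere drag model: beta_ij = (1 + alpha_ij) / 2
    rescales the collision rate, and the drag coefficient is
    proportional to 1 - alpha_ij**2 (the proportionality constant,
    which depends on partial densities, velocities, and temperatures,
    is set to 1 here purely for illustration)."""
    beta = (1.0 + alpha) / 2.0
    zeta = 1.0 - alpha ** 2
    return beta, zeta
```

    The elastic limit α_ij = 1 recovers an unmodified collision rate and zero drag, as it must.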

  15. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin HUAI


    Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (i.e., when the Stokes number St is relatively large), the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of the water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, particle dispersion along the radial direction is considerable and the relative velocity is very low, due to the short dynamic response time. When St = 3.08, particle dispersion along the radial direction is very slight, and most of the particles have high relative velocities along the streamwise direction.

  16. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong


    INTRODUCTION: Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  17. Thermogravitational column as a technique for thermal diffusion factor measurement in liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Ecenarro, O.; Madariaga, J.A.; Navarro, J.; Santamaria, C.M. (Universidad Pais Vasco, Bilbao (Spain)); Carrion, J.A.; Saviron, J.M. (Universidad de Zaragoza (Spain))

    Thermogravitational thermal diffusion separations are studied for benzene-n-heptane, benzene-n-hexane, toluene-n-heptane, toluene-n-hexane, carbon tetrachloride-n-hexane, and cyclohexane-n-hexane mixtures at a mean temperature of 37.5 °C. The column used was 90 cm long with a 0.095 cm gap. Despite its length, this column can be used as a standard for extraction of the value of α_T when the separation factor is extrapolated to Δt = 0 °C. Thermal diffusion factors are calculated for benzene-n-heptane and benzene-n-hexane mixtures at different concentrations. For the rest of the systems investigated, α_T·D_12 values (D_12 being the ordinary diffusion coefficient) are calculated.

  18. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano-Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)


    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness to clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to a previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  19. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard


    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using Gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and Gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage in inferring the true underlying clustering when compared to Gaussian-based models on data generated from both a mixture of vMFs and a mixture of Gaussians.
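    For intuition, the vMF density on the unit sphere in R³ has a closed-form normalizer, κ/(4π sinh κ), and a finite mixture of such densities is the building block of the model discussed above. The sketch below covers only that building block, not the letter's Bayesian inference machinery.

```python
import numpy as np

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere in R^3:
    f(x) = kappa / (4 pi sinh kappa) * exp(kappa mu . x).
    The closed-form normalizer is specific to three dimensions;
    higher dimensions require modified Bessel functions."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.exp(kappa * np.dot(mu, x))

def vmf_mixture_pdf(x, mus, kappas, weights):
    """Density of a finite vMF mixture, the building block of the model
    above (without the Bayesian inference machinery)."""
    return sum(w * vmf_pdf(x, m, k)
               for w, m, k in zip(weights, mus, kappas))
```

    The density is largest along the mean direction μ and decays with the angle from it; the ratio f(μ)/f(−μ) equals exp(2κ), so κ plays the role of an inverse variance.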

  20. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models. (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek


    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations, against the Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of the ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient can be a reliable measure of the impact of drug A on drug B or, vice versa, of B on A. The results justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action, and quite a few of the mixtures show synergistic or dependent action. Copyright © 2016. Published by Elsevier B.V.
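    The CA and IA reference predictions, and the MDR computed against an observed value, can be sketched as follows. The two-component formulas are standard textbook forms; the thresholds any particular study uses to call an MDR "synergistic" are study-specific and not reproduced here.

```python
import numpy as np

def ec50_ca(fractions, ec50s):
    """Concentration-addition EC50 for a mixture with component
    fractions p_i and single-substance EC50_i:
    1 / EC50_mix = sum_i p_i / EC50_i."""
    p = np.asarray(fractions, float)
    e = np.asarray(ec50s, float)
    return 1.0 / np.sum(p / e)

def effect_ia(effects):
    """Independent-action combined effect from single-substance effect
    fractions in [0, 1]: E_mix = 1 - prod_i (1 - E_i)."""
    e = np.asarray(effects, float)
    return 1.0 - np.prod(1.0 - e)

def mdr(predicted_ec50, observed_ec50):
    """Model deviation ratio: values near 1 mean the reference model
    describes the mixture; values well above 1 suggest the mixture is
    more toxic than predicted (synergism)."""
    return predicted_ec50 / observed_ec50
```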

  1. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)


    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large-scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  2. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.


    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  4. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C


    Finite mixture models have been used to generate sub-pixel land cover classifications; traditionally, however, this makes use of mixtures of normal distributions. These models fail to represent many land cover classes accurately, as these are usually...

  5. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank


    We derive lower and upper bounds on the α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order of increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
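    The Hellinger-type integral that such bounds bracket, I_α = ∫ p^α q^(1−α), can be estimated numerically as a reference point. The sketch below is a plain importance-sampling estimator under samples from p; it is not the paper's closed-form bounds.

```python
import numpy as np

def gmm_pdf(x, weights, means, sigmas):
    """Density of a univariate Gaussian mixture at points x."""
    x = np.asarray(x, float)[:, None]
    w = np.asarray(weights, float)
    mu = np.asarray(means, float)
    s = np.asarray(sigmas, float)
    comp = np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return (w * comp).sum(axis=1)

def alpha_integral_mc(alpha, p_params, q_params, n=100_000, seed=0):
    """Monte Carlo estimate of the Hellinger-type integral
    I_alpha = E_p[(q(X)/p(X))^(1-alpha)] with X ~ p, for univariate
    GMMs; a numerical reference point for closed-form bounds,
    not the bounds themselves."""
    rng = np.random.default_rng(seed)
    w, mu, s = (np.asarray(a, float) for a in p_params)
    idx = rng.choice(len(w), size=n, p=w)
    x = rng.normal(mu[idx], s[idx])
    ratio = gmm_pdf(x, *q_params) / gmm_pdf(x, w, mu, s)
    return float(np.mean(ratio ** (1.0 - alpha)))
```

    For α = 1/2 and two unit-variance Gaussians a distance d apart, the integral is exp(−d²/8), which gives a convenient sanity check on the estimator.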

  6. Deconvolution of petroleum mixtures using mid-FTIR analysis and non-negative matrix factorization (United States)

    Livanos, George; Zervakis, Michalis; Pasadakis, Nikos; Karelioti, Marouso; Giakos, George


    The aim of this study is to develop an efficient, robust and cost-effective methodology capable of both identifying the chemical fractions in complex commercial petroleum products and numerically estimating their concentrations within the mixture sample. We explore a methodology based on attenuated total reflectance Fourier transform infrared (ATR-FTIR) analytical signals, combined with a modified factorization algorithm to solve this ‘mixture problem’, first in qualitative and then in quantitative mode. The proposed decomposition approach is self-adapting, requires no prior knowledge of the data, and is able to accurately estimate the weight contributions of the constituents in the entire chemical compound. The results of applying the presented methodology to petroleum analysis indicate that it is possible to deconvolve the mixing process and recover the content of a chemically complex petroleum mixture using the infrared signals of a limited number of samples and the principal substances forming the mixture. A focus application of the proposed methodology is the quality control of commercial gasoline, by identifying and quantifying the individual fractions utilized in its formulation via a fast, robust and efficient procedure based on mathematical analysis of the acquired spectra.
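    As a generic stand-in for the modified factorization algorithm, the sketch below implements plain Lee-Seung multiplicative-update NMF, which splits a nonnegative data matrix V into component spectra W and their concentrations H; the study's constraints and modifications are not reproduced.

```python
import numpy as np

def nmf(V, rank, n_iter=2000, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with nonnegative
    factors.  For spectra, rows of V index wavenumbers and columns
    index samples, so W collects component spectra and H their
    concentrations.  The study's modified, constrained factorization
    is not reproduced here."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    The multiplicative form keeps both factors nonnegative throughout, which is what makes the recovered components interpretable as spectra and concentrations.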

  7. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria


    -alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... of the model of relevance to the petroleum and chemical industries: high pressure vapor-liquid and liquid-liquid equilibrium in alcohol-containing mixtures, mixtures with gas hydrate inhibitors and mixtures with polar and hydrogen bonding chemicals including organic acids. Some comparisons with conventional...

  8. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios


    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  9. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit


    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into dis

  10. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model (United States)

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo


    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  11. Bayesian mixture modeling using a hybrid sampler with application to protein subfamily identification. (United States)

    Fong, Youyi; Wakefield, Jon; Rice, Kenneth


    Predicting protein function is essential to advancing our knowledge of biological processes. This article is focused on discovering the functional diversification within a protein family. A Bayesian mixture approach is proposed to model a protein family as a mixture of profile hidden Markov models. For a given mixture size, a hybrid Markov chain Monte Carlo sampler comprising both Gibbs sampling steps and hierarchical clustering-based split/merge proposals is used to obtain posterior inference. Inference for mixture size concentrates on comparing the integrated likelihoods. The choice of priors is critical with respect to the performance of the procedure. Through simulation studies, we show that 2 priors that are based on independent data sets allow correct identification of the mixture size, both when the data are homogeneous and when the data are generated from a mixture. We illustrate our method using 2 sets of real protein sequences.


    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão


    We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria is proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
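    An information-criterion comparison of candidate regression models can be sketched as below for ordinary least squares under a Gaussian likelihood. This is the generic device only, not the paper's exact procedure for mixture-process designs.

```python
import numpy as np

def aic_bic_ls(y, X):
    """Gaussian AIC and BIC for an ordinary least-squares fit y ~ X
    (the caller supplies the intercept column).  k counts the
    regression coefficients plus the noise variance."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1
    loglik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik
```

    Candidate models are ranked by AIC or BIC rather than by coefficient significance, which is exactly what makes the approach usable under collinearity.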

  13. Self-organising mixture autoregressive model for non-stationary time series modelling. (United States)

    Ni, He; Yin, Hujun


    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  14. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan


    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures ... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  15. Mathematical model of the component mixture distribution in the molten cast iron during centrifugation (sedimentation) (United States)

    Bikulov, R. A.; Kotlyar, L. M.


    For the development and management of manufacturing processes for axisymmetric articles with a compositional structure by the centrifugal casting method [1,2,3,4], it is necessary to create a generalized mathematical model of the dynamics of the component mixture in molten cast iron during centrifugation. In this article, based on an analysis of the dynamics of a two-component mixture during sedimentation, a method of successive approximations is developed to determine the distribution of a multicomponent mixture under centrifugation in a parabolic crucible.

  16. The Probiotic Mixture VSL#3 Accelerates Gastric Ulcer Healing by Stimulating Vascular Endothelial Growth Factor (United States)

    Dharmani, Poonam; De Simone, Claudio; Chadee, Kris


    Studies assessing the effect and mechanism of probiotics on diseases of the upper gastrointestinal (GI) tract, including gastric ulcers, are limited despite extensive work and promising results of this therapeutic option for other GI diseases. In this study, we investigated the mechanisms by which the probiotic mixture VSL#3 (a mixture of eight probiotic bacteria including Lactobacilli, Bifidobacteria and Streptococcus species) heals acetic acid-induced gastric ulcers in rats. VSL#3 was administered orally at low (6×10⁹ bacteria) or high (1.2×10¹⁰ bacteria) dosages from day 3 after ulcer induction for 14 consecutive days. VSL#3 treatments significantly enhanced gastric ulcer healing in a dose-dependent manner. To assess the mechanism(s) whereby VSL#3 exerted its protective effects, we quantified the gene expression of several pro-inflammatory cytokines, protein and expression of stomach mucin Muc5ac, regulatory cytokine IL-10, COX-2 and various growth factors. Of all the components examined, only the expression and protein production of VEGF was increased, 332-fold on day 7, in the ulcerated tissues of animals treated with VSL#3. Predictably, treatment with VEGF-neutralizing antibody significantly delayed gastric ulcer healing in VSL#3-treated animals. This is the first report to demonstrate high efficacy of the probiotic mixture VSL#3 in enhancing gastric ulcer healing. Probiotic efficacy was greatest at higher concentrations of VSL#3, which specifically increased the expression and production of angiogenesis-promoting growth factors, primarily VEGF. PMID:23484048

  17. Adaptive Mixture Modelling Metropolis Methods for Bayesian Analysis of Non-linear State-Space Models. (United States)

    Niemi, Jarad; West, Mike


    We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.

  18. A Binomial Mixture Model for Classification Performance: A Commentary on Waxman, Chambers, Yntema, and Gelman (1989). (United States)

    Thomas, Hoben


    Individual differences in children's performance on a classification task are modeled by a two-component binomial mixture distribution. The model accounts for the data well, with variance accounted for ranging from 87 to 95 percent. (RJC)
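The two-component binomial mixture described above can be fitted with a short EM routine. Below is a minimal sketch in Python, assuming each child answers n items and only the per-child correct counts are observed; the function names, starting values, and iteration count are illustrative assumptions, not taken from the commentary.

```python
import math

def binom_pmf(k, n, p):
    # Binomial probability mass function
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def em_binomial_mixture(counts, n, iters=300):
    # EM for a two-component binomial mixture (illustrative sketch).
    w, p1, p2 = 0.5, 0.3, 0.8  # assumed starting values
    for _ in range(iters):
        # E-step: responsibility of component 1 for each child's score
        r = [w * binom_pmf(k, n, p1) /
             (w * binom_pmf(k, n, p1) + (1 - w) * binom_pmf(k, n, p2))
             for k in counts]
        # M-step: re-estimate mixing weight and success probabilities
        w = sum(r) / len(counts)
        p1 = sum(ri * k for ri, k in zip(r, counts)) / (n * sum(r))
        p2 = sum((1 - ri) * k for ri, k in zip(r, counts)) / (n * (len(counts) - sum(r)))
    return w, p1, p2
```

With well-separated groups (e.g. scores near 1/10 versus 9/10), the routine recovers a low-ability and a high-ability success probability and the mixing weight between them.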

  19. Modelling viscosity and mass fraction of bitumen - diluent mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Miadonye, A.; Latour, N.; Puttagunta, V.R. [Lakehead Univ., Thunder Bay, ON (Canada)]


    In the recovery of bitumen in oil sands extraction, reducing viscosity is important both above and below ground. The addition of a liquid diluent breaks down or weakens the intermolecular forces that create the high viscosity of bitumen. The addition of even 5% diluent can cause a viscosity reduction in excess of 8%, thus facilitating the in situ recovery and pipeline transportation of bitumen. Knowledge of bitumen - diluent viscosity is highly important: without it, upgrading processes, in situ recovery, well simulation, heat transfer, fluid flow and a variety of other engineering problems would be difficult or impossible to solve. The development of a simple correlation to predict the viscosity of binary bitumen - diluent mixtures in any proportion is described. The developed correlation estimated the viscosities and mass fractions of bitumen - diluent mixtures within acceptable limits of error. For the prediction of mixture viscosities, the developed correlation gave the best results, with an overall average absolute deviation of 12% compared to those of Chironis (17%) and Cragoe (23%). Prediction of diluent mass fractions yielded a much better result, with an overall average absolute deviation of 5%. The unique features of the correlation include its computational simplicity, its applicability to mixtures at temperatures other than 30 degrees C, and the fact that only the bitumen and diluent viscosities are needed to make predictions. It is the only correlation capable of predicting viscosities of mixtures, as well as the diluent mass fractions required to reduce bitumen viscosity to pumping viscosities. The prediction of viscosities at 25, 60.3, and 82.6 degrees C produced excellent results, particularly at high temperatures, with an average absolute deviation below 10%. 11 refs., 3 figs., 8 tabs.
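The abstract does not reproduce the correlation itself, but the idea of predicting blend viscosity from only the two endpoint viscosities, and inverting it to get the diluent fraction needed for a target pumping viscosity, can be illustrated with the classic log-linear (Arrhenius-type) blending rule. This is a hedged baseline sketch, not the authors' correlation.

```python
import math

def blend_viscosity(mu_bitumen, mu_diluent, w_diluent):
    # Arrhenius-type log-linear blending rule (illustrative baseline):
    # ln(mu_mix) = (1 - w) * ln(mu_bitumen) + w * ln(mu_diluent)
    ln_mu = (1 - w_diluent) * math.log(mu_bitumen) + w_diluent * math.log(mu_diluent)
    return math.exp(ln_mu)

def diluent_fraction_for_target(mu_bitumen, mu_diluent, mu_target):
    # Invert the rule: mass fraction of diluent needed to reach a
    # target (e.g. pumpable) viscosity under the same assumption.
    return (math.log(mu_bitumen) - math.log(mu_target)) / (
        math.log(mu_bitumen) - math.log(mu_diluent))
```

Under this assumed rule, even a 5% diluent fraction already reduces the predicted viscosity substantially, which is qualitatively consistent with the strong dilution effect the abstract describes.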

  20. Unsupervised Segmentation of Spectral Images with a Spatialized Gaussian Mixture Model and Model Selection

    Directory of Open Access Journals (Sweden)

    Cohen S.X.


    Full Text Available In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct numerical experiments on unsupervised segmentation of a real dataset.
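Before adding the spatial layer, the core of such an algorithm is ordinary EM for a Gaussian mixture. A minimal one-dimensional, two-class sketch follows; the paper's model additionally lets the mixing proportions vary with position, which is not shown here, and all names and starting values are illustrative.

```python
import math

def normal_pdf(x, mu, var):
    # Univariate Gaussian density
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, iters=100):
    # Plain two-class Gaussian mixture EM (no spatial term).
    w, mu1, mu2, v1, v2 = 0.5, min(data), max(data), 1.0, 1.0
    for _ in range(iters):
        # E-step: responsibility of class 1 for each observation
        r = [w * normal_pdf(x, mu1, v1) /
             (w * normal_pdf(x, mu1, v1) + (1 - w) * normal_pdf(x, mu2, v2))
             for x in data]
        s1 = sum(r)
        s2 = len(data) - s1
        # M-step: update means, variances (floored), and mixing weight
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / s2
        v1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / s1 + 1e-6
        v2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / s2 + 1e-6
        w = s1 / len(data)
    return w, mu1, mu2
```

On spectra this runs per-pixel over vectors rather than scalars, and the penalized likelihood of the paper additionally selects the number of classes K.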

  1. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics. (United States)

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I


    Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
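The two classical mixture-toxicity predictions named above have simple closed forms; a sketch, with component EC50s and fractional effects as the only inputs. The two-step prediction (TSP) model chains these two functions: concentration addition within each similarly acting group, then independent action across groups.

```python
def ec50_concentration_addition(fractions, ec50s):
    # Concentration addition for an equipotent-style mixture:
    # 1 / EC50_mix = sum(p_i / EC50_i), with p_i the mixture fractions.
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def effect_independent_action(effects):
    # Independent action: E_mix = 1 - prod(1 - E_i),
    # with E_i the fractional effect of each component alone.
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod
```

For similarly acting baseline toxicants, concentration addition predicts the mixture EC50 directly from the single-substance EC50s, which is the comparison the study makes against the BLT-Screen results.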

  2. Numerical Simulation of Water Jet Flow Using Diffusion Flux Mixture Model

    Directory of Open Access Journals (Sweden)

    Zhi Shang


    Full Text Available A multidimensional diffusion flux mixture model was developed to simulate water jet two-phase flows. By modifying the gravity term using the gradients of the mixture velocity, the centrifugal force on the water droplets could be considered. The slip velocities between the continuous phase (gas) and the dispersed phase (water droplets) could be calculated through multidimensional diffusion flux velocities based on the modified multidimensional drift flux model. The model was validated through numerical simulations of a water mist spray, compared against experiments and against simulations with the traditional algebraic slip mixture model.

  3. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)]; Stefanowicz, Anna M. [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland)]; Klimek, Beata [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland)]; Laskowski, Ryszard [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland)]; Kooijman, Sebastiaan A.L.M. [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)]


    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture with a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolation to other mixtures, to other points in time, or, in a wider perspective, to other organisms. - We show a mechanistic approach to assess effects of mixtures at low concentrations.

  4. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard


    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres of a specified radius, and was considered as a whole mixture with uniform contact properties for ...

  5. A Lattice Boltzmann Model of Binary Fluid Mixture

    CERN Document Server

    Orlandini, Enzo; Swift, Michael R.; Yeomans, J. M.


    We introduce a lattice Boltzmann model for simulating an immiscible binary fluid mixture. Our collision rules are derived from a macroscopic thermodynamic description of the fluid in a way motivated by the Cahn-Hilliard approach to non-equilibrium dynamics. This ensures that a thermodynamically consistent state is reached in equilibrium. The non-equilibrium dynamics is investigated numerically and found to agree with simple analytic predictions in both the one-phase and the two-phase regions of the phase diagram.

  6. A Bayesian estimation on right censored survival data with mixture and non-mixture cured fraction model based on Beta-Weibull distribution (United States)

    Yusuf, Madaki Umar; Bakar, Mohd. Rizam B. Abu


    Models for survival data that include the proportion of individuals who are not subject to the event under study are known as cure fraction models, or simply long-term survival models. The two most common models used to estimate the cure fraction are the mixture model and the non-mixture model. In this work, we present mixture and non-mixture cure fraction models for survival data based on the beta-Weibull distribution. This four-parameter distribution has been proposed as an alternative extension of the Weibull distribution for the analysis of lifetime data. The approach allows the inclusion of covariates in the models, where the estimation of the parameters was obtained under a Bayesian approach using Gibbs sampling methods.
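The mixture and non-mixture cure fraction structures mentioned above are easy to state. Below is a sketch using a plain two-parameter Weibull baseline instead of the paper's four-parameter beta-Weibull distribution; that substitution, and all names, are deliberate simplifications for illustration.

```python
import math

def weibull_survival(t, shape, scale):
    # Baseline survival S0(t) for a Weibull distribution
    return math.exp(-(t / scale) ** shape)

def mixture_cure_survival(t, cured_frac, shape, scale):
    # Mixture cure model: S(t) = pi + (1 - pi) * S0(t),
    # where pi is the cured (long-term survivor) fraction.
    return cured_frac + (1.0 - cured_frac) * weibull_survival(t, shape, scale)

def nonmixture_cure_survival(t, cured_frac, shape, scale):
    # Non-mixture (promotion time) cure model: S(t) = pi ** F0(t),
    # with F0 the baseline cumulative distribution function.
    F0 = 1.0 - weibull_survival(t, shape, scale)
    return cured_frac ** F0
```

Both population survival curves start at 1 and flatten out at the cured fraction pi instead of decaying to zero, which is the defining feature of long-term survival models.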

  7. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu


    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems perspective. To depict children's growth patterns, we extended the measurement part of latent transition analysis to a growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes between classes. After adding environmental factors as predictors, analyses showed that children receiving higher teacher ratings, with higher socioeconomic status, and of above-average poverty status, would have a higher probability of transitioning into the higher ability group.

  8. A Bayesian Mixture Model for PoS Induction Using Multiple Features


    Christodoulopoulos, Christos; Goldwater, Sharon; Steedman, Mark


    In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our sy...

  9. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group]


    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.

  10. Structural and thermodynamical properties of charged hard spheres in a mixture with core-softened model solvent. (United States)

    Lukšič, Miha; Hribar-Lee, Barbara; Vlachy, Vojko; Pizio, O


    Canonical Monte Carlo computer simulations and integral equation theory were applied to examine the structural and thermodynamic properties of a mixture of ions and core-softened fluid molecules. The positive and negative ions forming a +1:-1 salt were modeled as charged hard spheres immersed in a dielectric medium. It was shown previously that the core-softened fluid under study is characterized by a set of structural, thermodynamic, and dynamic anomalies. The principal objective of this work was to elucidate how the presence of ions alters this behavior. The structural properties of the mixtures are discussed in terms of the pair distribution functions; in addition, the pair contribution to the excess entropy was calculated. Thermodynamic properties are investigated by using the dependencies of energy and compressibility factor on density, composition of the mixture, and reduced temperature. The heat capacity was also evaluated. Our principal findings concern the description of structural anomalies in the mixture, the dependence of the temperature of maximum density on the ionic concentration, and establishing the regions delimiting the structural and thermodynamic anomalies of the model mixture.

  11. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur


    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. The ice cream model mixtures were prepared according to specific recipes and then subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe tip of a specific diameter (12.7 mm) was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. From the results it can be concluded that all model mixtures show non-Newtonian, dilatant-type behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower values were observed for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, the foam stability index and the minimal viscosity. Ice cream model mixtures treated only with ultrasound showed a minimal increase in foam volume, while the highest increase in foam volume was observed for the ice cream mixture treated with the combination of mechanical and ultrasound treatment. Also, ice cream mixtures with a higher protein content showed higher foam stability. The optimal treatment time was determined to be 10 minutes.
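The reported rheological parameters (flow index and consistency coefficient) correspond to the power-law (Ostwald-de Waele) model of apparent viscosity. A sketch follows; the two-point fitting helper is an illustrative recovery of the parameters, not the study's regression procedure.

```python
import math

def apparent_viscosity(K, n, shear_rate):
    # Power-law fluid: eta = K * gamma**(n - 1).
    # n > 1 gives the dilatant (shear-thickening) behavior reported;
    # K is the consistency coefficient, n the flow index.
    return K * shear_rate ** (n - 1)

def fit_power_law(g1, eta1, g2, eta2):
    # Recover flow index n and consistency coefficient K from two
    # apparent-viscosity measurements (illustrative two-point fit).
    n = 1 + (math.log(eta2) - math.log(eta1)) / (math.log(g2) - math.log(g1))
    K = eta1 / g1 ** (n - 1)
    return K, n
```

For a dilatant mixture (n > 1) the apparent viscosity rises with shear rate, whereas the ultrasound-treated mixtures in the study show lower viscosity overall (smaller K).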

  12. Irreversible Processes in a Universe modelled as a mixture of a Chaplygin gas and radiation

    CERN Document Server

    Kremer, G M


    The evolution of a Universe modelled as a mixture of a Chaplygin gas and radiation is determined by taking into account irreversible processes. This mixture could interpolate periods of a radiation dominated, a matter dominated and a cosmological constant dominated Universe. The results of a Universe modelled by this mixture are compared with the results of a mixture whose constituents are radiation and quintessence. Among other results it is shown that: (a) for both models there exists a period of a past deceleration with a present acceleration; (b) the slope of the acceleration of the Universe modelled as a mixture of a Chaplygin gas with radiation is more pronounced than that modelled as a mixture of quintessence and radiation; (c) the energy density of the Chaplygin gas tends to a constant value at earlier times than the energy density of quintessence does; (d) the energy density of radiation for both mixtures coincide and decay more rapidly than the energy densities of the Chaplygin gas and of quintessen...

  13. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander


    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended to adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend...... on the distance from the solid surface (or position in the porous space). The theory describes the two kinds of interactions present in the adsorbed fluid, i.e. the fluid-fluid and fluid-solid interactions, by means of an equation of state and interaction potentials, respectively. The proposed extension......

  14. Evaluation of the Thermodynamic Models for the Thermal Diffusion Factor

    DEFF Research Database (Denmark)

    Gonzalez-Bagnoli, Mariana G.; Shapiro, Alexander; Stenby, Erling Halfdan


    Over the years, several thermodynamic models for the thermal diffusion factors for binary mixtures have been proposed. The goal of this paper is to test some of these models in combination with different equations of state. We tested the following models: those proposed by Rutherford and Drickame...

  15. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Ca...... forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications....

  16. Human Factors Model (United States)


    Jack is an advanced human factors software package that provides a three-dimensional model for predicting how a human will interact with a given system or environment. It can be used for a broad range of computer-aided design applications. Jack was developed by the Computer Graphics Research Laboratory of the University of Pennsylvania with assistance from NASA's Johnson Space Center, Ames Research Center and the Army. It is the University's first commercial product. Jack is still used for academic purposes at the University of Pennsylvania. Commercial rights were given to Transom Technologies, Inc.

  17. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J


    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...

  18. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey


    Based on the developed model, a calculation method was obtained that is used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline when temperature, pressure and hydraulic conditions change.

  19. Optimal Penalty Functions Based on MCMC for Testing Homogeneity of Mixture Models

    Directory of Open Access Journals (Sweden)

    Rahman Farnoosh


    Full Text Available This study is intended to provide an estimation of the penalty function for testing homogeneity of mixture models based on Markov chain Monte Carlo simulation. The penalty function is considered as a parametric function, and the parameter determining the shape of the penalty function, together with the parameters of the mixture models, is estimated by a Bayesian approach. Different mixtures of uniform distributions are used as priors. Some simulation examples are performed to confirm the efficiency of the present work in comparison with previous approaches.

  20. Scattering for mixtures of hard spheres: comparison of total scattering intensities with model. (United States)

    Anderson, B J; Gopalakrishnan, V; Ramakrishnan, S; Zukoski, C F


    The angular dependence of the intensity of x-rays scattered from binary and ternary hard sphere mixtures is investigated and compared to the predictions of two scattering models. Mixture ratio and total volume fraction dependent effects are investigated for size ratios equal to 0.51 and 0.22. Comparisons of model predictions with experimental results indicate the significant role of particle size distributions in interpreting the angular dependence of the scattering at wave vectors probing density fluctuations intermediate between the sizes of the particles in the mixture.

  1. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data


    Lei Wang; Satoshi Uchida


    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classificat...
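The sub-pixel idea behind a linear spectral mixture model can be shown for the simplest case of two endmembers with the sum-to-one constraint: each pixel spectrum is approximated as f*e1 + (1-f)*e2 and the abundance fraction f is found by least squares. The endmember spectra in the test are made-up numbers, not MODIS values.

```python
def unmix_two_endmembers(pixel, e1, e2):
    # Linear spectral mixture model, two endmembers, sum-to-one:
    # pixel ~= f * e1 + (1 - f) * e2.
    # Closed-form least-squares solution for f, clipped to [0, 1].
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    f = num / den
    return min(1.0, max(0.0, f))  # abundance fraction of endmember e1
```

With more endmembers (e.g. rice, water, built-up land) the same idea becomes a constrained least-squares problem per pixel, which is how sub-pixel planted-area proportions are derived.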



    Liu, C. Y.; Ren, H.


    Hyperspectral spectrometers can record electromagnetic energy with hundreds or thousands of spectral channels. With such high spectral resolution, the spectral information has better capability for material identification. Because of the spatial resolution, one pixel in hyperspectral images usually covers several meters, and it may contain more than one material. Therefore, the mixture model must be considered. Linear mixture model (LMM) has been widely used for remote sensing target...

  3. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander


    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only...... components and to only one parameter for mixtures consisting of non-polar components. A possibility of complete prediction of the parameters is discussed....

  4. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J


    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...... categorizing only the most extreme SCS observations as mastitic, and such cases of subclinical infections may be the most closely related to clinical (treated) mastitis...

  5. Infinite mixture-of-experts model for sparse survival regression with application to breast cancer

    Directory of Open Access Journals (Sweden)

    Dahl Edgar


    Full Text Available Abstract Background We present an infinite mixture-of-experts model to find an unknown number of sub-groups within a given patient cohort based on survival analysis. The effect of patient features on survival is modeled using the Cox proportional hazards model, which yields a non-standard regression component. The model is able to find key explanatory factors (chosen from main effects and higher-order interactions) for each sub-group by enforcing sparsity on the regression coefficients via the Bayesian Group-Lasso. Results Simulated examples justify the need of such an elaborate framework for identifying sub-groups along with their key characteristics versus other simpler models. When applied to a breast-cancer dataset consisting of survival times and protein expression levels of patients, it results in identifying two distinct sub-groups with different survival patterns (low-risk and high-risk), along with the respective sets of compound markers. Conclusions The unified framework presented here, combining elements of cluster and feature detection for survival analysis, is clearly a powerful tool for analyzing survival patterns within a patient group. The model also demonstrates the feasibility of analyzing complex interactions which can contribute to the definition of novel prognostic compound markers.

  6. On the Controlling Factor of Catalyst Temperature in C3H8-Air Mixture

    Institute of Scientific and Technical Information of China (English)

    Goro ONUMA; Mitsuaki TANABE; Kiyoshi AOKI


    Catalytic combustion of a propane-air mixture was investigated. Platinum catalysts on flat stainless steel with a γ-alumina washcoat were employed. The burner has three catalysts set parallel to the mixture flow, spaced at intervals of 5, 10 and 15 mm. Both experiments and numerical simulations were made at an inlet temperature of 553 K, inlet velocities of 3 to 7 m/s and equivalence ratios of 0.3 to 0.5. In the numerical simulation, a two-dimensional, steady-state model was developed to calculate the temperature and species concentrations in the gas phase. In this model, chemical reactions on the catalyst surface and in the gas phase were assumed to occur in three steps. The numerical results show good agreement with the experimental results. It was found that the properties of the catalyst strongly affect the catalyst surface temperature. In particular, the thermal conductivity of the catalyst has a great effect, while the emissivity of the catalyst has less effect.

  7. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan


    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... surface tensions of 34 binary mixtures with an overall average absolute deviation of 3.46%. The results show good agreement between the predicted and experimental surface tensions. Next, the SGT model was applied to correlate surface tensions of binary mixtures containing alcohols, water or/and glycerol...

  8. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.;


    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even...... studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved....

  9. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad


    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two or three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window in the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on real image sequences have given good results, with PSNR gains reaching 3 dB.
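The matching step can be sketched as picking, among candidate blocks, the one whose fitted Gaussian cluster lies closest to the current block's feature vector in Mahalanobis distance. The sketch below uses a diagonal-covariance simplification of the paper's extended Mahalanobis distance, and the dictionary field names are hypothetical.

```python
def mahalanobis_diag(x, mean, var):
    # Mahalanobis distance with a diagonal covariance (simplification;
    # the paper uses full covariance matrices from the EM fit).
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)) ** 0.5

def best_match(block_feature, candidates):
    # Each candidate carries the mean/variance of its fitted Gaussian
    # cluster; pick the closest one (illustrative matching criterion).
    return min(candidates,
               key=lambda c: mahalanobis_diag(block_feature, c["mean"], c["var"]))
```

Dividing by the per-dimension variance makes the criterion scale-aware: deviations along high-variance directions are penalized less than the plain Euclidean distance used in classical block matching.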

  10. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies. (United States)

    Lin, Lin; Chan, Cliburn; West, Mike


    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets.

  11. Explosive limits and container factors of polybasic explosive gas mixtures containing H2, CH4 and CO

    Institute of Scientific and Technical Information of China (English)

    胡耀元; 李勇; 朱凯汉; 周邦智; 杨元法


    The explosive characteristics of polybasic explosive gas mixtures are systematically investigated. Over 28000 experimental data points were obtained from 1278 effective experiments. The paper examines the concentration explosive limits and the container factors of a polybasic explosive gas mixture containing H2, CH4 and CO. It derives the necessary and sufficient condition for branch-chain explosion and a unified expression for the probability of heterogeneous chain termination. Experiments indicate that the concentration explosive limits of the polybasic explosive gas mixture (H2, CH4, CO) depend on many factors, widening as the container grows (linear size, geometric shape, and flame-spread direction). This will be of great significance for guiding the revision of related industrial safety targets, reclaiming and reusing industrial tail gas and waste gas, taking precautions against the explosion hazard of gas mixtures in industry and mines, and applying the br...

  12. 40 CFR Table 2b to Subpart E of... - Reactivity Factors for Aliphatic Hydrocarbon Solvent Mixtures (United States)


    ... Hydrocarbon Solvent Mixtures 2B Table 2B to Subpart E of Part 59 Protection of Environment ENVIRONMENTAL... Hydrocarbon Solvent Mixtures Bin Averageboiling point * (degrees F) Criteria Reactivityfactor 1 80-205 Alkanes... + Dry Point) / 2 (b) Aromatic Hydrocarbon Solvents...

  13. Volumetric Properties of Chloroalkanes + Amines Mixtures: Theoretical Analysis Using the ERAS-Model (United States)

    Tôrres, R. B.; Hoga, H. E.; Magalhães, J. G.; Volpe, P. L. O.


    In this study, experimental data of excess molar volumes of {dichloromethane (DCM), or trichloromethane (TCM) + n-butylamine (n-BA), or +s-butylamine (s-BA), or +t-butylamine (t-BA), or +diethylamine (DEA), or +triethylamine (TEA)} mixtures as a function of composition have been used to test the applicability of the extended real associated solution model (ERAS-Model). The values of the excess molar volume were negative for (DCM + t-BA, or +DEA, or +TEA and TCM + n-BA, or +s-BA, or +DEA, or +TEA) mixtures and present sigmoid curves for (DCM + n-BA, or +s-BA) mixtures over the complete mole-fraction range. The agreement between theoretical and experimental results is discussed in terms of cross-association between the components present in the mixtures.

  14. Kinetic Modeling of Gasoline Surrogate Components and Mixtures under Engine Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Pitz, W J; Westbrook, C K; Curran, H J


    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, an improved version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multicomponent gasoline surrogates are compared with recent experimental information collected in rapid compression machines, shock tubes and jet-stirred reactors covering a wide range of conditions pertinent to internal combustion engines (3-50 atm, 650-1200 K, stoichiometric fuel/air mixtures). Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  15. Simulating asymmetric colloidal mixture with adhesive hard sphere model. (United States)

    Jamnik, A


    Monte Carlo simulation and Percus-Yevick (PY) theory are used to investigate the structural properties of a two-component system of Baxter adhesive fluids with size asymmetry between the particles of the two components, mimicking an asymmetric binary colloidal mixture. The radial distribution functions for all possible species pairs, g11(r), g22(r), and g12(r), exhibit discontinuities at the interparticle distances corresponding to certain combinations of n and m values (n and m being integers) in the sum nσ1 + mσ2 (σ1 and σ2 being the hard-core diameters of the individual components), as a consequence of the impulse character of the 1-1, 2-2, and 1-2 attractive interactions. In contrast to the PY theory, which predicts the delta-function peaks in the shape of gij(r) only at distances that are multiples of the molecular sizes, corresponding to different linear structures of successively connected particles, the simulation results reveal additional peaks at intermediate distances originating from the formation of rigid clusters of various geometries.

  16. Tumour promotion by complex mixtures of polyhalogenated aromatic hydrocarbons (PHAHs) and the applicability of the toxic equivalency factor (TEF) concept


    Plas, van der, H.C.


    The aim of the project described in this thesis consisted of two main objectives: first, to examine the tumour promotion potential of complex, environmentally relevant mixtures of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs), and secondly, to evaluate the applicability of the Toxic Equivalency Factor (TEF) concept for the tumour promotion potential of complex mixtures of PCBs, PCDDs and PCDFs. In addition, the effe...

  17. Some covariance models based on normal scale mixtures

    CERN Document Server

    Schlather, Martin


    Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.

  18. Mixture Models for the Analysis of Repeated Count Data.

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Böckenholt, U


    Repeated count data showing overdispersion are commonly analysed using a Poisson model with a varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An...

  19. Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity (United States)


    Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity. Nathaniel... [Only standard report-documentation (Form 298) fields survive in this record.] Keywords: phase equilibria, modified Sanchez-Lacombe equation of state

  20. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)



    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.
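    The recovery step above fits scaled gamma distributions to moment data; a minimal sketch of matching a gamma distribution to the first two moments (the paper performs successive estimations over the full moment time data) is:

```python
def gamma_from_moments(mean, variance):
    """Method-of-moments fit of a gamma distribution: shape k and scale theta
    such that k * theta = mean and k * theta**2 = variance."""
    shape = mean ** 2 / variance
    scale = variance / mean
    return shape, scale

# A distribution with mean 6 and variance 12 maps to shape 3, scale 2.
print(gamma_from_moments(6.0, 12.0))  # → (3.0, 2.0)
```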

  1. Mixture modeling methods for the assessment of normal and abnormal personality, part I: cross-sectional models. (United States)

    Hallquist, Michael N; Wright, Aidan G C


    Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.

  2. A mixture model for the joint analysis of latent developmental trajectories and survival

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Hout, A. van den


    A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous time survival data, and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in...

  4. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.


    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  5. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models (United States)

    Karagiannis, Georgios; Lin, Guang


    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixtures, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown, input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
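    A toy, deterministic sketch of the core idea follows: two fixed "computer models" blended by an input-dependent weight function. In the paper the weight functions are unknown and inferred in a Bayesian framework; the models and the logistic weight here are hypothetical.

```python
import math

# Two hypothetical "computer models" of the same quantity.
def model_a(x):  # assumed more accurate for small x
    return x

def model_b(x):  # assumed more accurate for large x
    return x + 0.5

def weight(x, x0=1.0, k=4.0):
    """Logistic weight on model_a; shifts smoothly toward model_b as x grows."""
    return 1.0 / (1.0 + math.exp(k * (x - x0)))

def mixture(x):
    """Input-dependent convex combination of the two model outputs."""
    w = weight(x)
    return w * model_a(x) + (1 - w) * model_b(x)

print(mixture(1.0))   # at the crossover point, equal weights: 0.5*1.0 + 0.5*1.5 = 1.25
print(mixture(10.0))  # far from it, essentially model_b: ≈ 10.5
```

    Because the weight depends on the input, the mixture can defer to whichever model is locally more accurate, which a single global weight cannot do.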

  6. Structure-reactivity modeling using mixture-based representation of chemical reactions (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre


    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of the reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) labeling of the reaction center.
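    The two encoding variants can be sketched as follows, using hypothetical count vectors in place of the actual SiRMS simplex descriptors:

```python
import numpy as np

def reaction_descriptor(reactant_vecs, product_vecs, mode="difference"):
    """Encode a reaction from per-molecule descriptor vectors.
    Each mixture (reactants, products) is summed into one vector; the reaction
    is either the difference (products minus reactants) or the concatenation
    of the two mixture vectors."""
    r = np.sum(reactant_vecs, axis=0)
    p = np.sum(product_vecs, axis=0)
    if mode == "difference":
        return p - r
    return np.concatenate([r, p])

# Hypothetical 3-feature count descriptors for two reactants and one product.
reactants = [np.array([1, 0, 2]), np.array([0, 1, 0])]
products = [np.array([1, 1, 1])]

diff_vec = reaction_descriptor(reactants, products)                 # array([ 0,  0, -1])
both_vec = reaction_descriptor(reactants, products, "concatenate")  # length 6
```

    Note that neither variant needs atom-to-atom mapping or a labeled reaction center, which is the point of the mixture-based representation.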

  7. Microstructure modeling and virtual test of asphalt mixture based on three-dimensional discrete element method

    Institute of Scientific and Technical Information of China (English)

    马涛; 张德育; 张垚; 赵永利; 黄晓明


    The objective of this work is to model the microstructure of asphalt mixture and build virtual test for asphalt mixture by using Particle Flow Code in three dimensions (PFC3D) based on three-dimensional discrete element method. A randomly generating algorithm was proposed to capture the three-dimensional irregular shape of coarse aggregate. And then, modeling algorithm and method for graded aggregates were built. Based on the combination of modeling of coarse aggregates, asphalt mastic and air voids, three-dimensional virtual sample of asphalt mixture was modeled by using PFC3D. Virtual tests for penetration test of aggregate and uniaxial creep test of asphalt mixture were built and conducted by using PFC3D. By comparison of the testing results between virtual tests and actual laboratory tests, the validity of the microstructure modeling and virtual test built in this study was verified. Additionally, compared with laboratory test, the virtual test is easier to conduct and has less variability. It is proved that microstructure modeling and virtual test based on three-dimensional discrete element method is a promising way to conduct research of asphalt mixture.

  8. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik


    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models available in the commercial software PFC3D were used for the construction of a constitutive asphalt mixture model with viscoelastic properties: the slip model, the linear stiffness-contact model, and the contact bond model. A macro-scale Burger's model was first established, and the input parameters of Burger's contact model were calibrated by adjusting them so that the model fitted the experimental data for the complex modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading, with excellent agreement between the predicted and measured values.

  9. Mixture Density Mercer Kernels (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  10. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard


    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  11. Modeling and experimental measurements of thermodynamic properties of natural gas mixtures and their components (United States)

    Gomez Osorio, Martin Alonso

    Chemical process design requires mathematical models for predicting thermophysical properties. Those models, called equations of state (EoS), need experimental data for parameter estimation and validation. This work presents a detailed description of a vibrating tube densimeter, which is an alternative technique for measurement of p-rho-T data in gases at critical conditions. This apparatus can measure fluids in a temperature range of 300 K to 470 K and pressures up to 140 MPa. This work calibrates the vibrating tube using a physical-based methodology with nitrogen, methane and argon measurements. Carbon dioxide and ethane p-rho-T data validate calibration procedures covering a wide range in density and pressure. The vibrating tube densimeter performs density measurements for nitrogen + methane mixtures for pressures up to 140 MPa. This work also presents a new equation of state (EoS) having a rational form that can describe properties with accuracy comparable to the best multi-parametric equations with less mathematical complexity. This EoS presents the Helmholtz residual energy as a ratio of two polynomial functions in density (no exponential terms in density are included), which can describe the behavior of pure components. The EoS can be transformed to describe other thermophysical properties such as pressure, compressibility factor, heat capacity and speed of sound. Also this equation can calculate saturated liquid-vapor properties with 20 times less computational time. This work presents rational EoS for nitrogen, argon and methane applicable in wide ranges of pressure and temperature. Finally, this work proposes a new mixing rule for binary mixtures of gases based upon a quadratic combination of residual Helmholtz energy. This approach divides the energy contribution between interactions of same species and interactions of different species molecules. A rational form is proposed for description of energy interaction between molecules of different species. The...

  12. Treatment of nonignorable missing data when modeling unobserved heterogeneity with finite mixture models. (United States)

    Lehmann, Thomas; Schlattmann, Peter


    Multiple imputation has become a widely accepted technique to deal with the problem of incomplete data. Typically, imputation of missing values and the statistical analysis are performed separately. Therefore, the imputation model has to be consistent with the analysis model. If the data are analyzed with a mixture model, the parameter estimates are usually obtained iteratively. Thus, if the data are missing not at random, parameter estimation and treatment of missingness should be combined. We solve both problems by simultaneously imputing values using the data augmentation method and estimating parameters using the EM algorithm. This iterative procedure ensures that the missing values are properly imputed given the current parameter estimates. Properties of the parameter estimates were investigated in a simulation study. The results are illustrated using data from the National Health and Nutrition Examination Survey.
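    A simplified illustration of the combine-imputation-with-estimation idea, reduced to a single Gaussian (rather than a finite mixture) and with the stochastic data-augmentation draw replaced by its expectation:

```python
import numpy as np

def em_gaussian_missing(x, iters=100):
    """Estimate the mean and variance of a normal sample containing NaNs by
    alternating imputation with the current estimates and re-estimation."""
    n, miss = len(x), np.isnan(x)
    obs = x[~miss]
    mu, var = obs.mean(), obs.var()
    for _ in range(iters):
        # E-step: fill missing entries with their expected value given current mu
        filled = np.where(miss, mu, x)
        # M-step: update parameters given the completed data; the missing
        # entries also contribute their conditional variance, var
        mu = filled.mean()
        var = (((filled - mu) ** 2).sum() + miss.sum() * var) / n
    return mu, var

x = np.array([1.0, 2.0, 3.0, np.nan, np.nan])
print(em_gaussian_missing(x))  # ≈ (2.0, 0.667), the observed-data MLE under MAR
```

    In the paper the E-step instead draws imputed values from their conditional distribution (data augmentation) inside each EM iteration, so the imputations stay consistent with the current mixture parameters.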

  13. A class-adaptive spatially variant mixture model for image segmentation. (United States)

    Nikou, Christophoros; Galatsanos, Nikolaos P; Likas, Aristidis C


    We propose a new approach for image segmentation based on a hierarchical and spatially variant mixture model. According to this model, the pixel labels are random variables and a smoothness prior is imposed on them. The main novelty of this work is a new family of smoothness priors for the label probabilities in spatially variant mixture models. These Gauss-Markov random field-based priors allow all their parameters to be estimated in closed form via the maximum a posteriori (MAP) estimation using the expectation-maximization methodology. Thus, it is possible to introduce priors with multiple parameters that adapt to different aspects of the data. Numerical experiments are presented where the proposed MAP algorithms were tested in various image segmentation scenarios. These experiments demonstrate that the proposed segmentation scheme compares favorably to both standard and previous spatially constrained mixture model-based segmentation.

  14. Introduction to the special section on mixture modeling in personality assessment. (United States)

    Wright, Aidan G C; Hallquist, Michael N


    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  15. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng


    Full Text Available We propose a self-splitting Gaussian mixture learning (SGML algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM based on a self-splitting validity measure, Bayesian information criterion (BIC. It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of the SGML for model-based clustering and automatically determining the model complexity of the speaker GMMs for speaker identification.
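    A self-contained sketch of the stopping criterion follows: EM fits for increasing component counts scored by BIC, for one-dimensional data. This is not the SGML splitting procedure itself (which splits components adaptively); initialization here uses data quantiles for simplicity, and the data are synthetic.

```python
import numpy as np

def em_gmm_1d(x, k, iters=200):
    """Fit a 1-D Gaussian mixture with EM; return the final log-likelihood
    and the number of free parameters (k means, k variances, k-1 weights)."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        total = dens.sum(axis=1, keepdims=True) + 1e-300
        resp = dens / total
        # M-step: re-estimate weights, means, variances (with small floors)
        nk = resp.sum(axis=0) + 1e-12
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return float(np.log(total).sum()), 3 * k - 1

def best_k_by_bic(x, kmax=5):
    """Choose the component count minimizing BIC = -2*loglik + p*log(n)."""
    bic = {}
    for k in range(1, kmax + 1):
        ll, p = em_gmm_1d(x, k)
        bic[k] = -2 * ll + p * np.log(len(x))
    return min(bic, key=bic.get)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(c, 0.5, 100) for c in (-5.0, 0.0, 5.0)])
print(best_k_by_bic(data))
```

    For well-separated clusters like these, the likelihood gain from a fourth component is small compared with the BIC penalty of 3*log(300) per extra component, so the search settles on three.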

  16. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
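    The robustness claim rests on the t distribution's heavy tails; a quick density comparison (with an arbitrary choice of 3 degrees of freedom) shows how much more mass the t model assigns to a 5-sigma outlier, so such a point distorts the fitted centroids far less than under a Gaussian model.

```python
import math

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def student_t_pdf(z, nu=3.0):
    """Standard Student's-t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + z * z / nu) ** (-(nu + 1) / 2)

# At 5 standard units from the centroid, the t density is orders of
# magnitude larger than the normal density.
print(normal_pdf(5.0), student_t_pdf(5.0))
```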

  17. Theoretic model and computer simulation of separating mixture metal particles from waste printed circuit board by electrostatic separator. (United States)

    Li, Jia; Xu, Zhenming; Zhou, Yaohe


    Traditionally, the mixed metals recovered from waste printed circuit boards (PCBs) were sent to a smelting plant to refine pure copper. Some valuable metals (aluminum, zinc and tin) present at low content in PCBs were lost during smelting. A new method that uses a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. A theoretical model, established by computing the electric field and analyzing the forces on the particles, was used to write a program in MATLAB. The program was designed to simulate the process of separating mixed metal particles. Electrical, material and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results of separating copper and aluminum particles by RES were in good agreement with the computer simulation results. The model can be used to simulate the separation of other metal (tin, zinc, etc.) particles in the process of recycling waste PCBs by RES.

  18. Isothermal (vapour + liquid) equilibrium of (cyclic ethers + chlorohexane) mixtures: Experimental results and SAFT modelling

    Energy Technology Data Exchange (ETDEWEB)

    Bandres, I.; Giner, B.; Lopez, M.C.; Artigas, H. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain); Lafuente, C. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)]


    Experimental data for the isothermal (vapour + liquid) equilibrium of mixtures formed by several cyclic ethers (tetrahydrofuran, tetrahydropyran, 1,3-dioxolane, and 1,4-dioxane) and chlorohexane at temperatures of (298.15 and 328.15) K are presented. The experimental results are discussed in terms of both the molecular characteristics of the pure compounds and potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. Furthermore, the influence of temperature on the (vapour + liquid) equilibrium of these mixtures is explored and discussed. Transferable parameters of the SAFT-VR approach together with standard combining rules have been used to model the phase equilibrium of the mixtures, providing a description of the (vapour + liquid) equilibrium that is in excellent agreement with the experimental data.

  19. Modeling dependence based on mixture copulas and its application in risk management

    Institute of Scientific and Technical Information of China (English)

    OUYANG Zi-sheng; LIAO Hui; YANG Xiang-qun


    This paper is concerned with statistical modeling of the dependence structure of multivariate financial data using copulas, and with the application of copula functions in VaR valuation. After introducing the pure copula method and the maximum and minimum mixture copula method, the authors present a new algorithm based on more generalized mixture copula functions and the dependence measure, and apply the method to a portfolio of the Shanghai stock composite index and the Shenzhen stock component index. Comparing the results from the various methods shows that the mixture copula method performs better than the pure Gaussian copula method and the maximum and minimum mixture copula method at different VaR levels.
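    A toy sampler (not the paper's estimation algorithm, and with arbitrary parameters) showing how a two-component mixture copula interpolates between a dependent and an independent component:

```python
import math
import random

def sample_mixture_copula(n, w=0.5, rho=0.8, seed=7):
    """Draw n pairs (u, v) from a two-component mixture copula: a Gaussian
    copula with correlation rho (probability w) and the independence copula
    (probability 1 - w). Both marginals remain uniform on [0, 1]."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    out = []
    for _ in range(n):
        if rng.random() < w:
            # Gaussian copula component: correlated normals pushed through the CDF
            z1 = rng.gauss(0, 1)
            z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
            out.append((phi(z1), phi(z2)))
        else:
            # independence component
            out.append((rng.random(), rng.random()))
    return out

pairs = sample_mixture_copula(4000)
mean_u = sum(u for u, _ in pairs) / len(pairs)  # marginals stay uniform: ≈ 0.5
```

    The mixture weight w acts as a dependence dial: the sample's rank dependence sits between that of the pure Gaussian copula and independence, which is the flexibility the mixture copula method exploits for VaR.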

  20. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik


    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD. The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verification of a possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide. These mixtures contained volatile organic compounds (VOCs such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as the components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers. Concentration of the COPD markers in the mixtures was from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA. The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture and for a limited number of basic mixture components.

  1. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm. (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z


    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
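
The E-step/M-step alternation can be illustrated on a far simpler model, a two-component univariate Gaussian mixture, where the E-step posterior is available in closed form (unlike the sampling-based E-step the paper develops for nonlinear random effects models); this is our simplified analogue, not the authors' algorithm:

```python
import numpy as np

def em_gaussian_mixture(y, n_iter=200):
    """EM for a two-component univariate Gaussian mixture."""
    w, mu1, mu2, s1, s2 = 0.5, y.min(), y.max(), y.std(), y.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from component 1
        # (the 1/sqrt(2*pi) constants cancel in the ratio).
        p1 = w * np.exp(-0.5 * ((y - mu1) / s1) ** 2) / s1
        p2 = (1 - w) * np.exp(-0.5 * ((y - mu2) / s2) ** 2) / s2
        r = p1 / (p1 + p2)
        # M-step: weighted maximum-likelihood updates
        w = r.mean()
        mu1 = (r * y).sum() / r.sum()
        mu2 = ((1 - r) * y).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (y - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (y - mu2) ** 2).sum() / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)
```

In the paper's setting the quantities inside the E-step are intractable and are instead approximated by sampling, which is what lets the precision be controlled by the sampling process.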


    Directory of Open Access Journals (Sweden)

    Orishenko I. V.


    Full Text Available This article reviews the basic principles of air-fuel mixture explosions and their damaging factors, such as the air blast wave, gas streams, fragments, flame heat, light radiation and loud sounds. A calculation technique for the consequences of emergency releases is given, intended for the quantitative estimation of air blast wave parameters when air-fuel mixture explosions form in the atmosphere during industrial accidents. The basic structural elements of the calculation algorithm are listed. The technique assumes partial depressurization or full destruction of equipment containing a combustible substance in a gaseous or liquid phase, release of this substance into the atmosphere, formation of an air-fuel mixture cloud, ignition of the mixture, and the explosive transformation (deflagration or detonation) of the cloud. The technique allows an approximate estimation of various air blast wave parameters and the determination of probable degrees of human injury and building damage in accidents involving air-fuel mixture cloud explosions. The technique is implemented in C# in the Microsoft Visual Studio 2010 integrated development environment. A program fragment computing the dimensionless pressure Px and the dimensionless impulse Ix is given.

  3. Motif Yggdrasil: sampling sequence motifs from a tree mixture model. (United States)

    Andersson, Samuel A; Lagergren, Jens


    In phylogenetic footprinting, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but requires neither aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in nucleotide-level correlation coefficient both for synthetic data and for 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that even with random trees the MY sampler still performs similarly to the original version.

  4. Solvable model of a trapped mixture of Bose-Einstein condensates (United States)

    Klaiman, Shachar; Streltsov, Alexej I.; Alon, Ofir E.


    A mixture of two kinds of identical bosons held in a harmonic potential and interacting by harmonic particle-particle interactions is discussed. This is an exactly solvable model of a mixture of two trapped Bose-Einstein condensates which allows us to examine various properties analytically. Generalizing the treatments in Cohen and Lee (1985) and Osadchii and Muraktanov (1991), closed-form expressions for the mixture's frequencies, ground-state energy and wave function, and the lowest-order densities are obtained and analyzed for attractive and repulsive intra-species and inter-species particle-particle interactions. A particular mean-field solution of the corresponding Gross-Pitaevskii theory is also found analytically. This allows us to compare properties of the mixture at the exact, many-body and mean-field levels, both for finite systems and in the limit of an infinite number of particles. We discuss the renormalization of the mixture's frequencies at the mean-field level. In particular, we prove that the exact ground-state energy per particle and lowest-order intra-species and inter-species densities per particle converge in the infinite-particle limit (when the products of the number of particles times the intra-species and inter-species interaction strengths are held fixed) to the results of the Gross-Pitaevskii theory for the mixture. Finally, at the other extreme, we use the mixture's and each species' center-of-mass operators to show that the Gross-Pitaevskii theory for mixtures is unable to describe the variance of many-particle operators in the mixture, even in the infinite-particle limit. The variances are computed in both position and momentum space, and the respective uncertainty products are compared and discussed. The role of the center-of-mass separability and, for generically trapped mixtures, inseparability is elucidated when contrasting the variance at the many-body and mean-field levels in a mixture. Our analytical results show that many

  5. A general mixture model and its application to coastal sandbar migration simulation (United States)

    Liang, Lixin; Yu, Xiping


    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles separately: a modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated by suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones within a vertical 2D framework. The VOF method for describing the water-air free surface is coupled with a topography evolution model. The bed load transport rate and suspended load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing the incoming waves, while deposition dominated on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computation results also show that

  6. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan


    Full Text Available Odor intensity (OI indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model which calculates the OI of a mixture as the vector sum of its unmixed components’ odor intensities was modified. Based on a detected linear relation between the OI and the logarithm of odor activity value (OAV—a ratio between chemical concentration and odor threshold of individual odorants, OI of the unmixed component was replaced with its corresponding logarithm of OAV. The interaction coefficient (cosα which represented the degree of interaction between two constituents was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture with the lnOAV values of its constituents. Thus, OI of an odor mixture could be directly predicted by employing the modified Vector Model after usual quantitative analysis. Besides, it was considered that the modified Vector Model was applicable for odor mixtures which consisted of odorants with the same chemical functional groups and similar molecular structures.
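
For a binary mixture, the modified Vector Model reduces to a one-line formula; the cos α value used below is purely illustrative (in practice the interaction coefficient is measured for each odorant pair):

```python
import math

def mixture_oi(oav1, oav2, cos_alpha=0.5):
    """Modified Vector Model for a binary odor mixture.

    Each unmixed component's odor intensity is replaced by ln(OAV);
    cos_alpha is the measured interaction coefficient (value here is
    an illustrative assumption, not from the paper).
    """
    i1, i2 = math.log(oav1), math.log(oav2)
    return math.sqrt(i1**2 + i2**2 + 2 * i1 * i2 * cos_alpha)
```

With cos α = 1 the model collapses to simple addition of the two ln OAV terms; with cos α = 0 it gives vector (Pythagorean) summation.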

  7. Reverse Engineering Boolean Networks: From Bernoulli Mixture Models to Rule Based Systems (United States)

    Saeed, Mehreen; Ijaz, Maliha; Javed, Kashif; Babri, Haroon Atique


    A Boolean network is a graphical model for representing and analyzing the behavior of gene regulatory networks (GRN). In this context, the accurate and efficient reconstruction of a Boolean network is essential for understanding the gene regulation mechanism and the complex relations that exist therein. In this paper we introduce an elegant and efficient algorithm for the reverse engineering of Boolean networks from a time series of multivariate binary data corresponding to gene expression data. We call our method ReBMM, i.e., reverse engineering based on Bernoulli mixture models. The time complexity of most of the existing reverse engineering techniques is quite high and depends upon the indegree of a node in the network. Due to the high complexity of these methods, they can only be applied to sparsely connected networks of small sizes. ReBMM has a time complexity that is independent of the indegree of a node and quadratic in the number of nodes in the network, a big improvement over other techniques, with little or no compromise in accuracy. We have tested ReBMM on a number of artificial datasets along with simulated data derived from a plant signaling network. We also used this method to reconstruct a network from real experimental observations of microarray data of the yeast cell cycle. Our method provides a natural framework for generating rules from a probabilistic model. It is simple, intuitive and yields excellent empirical results. PMID:23284654
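
The Bernoulli mixture building block underlying ReBMM can be sketched as an EM fit over binary expression states; this is our simplified illustration, not the authors' implementation:

```python
import numpy as np

def bernoulli_mixture_em(X, k=2, n_iter=100, seed=0):
    """EM for a k-component Bernoulli mixture over binary vectors.

    X : (n, d) binary array (e.g. discretized gene-expression states).
    Returns mixing weights and per-component Bernoulli parameters.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(k, 1.0 / k)
    theta = rng.uniform(0.25, 0.75, size=(k, d))   # random init breaks symmetry
    for _ in range(n_iter):
        # E-step: responsibilities via log-likelihoods (shifted for stability)
        logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted frequencies, clipped away from 0/1
        nk = r.sum(axis=0)
        w = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return w, theta
```

In ReBMM the fitted components are then used to derive Boolean update rules for each node.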

  8. A homogenized constrained mixture (and mechanical analog) model for growth and remodeling of soft tissue. (United States)

    Cyron, C J; Aydin, R C; Humphrey, J D


    Most mathematical models of the growth and remodeling of load-bearing soft tissues are based on one of two major approaches: a kinematic theory that specifies an evolution equation for the stress-free configuration of the tissue as a whole or a constrained mixture theory that specifies rates of mass production and removal of individual constituents within stressed configurations. The former is popular because of its conceptual simplicity, but relies largely on heuristic definitions of growth; the latter is based on biologically motivated micromechanical models, but suffers from higher computational costs due to the need to track all past configurations. In this paper, we present a temporally homogenized constrained mixture model that combines advantages of both classical approaches, namely a biologically motivated micromechanical foundation, a simple computational implementation, and low computational cost. As illustrative examples, we show that this approach describes well both cell-mediated remodeling of tissue equivalents in vitro and the growth and remodeling of aneurysms in vivo. We also show that this homogenized constrained mixture model suggests an intimate relationship between models of growth and remodeling and viscoelasticity. That is, important aspects of tissue adaptation can be understood in terms of a simple mechanical analog model, a Maxwell fluid (i.e., spring and dashpot in series) in parallel with a "motor element" that represents cell-mediated mechanoregulation of extracellular matrix. This analogy allows a simple implementation of homogenized constrained mixture models within commercially available simulation codes by exploiting available models of viscoelasticity.
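
The mechanical analog described above can be summarized in equations; the notation (homeostatic "motor" stress σ_h in parallel with a Maxwell stress σ_M) is ours, a sketch of the analogy rather than the paper's full model:

```latex
% Maxwell element (spring E in series with dashpot \eta) in parallel
% with a constant "motor" stress \sigma_h representing cell-mediated
% mechanoregulation of the extracellular matrix:
\begin{align}
  \sigma &= \sigma_h + \sigma_M, &
  \dot{\varepsilon} &= \frac{\dot{\sigma}_M}{E} + \frac{\sigma_M}{\eta}.
\end{align}
% Under constant stretch (\dot{\varepsilon} = 0) the elastic part relaxes
% exponentially toward the homeostatic value:
\begin{equation}
  \sigma(t) = \sigma_h + \bigl(\sigma(0) - \sigma_h\bigr)\, e^{-E t/\eta},
\end{equation}
```

which is the sense in which growth and remodeling behaves like viscoelastic stress relaxation toward a cell-maintained target stress.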

  9. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van


    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  10. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    CERN Document Server

    Scalas, E


    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.
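
A single compound Poisson price path of the kind the model builds on can be simulated in a few lines; the Gaussian jump law and the parameter values are illustrative choices (the paper's model mixes several such processes):

```python
import numpy as np

def simulate_ticks(T=1.0, rate=100.0, sigma=0.01, s0=100.0, seed=0):
    """Simulate a price path as a compound Poisson process: trades arrive
    as a Poisson process and each trade moves the log-price by an i.i.d.
    Gaussian jump (one simple choice of jump distribution)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rate * T)                 # number of ticks in [0, T]
    times = np.sort(rng.uniform(0, T, n))     # Poisson arrival times
    jumps = rng.normal(0.0, sigma, n)         # log-return per tick
    prices = s0 * np.exp(np.cumsum(jumps))
    return times, prices
```

A mixture of compound Poisson processes would draw the pair (rate, sigma) per path, or per regime, from a finite set of components, which is what lets the model reproduce stylized facts such as fat-tailed returns.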

  11. Mixtures of compound Poisson processes as models of tick-by-tick financial data (United States)

    Scalas, Enrico


    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.

  12. Solvatochromic and Kinetic Response Models in (Ethyl Acetate + Chloroform or Methanol) Solvent Mixtures

    Directory of Open Access Journals (Sweden)

    L. R. Vottero


    Full Text Available The present work analyzes the solvent effects upon the solvatochromic response models for a set of chemical probes and the kinetic response models for an aromatic nucleophilic substitution reaction, in binary mixtures in which both pure components are able to form intersolvent complexes by hydrogen bonding.

  13. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation (United States)

    Rafal Podlaski; Francis A. Roesch


    This study assessed the usefulness of various methods for choosing the initial values of the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
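
One natural initialisation idea, splitting the sample at a dbh threshold between the two cohorts and applying a moment-based Weibull fit to each part, can be sketched as follows; the function names and the moment approximation are our assumptions, not necessarily the strategies the study tested:

```python
import math
import numpy as np
from scipy.stats import weibull_min

def dbh_mixture_pdf(x, w, shape1, scale1, shape2, scale2):
    """Density of a two-component Weibull mixture for dbh data
    (parameter names are ours, not the authors')."""
    return (w * weibull_min.pdf(x, shape1, scale=scale1)
            + (1 - w) * weibull_min.pdf(x, shape2, scale=scale2))

def split_initial_values(dbh, cut):
    """Initial values for a numerical fit: split the sample at a dbh
    threshold between cohorts and fit each part by the method of moments,
    using the standard (s/m)**-1.086 shape approximation."""
    parts = [dbh[dbh < cut], dbh[dbh >= cut]]
    comps = []
    for part in parts:
        m, s = part.mean(), part.std()
        shape = (s / m) ** -1.086
        scale = m / math.gamma(1 + 1 / shape)
        comps.append((shape, scale))
    return len(parts[0]) / len(dbh), comps
```

These crude estimates would then seed an EM or maximum-likelihood routine for the full mixture.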

  14. Detecting Gustatory–Olfactory Flavor Mixtures: Models of Probability Summation (United States)

    Veldhuizen, Maria G.; Shepard, Timothy G.; Shavit, Adam Y.


    Odorants and flavorants typically contain many components. It is generally easier to detect multicomponent stimuli than to detect a single component, through either neural integration or probability summation (PS) (or both). PS assumes that the sensory effects of 2 (or more) stimulus components (e.g., gustatory and olfactory components of a flavorant) are detected in statistically independent channels, that each channel makes a separate decision whether a component is detected, and that the behavioral response depends solely on the separate decisions. Models of PS traditionally assume high thresholds for detecting each component, noise being irrelevant. The core assumptions may be adapted, however, to signal-detection theory, where noise limits detection. The present article derives predictions of high-threshold and signal-detection models of independent-decision PS in detecting gustatory–olfactory flavorants, comparing predictions in yes/no and 2-alternative forced-choice tasks using blocked and intermixed stimulus designs. The models also extend to measures of response times to suprathreshold flavorants. Predictions derived from high-threshold and signal-detection models differ markedly. Available empirical evidence on gustatory–olfactory flavor detection suggests that neither the high-threshold nor the signal-detection versions of PS can readily account for the results, which likely reflect neural integration in the flavor system. PMID:22075720
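
The two competing accounts of probability summation can be written as one-line predictions; the function names and the criterion placement are illustrative sketches of the model classes the article derives, not its exact equations:

```python
from scipy.stats import norm

def ps_high_threshold(p_g, p_o, guess=0.0):
    """High-threshold probability summation: the flavorant is detected if
    either independent channel (gustatory or olfactory) detects its
    component; guess is an optional guessing rate."""
    p_true = 1 - (1 - p_g) * (1 - p_o)
    return p_true + (1 - p_true) * guess

def ps_signal_detection(dprime_g, dprime_o, criterion=1.0):
    """Signal-detection analogue: each channel says 'yes' if its noisy
    evidence exceeds a criterion; the observer responds 'yes' if either
    channel does (independent-decisions OR rule)."""
    miss_g = norm.cdf(criterion - dprime_g)
    miss_o = norm.cdf(criterion - dprime_o)
    return 1 - miss_g * miss_o
```

The marked divergence between the two accounts' predictions, noted in the abstract, comes from the noise term: in the signal-detection version even an "absent" component's channel can cross the criterion.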

  15. Mathematical modeling of gas-condensate mixture filtration in porous media taking into account non-equilibrium of phase transitions (United States)

    Kachalov, V. V.; Molchanov, D. A.; Sokotushchenko, V. N.; Zaichenko, V. M.


    At the present time, a considerable part of the largest dry gas reservoirs in Russia is in the stage of declining production; therefore, active exploitation of gas-condensate fields will begin in the coming decades. There is a significant discrepancy between the projected and actual values of the condensate recovery factor when producing reservoirs of this type, caused by insufficient knowledge of the non-equilibrium filtration mechanisms of gas-condensate mixtures under reservoir conditions. A system of differential equations describing the filtration process of a two-phase multicomponent mixture in one-, two- and three-dimensional cases is presented in this work. The system was solved by the finite-element method in the software package FlexPDE. Comparative distributions of velocities, pressures, saturations and phase compositions of a three-component mixture along the reservoir model and in time were obtained for both equilibrium and non-equilibrium filtration processes. Calculation results show that the system's deviation from thermodynamic equilibrium increases the gas phase flow rate and reduces the liquid phase flow rate during filtration of a gas-condensate mixture.

  16. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne


    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  17. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx. (United States)

    Grimm, Kevin J; Ram, Nilam; Estabrook, Ryne


    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use.

  18. Memoized Online Variational Inference for Dirichlet Process Mixture Models (United States)


    …for unsupervised modeling of structured data like text documents, time series, and images. They are especially promising for large datasets, as… …non-convex unsupervised learning problems, frequently yielding poor solutions (see Fig. 2). While taking the best of multiple runs is possible, this is…

  19. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter (United States)

    Kruskopf, Ari


    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid-mass-conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. Validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ɛ turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with the staggered SIMPLE algorithm.

  20. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter (United States)

    Kruskopf, Ari


    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid-mass-conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. Validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ɛ turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with the staggered SIMPLE algorithm.

  1. A generalized physiologically-based toxicokinetic modeling system for chemical mixtures containing metals

    Directory of Open Access Journals (Sweden)

    Isukapalli Sastry S


    Full Text Available Abstract Background Humans are routinely and concurrently exposed to multiple toxic chemicals, including various metals and organics, often at levels that can cause adverse and potentially synergistic effects. However, toxicokinetic modeling studies of exposures to these chemicals are typically performed on a single chemical basis. Furthermore, the attributes of available models for individual chemicals are commonly estimated specifically for the compound studied. As a result, the available models usually have parameters and even structures that are not consistent or compatible across the range of chemicals of concern. This fact precludes the systematic consideration of synergistic effects, and may also lead to inconsistencies in calculations of co-occurring exposures and corresponding risks. There is a need, therefore, for a consistent modeling framework that would allow the systematic study of cumulative risks from complex mixtures of contaminants. Methods A Generalized Toxicokinetic Modeling system for Mixtures (GTMM was developed and evaluated with case studies. The GTMM is physiologically-based and uses a consistent, chemical-independent physiological description for integrating widely varying toxicokinetic models. It is modular and can be directly "mapped" to individual toxicokinetic models, while maintaining physiological consistency across different chemicals. Interaction effects of complex mixtures can be directly incorporated into the GTMM. Conclusions The application of GTMM to different individual metals and metal compounds showed that it explains available observational data as well as replicates the results from models that have been optimized for individual chemicals. The GTMM also made it feasible to model toxicokinetics of complex, interacting mixtures of multiple metals and nonmetals in humans, based on available literature information. The GTMM provides a central component in the development of a "source

  2. Theory of phase equilibria for model mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants (United States)

    Dos Ramos, María Carolina; Blas, Felipe J.


    An extension of the SAFT-VR equation of state, the so-called hetero-SAFT approach [Y. Peng, H. Zhao, and C. McCabe, Molec. Phys. 104, 571 (2006)], is used to examine the phase equilibria exhibited by a number of model binary mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants. Despite the increasing recent interest in semifluorinated alkanes (or perfluoroalkylalkane diblock molecules), the phase behaviour of mixtures involving these molecules with n-alkanes or perfluoroalkanes is practically unknown from the experimental point of view. In this work, we use simple molecular models for n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock molecules to predict, from a molecular perspective, the phase behaviour of selected model mixtures of perfluoroalkylalkanes with n-alkanes and perfluoroalkanes. In particular, we focus our interest on the understanding of the microscopic conditions that control the liquid-liquid separation and the stabilization of these mixtures. n-Alkanes and perfluoroalkanes are modelled as tangentially bonded monomer segments with molecular parameters taken from the literature. The perfluoroalkylalkane diblock molecules are modelled as heterosegmented diblock chains, with parameters for the alkyl and perfluoroalkyl segments developed in earlier work. This simple approach, which was proposed in previous work [P. Morgado, H. Zhao, F. J. Blas, C. McCabe, L. P. N. Rebelo, and E. J. M. Filipe, J. Phys. Chem. B, 111, 2856], is now extended to describe model n-alkane (or perfluoroalkane) + perfluoroalkylalkane binary mixtures. We have obtained the phase behaviour of different mixtures and studied the effect of the molecular weight of n-alkanes and perfluoroalkanes on the type of phase behaviour observed in these mixtures. We have also analysed the effect of the number of alkyl and perfluoroalkyl chemical groups in the surfactant molecule on the phase behaviour. In addition to the usual vapour-liquid phase


    Institute of Scientific and Technical Information of China (English)

    Luo Hong; Pu Zhilin


    The existence and regularity of solutions to a model for the liquid mixture of 3He-4He are considered in this paper. First, it is proved that this system possesses a unique global weak solution in H1(Ω, C × R) by using the Galerkin method. Secondly, by using an iteration procedure and regularity estimates for the linear semigroups, it is proved that the model for the liquid mixture of 3He-4He has a unique solution in Hk(Ω, C × R) for all k ≥ 1.

  4. Non-racemic mixture model: a computational approach. (United States)

    Polanco, Carlos; Buhse, Thomas


    The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed-chirality peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified pair of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small ascending but not conclusive tendency in the l-amino acid over the d-amino acid profile for the resulting mixed-chirality hexamers in computer simulations of 100 peptide generations. The simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations, to observe any alteration that could indicate a drastic change in behavior. So far, our simulations lead to the assumption that under very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely to have been generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.

  5. A multiscale transport model for binary Lennard Jones mixtures in slit nanopores (United States)

    Bhadauria, Ravi; Aluru, N. R.


    We present a quasi-continuum multiscale hydrodynamic transport model for a one-dimensional isothermal, non-reacting binary mixture confined in slit-shaped nanochannels. We focus on the species transport equation, which includes the viscous dissipation and an interspecies diffusion term of the Maxwell-Stefan form. Partial viscosity variation is modeled by the van der Waals one-fluid approximation and the Local Average Density Method. We use friction boundary conditions where the wall-species friction parameter is computed using a novel species-specific Generalized Langevin Equation model. The transport model's accuracy is tested by predicting the velocity profiles of Lennard-Jones (LJ) methane-hydrogen and LJ methane-argon mixtures in graphene slit channels of different widths. The resulting slip length from the continuum model is found to be invariant with channel width for a fixed mixture molar concentration. The mixtures considered are observed to behave as a single-species pseudo-fluid, with the friction parameter displaying a linear dependence on the molar composition. The proposed model yields atomistic-level accuracy with continuum-scale efficiency.

  6. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data. (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz


    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers the finite mixture of NRCMs, which combines features of NRCMs with the idea of finite mixture (or latent class) models. The strength of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. The effectiveness of this model under realistic data-analytic conditions was examined with a Monte Carlo simulation study, carried out using an R routine specifically developed for this purpose. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.

  7. EEG Signal Classification With Super-Dirichlet Mixture Model

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Tan, Zheng-Hua; Prasad, Swati


    Classification of the electroencephalogram (EEG) signal is a challenging task in brain-computer interface systems. The marginalized discrete wavelet transform (mDWT) coefficients extracted from the EEG signals have been frequently used in research since they reveal features related to the … The mDWT coefficients are modelled by the Dirichlet distribution, and the distribution of the mDWT coefficients from more than one channel is described by a super-Dirichlet mixture model (SDMM). The Fisher ratio and the generalization error estimation are applied to select relevant channels, respectively. Compared to the state-of-the-art support vector machine (SVM) based classifier, the SDMM based classifier performs more stably and shows a promising improvement, with both channel selection strategies.

  8. Analysis and modeling of 3D complex modulus tests on hot and warm bituminous mixtures (United States)

    Pham, Nguyen Hoang; Sauzéat, Cédric; Di Benedetto, Hervé; González-León, Juan A.; Barreto, Gilles; Nicolaï, Aurélia; Jakubowski, Marc


    This paper presents the results of laboratory testing of hot and warm bituminous mixtures containing Reclaimed Asphalt Pavement (RAP). Complex modulus measurements, using the tension-compression test on cylindrical specimens, were conducted to determine linear viscoelastic (LVE) behavior. Sinusoidal cyclic loadings, with a strain amplitude of approximately 50·10^-6, were applied at several temperatures (from -25 to +45 °C) and frequencies (from 0.03 Hz to 10 Hz). In addition to axial stresses and strains, radial strains were also measured. The complex modulus E ∗ and complex Poisson's ratio ν ∗ were then obtained in two perpendicular directions. Measured values in these two directions do not indicate anisotropy in Poisson's ratio. The time-temperature superposition principle (TTSP) was verified to a good approximation in one-dimensional (1D) and three-dimensional (3D) conditions for the same values of the shift factor. Experimental results were modeled using the 2S2P1D model previously developed at the University of Lyon/ENTPE. In addition, specific analysis showed that any damage created during the complex modulus test is very small and is equivalent to the effect of a temperature increase of about 0.25 °C.

  9. Land Cover Classification for Polarimetric SAR Images Based on Mixture Models

    Directory of Open Access Journals (Sweden)

    Wei Gao


    In this paper, two mixture models are proposed for modeling heterogeneous regions in single-look and multi-look polarimetric SAR images, along with their corresponding maximum likelihood classifiers for land cover classification. The classical Gaussian and Wishart models are suitable for modeling scattering vectors and covariance matrices from homogeneous regions, while their performance deteriorates for regions that are heterogeneous. By comparison, the proposed mixture models reduce the modeling error by expressing the data distribution as a weighted sum of multiple component distributions. For single-look and multi-look polarimetric SAR data, complex Gaussian and complex Wishart components are adopted, respectively. Model parameters are determined by employing the expectation-maximization (EM) algorithm. Two maximum likelihood classifiers are then constructed based on the proposed mixture models. These classifiers are assessed using polarimetric SAR images from the RADARSAT-2 sensor of the Canadian Space Agency (CSA), the AIRSAR sensor of the Jet Propulsion Laboratory (JPL) and the EMISAR sensor of the Technical University of Denmark (DTU). Experiment results demonstrate that the new models fit heterogeneous regions better than the classical models and are especially appropriate for extremely heterogeneous regions, such as urban areas. The overall accuracy of land cover classification is also improved due to the more refined modeling.

  10. Kinetic model for astaxanthin aggregation in water-methanol mixtures (United States)

    Giovannetti, Rita; Alibabaei, Leila; Pucciarelli, Filippo


    The aggregation of astaxanthin in hydrated methanol was kinetically studied in the temperature range from 10 °C to 50 °C, at different astaxanthin concentrations and solvent compositions. A kinetic model for the formation and transformation of astaxanthin aggregates is proposed. Spectrophotometric studies showed that monomeric astaxanthin decayed to H-aggregates that afterwards formed J-aggregates when the water content was 50% and the temperature lower than 20 °C; at higher temperatures, very stable J-aggregates were formed directly. The monomer formed very stable H-aggregates when the water content was greater than 60%; under these conditions H-aggregates decayed into J-aggregates only when the temperature was at least 50 °C. Through these findings it was possible to establish that the aggregation takes place through a two-step consecutive reaction with first-order kinetic constants, whose values depend on the solvent composition and temperature.
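    The two-step consecutive first-order scheme described above (monomer → H-aggregate → J-aggregate) has a standard closed-form solution; a minimal sketch in Python, with illustrative rate constants rather than the fitted values from the study:

```python
import math

def consecutive_first_order(m0, k1, k2, t):
    """Concentrations for M -> H -> J (two first-order steps) at time t.
    k1 (monomer -> H) and k2 (H -> J) are illustrative rate constants,
    not values reported in the paper."""
    m = m0 * math.exp(-k1 * t)
    h = m0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    j = m0 - m - h          # mass balance closes the system
    return m, h, j

# Monomer decays into H-aggregates, which slowly convert to J-aggregates
m, h, j = consecutive_first_order(1.0, k1=0.30, k2=0.05, t=10.0)
```

    The mass balance m + h + j = m0 holds at every time point, which is a quick sanity check on any fitted rate constants.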

  11. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS. (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A


    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias, or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and to conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine the informativeness of missing data with pattern mixture models and to use a missing data pattern in the analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of informative missingness to outcomes and to control for its effects. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.

  12. A Mechanistic Modeling Framework for Predicting Metabolic Interactions in Complex Mixtures (United States)

    Cheng, Shu


    Background: Computational modeling of the absorption, distribution, metabolism, and excretion of chemicals is now theoretically able to describe metabolic interactions in realistic mixtures of tens to hundreds of substances. That framework awaits validation. Objectives: Our objectives were to a) evaluate the conditions of application of such a framework, b) confront the predictions of a physiologically integrated model of benzene, toluene, ethylbenzene, and m-xylene (BTEX) interactions with observed kinetics data on these substances in mixtures and, c) assess whether improving the mechanistic description has the potential to lead to better predictions of interactions. Methods: We developed three joint models of BTEX toxicokinetics and metabolism and calibrated them using Markov chain Monte Carlo simulations and single-substance exposure data. We then checked their predictive capabilities for metabolic interactions by comparison with mixture kinetic data. Results: The simplest joint model (BTEX interacting competitively for cytochrome P450 2E1 access) gives qualitatively correct and quantitatively acceptable predictions (with at most 50% deviations from the data). More complex models with two pathways or back-competition with metabolites have the potential to further improve predictions for BTEX mixtures. Conclusions: A systems biology approach to large-scale prediction of metabolic interactions is advantageous on several counts and technically feasible. However, ways to obtain the required parameters need to be further explored. PMID:21835728

  13. Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits. (United States)

    Zhao, Fuman; Rogers, William J; Mannan, M Sam


    Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when the apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the lower flammability limit of a hydrocarbon mixture is quantitatively correlated to its final flame temperature under non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicate that CFT modeling represents the experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for Le Chatelier's law.
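    For reference, Le Chatelier's law mentioned in the abstract combines pure-component lower flammability limits as LFL_mix = 1 / Σ(y_i / LFL_i), where y_i are fuel mole fractions. A minimal sketch, using textbook LFL values for methane and propane as illustrative inputs (not the paper's mixtures):

```python
def le_chatelier_lfl(fractions, lfls):
    """Mixture lower flammability limit via Le Chatelier's law:
    LFL_mix = 1 / sum(y_i / LFL_i), with y_i the fuel mole fractions
    (normalized to 1) and LFL_i the pure-component limits in vol%."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(y / lfl for y, lfl in zip(fractions, lfls))

# Textbook LFLs (vol%): methane ~5.0, propane ~2.1 (illustrative inputs)
lfl_mix = le_chatelier_lfl([0.5, 0.5], [5.0, 2.1])
```

    For the 50/50 methane-propane example the mixture LFL falls between the two pure-component limits, as expected from the harmonic-mean form of the law.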

  14. A joint finite mixture model for clustering genes from independent Gaussian and beta distributed data

    Directory of Open Access Journals (Sweden)

    Yli-Harja Olli


    Abstract Background Cluster analysis has become a standard computational method for gene function discovery as well as for more general exploratory data analysis. A number of different approaches have been proposed for that purpose, out of which different mixture models provide a principled probabilistic framework. Cluster analysis is increasingly often supplemented with multiple data sources, and these heterogeneous information sources should be used as efficiently as possible. Results This paper presents a novel Beta-Gaussian mixture model (BGMM) for clustering genes based on Gaussian distributed and beta distributed data. The proposed BGMM can be viewed as a natural extension of the beta mixture model (BMM) and the Gaussian mixture model (GMM). The proposed BGMM method differs from other mixture model based methods in its integration of two different data types into a single and unified probabilistic modeling framework, which provides a more efficient use of multiple data sources than methods that analyze different data sources separately. Moreover, BGMM provides an exceedingly flexible modeling framework, since many data sources can be modeled as Gaussian or beta distributed random variables, and it can also be extended to integrate data that have other parametric distributions, which adds even more flexibility to this model-based clustering framework. We developed three types of estimation algorithms for BGMM, the standard expectation maximization (EM) algorithm, an approximated EM and a hybrid EM, and propose to tackle the model selection problem with well-known model selection criteria, for which we test the Akaike information criterion (AIC), a modified AIC (AIC3), the Bayesian information criterion (BIC), and the integrated classification likelihood-BIC (ICL-BIC). Conclusion Performance tests with simulated data show that combining two different data sources into a single joint mixture model greatly improves the clustering …

  15. Statistical-thermodynamic model for light scattering from eye lens protein mixtures (United States)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.


    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations, where light scattering intensity depends on molecular weights and virial coefficients, to realistically high-concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence the highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism: prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, the non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model …

  16. Quantitative structure-retardation factor relationship of protein amino acids in different solvent mixtures for normal-phase thin-layer chromatography. (United States)

    Yousefinejad, Saeed; Honarasa, Fatemeh; Saeed, Negar


    A quantitative predictive/descriptive model was proposed for the retardation factors of protein amino acids in normal-phase thin-layer chromatography. The experimental retardation factors of 126 chromatographic mixtures (21 protein amino acids in different mobile phases) were used as the dependent variable. The matrix of independent variables of the model was built using structural descriptors of the amino acids and empirical parameters of the solvents of the applied mobile phases. After variable selection, a five-parameter model was proposed for the retardation factor of amino acids, which covered about 84 and 77% of the variance of the data in training and cross-validation, respectively. The correlation coefficient of the external test set was 0.80, which shows the prediction potential of the proposed model as well as its good applicability domain, which was checked using a standardized residual-leverage plot. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Shell model and spectroscopic factors

    Energy Technology Data Exchange (ETDEWEB)

    Poves, P. [Madrid Univ. Autonoma and IFT, UAM/CSIC, E-28049 (Spain)


    In these lectures, I introduce the notion of spectroscopic factor in the shell model context. A brief review is given of the present status of the large scale applications of the Interacting Shell Model. The spectroscopic factors and the spectroscopic strength are discussed for nuclei in the vicinity of magic closures and for deformed nuclei. (author)

  18. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd


    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  19. The Impact of Misspecifying Class-Specific Residual Variances in Growth Mixture Models (United States)

    Enders, Craig K.; Tofighi, Davood


    The purpose of this study was to examine the impact of misspecifying a growth mixture model (GMM) by assuming that Level-1 residual variances are constant across classes, when they do, in fact, vary in each subpopulation. Misspecification produced bias in the within-class growth trajectories and variance components, and estimates were…

  20. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS


    The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite mixture model framework …

  1. Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data (United States)

    Lu, Zhenqiu Laura; Zhang, Zhiyong; Lubke, Gitta


    "Growth mixture models" (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class…

  2. Estimating Lion Abundance using N-mixture Models for Social Species. (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E


    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on the group response to call-ins and on individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
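    The basic N-mixture likelihood underlying such abundance estimates marginalizes the latent site abundance N over a (truncated) Poisson prior. A simplified single-site sketch, without the paper's hierarchical group-response layer; all numbers are illustrative:

```python
from math import comb, exp

def site_likelihood(counts, lam, p, n_max=100):
    """Marginal likelihood of repeated counts at one site under the basic
    N-mixture model: N ~ Poisson(lam) (truncated at n_max), and each
    repeat count y_t | N ~ Binomial(N, p). The paper's hierarchical
    group-response observation process is not included."""
    total = 0.0
    prior = exp(-lam)          # Poisson pmf at n = 0
    for n in range(n_max + 1):
        if n > 0:
            prior *= lam / n   # recursive Poisson pmf update
        if n >= max(counts):   # N cannot be below the largest count
            lik = 1.0
            for y in counts:
                lik *= comb(n, y) * p ** y * (1.0 - p) ** (n - y)
            total += prior * lik
    return total

# Illustrative repeat counts at one site, with assumed lambda and p
L = site_likelihood([3, 2, 4], lam=5.0, p=0.6)
```

    Maximizing this marginal likelihood over lam and p (here evaluated at fixed values) is what separates true abundance from imperfect detection.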

  3. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.


    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  4. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models. (United States)

    Wu, Jiao; Liu, Fang; Jiao, L C; Wang, Xiaodong; Hou, Biao


    Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption for the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Many multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to image estimation problems. In this paper, the statistical structures of the wavelet coefficients are considered for CS reconstruction of images that are sparse or compressible in the wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms.

  5. Growth of Saccharomyces cerevisiae CBS 426 on mixtures of glucose and succinic acid: a model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, J.A.B.A.F.; Koellmann, C.J.W.; Dekkers-de Kok, H.E.; Roels, J.A.


    Saccharomyces cerevisiae CBS 426 was grown in continuous culture in a defined medium with a mixture of glucose and succinic acid as the carbon source. Growth on succinic acid was possible after long adaptation periods. The flows of glucose, succinic acid, oxygen, carbon dioxide, and biomass to and from the system were measured. It proved necessary to expand our previous model to accommodate the active transport of succinic acid by the cell. The values found for the efficiency of the oxidative phosphorylation (PIO) and the amount of ATP needed for production of biomass from monomers gave the same values as found for substrate mixtures taken up passively. (Refs. 13).

  6. Numerical Investigation of Nanofluid Thermocapillary Convection Based on Two-Phase Mixture Model (United States)

    Jiang, Yanni; Xu, Zelin


    Numerical investigation of nanofluid thermocapillary convection in a two-dimensional rectangular cavity was carried out, in which the two-phase mixture model was used to simulate the nanoparticles-fluid mixture flow, and the influences of volume fraction of nanoparticles on the flow characteristics and heat transfer performance were discussed. The results show that, with the increase of nanoparticle volume fraction, thermocapillary convection intensity weakens gradually, and the heat conduction effect strengthens; meanwhile, the temperature gradient at free surface increases but the free surface velocity decreases gradually. The average Nusselt number of hot wall and the total entropy generation decrease with nanoparticle volume fraction increasing.

  7. Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling (United States)

    Yeom, Seokwon


    Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with the expectation-maximization (EM) method, assuming the image histogram follows a Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to an IR image capturing several humans at night.
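    A single level of the histogram-based EM step can be sketched as a two-component 1-D Gaussian mixture fit. The synthetic intensities and initial parameters below are illustrative, not the paper's data or its full k-means-initialized multi-level pipeline:

```python
import random, math

random.seed(0)
# Synthetic 1-D "pixel intensities": dim background plus bright foreground
data = [random.gauss(50.0, 5.0) for _ in range(300)] + \
       [random.gauss(120.0, 8.0) for _ in range(100)]

# Initial mixture parameters (illustrative guesses)
w, mu, var = [0.5, 0.5], [40.0, 100.0], [100.0, 100.0]

def pdf(x, m, v):
    # Univariate Gaussian density
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

for _ in range(50):
    # E-step: posterior responsibility of each component for each pixel
    r = []
    for x in data:
        d = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
        t = d[0] + d[1]
        r.append([dk / t for dk in d])
    # M-step: re-estimate weights, means, and variances
    for k in range(2):
        nk = sum(row[k] for row in r)
        w[k] = nk / len(data)
        mu[k] = sum(row[k] * x for row, x in zip(r, data)) / nk
        var[k] = sum(row[k] * (x - mu[k]) ** 2 for row, x in zip(r, data)) / nk
```

    Thresholding pixels by which component has the higher responsibility then yields the background/foreground segmentation for that level.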

  8. GIS disconnector model performance with SF{sub 6}/N{sub 2} mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Gaillac, C. [Schneider Electric (France)


    The lightning impulse breakdown voltage of a model 145 kV GIS disconnector was studied using SF{sub 6}/N{sub 2} mixtures. Mixtures with between 0% and 15% SF{sub 6} were used. Sphere-sphere, point-plane and sphere-rod geometries were studied. In most cases, breakdown strength increased with both SF{sub 6} content and pressure. In the case of surface flashover, a pressure of about 8 bar with 15% SF{sub 6} gave roughly equivalent results to 4 bar of pure SF{sub 6}. (author)

  9. The cyclical component factor model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Hansen, Henrik; Smidt, John

    Forecasting using factor models based on large data sets has received ample attention due to the models' ability to increase forecast accuracy with respect to a range of key macroeconomic variables in the US and the UK. However, forecasts based on such factor models do not uniformly outperform the simple autoregressive model when using data from other countries. In this paper we propose to estimate the factors based on the pure cyclical components of the series entering the large data set. Monte Carlo evidence and an empirical illustration using Danish data show that this procedure can indeed …

  10. Development and application of a multimetal multibiotic ligand model for assessing aquatic toxicity of metal mixtures. (United States)

    Santore, Robert C; Ryan, Adam C


    A multimetal, multiple binding site version of the biotic ligand model (mBLM) has been developed for predicting and explaining the bioavailability and toxicity of mixtures of metals to aquatic organisms. The mBLM was constructed by combining information from single-metal BLMs to preserve compatibility between the single-metal and multiple-metal approaches. The toxicities from individual metals were predicted by assuming additivity of the individual responses. Mixture toxicity was predicted based on both dissolved metal and mBLM-normalized bioavailable metal. Comparison of the 2 prediction methods indicates that metal mixtures frequently appear to have greater toxicity than an additive estimation of individual effects on a dissolved metal basis. However, on an mBLM-normalized basis, mixtures of metals appear to be additive or less than additive. This difference results from interactions between metals and ligands in solutions including natural organic matter, processes that are accounted for in the mBLM. As part of the mBLM approach, a technique for considering variability was developed to calculate confidence bounds (called response envelopes) around the central concentration-response relationship. Predictions using the mBLM and response envelope were compared with observed toxicity for a number of invertebrate and fish species. The results show that the mBLM is a useful tool for considering bioavailability when assessing the toxicity of metal mixtures.
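    The response-additivity assumption described above can be illustrated with the standard independent-action formula, combined effect = 1 − Π(1 − e_i). This is a generic sketch of that additivity rule only; the mBLM's bioavailability normalization (metal speciation, organic-matter binding, competition at biotic ligand sites) is not modeled here, and the effect levels are hypothetical:

```python
def response_addition(effects):
    """Combined fractional effect of independently acting toxicants:
    1 - product(1 - e_i), for individual fractional effects e_i in [0, 1].
    A generic additivity-of-responses sketch, not the mBLM itself."""
    survival = 1.0
    for e in effects:
        survival *= (1.0 - e)   # fraction unaffected by each toxicant
    return 1.0 - survival

# Two metals individually causing 20% and 30% effect (hypothetical)
mix_effect = response_addition([0.20, 0.30])
```

    Comparing such an additive prediction against observed mixture toxicity, on both a dissolved-metal and a bioavailability-normalized basis, is the kind of contrast the abstract describes.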

  11. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials. (United States)

    Edison, J R; Monson, P A


    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  12. Sleep-promoting effects of the GABA/5-HTP mixture in vertebrate models. (United States)

    Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo


    The aim of this study was to investigate the sleep-promoting effect of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) on sleep quality and quantity in vertebrate models. The pentobarbital-induced sleep test and electroencephalogram (EEG) analysis were applied to investigate the sleep latency, duration, total sleeping time and sleep quality of the two amino acids and the GABA/5-HTP mixture. In addition, real-time PCR and HPLC analyses were applied to analyze the signaling pathway. The GABA/5-HTP mixture significantly regulated sleep latency and duration, and the results indicate that the GABA/5-HTP mixture modulates both GABAergic and serotonergic signaling. Moreover, the sleep architecture can be controlled by the regulation of the GABAA receptor and GABA content with 5-HTP.

  13. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad


    Mixtures of endocrine disrupting chemicals may cause adverse effects even though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats for which data about in vivo endocrine disrupting effects and information about human exposures were available, including phthalates, pesticides, UV-filters, bisphenol A, parabens and the drug paracetamol. The mixture ratio was chosen to reflect high end human intakes. To make decisions about the dose levels for the studies in the rat, we employed the point of departure index (PODI) approach, which sums up ratios between estimated exposure levels and no-observed-adverse-effect-level (NOAEL) values of individual substances. For high end human exposures to the 13 selected chemicals, we calculated a PODI of 0.016. As only a PODI …
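    The PODI computation referred to in the abstract is a simple sum of exposure/NOAEL ratios. A minimal sketch with hypothetical exposures and NOAELs, not the study's 13-chemical data set:

```python
def podi(exposures, noaels):
    """Point of departure index: sum of exposure/NOAEL ratios across
    the chemicals in the mixture. A value well below 1 indicates that
    combined high-end exposure sits below the individual no-effect
    levels. All inputs here are hypothetical illustrations."""
    return sum(e / n for e, n in zip(exposures, noaels))

# Three hypothetical chemicals (exposure and NOAEL in mg/kg bw/day)
index = podi([0.010, 0.002, 0.004], [5.0, 1.0, 0.5])
```

    Scaling a mixture dose up or down simply scales the PODI linearly, which is what makes the index convenient for choosing study dose levels.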

  14. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties in the production of edible oils and biodiesel are observed, such as the availability of experimental data and reliable prediction. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in the open literature was observed, which makes it necessary to develop reliable predictive models from limited data. One of the first steps of this project was the creation of a database containing properties of mixtures involved in tasks related to process design, simulation, and optimization as well as the design of chemicals-based products. This database was combined with the existing lipids database of pure component properties. To contribute to the missing data, measurements of isobaric vapour-liquid equilibrium (VLE) data of two binary mixtures at two different pressures were performed using Differential Scanning …

  15. Finite mixture models for the computation of isotope ratios in mixed isotopic samples (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas


    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slope as the estimate of the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • The number of points belonging to each ratio group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control …
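    The fitting procedure sketched above (regression lines through subgroups of the data, with slopes as ratio estimates) can be illustrated with a basic two-component mixture-of-regressions EM. The data, slopes and initial values below are synthetic and illustrative, and the group-dropping control parameter is omitted:

```python
import random, math

random.seed(1)
# Synthetic signal pairs (x, y) falling on two lines through the origin;
# the slopes 0.3 and 1.5 stand in for two isotope ratios (illustrative,
# not values from the study)
xs = [random.uniform(1.0, 10.0) for _ in range(200)]
slopes = [0.3] * 120 + [1.5] * 80
ys = [s * x + random.gauss(0.0, 0.1) for s, x in zip(slopes, xs)]

# Initial guesses: weights, slopes, shared noise variance (assumed)
w, b, s2 = [0.5, 0.5], [0.5, 2.0], 4.0
for _ in range(50):
    # E-step: responsibility of each line for each point
    resp = []
    for x, y in zip(xs, ys):
        d = [w[k] * math.exp(-(y - b[k] * x) ** 2 / (2.0 * s2)) for k in range(2)]
        t = d[0] + d[1]
        resp.append([dk / t for dk in d])
    # M-step: weighted least-squares slope per line, shared noise variance
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(xs)
        b[k] = (sum(r[k] * x * y for r, x, y in zip(resp, xs, ys))
                / sum(r[k] * x * x for r, x in zip(resp, xs)))
    s2 = max(sum(r[k] * (y - b[k] * x) ** 2
                 for r, x, y in zip(resp, xs, ys)
                 for k in range(2)) / len(xs), 1e-6)
```

    After convergence, the two fitted slopes in `b` are the mixture's ratio estimates; in the full method, components whose membership falls below the control parameter would be dropped between iterations.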

  16. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images. (United States)

    Li, Xingyu; Plataniotis, Konstantinos N


    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
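As a toy illustration of the circular mixture idea (not the paper's full saturation-weighted solution), a two-component von Mises mixture can be fitted to hue angles by EM; the synthetic clusters and the closed-form concentration update below are standard textbook choices, not taken from the paper:

```python
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(1)
# Two synthetic "stain" hue clusters on the circle, in radians (assumed data)
theta = np.concatenate([rng.vonmises(0.5, 8.0, 400), rng.vonmises(2.5, 8.0, 400)])

K = 2
mu = np.array([0.0, 3.0])          # initial mean directions
kappa = np.array([1.0, 1.0])       # initial concentrations
w = np.full(K, 1.0 / K)

for _ in range(100):
    # E-step: von Mises densities -> soft cluster responsibilities
    dens = np.exp(kappa * np.cos(theta[:, None] - mu)) / (2 * np.pi * i0(kappa))
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted circular means; kappa via a standard approximation
    C = (r * np.cos(theta)[:, None]).sum(axis=0)
    S = (r * np.sin(theta)[:, None]).sum(axis=0)
    n_k = r.sum(axis=0)
    mu = np.arctan2(S, C)
    Rbar = np.sqrt(C**2 + S**2) / n_k
    kappa = Rbar * (2.0 - Rbar**2) / (1.0 - Rbar**2)
    w = n_k / len(theta)

print(np.sort(mu))   # near the two assumed hue centers, 0.5 and 2.5
```

The periodic density handles the wrap-around of hue at 0/2π that a Gaussian mixture on raw hue values would mishandle, which is the motivation the abstract gives for a circular model.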

  17. M3B: A coarse grain model for the simulation of oligosaccharides and their water mixtures. (United States)

    Goddard, William A.; Cagin, Tahir; Molinero, Valeria


    Water and sugar dynamics in concentrated carbohydrate solutions are of utmost importance in food and pharmaceutical technology. Water diffusion in concentrated sugar mixtures can be slowed down by many orders of magnitude with respect to bulk water [1], making atomistic simulation of these systems over the required time scales extremely expensive. We present a coarse grain model (M3B) for malto-oligosaccharides and their water mixtures. M3B speeds up molecular dynamics simulations by a factor of about 500-1000 with respect to the atomistic model while retaining enough detail to be mapped back to the atomistic structures with low uncertainty in the positions. The former characteristic allows the study of water and carbohydrate dynamics in supercooled and polydisperse mixtures with characteristic time scales above the nanosecond. The latter makes M3B well suited for combined atomistic-mesoscale simulations. We present the parameterization of the M3B force field for water and a family of technologically relevant glucose oligosaccharides, the alpha-(1->4) glucans. The coarse grain force field is parameterized entirely from atomistic simulations to reproduce the density, cohesive energy and structural parameters of amorphous sugars. We show that M3B is capable of describing the helical character of the higher oligosaccharides, and that the water structure in low-moisture mixtures shows the same features obtained with the atomistic and M3B models. [1] R. Parker, S.G. Ring: Carbohydr. Res. 273 (1995) 147-55.

  18. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: a comparative study. (United States)

    Naguib, Ibrahim A; Darwish, Hany W


    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm of each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (a genetic algorithm (GA)). To ground the comparison in a sensible way, the methods are used for the stability indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling of the UV spectral data. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results illustrate the problem of nonlinearity and how models like SVR and ANN can handle it. The methods demonstrate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, using cheap and easy-to-handle instruments like the UV spectrophotometer.
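A rough sketch of this style of multivariate calibration, using scikit-learn's SVR as a stand-in for the paper's linear SVR model; the spectra, band shapes, and noise level below are synthetic assumptions (only the 25-training/5-test split and the 6-component setting mirror the abstract):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(4)
# Synthetic stand-in for the UV data: 6 overlapping Gaussian absorption bands
wl = np.linspace(200, 400, 100)                     # 100 wavelengths
centers = np.linspace(240, 360, 6)                  # one band per component
bands = np.exp(-((wl[None, :] - centers[:, None]) / 25.0) ** 2)

conc_train = rng.uniform(0.0, 1.0, (25, 6))         # 25 training mixtures
conc_test = rng.uniform(0.0, 1.0, (5, 6))           # 5 validation mixtures
A_train = conc_train @ bands + rng.normal(0.0, 0.002, (25, 100))
A_test = conc_test @ bands + rng.normal(0.0, 0.002, (5, 100))

# One linear SVR per analyte, trained on the full spectra
model = MultiOutputRegressor(SVR(kernel="linear", C=10.0, epsilon=0.001))
model.fit(A_train, conc_train)
pred = model.predict(A_test)
print(np.abs(pred - conc_test).mean())              # mean absolute error
```

Because SVR is a single-output method, a wrapper such as `MultiOutputRegressor` fits one model per component; an ANN, by contrast, can predict all six concentrations with a single network.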

  19. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P


    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé's quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and the multi-response model. Robust mixture designs and mixture designs in blocks have also been reviewed. Moreover, some applications of mixture desig...

  20. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Belmonte-Beitia, Juan [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Perez-Garcia, Victor M. [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Vekslerchik, Vadym [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain)


    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves.

  1. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli


    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM) (Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see, Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  2. Mixture models of geometric distributions in genomic analysis of inter-nucleotide distances

    Directory of Open Access Journals (Sweden)

    Adelaide Valente Freitas


    Full Text Available The mapping defined by inter-nucleotide distances (InD) provides a reversible numerical representation of the primary structure of DNA. If nucleotides were independently placed along the genome, a finite mixture model of four geometric distributions could be fitted to the InD, where the four marginal distributions would be the expected distributions of the four nucleotide types. We analyze a finite mixture model of geometric distributions (f_2), with marginals not explicitly tied to the nucleotide types, as an approximation to the InD. We use BIC in the composite likelihood framework for choosing the number of components of the mixture, and the EM algorithm for estimating the model parameters. Based on divergence profiles, an experimental study was carried out on the complete genomes of 45 species to evaluate f_2. Although the proposed model is not suited to the InD, our analysis shows that divergence profiles involving the empirical distribution of the InD are also exhibited by profiles involving f_2. This suggests that statistical regularities of the InD can be described by the model f_2. Some characteristics of the DNA sequences captured by the model f_2 are illustrated. In particular, clusterings of subgroups of eukaryotes (primates, mammals, animals and plants) are detected.
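A minimal sketch of fitting such a mixture of geometric distributions by EM, with a BIC-type score of the kind the abstract uses for choosing the number of components; the two-component synthetic data and starting values are assumptions for illustration, not genomic InD data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "inter-nucleotide distances": two geometric components
# (assumed parameters; real InD would come from a genome sequence)
d = np.concatenate([rng.geometric(0.5, 500), rng.geometric(0.05, 500)])

K = 2
p = np.array([0.3, 0.1])           # initial success probabilities
w = np.full(K, 1.0 / K)

for _ in range(300):
    # E-step: geometric pmf on support {1, 2, ...} for each component
    pmf = (1.0 - p) ** (d[:, None] - 1) * p
    r = w * pmf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for a geometric mixture
    n_k = r.sum(axis=0)
    p = n_k / (r * d[:, None]).sum(axis=0)
    w = n_k / len(d)

# BIC-type score (2K - 1 free parameters), comparable across choices of K
pmf = (1.0 - p) ** (d[:, None] - 1) * p
loglik = np.log((w * pmf).sum(axis=1)).sum()
bic = -2.0 * loglik + (2 * K - 1) * np.log(len(d))
print(np.sort(p), bic)
```

Refitting with K = 1, 2, 3, ... and keeping the K that minimises the score mimics the model-selection step described in the abstract.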


    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang


    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of this model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct current and alternating current coefficients. A parameter estimation method based on the expectation maximization algorithm is used to estimate the Laplace distribution center of the direct current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating current coefficients. Experimental results suggest that the proposed mixture correlation noise model can accurately describe the heavy tail and sudden changes of the noise at high rate, and achieves a significant improvement in coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).

  4. Applicability of linearized Dusty Gas Model for multicomponent diffusion of gas mixtures in porous solids

    Directory of Open Access Journals (Sweden)

    Marković Jelena


    Full Text Available The transport of gaseous components through porous media can be described by the well-known Fick model and its modifications. It is also known that Fick's law is not suitable for predicting the fluxes in multicomponent gas mixtures, except for binary mixtures. This model is still frequently used in chemical engineering because of its simplicity. Unfortunately, besides Fick's model there is no generally accepted model for mass transport through porous media (membranes, catalysts, etc.). Numerous studies on transport through porous media reveal that the Dusty Gas Model (DGM) is superior in its ability to predict fluxes in multicomponent mixtures. Its wider application is limited by the more complicated calculation procedures compared to Fick's model. It should be noted that there have been efforts to simplify the DGM in order to obtain satisfactorily accurate results. In this paper the linearized DGM, as the simplest form of the DGM, is tested under conditions of zero system pressure drop, small pressure drop, and different temperatures. Published experimental data are used in testing the accuracy of the linearized procedure. It is shown that this simplified procedure is accurate enough compared to the standard, more complicated calculations.

  5. Comparison of activity coefficient models for atmospheric aerosols containing mixtures of electrolytes, organics, and water (United States)

    Tong, Chinghang; Clegg, Simon L.; Seinfeld, John H.

    Atmospheric aerosols generally comprise a mixture of electrolytes, organic compounds, and water. Determining the gas-particle distribution of volatile compounds, including water, requires equilibrium or mass transfer calculations, at the heart of which are models for the activity coefficients of the particle-phase components. We evaluate here the performance of four recent activity coefficient models developed for electrolyte/organic/water mixtures typical of atmospheric aerosols. Two of the models, the CSB model [Clegg, S.L., Seinfeld, J.H., Brimblecombe, P., 2001. Thermodynamic modelling of aqueous aerosols containing electrolytes and dissolved organic compounds. Journal of Aerosol Science 32, 713-738] and the aerosol diameter dependent equilibrium model (ADDEM) [Topping, D.O., McFiggans, G.B., Coe, H., 2005. A curved multi-component aerosol hygroscopicity model framework: part 2—including organic compounds. Atmospheric Chemistry and Physics 5, 1223-1242] treat ion-water and organic-water interactions but do not include ion-organic interactions; these can be referred to as "decoupled" models. The other two models, reparameterized Ming and Russell model 2005 [Raatikainen, T., Laaksonen, A., 2005. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest. Atmospheric Chemistry and Physics 5, 2475-2495] and X-UNIFAC.3 [Erdakos, G.B., Change, E.I., Pandow, J.F., Seinfeld, J.H., 2006. Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water—Part 3: Organic compounds, water, and ionic constituents by consideration of short-, mid-, and long-range effects using X-UNIFAC.3. Atmospheric Environment 40, 6437-6452], include ion-organic interactions; these are referred to as "coupled" models. We address the question—Does the inclusion of a treatment of ion-organic interactions substantially improve the performance of the coupled models over

  6. A Note Comparing Component-Slope, Scheffé, and Cox Parameterizations of the Linear Mixture Experiment Model

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.


    A mixture experiment involves combining two or more components in various proportions and collecting data on one or more responses. A linear mixture model may adequately represent the relationship between a response and mixture component proportions and be useful in screening the mixture components. The Scheffé and Cox parameterizations of the linear mixture model are commonly used for analyzing mixture experiment data. With the Scheffé parameterization, the fitted coefficient for a component is the predicted response at that pure component (i.e., single-component mixture). With the Cox parameterization, the fitted coefficient for a mixture component is the predicted difference in response at that pure component and at a pre-specified reference composition. This paper presents a new component-slope parameterization, in which the fitted coefficient for a mixture component is the predicted slope of the linear response surface along the direction determined by that pure component and at a pre-specified reference composition. The component-slope, Scheffé, and Cox parameterizations of the linear mixture model are compared and their advantages and disadvantages are discussed.
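Stated compactly (our paraphrase of the abstract, with η(·) denoting the predicted response, e_i the i-th pure component, and s the pre-specified reference composition):

```latex
% Scheffé: coefficient = predicted response at pure component i
E(y) = \sum_{i=1}^{q} \beta_i^{S} x_i , \qquad \sum_{i=1}^{q} x_i = 1 ,
\qquad \beta_i^{S} = \eta(e_i)

% Cox: coefficient = difference between pure component i and reference blend s
\beta_i^{C} = \eta(e_i) - \eta(s)
```

The new component-slope coefficients are instead the slopes of the linear surface at s in the direction of each pure component, as described in the abstract; all three parameterizations describe the same fitted surface and differ only in the interpretation of the coefficients.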

  7. Modeling of the Thermodynamics of the Acetic Acid−Water Mixture Using the Cubic-Plus-Association Equation of State

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Kontogeorgis, Georgios; Behrens, Paul K.


    density, enthalpy of vaporization, and vapor-phase compressibility factor data. The CPA-HV parameters have been fitted to, among others, experimental vapor compressibility factor data and experimental relative volatility data at different temperature ranges. The purpose of the work was to investigate...... whether the CPA-HV model can describe the vapor−liquid equilibrium for acetic acid−water over a temperature range of 200 K and at the same time represent the behavior of pure acetic acid and acetic acid−water mixtures with respect to enthalpies of vaporization and compressibility factors. It is shown...... that satisfactory results are overall obtained, but if an excellent match is needed over the whole temperature range, then different interaction parameters need to be used at the various temperature ranges....

  8. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;


    , antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...

  9. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models (United States)

    Liu, Junhui


    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  10. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)


    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.

  11. Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples

    CERN Document Server

    Melchior, Peter


    We extend the common mixtures-of-Gaussians density estimation approach to account for a known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing Expectation-Maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at
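The core problem GMMis addresses can be seen in a small sketch: a maximum-likelihood fit that ignores truncation is biased toward the retained region. The one-dimensional cut below is an invented example, not the paper's setup:

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 100_000)      # complete (unobservable) sample
obs = x[x > -0.5]                      # truncation: a survey-style cut at -0.5

naive_mean = obs.mean()                # ML estimate that ignores the truncation

# Expected bias for a unit normal truncated at t: E[X | X > t] = phi(t) / (1 - Phi(t))
t = -0.5
phi = exp(-t * t / 2.0) / sqrt(2.0 * pi)
Phi = 0.5 * (1.0 + erf(t / sqrt(2.0)))
print(naive_mean, phi / (1.0 - Phi))   # both near 0.51, although the true mean is 0
```

A truncation-aware EM of the kind the abstract describes corrects exactly this kind of bias by imputing the unobserved samples from the current model at each iteration.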

  12. Modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C

    Energy Technology Data Exchange (ETDEWEB)

    Necula, H.; Badea, A. [Universite Politecnica de Bucarest (Romania). Faculte d' Energetique; Lallemand, M. [INSA, Villeurbanne (France). Centre de Thermique de Lyon; Marvillet, C. [CEA-Grenoble (France)


    This study concerns the steady state modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C. In this local type model, the control volumes are a function of the geometric configuration of the evaporator in which baffles are fitted. The validation of the model has been made by comparison between theoretical and experimental results obtained from an experimental investigation with a refrigerating machine. For test conditions, the flow pattern has been identified from a flow pattern map as being stratified. Theoretical results show the effect of different parameters such as the saturation pressure, the inlet quality, etc. on the local variables (temperature, slip ratio). The effect of leakage on the mixture composition has also been investigated. (author)

  13. A lattice traffic model with consideration of preceding mixture traffic information

    Institute of Scientific and Technical Information of China (English)

    Li Zhi-Peng; Liu Fu-Qiang; Sun Jian


    In this paper, the lattice model is presented, incorporating not only site information about preceding cars but also the relative currents in front. We derive the stability condition of the extended model by considering a small perturbation around the homogeneous flow solution, and find that an improvement in the stability of traffic flow is obtained by taking preceding mixture traffic information into account. Direct simulations also confirm that the traffic jam can be suppressed efficiently by considering the relative currents ahead, just as when incorporating site information in front. Moreover, from the nonlinear analysis of the extended models, the dependence of the propagating kink solutions for traffic jams on preceding mixture traffic information is obtained by deriving the modified KdV equation near the critical point using the reductive perturbation method.

  14. A Bayesian threshold-normal mixture model for analysis of a continuous mastitis-related trait. (United States)

    Ødegård, J; Madsen, P; Gianola, D; Klemetsdal, G; Jensen, J; Heringstad, B; Korsgaard, I R


    Mastitis is associated with elevated somatic cell count in milk, inducing a positive correlation between milk somatic cell score (SCS) and the absence or presence of the disease. In most countries, selection against mastitis has focused on selecting parents with genetic evaluations that have low SCS. Univariate or multivariate mixed linear models have been used for statistical description of SCS. However, an observation of SCS can be regarded as drawn from a 2- (or more) component mixture defined by the (usually) unknown health status of a cow at the test-day on which SCS is recorded. A hierarchical 2-component mixture model was developed, assuming that the health status affecting the recorded test-day SCS is completely specified by an underlying liability variable. Based on the observed SCS, inferences can be drawn about disease status and parameters of both SCS and liability to mastitis. The prior probability of putative mastitis was allowed to vary between subgroups (e.g., herds, families), by specifying fixed and random effects affecting both SCS and liability. Using simulation, it was found that a Bayesian model fitted to the data yielded parameter estimates close to their true values. The model provides selection criteria that are more appealing than selection for lower SCS. The proposed model can be extended to handle a wide range of problems related to genetic analyses of mixture traits.

  15. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)


    In the last decades, Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomass. However, few researchers consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of the reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, the kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is based in part on the sorbent's capacity for reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  16. Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data

    Directory of Open Access Journals (Sweden)

    Rachel Carroll


    Full Text Available Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.

  17. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan


    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... a simplified gradient theory (SGT) model for computing surface tensions. With this model, it is not required to solve the time-consuming density profile equations of the gradient theory model. The SRK EOS was applied to calculate the properties of the homogeneous fluid. First, the SGT model was used to predict......

  18. Analysis of Two-sample Censored Data Using a Semiparametric Mixture Model

    Institute of Scientific and Technical Information of China (English)

    Gang Li; Chien-tai Lin


    In this article we study a semiparametric mixture model for the two-sample problem with right censored data. The model implies that the densities for the continuous outcomes are related by a parametric tilt but otherwise unspecified. It provides a useful alternative to the Cox (1972) proportional hazards model for the comparison of treatments based on right censored survival data. We propose an iterative algorithm for the semiparametric maximum likelihood estimates of the parametric and nonparametric components of the model. The performance of the proposed method is studied using simulation. We illustrate our method in an application to melanoma.

  19. Cerebrolysin, a mixture of neurotrophic factors induces marked neuroprotection in spinal cord injury following intoxication of engineered nanoparticles from metals. (United States)

    Menon, Preeti Kumaran; Muresanu, Dafin Fior; Sharma, Aruna; Mössler, Herbert; Sharma, Hari Shanker


    Spinal cord injury (SCI) is one of the world's most devastating conditions, for which there is no effective treatment to date. Several studies suggest that nanoparticles could adversely influence the pathology of SCI and thereby alter the efficacy of many neuroprotective agents. Thus, there is an urgent need to find suitable therapeutic agents that could minimize cord pathology following trauma with nanoparticle intoxication. Our laboratory has been engaged for the last 7 years in finding suitable therapeutic strategies that could equally reduce cord pathology in normal and in nanoparticle-treated animal models of SCI. We observed that engineered nanoparticles from metals, e.g., aluminum (Al), silver (Ag) and copper (Cu) (50-60 nm), when administered in rats daily for 7 days (50 mg/kg, i.p.), resulted in exacerbation of cord pathology after trauma that correlated well with breakdown of the blood-spinal cord barrier (BSCB) to serum proteins. The entry of plasma proteins into the cord leads to edema formation and neuronal damage. Thus, future drugs should be designed in such a way as to be effective even when the SCI is influenced by nanoparticles. Previous research suggests that a suitable combination of neurotrophic factors could induce marked neuroprotection in SCI in normal animals. Thus, we examined the effects of a new drug, cerebrolysin, which is a mixture of different neurotrophic factors, e.g., brain-derived neurotrophic factor (BDNF), glial cell line-derived neurotrophic factor (GDNF), nerve growth factor (NGF), ciliary neurotrophic factor (CNTF) and other peptide fragments, to treat normal or nanoparticle-treated rats after SCI. Our observations showed that cerebrolysin (2.5 ml/kg, i.v.) before SCI resulted in good neuroprotection in normal animals, whereas nanoparticle-treated rats required a higher dose of the drug (5.0 ml/kg, i.v.) to induce comparable neuroprotection in the cord after SCI. Cerebrolysin also reduced spinal cord water content, leakage of plasma proteins

  20. Neck/shoulder and back pain in new graduate nurses: A growth mixture modeling analysis. (United States)

    Lövgren, Malin; Gustavsson, Petter; Melin, Bo; Rudman, Ann


    Although it is well known that musculoskeletal disorders are common among registered nurses, little longitudinal research has been conducted to examine this problem from nursing education into working life. The aim was to investigate the prevalence and incidence of neck/shoulder and back pain in nursing students in their final semester, and one and two years after graduation. A further aim was to identify common trajectories of neck/shoulder and back pain, and to explore the sociodemographic and lifestyle-related factors, contextual factors and health outcomes that might be characteristic of individuals in the various trajectories. This was a longitudinal study following nursing students from their final year of studies, with follow-ups one and two years after graduation. Nursing students who graduated from the 26 universities providing undergraduate nursing education in Sweden in 2002 were invited to participate (N=1700). Of those asked, 1153 gave their informed consent. The participants answered postal surveys at yearly intervals. Descriptive statistics were used to analyze the prevalence and incidence of pain, and growth mixture modeling was applied to identify homogeneous clusters of individuals following similar trajectories in pain development across time. The prevalence of neck/shoulder and back pain remained constant over time (around 50% for neck/shoulder pain and just over 40% for back pain). Six developmental trajectories were found for each symptom, reflecting patterns of stable pain levels or variation in levels over time: one symptom-free group, two decreasing-pain groups, two increasing-pain groups, and one chronic-pain group. With few exceptions, the same factors (sex, children, chronic disease, working overtime, work absence, sickness presence, physical load, depression, self-rated health, sleep quality and muscular tension) were associated with both neck/shoulder and back pain trajectories. Different types of physical load characterized new nurses with neck

  1. Using a Genetic mixture model to study Phenotypic traits: Differential fecundity among Yukon river Chinook Salmon (United States)

    Bromaghin, J.F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.


    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length-adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large-mesh gill-net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size-selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock-specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length-adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted
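    As a hedged illustration of the soft-assignment idea behind such mixture models (this is not the authors' genetic model; it is a plain one-dimensional Gaussian mixture with invented fecundity values), EM estimates component-specific means from posterior responsibilities rather than from hard individual assignments:

```python
# Sketch: EM for a two-component Gaussian mixture of fecundity values.
# Soft responsibilities replace individual assignment, avoiding the
# assignment-error bias the abstract describes. Data are simulated.
import math
import random

def em_two_gaussians(x, iters=200):
    # crude initialisation: extreme points as means, overall SD as spread
    m = sum(x) / len(x)
    s0 = math.sqrt(sum((xi - m) ** 2 for xi in x) / len(x))
    mu, sigma, w = [min(x), max(x)], [s0, s0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for xi in x:
            dens = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((xi - mu[k]) / sigma[k]) ** 2)
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted updates of weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            sigma[k] = math.sqrt(max(var, 1e-9))
    return w, mu, sigma

random.seed(1)
data = [random.gauss(5000, 300) for _ in range(200)] + \
       [random.gauss(8000, 300) for _ in range(200)]
w, mu, sigma = em_two_gaussians(data)
print(sorted(round(m) for m in mu))  # component means near 5000 and 8000
```

    The same E-step/M-step skeleton generalizes to the genetic setting by replacing the Gaussian densities with genotype likelihoods per stock.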

  2. Modeling the surface tension of complex, reactive organic-inorganic mixtures (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye


    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
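    For orientation, a minimal sketch of a Szyszkowski-Langmuir isotherm and one plausible concentration-weighted multi-solute form. The exact weighted expression of Henning et al. (2005) is not reproduced here; the parameters a and b below are hypothetical fitting constants, not measured values.

```python
# Illustrative only: single-solute S-L surface tension depression and an
# assumed mole-fraction-weighted mixing rule for several solutes.
import math

SIGMA_WATER = 72.0  # mN/m, pure water near room temperature

def szyszkowski(c, a, b):
    """Surface tension for one organic solute at concentration c."""
    return SIGMA_WATER - a * math.log(1.0 + c / b)

def weighted_sl(concs, params):
    """Concentration-weighted S-L mixing rule (assumed form)."""
    total = sum(concs)
    sigma = SIGMA_WATER
    for c, (a, b) in zip(concs, params):
        sigma -= (c / total) * a * math.log(1.0 + total / b)
    return sigma

print(szyszkowski(0.0, a=10.0, b=1.0))            # 72.0: no solute
print(round(szyszkowski(1.0, a=10.0, b=1.0), 2))  # 72 - 10*ln(2)
```

    A useful sanity check of the weighted form: with identical parameters for every solute it collapses to the single-solute curve evaluated at the total concentration.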

  3. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Directory of Open Access Journals (Sweden)

    A. N. Schwier


    Full Text Available Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2–6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski–Langmuir (S–L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.

  4. A dirichlet process covarion mixture model and its assessments using posterior predictive discrepancy tests. (United States)

    Zhou, Yan; Brinkmann, Henner; Rodrigue, Nicolas; Lartillot, Nicolas; Philippe, Hervé


    Heterotachy, the variation of substitution rate at a site across time, is a prevalent phenomenon in nucleotide and amino acid alignments, which may mislead probabilistic phylogenetic inference. The covarion model is a special case of heterotachy, in which sites switch between the "ON" state (allowing substitutions according to any particular model of sequence evolution) and the "OFF" state (prohibiting substitutions). In current implementations, the switch rates between ON and OFF states are homogeneous across sites, a hypothesis that has never been tested. In this study, we developed an infinite mixture model, called the covarion mixture (CM) model, which allows the covarion parameters to vary across sites, controlled by a Dirichlet process prior. Moreover, we combine the CM model with other approaches: a second, independent Dirichlet process models the heterogeneity of amino acid equilibrium frequencies across sites, known as the CAT model, and general rate-across-site heterogeneity is modeled by a gamma distribution. The application of the CM model to several large alignments demonstrates that the covarion parameters are significantly heterogeneous across sites. We describe posterior predictive discrepancy tests and use these to demonstrate the importance of these different elements of the models.

  5. Cure fraction estimation from the mixture cure models for grouped survival data. (United States)

    Yu, Binbing; Tiwari, Ram C; Cronin, Kathleen A; Feuer, Eric J


    Mixture cure models are usually used to model failure time data with long-term survivors, and have been applied to grouped survival data. The models provide simultaneous estimates of the proportion of patients cured of the disease and of the distribution of survival times for uncured patients (the latency distribution). However, a crucial issue with mixture cure models is the identifiability of the cure fraction and the parameters of the kernel distribution. Cure fraction estimates can be quite sensitive to the choice of latency distribution and the length of follow-up time. In this paper, the sensitivity of parameter estimates under a semi-parametric model and under several of the most commonly used parametric models, namely the lognormal, loglogistic, Weibull and generalized Gamma distributions, is explored. The cure fraction estimates from the model with the generalized Gamma distribution are found to be quite robust. A simulation study was carried out to examine the effect of follow-up time and latency distribution specification on cure fraction estimation. The cure models with generalized Gamma latency distribution are applied to population-based survival data for several cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program. Several cautions on the general use of cure models are advised.
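    The core mixture cure decomposition can be written down in a few lines. This is a minimal sketch, not the paper's estimation procedure: the overall survival is S(t) = pi + (1 - pi) * S_u(t), with pi the cure fraction and S_u a Weibull latency survival (one of the parametric choices the abstract names); the parameter values are illustrative.

```python
# Sketch of the mixture cure survival function: cured patients never
# experience the event, uncured patients follow a Weibull latency.
import math

def weibull_survival(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

def mixture_cure_survival(t, pi, shape, scale):
    """Overall survival S(t) = pi + (1 - pi) * S_u(t)."""
    return pi + (1.0 - pi) * weibull_survival(t, shape, scale)

# with a 30% cure fraction, the curve starts at 1 and plateaus at 0.30
print(round(mixture_cure_survival(0.0, 0.30, 1.5, 5.0), 3))    # 1.0
print(round(mixture_cure_survival(100.0, 0.30, 1.5, 5.0), 3))  # 0.3
```

    The plateau at pi for large t is exactly what makes the cure fraction sensitive to follow-up length: with short follow-up the plateau is never observed and pi is weakly identified.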

  6. Symmetrization of excess Gibbs free energy: A simple model for binary liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos-Suarez, Aly J., E-mail: acastell@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of); Garcia-Sucre, Maximo, E-mail: mgs@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of)


    A symmetric expression for the excess Gibbs free energy of liquid binary mixtures is obtained using an appropriate definition of the effective contact fraction. We have identified a mechanism of local segregation as the main cause of the variation of the contact fraction with concentration. Starting from this mechanism we develop a simple model for describing binary liquid mixtures. Two parameters appear in this model: one adjustable, and the other depending on the first. Following this procedure we reproduce the experimental (liquid + vapor) equilibrium data with a degree of accuracy comparable to that of well-known, more elaborate models. The way in which we account for the effective contacts between molecules allows us to identify the compound that may be considered to induce one of the following processes: segregation, anti-segregation or dispersion of the components in the liquid mixture. Finally, the simplicity of the model yields a single resulting interaction energy parameter, which makes the physical interpretation of the results easier.

  7. Reconstruction of coronary artery centrelines from x-ray rotational angiography using a probabilistic mixture model (United States)

    Ćimen, Serkan; Gooya, Ali; Frangi, Alejandro F.


    Three-dimensional reconstructions of coronary arterial trees from X-ray rotational angiography (RA) images have the potential to compensate for the limitations of RA due to projective imaging. Most existing model-based reconstruction algorithms either forward-project a 3D deformable model onto X-ray angiography images or back-project 2D information extracted from X-ray angiography images into 3D space for further processing. All of these methods have shortcomings, such as dependency on accurate 2D centreline segmentations. In this paper, the reconstruction is approached from a novel perspective and is formulated as a probabilistic reconstruction method based on a mixture model (MM) representation of the point sets describing the coronary arteries. Specifically, it is assumed that the coronary arteries can be represented by a set of 3D points whose spatial locations denote the Gaussian components of the MM. Additionally, an extra uniform distribution is incorporated in the mixture model to accommodate outliers (noise, over-segmentation, etc.) in the 2D centreline segmentations. Treating the given 2D centreline segmentations as data points generated from the MM, the 3D means, isotropic variance, and mixture weights of the Gaussian components are estimated by maximizing a likelihood function. Initial results from a phantom study show that the proposed method is able to handle outliers in 2D centreline segmentations, which indicates the potential of our formulation. Preliminary reconstruction results on clinical data are also presented.
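    The outlier-absorption mechanism can be illustrated with a simplified 2D version (an assumption for exposition, not the paper's 3D formulation): a mixture of isotropic Gaussians plus one uniform component over the image domain, with fixed means, variance, and weights. In the paper these quantities are estimated by maximizing the likelihood.

```python
# Sketch: posterior probability that a detected point is an outlier,
# i.e. the posterior weight of the uniform component in a
# Gaussians-plus-uniform mixture. All parameter values are invented.
import math

def gauss2d(p, mu, var):
    dx, dy = p[0] - mu[0], p[1] - mu[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * var)) / (2 * math.pi * var)

def outlier_probability(p, means, var, w_uniform, area):
    """Posterior weight of the uniform component for point p."""
    k = len(means)
    w_g = (1.0 - w_uniform) / k   # equal Gaussian weights
    dens_u = w_uniform / area     # uniform density over the domain
    dens_g = sum(w_g * gauss2d(p, mu, var) for mu in means)
    return dens_u / (dens_u + dens_g)

means = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # points along a centreline
p_on = outlier_probability((1.0, 0.1), means, 0.05, 0.05, 100.0)
p_far = outlier_probability((8.0, 8.0), means, 0.05, 0.05, 100.0)
print(p_on, p_far)  # near-zero for the on-curve point, near 1 far away
```

    Points far from every Gaussian component get essentially all of their density from the uniform term, so spurious detections stop dragging the Gaussian means during estimation.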

  8. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling, while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling. By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
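    The standard binomial N-mixture likelihood that the authors extend can be sketched directly (a minimal single-site version, not their temporary-emigration model): repeated counts y are Binomial(N, p) given latent abundance N, N is Poisson(lambda), and the likelihood marginalizes over N up to a truncation bound K. The count values and parameters below are invented.

```python
# Sketch of the single-site binomial N-mixture marginal likelihood.
import math

def site_likelihood(counts, lam, p, K=100):
    """Sum over latent abundance N of Poisson prior times binomial counts."""
    total = 0.0
    for n in range(max(counts), K + 1):
        prior = math.exp(-lam) * lam ** n / math.factorial(n)
        lik = 1.0
        for y in counts:
            lik *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += prior * lik
    return total

counts = [3, 4, 2]  # three repeat surveys at one site
# the likelihood favours parameters consistent with the observed counts
print(site_likelihood(counts, lam=8.0, p=0.4) >
      site_likelihood(counts, lam=50.0, p=0.4))
```

    Maximizing this marginal likelihood over lambda and p (and adding covariates on both) recovers the hierarchical separation of abundance and detection the abstract describes; the extended model further splits p into availability and conditional detection.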

  9. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Directory of Open Access Journals (Sweden)

    Lei Wang


    Full Text Available MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The proportions of land covers derived from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.
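    The linear spectral mixture idea reduces, in the two-endmember case with fractions summing to one, to a closed-form least-squares fraction. This is a hedged toy sketch: the endmember reflectances below are made up, not MODIS values.

```python
# Sketch: least-squares unmixing of a pixel spectrum into two endmembers.
def unmix_two(pixel, e1, e2):
    """Fraction of endmember e1, with fractions constrained to sum to 1."""
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((x - b) * di for x, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    f = num / den
    return min(1.0, max(0.0, f))  # clamp to the physical range [0, 1]

rice = [0.05, 0.08, 0.40, 0.30]   # hypothetical reflectances, 4 bands
water = [0.02, 0.03, 0.01, 0.005]
mixed = [0.6 * r + 0.4 * w for r, w in zip(rice, water)]
print(round(unmix_two(mixed, rice, water), 3))  # recovers 0.6
```

    With more endmembers the same idea becomes a constrained least-squares problem per pixel; summing the rice fractions over pixels gives the sub-pixel area estimate.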

  10. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.


    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power...... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome...... of variance explained by genotyped SNPs, CD and SZ have a broadly dissimilar genetic architecture, due to differing mean effect size and proportion of non-null loci....

  11. Automated sleep spindle detection using IIR filters and a Gaussian Mixture Model. (United States)

    Patti, Chanakya Reddy; Penzel, Thomas; Cvetkovic, Dean


    Sleep spindle detection using modern signal processing techniques such as the Short-Time Fourier Transform and wavelet analysis is a common research approach. These methods are computationally intensive, especially when analysing data from overnight sleep recordings. The authors of this paper propose an alternative using pre-designed IIR filters and a multivariate Gaussian Mixture Model. Features extracted with IIR filters are clustered using a Gaussian Mixture Model without the use of any subject-independent thresholds. The algorithm was tested on a database consisting of overnight sleep PSG of 5 subjects and an online public spindle database consisting of six 30-minute sleep excerpts. An overall sensitivity of 57% and a specificity of 98.24% were achieved in the overnight database group, and a sensitivity of 65.19% at a 16.9% false-positive proportion was achieved for the 6 sleep excerpts.

  12. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Institute of Scientific and Technical Information of China (English)


    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The proportions of land covers derived from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.

  13. A cross-association model for CO2-methanol and CO2-ethanol mixtures

    Institute of Scientific and Technical Information of China (English)


    A cross-association model was proposed for CO2-alcohol mixtures based on the statistical associating fluid theory (SAFT). CO2 was treated as a pseudo-associating molecule, and both the self-association between alcohol hydroxyls and the cross-association between CO2 and alcohol hydroxyls were considered. The equilibrium properties from low to high temperature and pressure were investigated using this model. The calculated p-x and p-p diagrams of CO2-methanol and CO2-ethanol mixtures agreed with the experimental data. The results showed that when the cross-association was taken into account in the Helmholtz free energy, the calculated equilibrium properties could be significantly improved, and erroneous predictions of the three-phase equilibria and triple points in low-temperature regions could be avoided.

  14. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies. (United States)

    Noma, Hisashi; Matsui, Shigeyuki


    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  15. A generalized longitudinal mixture IRT model for measuring differential growth in learning environments. (United States)

    Kadengye, Damazo T; Ceulemans, Eva; Van den Noortgate, Wim


    This article describes a generalized longitudinal mixture item response theory (IRT) model that allows for detecting latent group differences in item response data obtained from electronic learning (e-learning) environments or other learning environments that result in large numbers of items. The described model can be viewed as a combination of a longitudinal Rasch model, a mixture Rasch model, and a random-item IRT model, and it includes some features of the explanatory IRT modeling framework. The model assumes the possible presence of latent classes in item response patterns, due to initial person-level differences before learning takes place, to latent class-specific learning trajectories, or to a combination of both. Moreover, it allows for differential item functioning over the classes. A Bayesian model estimation procedure is described, and the results of a simulation study are presented that indicate that the parameters are recovered well, particularly for conditions with large item sample sizes. The model is also illustrated with an empirical sample data set from a Web-based e-learning environment.
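    The building blocks named above can be combined in a few lines. This is a hedged sketch, not the authors' full Bayesian model: a Rasch success probability whose person ability grows linearly over time at a latent-class-specific rate gamma_c, marginalized over class membership. All parameter values are invented.

```python
# Sketch: marginal success probability in a mixture of latent classes,
# each with its own linear learning trajectory on the Rasch logit scale.
import math

def rasch_prob(theta, beta):
    """Rasch model: P(correct) = logistic(ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def mixture_growth_prob(theta, beta, t, class_weights, gammas):
    """Mix class-specific probabilities, class c adding gamma_c * t."""
    return sum(w * rasch_prob(theta + g * t, beta)
               for w, g in zip(class_weights, gammas))

# a fast-learning class (gamma=0.5) and a flat class (gamma=0.0), 50/50
p0 = mixture_growth_prob(0.0, 0.0, t=0, class_weights=[0.5, 0.5],
                         gammas=[0.5, 0.0])
p5 = mixture_growth_prob(0.0, 0.0, t=5, class_weights=[0.5, 0.5],
                         gammas=[0.5, 0.0])
print(round(p0, 3), round(p5, 3))  # marginal probability rises with t
```

    Class-specific item difficulties (differential item functioning) would make beta depend on the class index inside the sum; the random-item extension additionally treats beta as a draw from an item population distribution.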

  16. Phase Equilibria of Water/CO2 and Water/n-Alkane Mixtures from Polarizable Models. (United States)

    Jiang, Hao; Economou, Ioannis G; Panagiotopoulos, Athanassios Z


    Phase equilibria of water/CO2 and water/n-alkane mixtures over a range of temperatures and pressures were obtained from Monte Carlo simulations in the Gibbs ensemble. Three sets of Drude-type polarizable models for water, namely the BK3, GCP, and HBP models, were combined with a polarizable Gaussian charge CO2 (PGC) model to represent the water/CO2 mixture. The HBP water model describes hydrogen bonds between water and CO2 explicitly. All models underestimate CO2 solubility in water if standard combining rules are used for the dispersion interactions between water and CO2. With the dispersion parameters optimized to phase compositions, the BK3 and GCP models were able to represent the CO2 solubility in water; however, the water composition in the CO2-rich phase was systematically underestimated. Accurate representation of the compositions of both the water- and CO2-rich phases could not be achieved even after optimizing the cross-interaction parameters. By contrast, accurate compositions for both phases were obtained with hydrogen bonding parameters determined from the second virial coefficient for water/CO2. Phase equilibria of water/n-alkane mixtures were also studied using the HBP water model and an exponential-6 united-atom n-alkane model. The dispersion interactions between water and n-alkanes were optimized to the Henry's constants of methane and ethane in water. The HBP water and united-atom n-alkane models underestimate the water content of the n-alkane-rich phase; this underestimation is likely due to the neglect of electrostatic and induction energies in the united-atom model.

  17. The Precise Measurement of Vapor-Liquid Equilibrium Properties of the CO2/Isopentane Binary Mixture, and Fitted Parameters for a Helmholtz Energy Mixture Model (United States)

    Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.


    Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which was published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and with values calculated by thermodynamic property models.

  18. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U.; Benz, P.; Mantharas, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)


    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion, and in particular on the means of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  19. Condition monitoring of oil-impregnated paper bushings using extension neural network, Gaussian mixture and hidden Markov models

    CSIR Research Space (South Africa)

    Miya, WS


    Full Text Available In this paper, a comparison between Extension Neural Network (ENN), Gaussian Mixture Model (GMM) and Hidden Markov model (HMM) is conducted for bushing condition monitoring. The monitoring process is a two-stage implementation of a classification...

  20. New dielectric mixture equation for porous materials based on depolarization factors

    NARCIS (Netherlands)

    Hilhorst, M.A.; Dirksen, C.; Kampers, F.W.H.; Feddes, R.A.


    A change in the relative proportions of the constituents of a porous material like soil will cause a change in its electrical permittivity. The measured permittivity reflects the impact of the permittivities of the individual material constituents. Numerous dielectric mixture equations are

  1. Novel pseudo-divergence of Gaussian mixture models based speaker clustering method

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Xu Yiqiong; Li Bicheng


    Serial structure is applied to speaker recognition to reduce algorithm delay and computational complexity. The speech is first classified into a speaker class, and the most likely speaker is then searched for within that class. Differences between Gaussian Mixture Models (GMMs) are widely used in speaker classification. This paper proposes a novel pseudo-divergence measure, the ratio of inter-model dispersion to intra-model dispersion, to represent the difference between GMMs and to perform speaker clustering. The GMM components' weights, means and variances are all involved in the dispersion. Experiments indicate that the measure represents the difference between GMMs well and improves the performance of speaker clustering.
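    One way to make the inter/intra dispersion ratio concrete is the following assumed formulation (the abstract gives only the idea, so the specific dispersion definitions and the 1-D setting here are illustrative, not the paper's):

```python
# Assumed sketch: compare two 1-D GMMs by the ratio of inter-model
# dispersion (weighted squared distance between component means) to
# intra-model dispersion (weighted component variances). Larger values
# indicate more dissimilar speaker models.
def intra_dispersion(gmm):
    """gmm: list of (weight, mean, variance) components."""
    return sum(w * v for w, _, v in gmm)

def inter_dispersion(g1, g2):
    return sum(w1 * w2 * (m1 - m2) ** 2
               for w1, m1, _ in g1 for w2, m2, _ in g2)

def pseudo_divergence(g1, g2):
    return inter_dispersion(g1, g2) / (intra_dispersion(g1) +
                                       intra_dispersion(g2))

same = [(0.5, 0.0, 1.0), (0.5, 2.0, 1.0)]
far = [(0.5, 10.0, 1.0), (0.5, 12.0, 1.0)]
# a model is closer to itself than to a model with distant means
print(pseudo_divergence(same, same) < pseudo_divergence(same, far))
```

    Clustering then proceeds by grouping speaker models whose pairwise pseudo-divergence falls below a data-driven cut-off.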

  2. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee


    Full Text Available Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8%, respectively, of the predictions of Peng's modified UNIFAC model (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and the predictions of the ZSR model or Peng's modified UNIFAC model are larger than those for the single-component cases. Measurements of the overall water vapor pressure for 50:50 molar mixtures of malonic/glutaric acids closely followed those for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.

  3. Psychophysical model of chromatic perceptual transparency based on subtractive color mixture. (United States)

    Faul, Franz; Ekroll, Vebjørn


    Variants of Metelli's episcotister model, which are based on additive color mixture, have been found to describe the luminance conditions for perceptual transparency very accurately. However, the findings in the chromatic domain are not that clear-cut, since there exist chromatic stimuli that conform to the additive model but do not appear transparent. We present evidence that such failures are of a systematic nature, and we propose an alternative psychophysical model based on subtractive color mixture. Results of a computer simulation revealed that this model approximately describes color changes that occur when a surface is covered by a filter. We present the results of two psychophysical experiments with chromatic stimuli, in which we directly compared the predictions of the additive model and the predictions of the new model. These results show that the color relations leading to the perception of a homogeneous transparent layer conform very closely to the predictions of the new model and deviate systematically from the predictions of the additive model.
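    The contrast between the two mixture schemes can be shown with a toy computation (the specific numbers and the simple per-channel filter model are assumptions for illustration, not the paper's psychophysical model): additive mixing averages background and layer light, while subtractive mixing multiplies per-channel transmittances.

```python
# Sketch: additive (episcotister-style) vs. subtractive (filter-style)
# color mixing on RGB triples in [0, 1].
def additive_layer(background, layer, alpha):
    """Alpha of the layer's light plus (1 - alpha) of the background's."""
    return tuple(alpha * l + (1 - alpha) * b
                 for b, l in zip(background, layer))

def subtractive_filter(background, transmittance):
    """Each channel of the background attenuated multiplicatively."""
    return tuple(b * t for b, t in zip(background, transmittance))

white = (1.0, 1.0, 1.0)
yellow_filter = (1.0, 1.0, 0.2)  # passes red/green, absorbs most blue
print(subtractive_filter(white, yellow_filter))       # (1.0, 1.0, 0.2)
print(additive_layer(white, (1.0, 1.0, 0.0), 0.5))    # (1.0, 1.0, 0.5)
```

    The two schemes predict different color shifts for the same background, which is what lets the experiments above discriminate between the additive and subtractive accounts of perceived transparency.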

  4. Modelling plant interspecific interactions from experiments of perennial crop mixtures to predict optimal combinations. (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo


    The contribution of plant species richness to productivity and ecosystem functioning is a long-standing issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modelling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modelled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters, carrying capacities and interaction coefficients, from the observed biomass of monocultures and bicultures, respectively, measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the polyculture experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modelling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. This article is protected by copyright. All rights reserved.
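A generalized Lotka-Volterra community of the kind described can be simulated in a few lines; the two-species parameters below are made up for illustration, not the coefficients estimated from the Iowa experiment:

```python
def glv_step(B, r, K, A, dt):
    """One explicit Euler step of a generalized Lotka-Volterra model:
    dB_i/dt = r_i * B_i * (1 - (B_i + sum_j A[i][j]*B_j) / K_i),
    where A holds the interspecific interaction coefficients (A[i][i] = 0).
    Signs of A are unrestricted, so competition, amensalism, and
    parasitism (one positive, one negative coefficient) are all allowed."""
    n = len(B)
    out = []
    for i in range(n):
        inter = B[i] + sum(A[i][j] * B[j] for j in range(n) if j != i)
        out.append(B[i] + dt * r[i] * B[i] * (1.0 - inter / K[i]))
    return out

# Illustrative two-species community (invented parameters):
B = [10.0, 5.0]                 # initial biomasses
r, K = [0.5, 0.3], [100.0, 80.0]  # growth rates, carrying capacities
A = [[0.0, 0.4],    # species 2 suppresses species 1 (competition)
     [-0.1, 0.0]]   # species 1 slightly facilitates species 2
for _ in range(2000):           # integrate to t = 100
    B = glv_step(B, r, K, A, 0.05)
```

Once the coefficients are calibrated on monoculture and biculture biomass, the same loop can be run for every unsown species combination to rank candidate mixtures by predicted yield, which is the strategy the abstract describes.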

  5. Study of Formation and Propagation of Streamers in SF6 and Its Gas Mixtures with Low Content of SF6 Using a One-Dimensional Fluid Model

    Institute of Scientific and Technical Information of China (English)

    李付亮; 汪沨; 王国利; W. PFEIFFER; 何荣涛


    Using a hybrid Monte Carlo Collision/Fluid model, the formation and propagation of streamers in SF6 and its gas mixtures are simulated. The simulation is based on an accurate numerical solution of Poisson's equation in conjunction with the continuity fluid equation for electrons, negative ions, and positive ions. The factors that influence the formation and propagation of streamers are investigated. The electron density, positive and negative ion density, and electric field in the discharge channel are also presented, which are very important in understanding the phenomena of streamers and in assessing the insulation strength of the gas mixture.
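One ingredient of such a fluid model, the electron continuity equation, can be sketched with an upwind finite difference; the Poisson coupling, the ion equations, and the field dependence of the coefficients are omitted, and all values are illustrative:

```python
def continuity_step(n_e, v, alpha_eff, dx, dt):
    """One explicit upwind step of the 1D electron continuity equation
    dn/dt + d(n*v)/dx = alpha_eff * |v| * n for a constant drift
    velocity v < 0 (electrons drifting toward decreasing x).
    alpha_eff is the net ionization coefficient (ionization minus
    attachment, 1/m); in SF6 mixtures it is strongly field-dependent,
    which is what a full streamer model must resolve."""
    flux = [n * v for n in n_e]
    out = list(n_e)
    for i in range(len(n_e) - 1):
        # upwind differencing for v < 0 uses the downstream (i+1) flux
        out[i] = n_e[i] - dt * (flux[i + 1] - flux[i]) / dx \
                 + dt * alpha_eff * abs(v) * n_e[i]
    return out

# Uniform density, CFL number |v|*dt/dx = 0.5: each interior cell grows
# by the ionization factor 1 + dt*alpha_eff*|v| per step.
n1 = continuity_step([1.0] * 10, v=-1e5, alpha_eff=1e3, dx=1e-4, dt=5e-10)
```

A full model advances electron, positive-ion, and negative-ion densities this way while re-solving Poisson's equation each step, since the space charge reshapes the field that drives v and alpha_eff.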

  6. A Neural Network Based Hybrid Mixture Model to Extract Information from Non-linear Mixed Pixels

    Directory of Open Access Journals (Sweden)

    Uttam Kumar


    Full Text Available Signals acquired by sensors in the real world are non-linear combinations, requiring non-linear mixture models to describe the resultant mixture spectra for the endmember (pure pixel) distributions. This communication discusses inferring class fractions through a novel hybrid mixture model (HMM). HMM is a three-step process: the endmembers are first derived from the images themselves using the N-FINDR algorithm. These endmembers are used by the linear mixture model (LMM) in the second step, which provides an abundance estimate in a linear fashion. Finally, the abundance values, along with training samples representing the actual ground proportions, are fed into a neural network based multi-layer perceptron (MLP) architecture to train the neurons. The neural output further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. HMM is first implemented and validated on simulated hyperspectral data of 200 bands and subsequently on real MODIS data with a spatial resolution of 250 m. The results on computer-simulated data show that the method gives acceptable results for unmixing pixels, with an overall RMSE of 0.0089 ± 0.0022 with the LMM and 0.0030 ± 0.0001 with the HMM when compared to actual class proportions. The unmixed MODIS images showed an overall RMSE with the HMM of 0.0191 ± 0.022, compared to an overall RMSE of 0.2005 ± 0.41 for the LMM output considered alone, indicating that individual class abundances obtained from the HMM are very close to the real observations.
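The second (LMM) step reduces to a small least-squares problem per pixel. A minimal sketch with a hypothetical two-endmember, three-band example; the soft sum-to-one constraint stands in for the constrained solvers typically used:

```python
import numpy as np

def lmm_unmix(E, x, weight=1e3):
    """Linear mixture model abundance estimation: solve x ≈ E @ f in the
    least-squares sense, with a sum-to-one constraint on the fractions f
    enforced softly as a heavily weighted extra equation."""
    bands, n_end = E.shape
    A = np.vstack([E, weight * np.ones((1, n_end))])
    b = np.append(x, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Two invented endmember spectra (3 bands) and a 40/60 mixed pixel:
E = np.array([[0.2, 0.6],
              [0.4, 0.3],
              [0.8, 0.1]])
x = E @ np.array([0.4, 0.6])
f = lmm_unmix(E, x)
```

In the hybrid scheme above, these linear abundance estimates (here recovered exactly because the pixel is a noiseless linear mix) are what the MLP then corrects for non-linear mixing effects.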

  7. A proposed experimental platform for measuring the properties of warm dense mixtures: Testing the applicability of the linear mixing model (United States)

    Hawreliak, James


    This paper presents a proposed experimental technique for investigating the impact of chemical interactions in warm dense liquid mixtures. It uses experimental equation of state (EOS) measurements of warm dense liquid mixtures with different compositions to determine the deviation from the linear mixing model. Statistical mechanics is used to derive the EOS of a mixture with a constant pressure linear mixing term (Amagat's rule) and an interspecies interaction term. A ratio between the particle density of two different compositions of mixtures, K(P, T)i: ii, is defined. By comparing this ratio for a range of mixtures, the impact of interspecies interactions can be studied. Hydrodynamic simulations of mixtures with different carbon/hydrogen ratios are used to demonstrate the application of this proposed technique to multiple shock and ramp compression experiments. The limit of the pressure correction that can be measured due to interspecies interactions using this methodology is determined by the uncertainty in the density measurement.
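Amagat's rule, the linear-mixing baseline from which deviations are measured, can be sketched directly; the compositions and component densities below are invented for illustration:

```python
def amagat_density(mass_frac, rho):
    """Ideal (Amagat / linear) mixing at fixed P, T: the mixture specific
    volume is the mass-weighted sum of the component specific volumes,
    so 1/rho_mix = sum_i w_i / rho_i.  Any measured departure from this
    value signals interspecies interactions."""
    return 1.0 / sum(w / r for w, r in zip(mass_frac, rho))

# Hypothetical carbon/hydrogen mixtures on the same shock state
# (mass fractions and densities in g/cc are made up):
rho_i = amagat_density([0.9, 0.1], [3.5, 1.2])   # carbon-rich composition
rho_ii = amagat_density([0.7, 0.3], [3.5, 1.2])  # hydrogen-rich composition
K = rho_i / rho_ii   # the composition ratio compared against experiment
```

The proposed experiment measures the analogous density ratio K(P, T) for pairs of compositions; a ratio differing from this linear-mixing value isolates the interspecies interaction term.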

  8. Binding of Solvent Molecules to a Protein Surface in Binary Mixtures Follows a Competitive Langmuir Model. (United States)

    Kulschewski, Tobias; Pleiss, Jürgen


    The binding of solvent molecules to a protein surface was modeled by molecular dynamics simulations of Candida antarctica (C. antarctica) lipase B in binary mixtures of water, methanol, and toluene. Two models were analyzed: a competitive Langmuir model that assumes identical solvent binding sites with different affinities toward water (KWat), methanol (KMet), and toluene (KTol), and a competitive Langmuir model with an additional interaction between free water and already bound water (KWatWat). The numbers of protein-bound molecules of both components of a binary mixture were determined for different compositions as a function of their thermodynamic activities in the bulk phase, and the binding constants were simultaneously fitted to the six binding curves (two components of three different mixtures). For both Langmuir models, the values of KWat, KMet, and KTol were highly correlated. The highest binding affinity was found for methanol, which was almost 4-fold higher than the binding affinities of water and toluene (KMet ≫ KWat ≈ KTol). Binding of water was dominated by the water-water interaction (KWatWat). Even for the three protein surface patches of highest water affinity, the binding affinity of methanol was 2-fold higher than that of water and 8-fold higher than that of toluene (KMet > KWat > KTol). The Langmuir model provides insights into the protein-destabilizing mechanism of methanol, which has a high binding affinity toward the protein surface. Thus, destabilizing solvents compete with intraprotein interactions and disrupt the tertiary structure. In contrast, benign solvents such as water or toluene have a low affinity toward the protein surface. Water is a special solvent: only few water molecules bind directly to the protein; most water molecules bind to already bound water molecules, thus forming water patches. A quantitative mechanistic model of protein-solvent interactions that includes competition and miscibility of the components contributes a robust basis
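The competitive Langmuir model has a closed form for the site occupancies. A sketch with invented binding constants that only mimic the reported ordering (KMet ≫ KWat ≈ KTol):

```python
def langmuir_occupancy(K, a):
    """Competitive Langmuir isotherm: the fraction of identical surface
    sites occupied by each solvent species is
        theta_i = K_i * a_i / (1 + sum_j K_j * a_j),
    with a_i the bulk thermodynamic activities; the shared denominator
    is what makes the binding competitive."""
    denom = 1.0 + sum(K[s] * a[s] for s in K)
    return {s: K[s] * a[s] / denom for s in K}

# Invented constants (not the fitted values), methanol-dominant as reported:
K = {"water": 1.0, "methanol": 4.0, "toluene": 1.0}
theta = langmuir_occupancy(K, {"water": 0.3, "methanol": 0.5, "toluene": 0.2})
```

Fitting such occupancy curves simultaneously to the bound-molecule counts for all three binary mixtures is what yields the correlated binding constants described above; the water-water term (KWatWat) adds a second layer of adsorption onto already bound water.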

  9. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.;


    .15 and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting in order to study the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbon). These data (575 data points) have been...... discussed in the framework of recent representative models (hard sphere scheme, friction theory, and free volume model) and with mixing laws and two empirical models (particularly the LBC model which is commonly used in petroleum engineering, and the self-referencing model). This comparative study shows...

  10. An innovation resistance factor model

    Directory of Open Access Journals (Sweden)

    Siti Salwa Mohd Ishak


    Full Text Available The process and implementation strategy of information technology in construction is generally considered through the limiting prism of theoretical contexts generated from innovation diffusion and acceptance. This research argues that more attention should be given to understanding the positive effects of resistance. The study develops a theoretical framing for the Integrated Resistance Factor Model (IRFM). The framing uses a combination of diffusion of innovation theory, the technology acceptance model, and a social network perspective. The model is tested to identify the most significant resistance factors using the Partial Least Squares (PLS) technique. All constructs proposed in the model are found to be significant, valid, and consistent with the theoretical framework. IRFM is shown to be an effective and appropriate model of user resistance factors. The most critical factors influencing technology resistance in the online project information management system (OPIMS) context are: support from leaders and peers; complexity of the technology; compatibility with key work practices; and pre-trial of the technology before it is actually deployed. The study provides a new model for further research in technology innovation specific to the construction industry.

  11. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi


    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger’s model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger’s model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
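The frequency response of a Burger’s element follows from its complex compliance, which is how DSR data constrain the four contact-model parameters. The sketch below uses invented parameters, not the values calibrated from the DSR tests:

```python
import cmath
import math

def burgers_complex_modulus(omega, G_m, eta_m, G_k, eta_k):
    """Complex shear modulus of the Burger's model (a Maxwell element
    G_m/eta_m in series with a Kelvin-Voigt element G_k/eta_k):
        J*(w) = 1/G_m + 1/(i*w*eta_m) + 1/(G_k + i*w*eta_k),  G* = 1/J*.
    Returns (|G*| in Pa, phase angle in degrees)."""
    iw = 1j * omega
    J = 1.0 / G_m + 1.0 / (iw * eta_m) + 1.0 / (G_k + iw * eta_k)
    G = 1.0 / J
    return abs(G), math.degrees(cmath.phase(G))

# Invented parameters: stiffness grows and the response becomes more
# elastic (smaller phase angle) as frequency increases.
g_low, d_low = burgers_complex_modulus(2 * math.pi * 1, 1e4, 50.0, 5e3, 20.0)
g_hi, d_hi = burgers_complex_modulus(2 * math.pi * 60, 1e4, 50.0, 5e3, 20.0)
```

Matching |G*| and the phase angle measured by the DSR at 150 °C fixes (G_m, eta_m, G_k, eta_k), which are then used directly in the cohesive DEM contact law.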

  12. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik


    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  13. Highlighting pitfalls in the Maxwell-Stefan modeling of water-alcohol mixture permeation across pervaporation membranes

    NARCIS (Netherlands)

    Krishna, R.; van Baten, J.M.


    The Maxwell-Stefan (M-S) equations are widely used for modeling permeation of water-alcohol mixtures across microporous membranes in pervaporation and dehydration process applications. For binary mixtures, for example, the following set of assumptions is commonly invoked, either explicitly or

  14. Microstructural Analysis and Rheological Modeling of Asphalt Mixtures Containing Recycled Asphalt Materials

    Directory of Open Access Journals (Sweden)

    Augusto Cannone Falchetto


    Full Text Available The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP) and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits were proposed on the amount of recycled materials which can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low-temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired from digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of the aggregate, asphalt mastic and air-void phases, while an opposite trend is observed when RAS is included. Analogical and semi-empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.

  15. Computational modeling of photoacoustic signals from mixtures of melanoma and red blood cells. (United States)

    Saha, Ratan K


    A theoretical approach to model photoacoustic (PA) signals from mixtures of melanoma cells (MCs) and red blood cells (RBCs) is discussed. The PA signal from a cell, approximated as a fluid sphere, was evaluated using a frequency-domain method. The tiny signals from individual cells were summed to obtain the resultant PA signal. The local signal-to-noise ratio for a MC was about 5.32 and 5.40 for 639 and 822 nm illuminations, respectively. The PA amplitude exhibited a monotonic rise with increasing number of MCs for each incident radiation. The power spectral lines also demonstrated similar variations over a large frequency range (5-200 MHz). For instance, the spectral intensity was 5.5 and 4.0 dB greater at 7.5 MHz for a diseased sample containing 1 MC and 22,952 RBCs than for a normal sample composed of 22,958 RBCs, at those irradiations, respectively. The envelope histograms generated from PA signals for mixtures of small numbers of MCs and large numbers of RBCs appeared to obey pre-Rayleigh statistics. The generalized gamma distribution was found to provide better fits to the histograms than the Rayleigh and Nakagami distributions. The model provides a means to study PAs from mixtures of different populations of absorbers.

  16. Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands. (United States)

    Lenoir-Wijnkoop, I; van Aalderen, W M C; Boehm, G; Klaassen, D; Sprikkelman, A B; Nuijten, M J C


    The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision analytical techniques. The model was developed to estimate the health economic impact of prebiotic preventive disease management of atopic dermatitis. Data sources used include published literature, clinical trials, official price/tariff lists and national population statistics. The comparator was no supplementation with prebiotics. The primary perspective for conducting the economic evaluation was based on the situation in The Netherlands in 2009. The results show that the use of prebiotic infant formula (IMMUNOFORTIS(®)) leads to an additional cost of €51 and an increase in Quality Adjusted Life Years (QALY) of 0.108, when compared with no prebiotics. Consequently, the use of infant formula with a specific mixture of prebiotics results in an incremental cost-effectiveness ratio (ICER) of €472. The sensitivity analyses show that the ICER remains in all analyses far below the threshold of €20,000/QALY. This study shows that the favourable health benefit of the use of a specific mixture of prebiotics results in positive short- and long-term health economic benefits. In addition, this study demonstrates that the use of infant formula with a specific mixture of prebiotics is a highly cost-effective way of preventing atopic dermatitis in The Netherlands.
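The headline ratio follows directly from the incremental cost and QALY gain reported above:

```python
# Incremental cost-effectiveness ratio from the figures reported above:
delta_cost = 51.0    # additional cost in EUR of the prebiotic formula
delta_qaly = 0.108   # additional quality-adjusted life years gained
icer = delta_cost / delta_qaly    # EUR per QALY gained, ~472
cost_effective = icer < 20000     # threshold of EUR 20,000/QALY cited above
```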

  17. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    Energy Technology Data Exchange (ETDEWEB)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L. [NeuroSpin/CEA, F-91191 Gif-sur-Yvette (France); Fouque, A.L.; Ciuciu, Ph.; Risser, L. [IFR 49, Institut d' Imagerie Neurofonctionnelle, Paris (France)


    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing work on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  18. Hyperspectral Small Target Detection by Combining Kernel PCA with Linear Mixture Model

    Institute of Scientific and Technical Information of China (English)

    GU Yanfeng; ZHANG Ye


    In this paper, a kernel-based invariant detection method is proposed for small target detection in hyperspectral images. The method combines kernel principal component analysis (KPCA) with the linear mixture model (LMM). The LMM is used to describe each pixel in the hyperspectral images as a mixture of target, background and noise. The KPCA is used to build the background subspace. Finally, a generalized likelihood ratio test is used to detect whether each pixel in the hyperspectral image includes target. The numerical experiments are performed on hyperspectral data with 126 bands collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The experimental results show the effectiveness of the proposed method, demonstrate that it can overcome the spectral variability and target sparsity that hamper hyperspectral target detection, and show that it has great ability to separate targets from background.

  19. Two-component mixture model: Application to palm oil and exchange rate (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad


    Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, the demand for palm oil has been growing rapidly over the years; this phenomenon has encouraged illegal logging of trees and destruction of natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by fitting a two-component mixture model, with maximum likelihood estimation carried out via the Newton-Raphson algorithm. The paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
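A two-component normal mixture of the kind fitted in the paper can be estimated on synthetic data; EM is shown here as a simpler stand-in for the Newton-Raphson maximum-likelihood routine the authors use:

```python
import math
import random

def em_two_normal(x, iters=200):
    """Minimal EM for a two-component univariate normal mixture:
    alternates responsibilities (E-step) with weighted updates of the
    weights, means, and variances (M-step)."""
    x = list(x)
    mu = [min(x), max(x)]      # crude but effective initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability of each component for each point
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted parameter updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return w, mu, var

random.seed(0)
data = [random.gauss(-2, 0.5) for _ in range(300)] + \
       [random.gauss(3, 1.0) for _ in range(300)]
w, mu, var = em_two_normal(data)
```

Unequal component variances are what let the fitted mixture capture the asymmetric, platykurtic shape that a single normal distribution cannot.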

  20. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior (United States)

    Hess, Julian; Wang, Yongqi


    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within Project Number WA 2610/3-1.

  1. Beyond GLMs: a generative mixture modeling approach to neural system identification. (United States)

    Theis, Lucas; Chagas, Andrè Maia; Arnstein, Daniel; Schwarz, Cornelius; Bethge, Matthias


    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM (a linear and a quadratic model) by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
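Viewed generatively, the spike probability follows from Bayes' rule applied to the two class-conditional densities; a toy 1D sketch with invented mixture parameters (the paper's model operates on high-dimensional stimuli and learns these mixtures from data):

```python
import math

def gauss(x, mu, var):
    """Univariate normal density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x, comps):
    """comps: list of (weight, mean, variance) tuples summing to weight 1."""
    return sum(w * gauss(x, m, v) for w, m, v in comps)

def p_spike(x, prior_spike, spike_mix, nospike_mix):
    """Posterior spike probability by Bayes' rule with Gaussian-mixture
    class-conditionals for the spike-triggered and non-spike-triggered
    stimulus distributions."""
    ps = prior_spike * mixture_pdf(x, spike_mix)
    pn = (1.0 - prior_spike) * mixture_pdf(x, nospike_mix)
    return ps / (ps + pn)

# Toy 1D stimulus: spikes triggered around x = 2 (invented parameters):
spike_mix = [(0.7, 2.0, 0.25), (0.3, 2.5, 0.5)]
nospike_mix = [(1.0, 0.0, 1.0)]
p = p_spike(2.0, 0.2, spike_mix, nospike_mix)
```

With single-component Gaussians this posterior reduces to a quadratic model (and, with shared covariances, to a linear one), which is the sense in which the mixture formulation interpolates beyond those GLM special cases.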

  3. Fourth-order strain-gradient phase mixture model for nanocrystalline fcc materials (United States)

    Klusemann, Benjamin; Bargmann, Swantje; Estrin, Yuri


    The proposed modeling approach for nanocrystalline materials is an extension of the local phase mixture model introduced by Kim et al (2000 Acta Mater. 48 493-504). Local models cannot account for any non-uniformities or strain patterns, i.e. such models describe the behavior correctly only as long as it is homogeneous. In order to capture heterogeneities, the phase mixture model is augmented with gradient terms of higher order, namely second and fourth order. Different deformation mechanisms are assumed to operate in grain interior and grain boundaries concurrently. The deformation mechanism in grain boundaries is associated with diffusional mass transport along the boundaries, while in the grain interior dislocation glide as well as diffusion controlled mechanisms are considered. In particular, the mechanical response of nanostructured polycrystals is investigated. The model is capable of correctly predicting the transition of flow stress from Hall-Petch behavior in conventional grain size range to an inverse Hall-Petch relation in the nanocrystalline grain size range. The consideration of second- and fourth-order strain gradients allows non-uniformities within the strain field to represent strain patterns in combination with a regularization effect. Details of the numerical implementation are provided.
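The local phase mixture idea underlying the gradient extension can be sketched as a rule of mixtures over grain-interior and grain-boundary phases; all parameter values below are invented for illustration, not fitted ones:

```python
def flow_stress(d, delta=1.0, sigma0=100.0, k_hp=3000.0, sigma_gb=150.0):
    """Rule-of-mixtures flow stress of a local phase mixture model:
    the grain interior follows Hall-Petch, while the grain-boundary
    phase deforms at a low, diffusion-controlled stress.  f_gb is the
    GB phase volume fraction for grain size d and boundary width delta
    (both in nm); stresses in MPa.  All values are illustrative."""
    f_gb = 1.0 - ((d - delta) / d) ** 3       # GB phase volume fraction
    sigma_gi = sigma0 + k_hp / d ** 0.5       # Hall-Petch in the interior
    return (1.0 - f_gb) * sigma_gi + f_gb * sigma_gb

# Coarse grains: Hall-Petch strengthening; few-nm grains: softening,
# because the weak GB phase dominates the volume fraction.
stresses = {d: flow_stress(d) for d in (2.0, 5.0, 20.0, 100.0, 1000.0)}
```

The transition from Hall-Petch to inverse Hall-Petch behavior that the full model reproduces emerges here simply from f_gb growing toward 1 as the grain size approaches the boundary width; the gradient terms of the paper additionally let the strain field become non-uniform.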

  4. Inhalation pressure distributions for medical gas mixtures calculated in an infant airway morphology model. (United States)

    Gouinaud, Laure; Katz, Ira; Martin, Andrew; Hazebroucq, Jean; Texereau, Joëlle; Caillibotte, Georges


    A numerical pressure loss model previously used for adult human airways has been modified to simulate the inhalation pressure distribution in a healthy 9-month-old infant lung morphology model. Pressure distributions are calculated for air as well as helium and xenon mixtures with oxygen to investigate the effects of gas density and viscosity variations for this age group. The results indicate that there are significant pressure losses in infant extrathoracic airways due to inertial effects, leading to much higher pressures to drive nominal flows in the infant airway model than in an adult airway model. For example, the pressure drop through the nasopharynx model of the infant is much greater than that for the nasopharynx model of the adult: the adult-versus-infant pressure differences are 0.08 versus 0.4 cm H2O, 0.16 versus 1.9 cm H2O, and 0.4 versus 7.7 cm H2O when breathing helium-oxygen (78/22%), nitrogen-oxygen (78/22%) and xenon-oxygen (60/40%), respectively. Within the healthy lung, viscous losses are of the same order for the three gas mixtures, so the differences in pressure distribution are relatively small.
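The density contrast that drives those inertial losses can be checked with the ideal-gas law, since minor (inertial) losses scale with the dynamic pressure rho*v²/2; temperature and pressure below are assumed ambient values:

```python
R = 8.314  # J/(mol K)

def mixture_density(x, M, T=293.15, P=101325.0):
    """Ideal-gas density of a gas mixture: rho = P * sum_i x_i * M_i / (R*T),
    with mole fractions x and molar masses M in kg/mol."""
    M_mix = sum(xi * Mi for xi, Mi in zip(x, M))
    return P * M_mix / (R * T)

# The three mixtures from the study (molar masses: He 4, N2 28, O2 32,
# Xe 131.3 g/mol); inertial losses scale roughly with these densities:
rho_heo2 = mixture_density([0.78, 0.22], [4.00e-3, 32.0e-3])   # heliox
rho_n2o2 = mixture_density([0.78, 0.22], [28.0e-3, 32.0e-3])   # air-like
rho_xeo2 = mixture_density([0.60, 0.40], [131.3e-3, 32.0e-3])  # Xe-O2
```

The roughly ninefold density spread between heliox and the xenon mixture is consistent with the widening gap between the driving pressures reported above, while viscosities of the three mixtures differ far less, matching the similar viscous losses within the lung.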

  5. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks (United States)

    Vakanski, A; Ferguson, JM; Lee, S


    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions for an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach
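The consistency metric described in the Results is the log-likelihood of a motion sample under the network's Gaussian mixture output; a toy 1D sketch with invented mixture parameters (in practice the network emits one mixture per time step of the high-dimensional trajectory):

```python
import math

def mdn_nll(y, pi, mu, sigma):
    """Negative log-likelihood of a target y under a mixture density
    network's output: p(y) = sum_k pi_k * N(y; mu_k, sigma_k^2).
    The weights pi must sum to 1; a larger NLL means the observed
    motion sample is less consistent with the reference model."""
    p = sum(pk * math.exp(-(y - mk) ** 2 / (2 * sk ** 2))
            / (sk * math.sqrt(2 * math.pi))
            for pk, mk, sk in zip(pi, mu, sigma))
    return -math.log(p)

# Invented mixture parameters for one scalar trajectory coordinate:
pi, mu, sigma = [0.6, 0.4], [0.0, 2.0], [0.5, 0.8]
in_dist = mdn_nll(0.1, pi, mu, sigma)   # near a mixture mode: low NLL
outlier = mdn_nll(6.0, pi, mu, sigma)   # far from both modes: high NLL
```

Averaging such per-sample log-likelihoods over a recorded exercise yields the mean log-likelihood score used to compare a patient's performance against the reference dataset.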

  6. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    Energy Technology Data Exchange (ETDEWEB)

    Finne, E.F. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway) and University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway)]; Cooper, G.A. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Koop, B.F. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Hylland, K. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway); University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway); Tollefsen, K.E. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway)


    As more salmon gene expression data have become available, the cDNA microarray platform has emerged as an appealing alternative for ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single-chemical and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNase treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real-time PCR. More precisely, the responses of biomarker genes such as cytochrome P450 1A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gamma-glutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure, validated the use of microarrays applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for the single-chemical exposures, revealed combined effects that were not predicted by the results for individual chemicals alone. In general, exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single-chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as

  7. Explaining slow convergence of EM in low noise linear mixtures


    Petersen, Kaare Brandt; Winther, Ole


    This report investigates the convergence properties of the EM algorithm for linear mixture models. Since the linear mixture model is quite general, the analysis is relevant to a wide range of models that are, to varying degrees, special cases of one another: Independent Component Analysis (ICA), probabilistic PCA, Factor Analysis (FA), Independent Factor Analysis (IFA) and Mean Field ICA.
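The flavour of the algorithm under study can be sketched with the closed-form EM updates for probabilistic PCA, one of the linear mixture models the report covers. This is an illustrative sketch, not the report's code; the data-generating parameters (`w_true`, the 0.1 noise scale) are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def ppca_em(X, q, n_iter=500):
    """EM updates for probabilistic PCA, a simple linear mixture model."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                       # d x d sample covariance
    W = rng.normal(size=(d, q))             # factor loadings
    sigma2 = 1.0                            # isotropic noise variance
    for _ in range(n_iter):
        Minv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        # combined E/M-step updates in closed form (Tipping & Bishop)
        W_new = S @ W @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ S @ W)
        sigma2 = np.trace(S - S @ W @ Minv @ W_new.T) / d
        W = W_new
    return W, sigma2

# data from a one-factor model with small observation noise
w_true = np.array([1.0, 0.8, -0.6, 0.4, 0.2])
Z = rng.normal(size=(2000, 1))
X = Z @ w_true[None, :] + 0.1 * rng.normal(size=(2000, 5))

W_hat, s2_hat = ppca_em(X, q=1)
```

Shrinking the 0.1 noise scale toward zero is exactly the low-noise regime analysed in the report, where some directions of the EM iteration slow down and `n_iter` must grow accordingly.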

  8. A poromechanical model for coal seams saturated with binary mixtures of CH4 and CO2 (United States)

    Nikoosokhan, Saeid; Vandamme, Matthieu; Dangla, Patrick


    Underground coal bed reservoirs naturally contain methane which can be produced. In parallel with the production of this methane, carbon dioxide can be injected, either to enhance the production of methane, or to have this carbon dioxide stored over geological periods of time. As a prerequisite to any simulation of an Enhanced Coal Bed Methane recovery process (ECBM), we need state equations to model the behavior of the seam when cleats are saturated with a miscible mixture of CH4 and CO2. This paper presents a poromechanical model of coal seams exposed to such binary mixtures filling both the cleats in the seam and the porosity of the coal matrix. This model is an extension of a previous work which dealt with a pure fluid. Special care is taken to keep the model thermodynamically consistent. The model is fully calibrated with a mix of experimental data and numerical data from molecular simulations. Predicting variations of porosity or permeability requires only calibration based on swelling data. With the calibrated state equations, we predict numerically how porosity, permeability, and adsorbed amounts of fluid vary in a representative volume element of coal seam in isochoric or oedometric conditions, as a function of the pressure and of the composition of the fluid in the cleats.

  9. Generalized Observables, Bell's Inequalities and Mixtures in the ESR Model for QM

    CERN Document Server

    Garola, Claudio


    The extended semantic realism (ESR) model proposes a new theoretical perspective which embodies the mathematical formalism of standard (Hilbert space) quantum mechanics (QM) into a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide in this review an overall view of the present status of our research on this topic. We obtain, in a new and shorter way, a mathematical representation of the generalized observables introduced by the ESR model and a generalization of the projection postulate of elementary QM. Based on these results we prove that the Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality, a modified BCHSH inequality and quantum predictions hold together in the ESR model because they refer to different parts of the picture of the physical world supplied by the model. Then we show that a new mathematical representation of mixtures must be introduced in the ESR model which does not coincide with the standard representation in QM and avoids some deep p...

  10. Heavy metals (Pb, Cd, As and MeHg) as risk factors for cognitive dysfunction: A general review of metal mixture mechanism in brain. (United States)

    Karri, Venkatanaidu; Schuhmacher, Marta; Kumar, Vikas


    Human exposure to toxic heavy metals is a global challenge. Concurrent exposure to heavy metals such as lead (Pb), cadmium (Cd), arsenic (As) and methylmercury (MeHg) is particularly important due to their long-lasting effects on the brain. The exact toxicological mechanisms invoked by exposure to mixtures of the metals Pb, Cd, As and MeHg are still unclear; however, they share many common pathways for causing cognitive dysfunction. The combination of metals may produce additive/synergistic effects due to their common binding affinity with the NMDA receptor (Pb, As, MeHg), the Na(+)/K(+)-ATPase pump (Cd, MeHg), biological Ca(2+) (Pb, Cd, MeHg), and the Glu neurotransmitter (Pb, MeHg), which can lead to an imbalance between the pro-oxidant elements (ROS) and the antioxidants (reducing elements). In this process, ROS dominate antioxidant factors such as GPx, GS, GSH, MT-III, catalase, SOD, BDNF, and CREB, finally leading to cognitive dysfunction. The present review gives an account of the current knowledge about the individual metal-induced cognitive dysfunction mechanisms and analyses common Modes of Action (MOAs) of the quaternary metal mixture (Pb, Cd, As, MeHg). This review aims to help advance mixture toxicology and the development of next-generation predictive models (such as PBPK/PD) combining both kinetic and dynamic interactions of metals. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Self-assembly in a model colloidal mixture of dimers and spherical particles (United States)

    Prestipino, Santi; Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo


    We investigate the structure of a dilute mixture of amphiphilic dimers and spherical particles, a model relevant to the problem of encapsulating globular "guest" molecules in a dispersion. Dimers and spheres are taken to be hard particles, with an additional attraction between spheres and the smaller monomers in a dimer. Using Monte Carlo simulation, we document the low-temperature formation of aggregates of guests (clusters) held together by dimers, whose typical size and shape depend on the guest concentration χ. For low χ (less than 10%), most guests are isolated and coated with a layer of dimers. As χ progressively increases, clusters grow in size, becoming more and more elongated and polydisperse; after reaching a shallow maximum for χ ≈ 50%, the size of clusters decreases again upon increasing χ further. In one case only (χ = 50% and moderately low temperature) the mixture relaxed to a fluid of lamellae, suggesting that in this case clusters are metastable with respect to crystal-vapor separation. On heating, clusters shrink until eventually the system becomes homogeneous on all scales. On the other hand, as the mixture is made denser and denser at low temperature, clusters get increasingly larger until a percolating network is formed.

  12. Modelling shallow debris flows of the Coulomb-mixture type over temporally varying topography

    Directory of Open Access Journals (Sweden)

    Y. C. Tai


    We propose a saturated binary mixture model for debris flows of the Coulomb-mixture type over temporally varying topography, where the effects of erosion and deposition are considered. Due to the deposition or erosion processes, the interface between the moving material and the stagnant base is a non-material singular surface. The motion of this singular surface is determined by the mass exchange between the flowing layer and the ground. The ratio of the relative velocity between the two constituents to the velocity of the solid phase is assumed to be small, so that the governing equations can be reduced to a system of the quasi-single-phase type. A shock-capturing numerical scheme is implemented to solve the derived equation system. The deposition shapes of a finite mass sliding down an inclined planar chute are investigated for a range of mixture ratios. The geometric evolution of the deposition is presented, which allows the possibility of mimicking the development of levee deposition.

  13. Separation of deviatoric stress tensors from heterogeneous calcite twin data using a statistical mixture model (United States)

    Yamaji, Atsushi


    It is essential for techniques of paleostress analysis to separate stresses from heterogeneous data (e.g., Tikoff et al., 2013). A statistical mixture model is shown in this paper to be effective for calcite twinning paleopiezometry: given the orientations of twinned e-planes and their gliding directions, the present inverse method based on the mixture model not only determines deviatoric stress tensors, but also estimates the number of tensors that should be read from a data set using the Bayesian information criterion. The present method is based on the fact that mechanical twinning occurs on an e-plane if the resolved shear stress along its gliding direction, τ, is greater than a critical value, τc (e.g., Lacombe, 2010). The orientation data from e-planes correspond to points on a 5-dimensional unit sphere, and a spherical cap on this sphere indicates a deviatoric stress tensor. The twinning condition, τ > τc, is identical to the condition that the points corresponding to the orientation data are distributed upon the spherical cap (Yamaji, 2015a). This means that the paleostress analysis of calcite twins comes down to the problem of fitting a spherical cap to data points on the sphere (Yamaji, 2015b). Given a heterogeneous data set, two or more spherical caps should be fitted to the data points on the sphere. A statistical mixture model is employed for this fitting in the present work. Such a statistical model enables us to evaluate the number of stresses recorded in the data set. The present method was tested with artificial data sets and a natural data set obtained from a Miocene graben in central Japan. For the former type of data sets, the method determined the deviatoric stress tensors that were assumed to generate the data sets. The natural data were inverted to give two stresses that appeared appropriate for the tectonic setting of the area where the data were obtained.
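The component-counting step can be mimicked in a much simpler setting: fitting 1-D Gaussian mixtures of increasing order by EM and picking the order with the lowest Bayesian information criterion. This is a hypothetical toy analogue (plain Gaussians on the line, not spherical caps on a 5-sphere), with invented data:

```python
import numpy as np

def gmm_bic_1d(x, k, n_iter=300):
    """Fit a k-component 1-D Gaussian mixture by EM and return its BIC."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, means and (floored) variances
        nk = np.maximum(r.sum(axis=0), 1e-12)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    loglik = np.log(dens.sum(axis=1)).sum()
    n_params = 3 * k - 1              # k means + k variances + (k-1) weights
    return n_params * np.log(n) - 2.0 * loglik

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
bic = {k: gmm_bic_1d(x, k) for k in (1, 2, 3)}   # lowest BIC wins
```

BIC = p ln n − 2 ln L trades goodness of fit against the p = 3k − 1 free parameters, so on this two-cluster sample the two-component fit is selected even though k = 3 attains a slightly higher likelihood.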

  14. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    Directory of Open Access Journals (Sweden)

    G. Ganbavale


    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model, applicable to aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures, the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~275 to ~400 K. With the goal of improving the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures with the introduction of a new temperature dependence parameterisation. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multicomponent system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~190 to ~440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of

  15. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next, we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
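The core of such a formulation, stripped of the RKHS transformation, is an EM loop over a two-component mixture: a Gaussian on inlier residuals plus a uniform outlier density. A minimal sketch with synthetic residuals (all sizes and densities here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# residual vectors of putative correspondences: inliers near zero,
# outliers uniform on [-2, 2]^2 (density a = 1/16)
inl = rng.normal(0.0, 0.05, size=(80, 2))
out = rng.uniform(-2.0, 2.0, size=(20, 2))
r = np.vstack([inl, out])

gamma, sigma2, a = 0.5, 1.0, 1.0 / 16.0   # mixing weight, variance, outlier density
for _ in range(50):
    # E-step: posterior probability that each match is an inlier
    g = np.exp(-(r ** 2).sum(1) / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
    p = gamma * g / (gamma * g + (1.0 - gamma) * a)
    # M-step: update mixing weight and inlier variance
    gamma = p.mean()
    sigma2 = (p * (r ** 2).sum(1)).sum() / (2.0 * p.sum())
inliers = p > 0.5
```

In the paper's full model the E-step additionally re-estimates the spatial transformation in the RKHS; here the transformation is fixed at identity, so only the mixture bookkeeping remains.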

  16. Growth mixture modeling as an exploratory analysis tool in longitudinal quantitative trait loci analysis. (United States)

    Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J


    We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.

  17. Multigrid Nonlocal Gaussian Mixture Model for Segmentation of Brain Tissues in Magnetic Resonance Images. (United States)

    Chen, Yunjie; Zhan, Tianming; Zhang, Ji; Wang, Hongyuan


    We propose a novel segmentation method based on regional and nonlocal information to overcome the impact of image intensity inhomogeneities and noise in human brain magnetic resonance images. By taking the spatial distribution of different tissues in brain images into consideration, our method does not need pre-estimation or pre-correction procedures for intensity inhomogeneities and noise. A nonlocal-information-based Gaussian mixture model (NGMM) is proposed to reduce the effect of noise. To reduce the effect of intensity inhomogeneity, the multigrid nonlocal Gaussian mixture model (MNGMM) is proposed to segment brain MR images in each nonoverlapping multigrid generated using a new multigrid generation method. Therefore, the proposed model can simultaneously overcome the impact of noise and intensity inhomogeneity and automatically classify 2D and 3D MR data into white matter, gray matter, and cerebrospinal fluid tissues. To maintain the statistical reliability and spatial continuity of the segmentation, a fusion strategy is adopted to integrate the clustering results from different grids. Experiments on synthetic and clinical brain MR images demonstrate the superior performance of the proposed model compared with several state-of-the-art algorithms.

  18. Online estimation of B-spline mixture models from TOF-PET list-mode data

    Energy Technology Data Exchange (ETDEWEB)

    Schretter, Colas; Kobbelt, Leif [RWTH Aachen Univ. (Germany). Computer Graphics Group; Sun, Jianyong [Nottingham Univ. (United Kingdom). Intelligent Modelling and Analysis Research Group


    In emission tomography, images are usually represented by regular grids of voxels or by overlapping smooth image elements (blobs). A few other image models have been proposed, such as tetrahedral meshes or point clouds adapted to an anatomical image. This work proposes a practical sparse and continuous image model inspired by the field of parametric density estimation for Gaussian mixture models. The position, size, aspect ratio and orientation of each image element are optimized, as well as its weight, with a very fast online estimation method. Furthermore, the number of mixture components, and hence the image resolution, is locally adapted according to the available data. The system model is represented in the same basis as the image elements and captures time-of-flight and positron range effects in an exact way. Computations use apodized B-spline approximations of Gaussians and simple closed-form analytical expressions without any sampling or interpolation. In consequence, the reconstructed image never suffers from spurious aliasing artifacts. Noiseless images of the XCAT brain phantom were reconstructed from simulated data. (orig.)

  19. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.


    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of each pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of one sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results can be obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results for simulated and real SAR images.

  20. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data (United States)

    Lu, Yi


    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  1. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang


    X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high-resolution continuous cross-section images and the meso-structure. Following the theory of three-dimensional (3D) reconstruction, the 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of the voxel positions and the relationship between pixel elements and nodes. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distributions in the asphalt mixture and verify the rationality of the 3D model.

  2. Decomposition driven interface evolution for layers of binary mixtures: I. Model derivation and stratified base states

    CERN Document Server

    Thiele, Uwe; Frastia, Lubor


    A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.

  3. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu


    A mixed traffic flow feature is present on urban arterials in China due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow point of view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
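The role of the truncated speed distribution can be illustrated by sampling speeds for a hypothetical mixed platoon and propagating them over a road section; the spread of arrival times is the platoon dispersion the model describes. All numbers (speeds, section length, fleet shares) are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(mu, sigma, lo, hi, n):
    """Rejection-sample n speeds from a Gaussian truncated to [lo, hi]."""
    out = []
    while len(out) < n:
        s = rng.normal(mu, sigma, size=n)
        out.extend(s[(s >= lo) & (s <= hi)])
    return np.array(out[:n])

L = 500.0                                            # section length, m
v = np.concatenate([
    truncated_normal(14.0, 2.0, 8.0, 20.0, 700),     # cars, m/s
    truncated_normal(10.0, 1.5, 6.0, 15.0, 300),     # buses, m/s
])
t = L / v                                            # travel times, s
# histogram of arrivals downstream = dispersed platoon profile
arrivals, edges = np.histogram(t, bins=20)
```

Buses travel more slowly, so the overall travel-time density is a two-component mixture whose heavier tail is what a single untruncated Gaussian speed model misses.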

  4. Bayesian sensitivity analysis of incomplete data: bridging pattern-mixture and selection models. (United States)

    Kaciroti, Niko A; Raghunathan, Trivellore


    Pattern-mixture models (PMM) and selection models (SM) are alternative approaches for statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations to identify PMM for different types of outcomes with distributions in the exponential family; then we translate these to their equivalent SM approach. This provides a unified framework for performing sensitivity analysis under either setting. These new parameterizations are transparent, easy to use, and provide a dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences using informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling.

  5. Bayesian Estimation of Categorical Dynamic Factor Models (United States)

    Zhang, Zhiyong; Nesselroade, John R.


    Dynamic factor models have been used to analyze continuous time series behavioral data. We extend 2 main dynamic factor model variations--the direct autoregressive factor score (DAFS) model and the white noise factor score (WNFS) model--to categorical DAFS and WNFS models in the framework of the underlying variable method and illustrate them with…

  6. Generation of a mixture model ground-motion prediction equation for Northern Chile (United States)

    Haendel, A.; Kuehn, N. M.; Scherbaum, F.


    In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path- and site-related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree of belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) were subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. The combination of different models into a mixture model (which is learned from

  7. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)


    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show that the proposed method can achieve high classification accuracy with a small amount of labeled data. However, to reach the same accuracy, supervised classification methods such as Support Vector Machines, Object-Oriented Classification, etc., would have to be provided with much more labeled data.
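The classification rule described, one generative model per class with assignment by maximum likelihood, can be sketched with a single Gaussian per class (i.e., a one-component GMM) fitted to labeled samples only; the paper's method additionally uses several components per class and exploits the unlabeled data. The features and class locations below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# labeled training samples for two classes in a 2-D feature space
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(100, 2))

def fit_gauss(X):
    """ML estimate of a single Gaussian (one-component GMM) per class."""
    mu = X.mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])   # ridge for stability
    return mu, cov

def log_pdf(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet
                   + len(x) * np.log(2.0 * np.pi))

params = [fit_gauss(X0), fit_gauss(X1)]

def classify(x):
    """Assign x to the class whose model gives the highest likelihood."""
    return int(np.argmax([log_pdf(x, mu, cov) for mu, cov in params]))
```

Adding more Gaussian components per class, and re-estimating them with EM over the unlabeled pixels, turns this supervised baseline into the semi-supervised scheme the abstract describes.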

  8. A solid-fluid mixture model allowing for solid dilatation under external pressure

    CERN Document Server

    Sciarra, Giulio; Hutter, Kolumban


    A sponge subjected to an increase of the outside fluid pressure expands its volume but nearly maintains its true density, thus giving way to an increase of the interstitial volume. This behaviour, not yet properly described by solid-fluid mixture theories, is studied here by using the Principle of Virtual Power with the simplest dependence of the free energy on the partial apparent densities of the solid and the fluid. The model is capable of accounting for the above-mentioned dilatational behaviour, but in order to isolate its essential features more clearly we compromise on the other aspects of deformation.

  9. Heteroscedastic nonlinear regression models based on scale mixtures of skew-normal distributions. (United States)

    Lachos, Victor H; Bandyopadhyay, Dipankar; Garay, Aldo M


    An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. We derive a simple EM-type algorithm for iteratively computing maximum likelihood (ML) estimates, and the observed information matrix is derived analytically. Simulation studies demonstrate the robustness of this flexible class against outlying and influential observations, as well as nice asymptotic properties of the proposed EM-type ML estimates. Finally, the methodology is illustrated using ultrasonic calibration data.

  10. Linear Mixture Models and Partial Unmixing in Multi- and Hyperspectral Image Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg


    As a supplement or an alternative to classification of hyperspectral image data, the linear mixture model is considered in order to obtain estimates of abundance of each class or end-member in pixels with mixed membership. Full unmixing and the partial unmixing methods orthogonal subspace projection (OSP), constrained energy minimization (CEM) and an eigenvalue formulation alternative are dealt with. The solution to the eigenvalue formulation alternative proves to be identical to the CEM solution. The matrix inversion involved in CEM can be avoided by working on (a subset of) orthogonally
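The CEM part is compact enough to sketch directly: the filter w = R⁻¹d / (dᵀR⁻¹d) minimizes the average output energy subject to a unit response to the desired end-member signature d. The spectra and abundances below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# two invented end-member spectra over 5 bands
d_target = np.array([1.0, 0.2, 0.8, 0.1, 0.5])
d_back = np.array([0.1, 0.9, 0.3, 0.7, 0.2])

# pixels as linear mixtures of the two end-members plus noise
abund = rng.uniform(0.0, 1.0, size=(500, 1))
X = abund * d_target + (1.0 - abund) * d_back \
    + rng.normal(0.0, 0.01, size=(500, 5))

R = X.T @ X / len(X)               # sample autocorrelation matrix
w = np.linalg.solve(R, d_target)   # R^{-1} d
w /= d_target @ w                  # enforce the constraint w^T d = 1

scores = X @ w                     # per-pixel CEM output
```

Because w suppresses everything uncorrelated with d while holding wᵀd = 1, `scores` rises and falls with the true abundance of the target end-member.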

  11. An effects addition model based on bioaccumulation of metals from exposure to mixtures of metals can predict chronic mortality in the aquatic invertebrate Hyalella azteca. (United States)

    Norwood, Warren P; Borgmann, Uwe; Dixon, D George


    Chronic toxicity tests of mixtures of 9 metals and 1 metalloid (As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Tl, and Zn) at equitoxic concentrations over an increasing concentration range were conducted with the epibenthic, freshwater amphipod Hyalella azteca. The authors conducted 28-d, water-only tests. The bioaccumulation trends changed for 8 of the elements in exposures to mixtures of the metals compared with individual metal exposures. The bioaccumulation of Co and Tl were affected the most. These changes may be due to interactions between all the metals as well as interactions with waterborne ligands. A metal effects addition model (MEAM) is proposed as a more accurate method to assess the impact of mixtures of metals and to predict chronic mortality. The MEAM uses background-corrected body concentration to predict toxicity. This is important because the chemical characteristics of different waters can greatly alter the bioavailability and bioaccumulation of metals, and interactions among metals for binding at the site of action within the organism can affect body concentration. The MEAM accurately predicted toxicity in exposures to mixtures of metals, and predicted results were within a factor of 1.1 of the observed data, using 24-h depurated body concentrations. The traditional concentration addition model overestimated toxicity by a factor of 2.7.

  12. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture (United States)

    Stroev, N. E.; Iosilevskiy, I. L.


    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of NCPT in an improved version of the BIM(∼) model for the same mixture on a background of a non-ideal electronic Fermi gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equation of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions (0 < X < 1) in the BIM(∼) model. A similar distillation was obtained in a variant of NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  13. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    CERN Document Server

    Stroev, N E


    The non-congruent gas-liquid phase transition (NCPT) has been studied in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The subject of the present study is the features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of a non-ideal electronic Fermi-gas, and its comparison with the previous calculations. Analytical fits for Coulomb corrections to the equation of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line (CPL) were calculated over the entire range of proportions of mixed ions (0 < x < 1) in the BIM(∼) model. A similar distillation was obtained in the variant of the NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  14. One-dimensional modelling of DBDs in Ne-Xe mixtures for excimer lamps (United States)

    Belasri, A.; Khodja, K.; Bendella, S.; Harrache, Z.


    Dielectric barrier discharges (DBDs) are a promising technology for high-intensity sources of specific ultraviolet (UV) and vacuum ultraviolet (VUV) radiation. In this work, the microdischarge dynamics in DBDs in Ne-Xe mixtures under conditions close to those of excimer lamp operation has been investigated. The computer model, which includes the cathode fall, the positive column and the dielectric, is composed of two coupled sub-models. The first submodel describes the electrical properties of the discharge and is based on a fluid, two-moment description of electron and ion transport coupled with Poisson's equation during the discharge pulse. The second submodel, based on three main modules (a plasma chemistry module, a circuit module and a Boltzmann equation module), with source terms deduced from the electric model, describes the time variations of charged and excited species concentrations and the UV photon emission. The present description allows a good resolution near the sheath at high pressure and correctly predicts the waveform of the discharge behaviour. The effects of operating voltage, dielectric capacitance, gas mixture composition, gas pressure, as well as secondary electron emission by ions at the cathode on the discharge characteristics and the 173 nm photon generation have been investigated and discussed.

  15. Modeling Intensive Longitudinal Data With Mixtures of Nonparametric Trajectories and Time-Varying Effects (United States)

    Dziak, John J.; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P.


    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, inter-individual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semi-parametric regression modeling, in order to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. PMID:26390169
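
    The core TVEM idea, regression coefficients that are smooth functions of time, can be sketched with a plain polynomial basis in place of the penalized splines used by the authors; the simulated data, basis degree, and coefficient shapes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate intensive longitudinal data: the effect of covariate x on y
# changes smoothly over time (true time-varying effect beta1(t) = 1 - 2t).
n = 2000
t = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = 0.5 + 0.3 * np.sin(2 * np.pi * t) + (1 - 2 * t) * x + rng.normal(scale=0.1, size=n)

# TVEM-style fit: expand both the intercept beta0(t) and the slope beta1(t)
# in a polynomial basis of time (a stand-in for the spline basis in the paper).
deg = 3
T = np.vander(t, deg + 1, increasing=True)       # basis columns for beta0(t)
D = np.hstack([T, T * x[:, None]])               # plus interaction basis for beta1(t)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

b1 = coef[deg + 1:]                              # coefficients of beta1(t)
beta1_hat = np.vander(np.array([0.25, 0.75]), deg + 1, increasing=True) @ b1
print(beta1_hat)   # should be close to 1 - 2t at t = 0.25 and 0.75, i.e. ~0.5 and ~-0.5
```

A mixture version (MixTVEM) would additionally assign each individual a latent class with its own set of basis coefficients, estimated by EM.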

  16. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo


    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes’ rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) a standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
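
    The two-part update described above (a Kalman filter-like update of the mixture components and a particle filter-like update of their weights) can be sketched in one dimension; the bimodal ensemble, observation value, and noise variance below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Forecast ensemble drawn from a bimodal prior (stand-in for model forecasts).
particles = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 0.7, 500)])

# Build a GM representation of the forecast distribution from the particles.
gm = GaussianMixture(n_components=2, random_state=0).fit(particles[:, None])
mu = gm.means_.ravel()
var = gm.covariances_.ravel()
w = gm.weights_.copy()

# Scalar observation y = x + noise, with observation-noise variance r.
y, r = 2.5, 0.4

# (i) Kalman-like update of each component; (ii) particle-filter-like
# reweighting by each component's marginal likelihood of y.
k = var / (var + r)                     # per-component Kalman gain
mu_post = mu + k * (y - mu)
var_post = (1 - k) * var
lik = np.exp(-0.5 * (y - mu) ** 2 / (var + r)) / np.sqrt(2 * np.pi * (var + r))
w_post = w * lik
w_post /= w_post.sum()

print(mu_post, w_post)  # posterior mass should shift toward the mode near y
```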

  17. Effect of many-body interactions on the bulk and interfacial phase behavior of a model colloid-polymer mixture. (United States)

    Dijkstra, Marjolein; van Roij, René; Roth, Roland; Fortini, Andrea


    We study a model suspension of sterically stabilized colloidal particles and nonadsorbing ideal polymer coils, both in bulk and adsorbed against a planar hard wall. By integrating out the degrees of freedom of the polymer coils, we derive a formal expression for the effective one-component Hamiltonian of the colloids. We employ an efficient Monte Carlo simulation scheme for this mixture based on the exact effective colloid Hamiltonian; i.e., it incorporates all many-body interactions. The many-body character of the polymer-mediated effective interactions between the colloids yields bulk phase behavior and adsorption phenomena that differ substantially from those found for pairwise simple fluids. We determine the phase behavior for size ratios q=sigma(p)/sigma(c)=1, 0.6, and 0.1, where sigma(c) and sigma(p) denote the diameters of the colloids and polymer coils, respectively. For q=1 and 0.6, we find both a fluid-solid and a stable colloidal gas-liquid transition with an anomalously large bulk liquid regime caused by the many-body interactions. We compare the phase diagrams obtained from simulations with the results of the free-volume approach and with direct simulations of the true binary mixture. Although we did not simulate the polymer coils explicitly, we are able to obtain the three partial structure factors and radial distribution functions. We compare our results with those obtained from density functional theory and the Percus-Yevick approximation. We find good agreement between all results for the structure. We also study the mixture in contact with a single hard wall for q=1. Upon approach of the gas-liquid binodal, we find far from the triple point, three layering transitions in the partial wetting regime.

  18. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel


    ...relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  19. Modeling competitive adsorption of mixtures of volatile organic compounds in a fixed-bed of beaded activated carbon. (United States)

    Tefera, Dereje Tamiru; Hashisho, Zaher; Philips, John H; Anderson, James E; Nichols, Mark


    A two-dimensional mathematical model was developed to study competitive adsorption of n-component mixtures in a fixed-bed adsorber. The model consists of an isotherm equation, to predict the adsorption equilibria of an n-component volatile organic compound (VOC) mixture from single-component isotherm data, and a dynamic adsorption model, comprising the macroscopic mass, energy and momentum conservation equations, to simulate the competitive adsorption of the n components onto a fixed bed of adsorbent. The model was validated with experimentally measured data for competitive adsorption of binary and eight-component VOC mixtures onto beaded activated carbon (BAC). The mean relative absolute error (MRAE) was used to compare the modeled and measured breakthrough profiles as well as the amounts of adsorbates adsorbed. For the binary and eight-component mixtures, the MRAE of the breakthrough profiles was 13% and 12%, respectively, whereas the MRAE of the adsorbed amounts was 1% and 2%, respectively. These data show that the model provides accurate predictions of competitive adsorption of multicomponent VOC mixtures and that the competitive adsorption isotherm equation is able to accurately predict equilibrium adsorption of VOC mixtures.
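
    A minimal sketch of the MRAE comparison metric used above, assuming the common definition mean(|modeled - measured| / measured) expressed in percent; the paper's exact normalization may differ, and the toy profile values are invented.

```python
import numpy as np

def mrae(measured, modeled):
    """Mean relative absolute error (%) between measured and modeled series.

    A plausible reading of the metric named in the abstract; the paper's
    exact normalization may differ.
    """
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 100.0 * np.mean(np.abs(modeled - measured) / np.abs(measured))

# Toy breakthrough profile (outlet/inlet concentration ratio vs. time)
meas = np.array([0.05, 0.20, 0.55, 0.85, 0.98])
model = np.array([0.06, 0.18, 0.60, 0.80, 0.99])
print(round(mrae(meas, model), 1))   # → 9.2
```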

  20. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach. (United States)

    Rafal Podlaski; Francis Roesch


    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  1. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase the accuracy and precision and reduce the cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed...
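
    The hierarchical N-mixture structure referred to above can be sketched for a single detection method: repeated counts at each site are binomial given the local abundance N, and N is marginalized over a Poisson prior. The counts, lam, p, and truncation bound K below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import binom, poisson

def nmix_loglik(counts, lam, p, K=200):
    """Log-likelihood of repeated-count data under a basic N-mixture model.

    counts: array of shape (n_sites, n_visits); lam: Poisson mean abundance;
    p: per-individual detection probability. Local abundance N is marginalized
    over 0..K. (Illustrative single-method version, not the paper's joint
    two-method model.)
    """
    N = np.arange(K + 1)
    log_prior = poisson.logpmf(N, lam)
    ll = 0.0
    for site in counts:
        # P(y_visit | N) for each candidate N, multiplied across visits
        log_obs = binom.logpmf(site[:, None], N[None, :], p).sum(axis=0)
        ll += np.log(np.exp(log_prior + log_obs).sum())
    return ll

counts = np.array([[2, 3, 2], [0, 1, 0], [4, 5, 3]])   # toy site-by-visit counts
print(nmix_loglik(counts, lam=4.0, p=0.5))
```

Maximizing this likelihood over lam (and covariate-dependent versions of it) is what drives the variable selection discussed in the abstract.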

  2. Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models.

    Directory of Open Access Journals (Sweden)

    Anna Magdalena Vögele

    Full Text Available This paper introduces a new method for data analysis of animal muscle activation during locomotion. It is based on fitting Gaussian mixture models (GMMs) to surface EMG (sEMG) data. This approach enables researchers to isolate parts of the overall muscle activation within locomotion EMG data. Furthermore, it provides new opportunities for analysis and exploration of sEMG data by using the resulting Gaussian modes as atomic building blocks for a hierarchical clustering. In our experiments, composite peak models representing the general activation pattern per sensor location (one sensor on the long back muscle, three sensors on the gluteus muscle on each body side) were identified per individual for all 14 horses during walk and trot in the present study. We thereby show the applicability of the method to identify composite peak models, which describe activation of different muscles throughout cycles of locomotion.
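
    A "composite peak model" in the sense above can be sketched as a sum of Gaussian modes fitted to an activation trace over a normalized stride cycle; the two-peak signal and all parameter values are synthetic stand-ins, not the horses' sEMG data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Composite peak model: muscle activation over one stride cycle modeled as a
# sum of two Gaussian modes (a toy stand-in for the paper's GMM-based peaks).
def two_peaks(t, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

t = np.linspace(0, 1, 200)                     # normalized stride cycle
rng = np.random.default_rng(2)
emg = two_peaks(t, 1.0, 0.25, 0.05, 0.6, 0.70, 0.08) + rng.normal(0, 0.02, t.size)

p0 = [0.8, 0.2, 0.1, 0.5, 0.75, 0.1]          # rough initial guess
popt, _ = curve_fit(two_peaks, t, emg, p0=p0)
print(popt[[1, 4]])   # recovered peak positions, near 0.25 and 0.70
```

The fitted mode parameters (position, width, amplitude) are exactly the kind of "atomic building blocks" the abstract feeds into hierarchical clustering.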

  3. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)


    ...relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  4. A two-fluid model for reactive dilute solid-liquid mixtures with phase changes (United States)

    Reis, Martina Costa; Wang, Yongqi


    Based on the Eulerian spatial averaging theory and the Müller-Liu entropy principle, a two-fluid model for reactive dilute solid-liquid mixtures is presented. First, some averaging theorems and properties of averaged quantities are discussed; then, averaged balance equations including interfacial source terms are postulated. Moreover, constitutive equations are proposed for a reactive dilute solid-liquid mixture, where the formation of the solid phase is due to a precipitation chemical reaction that involves ions dissolved in the liquid phase. To this end, principles of constitutive theory are used to propose linearized constitutive equations that account for diffusion, heat conduction, viscous and drag effects, and interfacial deformations. A particularity of the model is that the mass interfacial source term is regarded as an independent constitutive variable. The obtained results show that the inclusion of the mass interfacial source term in the set of independent constitutive variables makes it easy to describe the phase changes associated with precipitation chemical reactions.

  5. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model. (United States)

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng


    Nucleosomes and the free linker DNA between them assemble the chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for the nucleosomes with complex configurations where overlaps and noise exist. Meanwhile, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose a novel approach DPNuc for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need of prior knowledge about nucleosomes. Compared with three existing methods, our approach can provide more detailed information of the detected nucleosomes and can more reasonably reveal the real configurations of the chromosomes; especially, our approach performs better in the complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, it is shown that our approach achieves a better performance in identifying nucleosome positions and gets a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes.
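
    The key property claimed above, inferring mixture structure without fixing the number of components, can be sketched with a Dirichlet-process Gaussian mixture. This sketch uses scikit-learn's variational BayesianGaussianMixture rather than the paper's MCMC sampler, and the read positions are synthetic.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)

# Toy "nucleosome center" read positions: three overlapping clusters; the
# number of clusters is NOT given to the model.
reads = np.concatenate([rng.normal(100, 15, 300),
                        rng.normal(250, 20, 300),
                        rng.normal(330, 15, 300)])[:, None]

dpgmm = BayesianGaussianMixture(
    n_components=10,                      # upper bound, not the true number
    weight_concentration_prior_type="dirichlet_process",
    random_state=0, max_iter=500).fit(reads)

# Components with non-negligible weight play the role of detected positions.
active = dpgmm.weights_ > 0.05
print(np.sort(dpgmm.means_[active].ravel()))
```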

  6. Modelling of a biofiltration process of volatile organic compound mixtures in a biofilter

    Directory of Open Access Journals (Sweden)

    Rasa Vaiškūnaitė


    Full Text Available Currently, different methods for air clean-up from chemical pollutants are applied worldwide: adsorption, absorption, and thermal and catalytic oxidation. One of the most promising methods is biological air cleaning. The aim of this study was to test the performance of a developed biofilter with a packing material of activated pine bark for biological air cleaning and to mathematically model the biofiltration processes. Comparative analysis of the modelling results for individual pollutants (butyl acetate, butanol and xylene) showed the strongest dependence of the efficiency of xylene removal from the air on the amount and ratio of other substances (from 20% to 70%). The removal of pollutants that are easier to decompose biologically (butanol and butyl acetate) was found to be influenced to a lesser extent by the amount and ratio (%) of other components. The results also showed that the efficiency of butyl acetate removal mostly depended on the ratio of other substances in the mixture (from 15% to 100%), whereas the efficiency of butyl acetate and xylene removal mostly depended on the amount of other substances in the mixture (from 20% to 100%). With the parameters of the biofilter (height of packing material, incoming air flow velocity) and the pollutants to be removed known, a mathematical expression for the filter efficiency was found, which allows theoretical calculation and selection of the most appropriate parameters of the device, as well as achieving maximum air-cleaning efficiency.

  7. Modeling and analysis of time-dependent processes in a chemically reactive mixture (United States)

    Ramos, M. P.; Ribeiro, C.; Soares, A. J.


    In this paper, we study the propagation of sound waves and the dynamics of local wave disturbances induced by spontaneous internal fluctuations in a reactive mixture. We consider a non-diffusive, non-heat conducting and non-viscous mixture described by an Eulerian set of evolution equations. The model is derived from the kinetic theory in a hydrodynamic regime of a fast chemical reaction. The reactive source terms are explicitly computed from the kinetic theory and are built in the model in a proper way. For both time-dependent problems, we first derive the appropriate dispersion relation, which retains the main effects of the chemical process, and then investigate the influence of the chemical reaction on the properties of interest in the problems studied here. We complete our study by developing a rather detailed analysis using the Hydrogen-Chlorine system as reference. Several numerical computations are included illustrating the behavior of the phase velocity and attenuation coefficient in a low-frequency regime and describing the spectrum of the eigenmodes in the small wavenumber limit.

  8. A two-fluid model for reactive dilute solid-liquid mixtures with phase changes (United States)

    Reis, Martina Costa; Wang, Yongqi


    Based on the Eulerian spatial averaging theory and the Müller-Liu entropy principle, a two-fluid model for reactive dilute solid-liquid mixtures is presented. First, some averaging theorems and properties of averaged quantities are discussed; then, averaged balance equations including interfacial source terms are postulated. Moreover, constitutive equations are proposed for a reactive dilute solid-liquid mixture, where the formation of the solid phase is due to a precipitation chemical reaction that involves ions dissolved in the liquid phase. To this end, principles of constitutive theory are used to propose linearized constitutive equations that account for diffusion, heat conduction, viscous and drag effects, and interfacial deformations. A particularity of the model is that the mass interfacial source term is regarded as an independent constitutive variable. The obtained results show that the inclusion of the mass interfacial source term in the set of independent constitutive variables makes it easy to describe the phase changes associated with precipitation chemical reactions.

  9. Improved partial least squares models for stability-indicating analysis of mebeverine and sulpiride mixtures in pharmaceutical preparation: a comparative study. (United States)

    Darwish, Hany W; Naguib, Ibrahim A


    The performance of partial least squares regression (PLSR) is enhanced in the presented work by three multivariate models: weighted regression PLSR (Weighted-PLSR), genetic algorithm PLSR (GA-PLSR), and wavelet transform PLSR (WT-PLSR). The proposed models were applied to the stability-indicating analysis of mixtures of mebeverine hydrochloride (meb) and sulpiride (sul) in the presence of their reported impurities and degradation products. This paper compares these chemometric methods, presenting the underlying algorithm of each and comparing their analysis results. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. A test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. Leave-one-out (LOO) cross-validation and the bootstrap were applied to select the number of PLS components. The proposed GA-PLSR method was successfully applied to the analysis of raw material (test set recoveries of 101.03% ± 1.068 and 101.47% ± 2.721 for meb and sul, respectively) and pharmaceutical tablets containing meb and sul mixtures (10.10% ± 0.566 and 98.16% ± 1.081 for meb and sul).

  10. Europa's surface composition from near-infrared observations: A comparison of results from linear mixture modeling and radiative transfer modeling (United States)

    Shirley, James H.; Jamieson, Corey S.; Dalton, J. Bradley


    Quantitative estimates of the abundance of surface materials and of water ice particle grain sizes at five widely separated locations on the surface of Europa have been obtained by two independent methods in order to search for possible discrepancies that may be attributed to differences in the methods employed. Results of radiative transfer (RT) compositional modeling (also known as intimate mixture modeling) from two prior studies are here employed without modification. Areal (or "checkerboard") mixture modeling, also known as linear mixture (LM) modeling, was performed to allow direct comparisons. The failure to model scattering processes (whose effects may be strongly nonlinear) in the LM approach is recognized as a potential source of errors. RT modeling accounts for nonlinear spectral responses due to scattering but is subject to other uncertainties. By comparing abundance estimates for H2SO4 · nH2O and water ice, obtained through both methods as applied to identical spectra, we may gain some insight into the importance of "volume scattering" effects for investigations of Europa's surface composition. We find that both methods return similar abundances for each location analyzed; linear correlation coefficients of ≥ 0.98 are found between the derived H2SO4 · nH2O and water ice abundances returned by both methods. We thus find no evidence of a significant influence of volume scattering on the compositional solutions obtained by LM modeling for these locations. Some differences in the results obtained for water ice grain sizes are attributed to the limited selection of candidate materials allowed in the RT investigations.
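
    Linear ("checkerboard") mixture modeling as described above amounts to a nonnegative least-squares fit of endmember spectra to an observed spectrum; the endmember shapes and the 0.65/0.35 split below are synthetic stand-ins, not Europa data.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixture modeling: an observed spectrum is a nonnegative,
# area-weighted sum of endmember spectra. The endmembers here are synthetic
# stand-ins for water ice and hydrated sulfuric acid.
wl = np.linspace(1.0, 2.5, 80)
ice = np.exp(-0.5 * ((wl - 1.5) / 0.10) ** 2) + 0.2
acid = np.exp(-0.5 * ((wl - 1.9) / 0.15) ** 2) + 0.1
E = np.column_stack([ice, acid])            # endmember library

true_frac = np.array([0.65, 0.35])
obs = E @ true_frac + np.random.default_rng(5).normal(0, 0.005, wl.size)

frac, resid = nnls(E, obs)
frac /= frac.sum()                          # normalize to areal fractions
print(frac)   # close to the true 0.65 / 0.35 split
```

The radiative transfer approach replaces this linear forward model with a nonlinear one that accounts for multiple scattering within grains.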

  11. An exact model for predicting tablet and blend content uniformity based on the theory of fluctuations in mixtures. (United States)

    Rane, Sagar S; Hamed, Ehab; Rieschl, Sarah


    Content uniformity (CU) of tablets is a critical property that needs to be well controlled in pharmaceutical products. Methods that predict the CU accurately can greatly help in reducing the development efforts. This article presents a statistical mechanical framework for predicting CU based on first principles at the molecular level. The tablet is modeled as an open system that can be treated as a grand canonical ensemble to calculate fluctuations in the number of granules and thus the CU. Exact analytical solutions to hard sphere mixture systems are applied to derive an expression for the CU and elucidate the different factors that impact CU. The model was tested against literature data and a large set of tablet formulations specifically made and analyzed for CU using a model active pharmaceutical ingredient. The formulations covered the effect of granule size, percentage loading, and tablet weight on the CU. The model is able to predict the mean experimental coefficient of variation (CV) with good success and captures all the elements that impact the CU. The predictions of the model serve as a theoretical lower limit for the mean CV (for infinite batches or tablets) that can be expected during manufacturing assuming the best processing conditions.
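
    The fluctuation argument above can be illustrated with a Monte Carlo stand-in: treat each tablet as a random draw of granules from the blend and compare the simulated content CV with the ideal binomial-counting prediction. The loading, granule count, and the CV formula shown are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(6)

# A tablet as an "open system" sampling granules from the blend: the drug
# content CV then follows from counting fluctuations. Numbers are illustrative.
n_tablets = 20000
drug_frac = 0.05                 # 5% drug loading by granule count
granules_per_tablet = 2000

drug_counts = rng.binomial(granules_per_tablet, drug_frac, n_tablets)
cv_mc = drug_counts.std() / drug_counts.mean() * 100

# Ideal-mixture (binomial counting) prediction: CV = sqrt((1-f)/(N*f)) * 100
cv_theory = np.sqrt((1 - drug_frac) / (granules_per_tablet * drug_frac)) * 100
print(round(cv_mc, 2), round(cv_theory, 2))
```

This counting-statistics limit is the "theoretical lower limit for the mean CV" in the abstract; real blends add excess variance on top of it.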

  12. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  13. Insight into Signal Response of Protein Ions in Native ESI-MS from the Analysis of Model Mixtures of Covalently Linked Protein Oligomers (United States)

    Root, Katharina; Wittwer, Yves; Barylyuk, Konstantin; Anders, Ulrike; Zenobi, Renato


    Native ESI-MS is increasingly used for quantitative analysis of biomolecular interactions. In such analyses, peak intensity ratios measured in mass spectra are treated as abundance ratios of the respective molecules in solution. While signal intensities of similar-size analytes, such as a protein and its complex with a small molecule, can be directly compared, significant distortions of the peak ratio due to unequal signal response of analytes impede the application of this approach for large oligomeric biomolecular complexes. We use a model system based on concatenated maltose binding protein units (MBPn, n = 1, 2, 3) to systematically study the behavior of protein mixtures in ESI-MS. The MBP concatamers differ from each other only by their mass, while the chemical composition and other properties remain identical. We used native ESI-MS to analyze model mixtures of MBP oligomers, including equimolar mixtures of two proteins, as well as binary mixtures containing different fractions of the individual components. Pronounced deviation from a linear dependence of the signal intensity on concentration was observed for all binary mixtures investigated. While equimolar mixtures showed linear signal dependence at low concentrations, distinct ion suppression was observed above 20 μM. We systematically studied factors that are most often used in the literature to explain the origin of suppression effects. Implications of this effect for quantifying protein-protein binding affinity by native ESI-MS are discussed in general and demonstrated for an example of an anti-MBP antibody with its ligand, MBP.

  14. A model for calculating heat transfer coefficient concerning ethanol-water mixtures condensation (United States)

    Wang, J. S.; Yan, J. J.; Hu, S. H.; Yang, Y. S.


    In this research, the author attempts to calculate a heat transfer coefficient (HTC) by combining filmwise theory with the dropwise notion for ethanol-water mixture condensation. A new model, including ethanol concentration, vapor pressure and velocity, is developed by introducing a characteristic coefficient to combine the two theories mentioned above. Calculations under different concentrations, pressures and velocities are compared with experiments. It turns out that the calculated values are in good agreement with the experimental results; the maximal error is within ±30.1%. In addition, the model is applied to related experiments in other literature and the values obtained agree well with the results reported there.

  15. Cosmological models described by a mixture of van der Waals fluid and dark energy

    CERN Document Server

    Kremer, G M


    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as the quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate: (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton; (b) an inflationary period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays; (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure; and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid.

  16. An efficient approach for shadow detection based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    韩延祥; 张志胜; 陈芳; 陈恺


    An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification of moving shadow pixels and foreground object pixels was performed using color invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of chromaticity difference or brightness difference can be modeled as a mixture of two Gaussian density functions. The Gaussian parameters were estimated using the EM algorithm, and the estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (the maximum values are 85.79% and 97.56%, respectively) show that the proposed approach achieves satisfying results with a post-processing step.
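
    The two-component mixture estimation described above can be sketched with a plain EM loop. This is a generic 1-D two-Gaussian fit, not the paper's shadow-specific feature pipeline; the initialisation choices are assumptions:

    ```python
    import numpy as np

    def em_two_gaussians(x, n_iter=200, tol=1e-8):
        """Fit a two-component 1-D Gaussian mixture to samples x via EM.

        Returns (weights, means, variances). Illustrative sketch only; the
        paper's chromaticity/brightness features and constraints are omitted.
        """
        x = np.asarray(x, dtype=float)
        # crude initialisation: quartiles as starting means
        m = np.array([np.percentile(x, 25), np.percentile(x, 75)])
        v = np.array([x.var(), x.var()]) + 1e-6
        w = np.array([0.5, 0.5])
        prev = -np.inf
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each sample
            pdf = np.exp(-0.5 * (x[:, None] - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
            joint = w * pdf
            ll = np.log(joint.sum(axis=1) + 1e-300).sum()
            r = joint / joint.sum(axis=1, keepdims=True)
            # M-step: update weights, means, variances
            nk = r.sum(axis=0)
            w = nk / len(x)
            m = (r * x[:, None]).sum(axis=0) / nk
            v = (r * (x[:, None] - m) ** 2).sum(axis=0) / nk + 1e-6
            if abs(ll - prev) < tol:
                break
            prev = ll
        return w, m, v
    ```

    On well-separated data the loop converges in a handful of iterations; in the paper's setting the two components would play the roles of the shadow and foreground densities.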

  17. A mixture model-based strategy for selecting sets of genes in multiclass response microarray experiments. (United States)

    Broët, Philippe; Lewin, Alex; Richardson, Sylvia; Dalmasso, Cyril; Magdelenat, Henri


    Multiclass response (MCR) experiments are those in which there are more than two classes to be compared. In these experiments, though the null hypothesis is simple, there are typically many patterns of gene expression changes across the different classes, which leads to complex alternatives. In this paper, we propose a new strategy for selecting genes in MCR that is based on a flexible mixture model for the marginal distribution of a modified F-statistic. Using this model, false positive and negative discovery rates can be estimated and combined to produce a rule for selecting a subset of genes. Moreover, the proposed method allows calculation of these rates for any predefined subset of genes. We illustrate the performance of our approach using simulated datasets and a real breast cancer microarray dataset. In the latter study, we investigate predefined subsets of genes and point out interesting differences between three distinct biological pathways.

  18. Pulsed-field capillary electrophoresis: optimizing separation parameters with model mixtures of sulfonated polystyrenes. (United States)

    Sudor, J; Novotny, M V


    The electrophoretic transport of high molecular weight charged solutes, both flexible and stiff polymers, has been studied by capillary electrophoresis under constant-field and pulsed-field conditions. Sulfonated polystyrenes were used as model solutes in different entangled polymer solutions. First, changes of the end-to-end distance vectors of flexible polymers were examined through the mobility/potential-gradient curves. Under pulsed-field conditions, the influence of different pulse shapes, frequencies, and amplitudes of forward and backward pulses on the electrophoretic mobilities of model solutes was studied. Resolution of the mixture components was strongly affected by changes in frequency of both sine-wave and square-wave pulses. The experimental results obtained under pulsed-field conditions are roughly in agreement with the existing theories of electrophoretic transport.

  19. Probabilistic multi-item inventory model with varying mixture shortage cost under restrictions. (United States)

    Fergany, Hala A


    This paper proposes a new general probabilistic multi-item, single-source inventory model with varying mixture shortage cost under two restrictions: one on the expected varying backorder cost and the other on the expected varying lost sales cost. The model is formulated to analyze how the firm can deduce the optimal order quantity and the optimal reorder point for each item to reach the main goal of minimizing the expected total cost. The demand is a random variable and the lead time is constant. The demand during the lead time is a random variable that follows any continuous distribution, for example the normal, exponential, or Chi-square distribution. An application with real data is analyzed, and the goal of minimizing the expected total cost is achieved. Two special cases are deduced.

  20. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua;


    In Bayesian analysis of a statistical model, the predictive distribution is obtained by marginalizing over the parameters with their posterior distributions. Compared to the frequently used point estimate plug-in method, the predictive distribution leads to a more reliable result in calculating...... the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...

  1. Modeling the dispersion phenomenon in batch transfer operations by the theory of structured mixture

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, J.L. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica], e-mail:


    This paper investigates the design of a model based on the theory of structured mixture that retains the character of one-dimensional models from the literature while being able to estimate the mixing volume consistently. Determining the mixing volume in batch transfer operations with this novel approach can clarify which parameters affect its growth, and can even support the optimization of multiproduct pipeline operations in order to reduce the mixing volume. Among other things, the proposed study can generate considerable indirect economic impacts by providing higher quality control of transported goods and enabling more efficient planning of pipeline operations with regard to the implementation of pumping stops, thereby providing operational flexibility and reliability. (author)

  2. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A.E; Caco, Ana I.;


    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane...

  3. Variational Bayesian mixture of experts models and sensitivity analysis for nonlinear dynamical systems (United States)

    Baldacchino, Tara; Cross, Elizabeth J.; Worden, Keith; Rowson, Jennifer


    Most physical systems in reality exhibit a nonlinear relationship between input and output variables. This nonlinearity can manifest itself in terms of piecewise continuous functions or bifurcations, between some or all of the variables. The aims of this paper are two-fold. Firstly, a mixture of experts (MoE) model was trained on different physical systems exhibiting these types of nonlinearities. MoE models separate the input space into homogeneous regions and a different expert is responsible for the different regions. In this paper, the experts were low order polynomial regression models, thus avoiding the need for high-order polynomials. The model was trained within a Bayesian framework using variational Bayes, whereby a novel approach within the MoE literature was used in order to determine the number of experts in the model. Secondly, Bayesian sensitivity analysis (SA) of the systems under investigation was performed using the identified probabilistic MoE model in order to assess how uncertainty in the output can be attributed to uncertainty in the different inputs. The proposed methodology was first tested on a bifurcating Duffing oscillator, and it was then applied to real data sets obtained from the Tamar and Z24 bridges. In all cases, the MoE model was successful in identifying bifurcations and different physical regimes in the data by accurately dividing the input space; including identifying boundaries that were not parallel to coordinate axes.
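
    The gating idea behind an MoE model can be illustrated with a minimal prediction step: a softmax gate partitions the input space and weights the outputs of local polynomial experts. The parameters below are illustrative stand-ins, not the variational-Bayes estimates of the paper:

    ```python
    import numpy as np

    def moe_predict(x, gates, experts):
        """Prediction from a (pre-trained) 1-D mixture-of-experts model.

        gates: (n_experts, 2) affine gating weights [slope, intercept];
        experts: list of polynomial coefficient vectors (np.polyval order).
        All parameters are hypothetical, for illustration only.
        """
        X = np.column_stack([x, np.ones_like(x)])      # affine gating features
        logits = X @ gates.T
        logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
        g = np.exp(logits)
        g /= g.sum(axis=1, keepdims=True)
        preds = np.column_stack([np.polyval(c, x) for c in experts])
        return (g * preds).sum(axis=1)                 # gate-weighted expert outputs
    ```

    With a sharp gate at x = 0, expert 0 (y = x) owns the negative half-line and expert 1 (y = -x) the positive one, mimicking how MoE captures piecewise regimes such as a bifurcating oscillator.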

  4. Modeling of Sunspot Numbers by a Modified Binary Mixture of Laplace Distribution Functions (United States)

    Sabarinath, A.; Anilkumar, A. K.


    This paper presents a new approach for describing the shape of 11-year sunspot cycles by considering the monthly averaged values. This paper also brings out a prediction model based on the analysis of 22 sunspot cycles from the year 1749 onward. It is found that the shape of the sunspot cycles with monthly averaged values can be described by a functional form of a binary mixture of Laplace density functions, modified suitably by introducing two additional parameters into the standard functional form. Six parameters, namely two locations, two scales, and two area parameters, characterize this model. The nature of the estimated parameters for the sunspot cycles from 1749 onward has been analyzed, and finally we arrived at a sufficient set of parameters for the proposed model. This model picks up the sunspot peaks more closely than other models without losing the match at other places. The goodness of fit of the proposed model is also computed with the Hathaway-Wilson-Reichmann χ̄ measure, which shows, on average, that the fitted model passes within 0.47 standard deviations of the actual averaged monthly sunspot numbers.
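
    The unmodified part of this functional form is easy to state: a cycle shape built from two area-weighted Laplace densities, parameterised by two locations, two scales, and two areas. The paper's two additional modification parameters are not specified in the abstract, so this sketch omits them; the parameter values in the usage below are hypothetical:

    ```python
    import numpy as np

    def laplace_mixture(t, a1, mu1, b1, a2, mu2, b2):
        """Sunspot-cycle shape as a weighted sum of two Laplace densities.

        t: months since cycle start; a1, a2: area (amplitude) parameters;
        mu1, mu2: locations; b1, b2: scales. Plain binary mixture only, without
        the paper's modification parameters.
        """
        lap = lambda mu, b: np.exp(-np.abs(t - mu) / b) / (2.0 * b)
        return a1 * lap(mu1, b1) + a2 * lap(mu2, b2)
    ```

    Evaluating over a ~132-month cycle, the sharp Laplace peak at the first location lets the form track the spiky sunspot maximum, while the second component fills out the declining phase.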

  5. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim


    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  6. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model.

    Directory of Open Access Journals (Sweden)

    Ying Li

    Full Text Available Vegetation is an important part of ecosystem and estimation of fractional vegetation cover is of significant meaning to monitoring of vegetation growth in a certain region. With Landsat TM images and HJ-1B images as data source, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with linear spectral mixture model (LSMM) and conducted accuracy test on the two results with field survey data to study the effectiveness of different models in estimation of vegetation coverage. Results indicated that: (1) the RMSE of the estimation result of SELSMM based on TM images is the lowest, which is 0.044. The RMSEs of the estimation results of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, which are all higher than that of SELSMM based on TM images; (2) the R2 of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimation of vegetation coverage and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. So, the SELSMM based on TM images is comparatively accurate and reliable in the research of regional fractional vegetation cover estimation.

  7. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model. (United States)

    Li, Ying; Wang, Hong; Li, Xiao Bing


    Vegetation is an important part of ecosystem and estimation of fractional vegetation cover is of significant meaning to monitoring of vegetation growth in a certain region. With Landsat TM images and HJ-1B images as data source, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with linear spectral mixture model (LSMM) and conducted accuracy test on the two results with field survey data to study the effectiveness of different models in estimation of vegetation coverage. Results indicated that: (1) the RMSE of the estimation result of SELSMM based on TM images is the lowest, which is 0.044. The RMSEs of the estimation results of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, which are all higher than that of SELSMM based on TM images; (2) the R2 of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimation of vegetation coverage and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. So, the SELSMM based on TM images is comparatively accurate and reliable in the research of regional fractional vegetation cover estimation.
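
    The core computation in an LSMM is solving each pixel's spectrum for endmember fractions under a sum-to-one constraint. A minimal sketch using weighted augmented least squares (nonnegativity is not enforced here, and SELSMM's endmember selection step is omitted; the spectra in the usage are synthetic):

    ```python
    import numpy as np

    def unmix(pixel, endmembers, weight=1e3):
        """Sum-to-one constrained linear spectral unmixing.

        pixel: (n_bands,) observed spectrum; endmembers: (n_bands, n_endmembers)
        matrix of endmember spectra. A heavily weighted extra row enforces
        sum(fractions) == 1 approximately.
        """
        A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
        b = np.concatenate([pixel, [weight]])
        f, *_ = np.linalg.lstsq(A, b, rcond=None)
        return f
    ```

    For fractional vegetation cover, one endmember would be vegetation and its recovered fraction is the per-pixel cover estimate.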

  8. Density and molar volumes of imidazolium-based ionic liquid mixtures and prediction by the Jouyban-Acree model (United States)

    Ghani, Noraini Abd; Sairi, Nor Asrina; Mat, Ahmad Nazeer Che; Khoubnasabjafari, Mehry; Jouyban, Abolghasem


    The density of the imidazolium-based ionic liquid 1-ethyl-3-methylimidazolium diethylphosphate with sulfolane was measured at atmospheric pressure. The experiments were performed at T = (293 - 343) K over the complete mole fraction range. Physical and thermodynamic properties such as molar volumes, V0, and excess molar volumes, VE, for these binary mixtures were derived from the experimental density data. The Jouyban-Acree model was used to correlate the physicochemical properties (PCPs) of the binary mixtures at various mole fractions and temperatures.
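
    The Jouyban-Acree correlation itself has a compact standard form: a log-linear mix of the neat-component property values plus a temperature-scaled interaction polynomial in the mole fractions. A sketch with hypothetical J-constants (the fitted values are not given in the abstract):

    ```python
    import numpy as np

    def jouyban_acree(f1, p1, p2, T, J):
        """Jouyban-Acree correlation for a property of a binary mixture.

        f1: mole fraction of component 1 (f2 = 1 - f1); p1, p2: the property
        (e.g. density) of the neat components at temperature T (K); J: sequence
        of model constants J0, J1, J2 obtained by fitting experimental data.
        ln p_m = f1 ln p1 + f2 ln p2 + (f1 f2 / T) * sum_i J_i (f1 - f2)**i
        """
        f1 = np.asarray(f1, dtype=float)
        f2 = 1.0 - f1
        interaction = sum(Ji * (f1 - f2) ** i for i, Ji in enumerate(J))
        return np.exp(f1 * np.log(p1) + f2 * np.log(p2) + f1 * f2 * interaction / T)
    ```

    At the pure-component limits the correlation reduces to the neat property values, and the J-terms only perturb intermediate compositions.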

  9. The Spatiotemporal Oscillations of Order Parameter for Isothermal Model of the Surface-Directed Spinodal Decomposition in Bounded Binary Mixtures

    Directory of Open Access Journals (Sweden)

    Igor B. Krasnyuk


    Full Text Available The asymptotic behavior of the order parameter in a confined binary mixture is considered in one-dimensional geometry. The interaction between bulk and surface forces in the mixture is investigated. Conditions are established under which the bulk spinodal decomposition may be ignored and the main role in the formation of oscillating asymptotic periodic spatiotemporal structures is played by the surface-directed spinodal decomposition, which is modelled by nonlinear dynamical boundary conditions.

  10. Determining the source locations of martian meteorites: Hapke mixture models applied to CRISM simulated data of igneous mineral mixtures and martian meteorites (United States)

    Harris, Jennifer; Grindrod, Peter


    At present, martian meteorites represent the only samples of Mars available for study in terrestrial laboratories. However, these samples have never been definitively tied to source locations on Mars, meaning that the fundamental geological context is missing. The goal of this work is to link the bulk mineralogical analyses of martian meteorites to the surface geology of Mars through spectral mixture analysis of hyperspectral imagery. Hapke radiation transfer modelling has been shown to provide accurate (within 5 - 10% absolute error) mineral abundance values from laboratory derived hyperspectral measurements of binary [1] and ternary [2] mixtures of plagioclase, pyroxene and olivine. These three minerals form the vast bulk of the SNC meteorites [3] and the bedrock of the Amazonian provinces on Mars that are inferred to be the source regions for these meteorites based on isotopic ages. Spectral unmixing through the Hapke model could be used to quantitatively analyse the Martian surface and pinpoint the exact craters from which the SNC meteorites originated. However, the Hapke model is complex with numerous variables, many of which are determinable in laboratory conditions but not from remote measurements of a planetary surface. Using binary and ternary spectral mixtures and martian meteorite spectra from the RELAB spectral library, the accuracy of Hapke abundance estimation is investigated in the face of increasing constraints and simplifications to simulate CRISM data. Constraints and simplifications include reduced spectral resolution, additional noise, unknown endmembers and unknown particle physical characteristics. CRISM operates in two spectral resolutions, the Full Resolution Targeted (FRT) with which it has imaged approximately 2% of the martian surface, and the lower spectral resolution MultiSpectral Survey mode (MSP) with which it has covered the vast majority of the surface. 
On resampling the RELAB spectral mixtures to these two wavelength ranges it was

  11. Estimation of vegetation parameter for modeling soil erosion using linear Spectral Mixture Analysis of Landsat ETM data (United States)

    de Asis, Alejandro M.; Omasa, Kenji

    Soil conservation planning often requires estimates of soil erosion at a catchment or regional scale. Predictive models such as the Universal Soil Loss Equation (USLE) and its successor, the Revised Universal Soil Loss Equation (RUSLE), are useful tools to generate the quantitative estimates necessary for designing sound conservation measures. However, large-scale soil erosion model-factor parameterization and quantification is difficult due to the costs, labor and time involved. Among the soil erosion parameters, the vegetative cover or C factor has been one of the most difficult to estimate over broad geographic areas. The C factor represents the effects of vegetation canopy and ground covers in reducing soil loss. Traditional methods for the extraction of vegetation information from remote sensing data, such as classification techniques and vegetation indices, were found to be inaccurate. Thus, this study presents a new approach based on Spectral Mixture Analysis (SMA) of Landsat ETM data to map the C factor for use in the modeling of soil erosion. A desirable feature of SMA is that it estimates the fractional abundance of ground cover and bare soils simultaneously, which is appropriate for soil erosion analysis. Hence, we estimated the C factor by utilizing the results of SMA on a pixel-by-pixel basis. We specifically used a linear SMA (LSMA) model and performed a minimum noise fraction (MNF) transformation and pixel purity index (PPI) on the Landsat ETM image to derive the proportion of ground cover (vegetation and non-photosynthetic materials) and bare soil within a pixel. The end-members were selected based on the purest pixels found using PPI with reference to a very high-resolution QuickBird image and actual field data. Results showed that the C factor value estimated using LSMA correlated strongly with the values measured in the field. The correlation coefficient (r) obtained was 0.94. 
A comparative analysis between NDVI- and LSMA-derived C factors also proved that the

  12. Numerical analysis of convective heat transfer of nanofluids in circular ducts with two-phase mixture model approach (United States)

    Sert, İsmail Ozan; Sezer-Uzol, Nilay


    Computational fluid dynamics simulations of initially hydrodynamically fully developed laminar flow of nanofluids in a circular duct under a constant wall temperature condition are performed with the two-phase mixture model using Fluent software. The thermal behavior of the system is investigated for an Al2O3/water nanofluid. The Hamilton-Crosser model and the Brownian motion effect are used for the thermal conductivity model of the nanofluid instead of the Fluent default model for mixtures, which gives extraordinarily high thermal conductivity values and is valid only for macro systems. Also, the thermal conductivity and viscosity of the base fluid are taken as temperature dependent. The effects of nanoparticle volume fraction, nanoparticle size, and inlet Peclet number on the heat transfer enhancement are investigated. The results are compared with single-phase results, which give slightly lower heat transfer coefficient values than the two-phase mixture model.
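
    The Hamilton-Crosser effective-conductivity model mentioned above has a standard closed form; a direct transcription (with sphericity psi and shape factor n = 3/psi, so n = 3 for spheres) is:

    ```python
    def hamilton_crosser(k_f, k_p, phi, psi=1.0):
        """Effective thermal conductivity of a particle suspension by the
        Hamilton-Crosser model.

        k_f, k_p: base-fluid and particle conductivities (W/m.K);
        phi: particle volume fraction; psi: particle sphericity (n = 3/psi).
        """
        n = 3.0 / psi
        num = k_p + (n - 1.0) * k_f - (n - 1.0) * phi * (k_f - k_p)
        den = k_p + (n - 1.0) * k_f + phi * (k_f - k_p)
        return k_f * num / den
    ```

    At phi = 0 the expression collapses to the base-fluid value, and for spherical particles (psi = 1) it reduces to the classical Maxwell result; the paper layers the Brownian-motion contribution on top of this static part.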

  13. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne eSchoenmakers


    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  14. Inhomogeneous model colloid-polymer mixtures: adsorption at a hard wall. (United States)

    Brader, J M; Dijkstra, M; Evans, R


    We study the equilibrium properties of inhomogeneous model colloid-polymer mixtures. By integrating out the degrees of freedom of the ideal polymer coils, we derive a formal expression for the effective one-component Hamiltonian of the (hard sphere) colloids that is valid for arbitrary external potentials acting on both the colloids and the polymers. We show how one can recover information about the distribution of polymer in the mixture given knowledge of the colloid correlation functions calculated using the effective one-component Hamiltonian. This result is then used to furnish the connection between the free-volume and perturbation theory approaches to determining the bulk phase equilibria. For the special case of a planar hard wall the effective Hamiltonian takes an explicit form, consisting of zero-, one-, and two-body, but no higher-body, contributions, provided the size ratio q = sigma(p)/sigma(c) is sufficiently small, where sigma(c) and sigma(p) denote the diameters of colloid and polymer, respectively. We employ a simple density functional theory to calculate colloid density profiles from this effective Hamiltonian for q=0.1. The resulting profiles are found to agree well with those from Monte Carlo simulations for the same Hamiltonian. Adding very small amounts of polymer gives rise to strong depletion effects at the hard wall which lead to pronounced enhancement of the colloid density profile (close to the wall) over what is found for hard spheres at a hard wall.

  15. Molecular dynamics study of water and water/chlorinated hydrocarbon mixtures with polarizable potential models

    Energy Technology Data Exchange (ETDEWEB)

    Dang, L.X. [Pacific Northwest National Lab., Richland, WA (United States)


    A series of molecular dynamics simulations were carried out to study water and water/chlorinated hydrocarbon mixtures. The properties of water clusters containing up to six water molecules were evaluated. A prism-like structure is predicted to be lowest in energy for the (H{sub 2}O){sub 6} cluster, and a cage-like structure is the second lowest, with an energy about 0.2 kcal/mol higher than the prism-like structure. The computed dipole moments of water molecules in clusters indicate that there is a transition from cyclic planar configurations to three-dimensional network structures. The computed thermodynamic properties for the model, including the liquid density, the enthalpy of vaporization, and the diffusion coefficient at room temperature, are in excellent agreement with experimental values. The computed density profile of the water liquid/vapor interface shows that the interface is not sharp at a microscopic level and has a thickness of 3.2 A at 298 K. The calculated surface tension at room temperature is in reasonable agreement with the corresponding experimental data. The computed average dipole moments of water molecules near the interface are close to their gas phase values. The thermodynamic and structural properties of water/chlorinated hydrocarbon mixtures were evaluated as a function of mole fraction.

  16. Linking in Vitro Effects and Detected Organic Micropollutants in Surface Water Using Mixture-Toxicity Modeling. (United States)

    Neale, Peta A; Ait-Aissa, Selim; Brack, Werner; Creusot, Nicolas; Denison, Michael S; Deutschmann, Björn; Hilscherová, Klára; Hollert, Henner; Krauss, Martin; Novák, Jiří; Schulze, Tobias; Seiler, Thomas-Benjamin; Serra, Helene; Shao, Ying; Escher, Beate I


    Surface water can contain countless organic micropollutants, and targeted chemical analysis alone may only detect a small fraction of the chemicals present. Consequently, bioanalytical tools can be applied complementary to chemical analysis to detect the effects of complex chemical mixtures. In this study, bioassays indicative of activation of the aryl hydrocarbon receptor (AhR), activation of the pregnane X receptor (PXR), activation of the estrogen receptor (ER), adaptive stress responses to oxidative stress (Nrf2), genotoxicity (p53) and inflammation (NF-κB) and the fish embryo toxicity test were applied along with chemical analysis to water extracts from the Danube River. Mixture-toxicity modeling was applied to determine the contribution of detected chemicals to the biological effect. Effect concentrations for between 0 and 13 detected chemicals could be found in the literature for the different bioassays. Detected chemicals explained less than 0.2% of the biological effect in the PXR activation, adaptive stress response, and fish embryo toxicity assays, while five chemicals explained up to 80% of ER activation, and three chemicals explained up to 71% of AhR activation. This study highlights the importance of fingerprinting the effects of detected chemicals.
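
    The "fraction of effect explained" in such mixture-toxicity modeling is commonly computed by concentration addition expressed in toxic units. A minimal sketch with hypothetical chemical names and numbers (not values from the Danube study):

    ```python
    def effect_contribution(concentrations, effect_concentrations, sample_tu):
        """Percent of a sample's bioassay effect explained by detected chemicals,
        using concentration addition in toxic-unit form (TU = c / EC).

        concentrations and effect_concentrations are parallel dicts in the same
        units; sample_tu is the toxic-unit equivalent measured for the whole
        water extract in the bioassay.
        """
        tu_detected = sum(c / effect_concentrations[name]
                          for name, c in concentrations.items())
        return 100.0 * tu_detected / sample_tu
    ```

    A low percentage, as found for the PXR and adaptive stress assays, indicates that most of the measured effect comes from chemicals not captured by targeted analysis.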

  17. Modeling sorption of neutral organic compound mixtures to simulated aquifer sorbents with pseudocompounds. (United States)

    Joo, Jin Chul; Shackelford, Charles D; Reardon, Kenneth F


    The feasibility of using the ideal adsorbed solution theory (IAST) to reduce the complexity of predicting the sorption to simulated aquifer sorbents of 12 neutral organic compounds (NOCs) contained in complex mixtures, by representing them as a smaller number (four to six) of pseudocompounds (groups of compounds), was investigated. All sorption isotherms from individual- and multiple-pseudocompound systems were fit reasonably well ( ≥ 0.953) by the Freundlich sorption model over the range of aqueous concentrations evaluated (i.e., ≤ 200 μmol/L). The presence and magnitude of mutual competition among pseudocompounds varied depending on the composition of the mixtures (i.e., concentrations and polarities of pseudocompounds) and the properties of the sorbents (i.e., the fraction of organic carbon and the availability of hydrophilic specific sorption sites). Finally, comparisons between the IAST-based predictions with individual-pseudocompound sorption parameters and experimentally measured data revealed that the accuracy of predicting the sorption behaviors of several NOCs in terms of a smaller number of pseudocompounds decreased with increasing deviations from the IAST assumption of equal and ideal competition (i.e., differential availability of sorption sites and nonideal competition among pseudocompounds).

  18. Amphypterygium adstringens anacardic acid mixture inhibits quorum sensing-controlled virulence factors of Chromobacterium violaceum and Pseudomonas aeruginosa. (United States)

    Castillo-Juárez, Israel; García-Contreras, Rodolfo; Velázquez-Guadarrama, Norma; Soto-Hernández, Marcos; Martínez-Vázquez, Mariano


    Quorum sensing (QS) is a process of bacterial cell-cell communication that controls a large number of systems affecting pathogenicity. Interrupting this communication system can render pathogenic bacteria nonvirulent. The aim of this study was to evaluate the anti-quorum sensing (anti-QS) potential of an anacardic acids mixture isolated from Amphipterygium adstringens, a medicinal plant known as "cuachalalate", to prevent the onset of bacterial infections as an alternative to antibiotics. Initially we investigated the anti-QS activity of A. adstringens hexane extract (HE) through the inhibition of violacein production in Chromobacterium violaceum. From the active HE, an anacardic acids mixture (AAM) was obtained. The anti-QS activity of AAM was investigated through its constraint of rhamnolipid and pyocyanin production and its reduction of elastase activity, all quorum sensing-controlled virulence factors expressed in the pathogenic bacterium Pseudomonas aeruginosa. HE induced 91.6% inhibition of violacein production at a concentration of 55 μg/mL, whereas AAM showed 94% inhibition at 166 μg/mL. In both cases, inhibition of violacein production did not affect the viability of the bacterium. AAM inhibited pyocyanin (86% at 200 μg/mL) and rhamnolipid (91% at 500 μg/mL) production in a dose-dependent manner and decreased elastase activity (75% at 500 μg/mL) in P. aeruginosa without affecting its development. Because the anacardic acids mixture isolated from A. adstringens demonstrated anti-QS activity, it could be further exploited as a source of novel molecules to treat emerging infections of antibiotic-resistant bacterial pathogens. Copyright © 2013 IMSS. Published by Elsevier Inc. All rights reserved.

  19. A Dirichlet Process Mixture Based Name Origin Clustering and Alignment Model for Transliteration

    Directory of Open Access Journals (Sweden)

    Chunyue Zhang


    Full Text Available In machine transliteration, it is common for the transliterated names in the target language to come from multiple language origins. A conventional maximum-likelihood-based single model cannot handle this issue well and often suffers from overfitting. In this paper, we exploit a coupled Dirichlet process mixture model (cDPMM) to address overfitting and the multi-origin clustering of names simultaneously in the transliteration sequence alignment step over the name pairs. After the alignment step, the cDPMM automatically clusters name pairs into many groups according to their origin information. In the decoding step, in order to exploit the learned origin information fully, we use a cluster combination method (CCM) to build cluster-specific transliteration models by combining small clusters into larger ones based on the perplexities of the name language model and the transliteration model, which ensures that each origin cluster has enough data for training a transliteration model. On three different Western-Chinese multi-origin name corpora, the cDPMM outperforms two state-of-the-art baseline models in terms of both top-1 accuracy and mean F-score, and the CCM further significantly improves on the cDPMM.
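The clustering behaviour of a Dirichlet process mixture like the cDPMM can be illustrated with the Chinese restaurant process, the sequential sampling scheme underlying DP mixtures. This is a minimal sketch, not the paper's alignment model; the concentration parameter `alpha` and item count are assumptions:

```python
import random

def crp_partition(n_items, alpha, seed=0):
    """Assign items to clusters via the Chinese restaurant process."""
    rng = random.Random(seed)
    clusters = []   # current cluster sizes
    labels = []     # cluster index assigned to each item
    for i in range(n_items):
        # item joins existing cluster k with prob size_k/(i+alpha);
        # a new cluster opens with prob alpha/(i+alpha)
        weights = clusters + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(clusters):
            clusters.append(1)   # open a new cluster
        else:
            clusters[k] += 1
        labels.append(k)
    return labels, clusters

labels, clusters = crp_partition(100, alpha=1.0)
```

Larger `alpha` tends to open more clusters; the number of clusters is inferred rather than fixed in advance, which is what lets the cDPMM discover origin groups automatically.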

  20. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures (United States)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.


    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~ 275 to ~ 400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~ 190 to ~ 440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in

  1. A Bayesian semiparametric factor analysis model for subtype identification. (United States)

    Sun, Jiehuan; Warren, Joshua L; Zhao, Hongyu


    Disease subtype identification (clustering) is an important problem in biomedical research. Gene expression profiles are commonly utilized to infer disease subtypes, which often lead to biologically meaningful insights into disease. Despite many successes, existing clustering methods may not perform well when genes are highly correlated and many uninformative genes are included for clustering due to the high dimensionality. In this article, we introduce a novel subtype identification method in the Bayesian setting based on gene expression profiles. This method, called BCSub, adopts an innovative semiparametric Bayesian factor analysis model to reduce the dimension of the data to a few factor scores for clustering. Specifically, the factor scores are assumed to follow the Dirichlet process mixture model in order to induce clustering. Through extensive simulation studies, we show that BCSub has improved performance over commonly used clustering methods. When applied to two gene expression datasets, our model is able to identify subtypes that are clinically more relevant than those identified from the existing methods.
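As a rough stand-in for the BCSub pipeline (which is fully Bayesian and semiparametric), the core idea of reducing expression data to a few factor scores and clustering those scores with a Dirichlet-process mixture can be sketched with scikit-learn. The synthetic data, component counts, and model substitutions below are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import BayesianGaussianMixture

# Synthetic "expression" matrix: three hypothetical subtypes of 50 samples
# each, 200 genes, separated by their mean expression level.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(50, 200)) for m in (-2.0, 0.0, 2.0)])

# Step 1: classical factor analysis reduces the data to a few factor scores.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

# Step 2: a truncated Dirichlet-process mixture clusters the factor scores,
# letting the effective number of clusters be inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, not the expected cluster count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(scores)
labels = dpgmm.predict(scores)
```

BCSub places the DP mixture directly on the factor scores within one Bayesian model; the two-stage version here only conveys the dimension-reduction-then-cluster structure.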


    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih


    Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. Motion tracking requires a method for separating the background from the tracked object. This is straightforward when the background is static, but difficult when it is not, because changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be built that separates the background adaptively, so that the separation adjusts to changes as they occur. This application produces an adaptive background using Gaussian Mixture Models (GMM). The GMM method clusters the input pixel data on the basis of pixel colour values; once the clusters are formed, the dominant distributions are chosen as the background distributions. The application was implemented in Microsoft Visual C++ 6.0. The results of this research show that the GMM algorithm can build an adaptive background satisfactorily, as confirmed by tests that succeeded under all conditions given. The application can be further developed so that the tracking process is integrated with the adaptive background maker.
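The background/foreground idea can be sketched on a single pixel's intensity history: fit a GMM to the history and treat the dominant component as background. The synthetic data and the 2.5-sigma decision rule below are assumptions for illustration, not the application's actual implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Intensity history of one pixel: a dominant background mode plus an
# occasional bright foreground object passing through (synthetic data).
rng = np.random.default_rng(1)
history = np.concatenate([
    rng.normal(60, 3, 450),    # dominant mode: true background
    rng.normal(200, 5, 50),    # occasional foreground object
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(history)
bg = int(np.argmax(gmm.weights_))   # dominant distribution = background

def is_background(value, thresh=2.5):
    """Classify a new pixel value against the background component."""
    mu = gmm.means_[bg, 0]
    sd = np.sqrt(gmm.covariances_[bg, 0, 0])
    return abs(value - mu) < thresh * sd

print(is_background(61.0), is_background(200.0))
```

A real adaptive-background system updates the mixture online per pixel as frames arrive; refitting on a sliding window, as implied here, is the simplest variant of that idea.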

  3. Capturing the Phylogeny of Holometabola with Mitochondrial Genome Data and Bayesian Site-Heterogeneous Mixture Models. (United States)

    Song, Fan; Li, Hu; Jiang, Pei; Zhou, Xuguo; Liu, Jinpeng; Sun, Changhai; Vogler, Alfried P; Cai, Wanzhi


    After decades of debate, a mostly satisfactory resolution of relationships among the 11 recognized holometabolan orders of insects has been reached based on nuclear genes, resolving one of the most substantial branches of the tree-of-life, but the relationships are still not well established with mitochondrial genome data. The main reasons have been the absence of sufficient data in several orders and lack of appropriate phylogenetic methods that avoid the systematic errors from compositional and mutational biases in insect mitochondrial genomes. In this study, we assembled the richest taxon sampling of Holometabola to date (199 species in 11 orders), and analyzed both nucleotide and amino acid data sets using several methods. We find the standard Bayesian inference and maximum-likelihood analyses were strongly affected by systematic biases, but the site-heterogeneous mixture model implemented in PhyloBayes avoided the false grouping of unrelated taxa exhibiting similar base composition and accelerated evolutionary rate. The inclusion of rRNA genes and removal of fast-evolving sites with the observed variability sorting method for identifying sites deviating from the mean rates improved the phylogenetic inferences under a site-heterogeneous model, correctly recovering most deep branches of the Holometabola phylogeny. We suggest that the use of mitochondrial genome data for resolving deep phylogenetic relationships requires an assessment of the potential impact of substitutional saturation and compositional biases through data deletion strategies and by using site-heterogeneous mixture models. Our study suggests a practical approach for how to use densely sampled mitochondrial genome data in phylogenetic analyses. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  4. The Infinite Hierarchical Factor Regression Model

    CERN Document Server

    Rai, Piyush


    We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.

  5. Alternative sigma factors in the free state are equilibrium mixtures of open and compact conformations. (United States)

    Raha, Paromita; Chattopadhyay, Suranjana; Mukherjee, Srijata; Chattopadhyay, Ruchira; Roy, Koushik; Roy, Siddhartha


    Conformational switching upon core RNA polymerase binding is an integral part of the functioning of bacterial sigma factors. Here, we have studied dynamical features of two alternative sigma factors. Fluorescence resonance energy transfer and hydrodynamic measurements on Escherichia coli σ(32) suggest a compact shape like those found in complexes with anti-sigma factors. On the other hand, the fluorescence anisotropy of probes attached to different regions of the protein and previous hydrogen exchange measurements suggest significant internal flexibility, particularly in the C-terminal half and region 1. In a homologous sigma factor, σ(F) of Mycobacterium tuberculosis, emission spectra and fluorescence resonance energy transfer between the single tryptophan (W112) and probes placed in different regions suggest a compact conformation for a major part of the N-terminal half encompassing region 2, together with a flexible C-terminal half. Fluorescence anisotropy measurements suggest significant flexibility in the C-terminal half and in region 1 as well. Thus, free alternative sigma factors may be in equilibrium between two conformations: a compact one in which the promoter-interacting motifs are trapped in the wrong conformation, and a less abundant one with a more open and flexible conformation. Such flexibility may be important for promoter recognition and interaction with many partner proteins.

  6. Modeling high-pressure adsorption of gas mixtures on activated carbon and coal using a simplified local-density model. (United States)

    Fitzgerald, James E; Robinson, Robert L; Gasem, Khaled A M


    The simplified local-density (SLD) theory was investigated regarding its ability to provide accurate representations and predictions of high-pressure supercritical adsorption isotherms encountered in coalbed methane (CBM) recovery and CO2 sequestration. Attention was focused on the ability of the SLD theory to predict mixed-gas adsorption solely on the basis of information from pure gas isotherms using a modified Peng-Robinson (PR) equation of state (EOS). An extensive set of high-pressure adsorption measurements was used in this evaluation. These measurements included pure and binary mixture adsorption measurements for several gas compositions up to 14 MPa for Calgon F-400 activated carbon and three water-moistened coals. Also included were ternary measurements for the activated carbon and one coal. For the adsorption of methane, nitrogen, and CO2 on dry activated carbon, the SLD-PR can predict the component mixture adsorption within about 2.2 times the experimental uncertainty on average solely on the basis of pure-component adsorption isotherms. For the adsorption of methane, nitrogen, and CO2 on two of the three wet coals, the SLD-PR model can predict the component adsorption within the experimental uncertainties on average for all feed fractions (nominally molar compositions of 20/80, 40/60, 60/40, and 80/20) of the three binary gas mixture combinations, although predictions for some specific feed fractions are outside of their experimental uncertainties.

  7. Non-electrostatic surface complexation models for protons and lead(II) sorption onto single minerals and their mixture. (United States)

    Pagnanelli, Francesca; Bornoroni, Lorena; Moscardini, Emanuela; Toro, Luigi


    Potentiometric titrations and lead sorption tests were conducted using muscovite, clinochlore, hematite, goethite, quartz, and a mixture of these same minerals. Mechanistic models were developed to represent and interpret these data. The aim was to isolate the specific contribution of each mineral to proton and lead binding. The acid-base properties of each single mineral, as well as of their mixture, were represented by discrete models, which consider the dissociation of n monoprotic sites (n-site/n-K(H) models). A one-site/one-K(H) model (logK(H1) = 10.69) was chosen for quartz (dissociation of SiOH edge hydroxyl groups). Goethite and hematite (FeOH groups) were represented by the same one-site/one-K(H) model (logK(H1) = 10.35). Three-site/three-K(H) models were used for muscovite (logK(H1) = 4.18; logK(H2) = 6.65; logK(H3) = 9.67) and clinochlore (logK(H1) = 3.84; logK(H2) = 6.57; logK(H3) = 9.71), assuming that SiOH and AlOH of the aluminosilicate matrix dissociate in the acid-neutral pH range while SiOH groups of quartz inclusions dissociate in the basic range. Similarly, the mixture of these minerals was represented by a three-site/three-K(H) model (logK(H1) = 3.39; logK(H2) = 6.72; logK(H3) = 10.82). According to crossed comparisons with the single minerals, the first two sites of the mixture were associated with the aluminosilicate matrix (SiOH and AlOH, respectively) and the third site with iron oxides (FeOH) and quartz groups. Additivity of proton binding in the mixture was demonstrated by simulating the mixture's titration curve. A unified model for the entire set of titration curves (single minerals and mixture) was also developed by introducing a three-peak distribution function for proton affinity constants. Experimental data for lead sorption onto the mixture and the individual minerals in the pH range 3-5 revealed competition between protons and metallic ions. The entire set of lead isotherms (individual mineral and mixture data) was represented adequately by a unified
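An n-site/n-K(H) discrete model amounts to a sum of independent monoprotic dissociation terms, which is what makes the additivity check above possible. A sketch using the mixture's quoted logK(H) values; the site concentrations b_i are hypothetical, not the study's fitted quantities:

```python
import numpy as np

def protonated_fraction(pH, pKa):
    """Fraction of a monoprotic site that is protonated:
    [H+]/([H+] + Ka) = 1 / (1 + 10^(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# Three-site model with the mixture's quoted logK(H) constants;
# site concentrations b (mmol of sites per gram) are assumed values.
pKa = np.array([3.39, 6.72, 10.82])
b = np.array([0.1, 0.2, 0.1])

def bound_protons(pH):
    """Total protons bound per gram: sum of the three site contributions."""
    return float(np.sum(b * protonated_fraction(pH, pKa)))
```

Because the total is a plain sum over sites, a mixture's titration curve can be simulated by adding the single-mineral contributions, mirroring the additivity argument in the abstract.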

  8. Study of the factors influencing the metals solubilisation from a mixture of waste batteries by response surface methodology. (United States)

    Tanong, Kulchaya; Coudert, Lucie; Chartier, Myriam; Mercier, Guy; Blais, Jean-François


    This paper presents an innovative process for the recovery of valuable metals from a mixture of spent batteries. Different types of batteries, including alkaline, zinc-carbon (Zn-C), nickel-cadmium (Ni-Cd), nickel metal hydride (Ni-MH), lithium-ion (Li-ion) and lithium metallic (Li-M) batteries, were mixed according to the proportions of Canadian battery sales. A Box-Behnken design was applied to find the optimum leaching conditions maximizing the removal of valuable metals from a mixture of spent batteries in the presence of an inorganic acid and a reducing agent. The results highlighted the positive effect of sodium metabisulfite on the performance of metal removal, especially for Mn. The solid/liquid ratio and the concentration of H2SO4 were the main factors affecting the leaching behavior of the valuable metals (Zn, Mn, Cd, Ni) present in spent batteries. Finally, the optimum leaching conditions were found to be as follows: one leaching step, solid/liquid ratio = 10.9%, [H2SO4] = 1.34 M, sodium metabisulfite (Na2S2O5) = 0.45 g/g of battery powder and retention time = 45 min. Under such conditions, the removal yields achieved were 94% for Mn, 81% for Cd, 99% for Zn, 96% for Co and 68% for Ni.
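A Box-Behnken design in coded units can be generated directly: for each pair of factors, take all ±1 combinations while the remaining factors stay at their centre level, then append centre points. The sketch below assumes three factors and three centre points and does not reproduce the study's actual run order:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Generate a coded Box-Behnken design matrix (levels -1, 0, +1)."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, bv in product((-1, 1), repeat=2):
            row = [0] * n_factors       # remaining factors at centre level
            row[i], row[j] = a, bv
            runs.append(row)
    runs.extend([[0] * n_factors] * n_center)   # centre points
    return runs

# Three factors, e.g. S/L ratio, [H2SO4], Na2S2O5 dose (coded, not in
# physical units).
design = box_behnken(3)
print(len(design))   # 3 pairs x 4 sign combinations + 3 centres = 15
```

The coded levels are then mapped linearly to the physical ranges of each factor before running the leaching experiments and fitting the response surface.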

  9. Modeling Image Structure with Factorized Phase-Coupled Boltzmann Machines

    CERN Document Server

    Cadieu, Charles F


    We describe a model for capturing the statistical structure of local amplitude and local spatial phase in natural images. The model is based on a recently developed, factorized third-order Boltzmann machine that was shown to be effective at capturing higher-order structure in images by modeling dependencies among squared filter outputs (Ranzato and Hinton, 2010). Here, we extend this model to $L_p$-spherically symmetric subspaces. In order to model local amplitude and phase structure in images, we focus on the case of two dimensional subspaces, and the $L_2$-norm. When trained on natural images the model learns subspaces resembling quadrature-pair Gabor filters. We then introduce an additional set of hidden units that model the dependencies among subspace phases. These hidden units form a combinatorial mixture of phase coupling distributions, concentrated in the sum and difference of phase pairs. When adapted to natural images, these distributions capture local spatial phase structure in natural images.

  10. Confined wetting of FoCa clay powder/pellet mixtures: Experimentation and numerical modeling (United States)

    Maugis, Pascal; Imbert, Christophe

    Potential geological nuclear waste repositories must be properly sealed to prevent contamination of the biosphere by radionuclides. In the framework of the RESEAL project, the performance of a bentonite shaft seal is currently being studied at Mol (Belgium). This paper focuses on the hydro-mechanical behavior of centimetric, unsaturated samples of the backfilling material - a mixture of FoCa-clay powder and pellets - during oedometer tests. The hydro-mechanical response of the samples was observed experimentally and then compared to numerical simulations performed with our Cast3M Finite Element code. The generalized Darcy's law and the Barcelona Basic Model formed the physical basis of the numerical model and the interpretation; both are widely used in engineered-barrier modeling. Vertical swelling pressure and water intake were measured throughout the test. Although water intake increases monotonically, the swelling pressure evolution is marked by a peak, then a local minimum, before increasing again to an asymptotic value. This unexpected behavior is explained by yielding rather than by heterogeneity, and it is satisfactorily reproduced by the model after parameter calibration. Several samples with heights ranging from 5 to 12 cm show the same hydro-mechanical response, apart from a dilatation of the time scale. The value of characterizing centimetric samples for predicting the efficiency of a metric-scale seal is discussed.

  11. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior. (United States)

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A


    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. Development of toxic equivalency factors for PCB congeners and the assessment of TCDD and PCB mixtures in rainbow trout (United States)

    Newsted, John L.; Jones, Paul D.; Giesy, John P.; Crawford, Robert A.; Ankley, Gerald T.; Tillitt, Donald E.; Gooch, Jay W.; Denison, Michael S.


    This study was undertaken to evaluate the relationship between mammalian and piscine 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) toxic equivalency factors (TEFs) for PCBs, based on induction of CYP1A enzyme activity, catalytic protein, and mRNA. Rainbow trout administered a single i.p. injection of TCDD had an average (±SD) ED50 of 0.91 ± 0.14 μg TCDD/kg for induction of ethoxyresorufin O-deethylase (EROD) activity. Ortho-substituted PCB congeners 2,3,3′,4,4′-pentachlorobiphenyl (PCB 105), 2,3′,4,4′,5-pentachlorobiphenyl (PCB 118), 2,3,3′,4,4′,5-hexachlorobiphenyl (PCB 156), and 2,2′,3,4,4′,5-hexachlorobiphenyl (PCB 138) did not induce CYP1A activity in rainbow trout. Only three non-ortho-substituted PCBs, i.e., 3,3′,4,4′-tetrachlorobiphenyl (PCB 77), 3,3′,4,4′,5-pentachlorobiphenyl (PCB 126), and 3,3′,4,4′,5,5′-hexachlorobiphenyl (PCB 169), induced CYP1A enzyme activity, protein, and mRNA. The ED50s for induction of EROD activity were calculated as 134, 5.82, and 93.7 μg/kg for PCB 77, PCB 126, and PCB 169, respectively. The TCDD-TEFs based on EROD activity were 0.0006, 0.0014, and 0.0003 for PCB 77, PCB 126, and PCB 169, respectively. Binary mixtures of TCDD and three PCBs were also evaluated. Based on EROD activity and CYP1A protein, mixtures of TCDD and PCB 77 were slightly greater than additive. Mixtures of TCDD-PCB 156 and TCDD-PCB 126 were slightly less than additive. Results from these studies indicate that mammal-derived TEFs will underestimate the potency of planar chlorinated hydrocarbon mixtures to induce CYP1A catalytic activity in rainbow trout. Also, while interactions among PCB congeners and TCDD were somewhat equivocal, they did not greatly differ from predicted additive responses.

  13. Model and experiments of diesel fuel HCCI combustion with external mixture formation

    Energy Technology Data Exchange (ETDEWEB)

    Canova, M.; Vosz, A.; Dumbauld, D.; Garcin, R.; Midlam-Mohler, S.; Guezennec, Y.; Rizzoni, G. [Ohio State Univ. (United States)]


    Homogeneous Charge Compression Ignition (HCCI) represents a promising concept for achieving high efficiencies and low emissions at part-load operation. In particular, HCCI combustion can be successfully applied to conventional Direct Injection Diesel engines at very low extra cost and with no modification to the DI system by performing the mixture formation in the intake manifold with a novel fuel atomizer. The present paper describes the experimental and modeling activity aimed at the control of HCCI combustion on a conventional CIDI 4-cylinder engine fitted with this external fueling device. Paralleling preliminary results obtained last year on a single-cylinder engine in collaboration with FKFS at the University of Stuttgart, Diesel-fuel HCCI combustion was achieved and characterized over a range of engine speeds, loads, EGR dilution and boost pressures. Stable HCCI combustion with negligible NO{sub x} formation (10 ppm) was achieved with no modification of a high compression ratio engine (c{sub r}=18). The in-cylinder pressure traces were analyzed by performing a detailed heat release analysis while accounting for the wall heat transfer, which is substantially higher during the combustion phase than in a conventional CIDI engine. This analysis led to the joint identification of two sub-models: a heat transfer model and a heat release model. It was found that, under the wide range of conditions experimentally measured, the heat release can be approximated by the superposition of 3 Wiebe functions. The sub-models developed were then implemented in a combustion model based on a first-law thermodynamic analysis of in-cylinder processes, in order to identify the influence of the main control parameters on HCCI auto-ignition and to control the combustion process in a HCCI Diesel engine with external mixture formation. The model predictions were then compared to the results of a parallel experimental activity made on a 4-cylinder CIDI Diesel engine equipped with the fuel
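The superposition of three Wiebe functions mentioned above can be sketched directly; each function gives a cumulative mass fraction burned, and the weighted sum approximates the total heat release. All stage parameters below (weights, start angles, durations, efficiency and shape factors) are hypothetical placeholders, not the identified values:

```python
import math

def wiebe(theta, theta0, dtheta, a=5.0, m=2.0):
    """Single Wiebe function: cumulative mass fraction burned at crank
    angle theta, starting at theta0 with burn duration dtheta."""
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1.0))

def heat_release(theta, stages):
    """Superposition of Wiebe functions; 'stages' holds
    (weight, theta0, dtheta, a, m) tuples with weights summing to 1."""
    return sum(w * wiebe(theta, t0, dt, a, m) for w, t0, dt, a, m in stages)

# Three illustrative stages (e.g. cool flame, main, late burn); assumed.
stages = [(0.1, -15.0, 5.0, 5.0, 2.0),
          (0.7,  -5.0, 10.0, 5.0, 2.0),
          (0.2,   0.0, 25.0, 5.0, 2.0)]
print(round(heat_release(60.0, stages), 3))   # all fuel burned -> 1.0
```

In the identification step, the stage parameters would be fitted against the heat release curves extracted from the measured in-cylinder pressure traces.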

  14. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models (United States)

    Rafal Podlaski; Francis A. Roesch


    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
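A two-component Weibull mixture density of the kind fitted to dbh data can be written directly; the mixing weight and the shape/scale parameters below are illustrative, not the study's estimates:

```python
import numpy as np
from scipy.stats import weibull_min

def mixture_pdf(x, pi=0.4, c1=2.5, s1=12.0, c2=3.0, s2=35.0):
    """Two-component Weibull mixture: pi weights the younger cohort
    (smaller diameters), 1-pi the older cohort (larger diameters)."""
    return (pi * weibull_min.pdf(x, c1, scale=s1)
            + (1.0 - pi) * weibull_min.pdf(x, c2, scale=s2))

# Sanity check: the mixture density integrates to ~1 over its support.
x = np.linspace(0.0, 100.0, 20001)
area = float(np.sum(mixture_pdf(x)) * (x[1] - x[0]))
```

The competing kernel density estimator makes no such two-cohort assumption; comparing the two is exactly the question the paper's series addresses.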

  15. Application of mathematical models for the prediction of adsorption isotherms in solid mixture for mango powder refreshment

    Directory of Open Access Journals (Sweden)

    Edmar Clemente


    Full Text Available Solid mixtures for refreshment are already totally integrated into the Brazilian consumers' daily routine because of their quick preparation, yield and reasonable price - considerably lower than that of 'ready-to-drink' products or products for prompt consumption, which makes them economically more accessible to low-income populations. Within such a context, the aim of this work was to evaluate the physicochemical and mineral composition, as well as the hygroscopic behavior, of four different brands of solid mixture for mango refreshment. The BET, GAB, Oswin and Henderson mathematical models were built by adjusting the experimental data to the isotherms of adsorption. Results from the physicochemical evaluation showed that the solid mixtures for refreshments are considerable sources of ascorbic acid and reducing sugar; and regarding mineral compounds, they are significant sources of calcium, sodium and potassium. It was also verified that the solid mixtures for refreshments of the four studied brands are highly hygroscopic.
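Sorption isotherm models of this family have simple closed forms; the GAB model, for example, gives equilibrium moisture content as a function of water activity. The parameter values below are assumed for illustration, not the study's fitted constants:

```python
def gab(aw, m0, C, K):
    """GAB sorption isotherm: equilibrium moisture content vs water
    activity aw. m0 = monolayer moisture content, C = Guggenheim
    constant, K = corrective constant (K = 1 recovers the BET form)."""
    return (m0 * C * K * aw) / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Illustrative parameters (dry-basis moisture in g water / 100 g solids).
m0, C, K = 5.0, 10.0, 0.8
print(round(gab(0.5, m0, C, K), 2))
```

Fitting consists of adjusting m0, C and K so that the curve matches the measured adsorption data, typically by nonlinear least squares, and comparing the goodness of fit across the BET, GAB, Oswin and Henderson forms.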

  16. Maier-Saupe model for a mixture of uniaxial and biaxial molecules (United States)

    Nascimento, E. S.; Henriques, E. F.; Vieira, A. P.; Salinas, S. R.


    We introduce shape variations in a liquid-crystalline system by considering an elementary Maier-Saupe lattice model for a mixture of uniaxial and biaxial molecules. Shape variables are treated in the annealed (thermalized) limit. We analyze the thermodynamic properties of this system in terms of temperature T , concentration c of intrinsically biaxial molecules, and a parameter Δ associated with the degree of biaxiality of the molecules. At the mean-field level, we use standard techniques of statistical mechanics to draw global phase diagrams, which are shown to display a rich structure, including uniaxial and biaxial nematic phases, a reentrant ordered region, and many distinct multicritical points. Also, we use the formalism to write an expansion of the free energy in order to make contact with the Landau-de Gennes theory of nematic phase transitions.

  17. Green's function theory for the Cheng-Schick model of 3He-4He mixtures (United States)

    Siemann, R. P.; Boukahil, A.; Huber, D. L.


    In this paper, we outline a theory for the thermodynamic properties of 3He-4He mixtures in the neighborhood of the critical line and the tricritical point (TCP). The theory utilizes the Cheng-Schick (CS) lattice gas model, in which both the 3He and 4He atoms are treated as quantum particles on a lattice. The analysis is based on a Green's function approach. Results are presented for the ordering susceptibility and the thermal averages of the occupation numbers of 3He and 4He atoms. We derive a self-consistent equation for the ordering susceptibility and use it to calculate the critical line and locate the TCP. Our findings are compared with the predictions obtained from high-temperature series expansions, mean-field theory and the random phase approximation (RPA).

  18. Retinal image analysis based on mixture models to detect hard exudates. (United States)

    Sánchez, Clara I; García, María; Mayo, Agustín; López, María I; Hornero, Roberto


    Diabetic Retinopathy is one of the leading causes of blindness in developed countries. Hard exudates have been found to be one of the most prevalent earliest clinical signs of retinopathy. Thus, automatic detection of hard exudates from retinal images is clinically significant. In this study, an automatic method to detect hard exudates is proposed. The algorithm is based on mixture models to dynamically threshold the images in order to separate exudates from background. A postprocessing technique, based on edge detection, is applied to distinguish hard exudates from cotton wool spots and other artefacts. We prospectively assessed the algorithm performance using a database of 80 retinal images with variable colour, brightness, and quality. The algorithm obtained a sensitivity of 90.2% and a positive predictive value of 96.8% using a lesion-based criterion. The image-based classification accuracy is also evaluated obtaining a sensitivity of 100% and a specificity of 90%.
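The mixture-model thresholding step can be sketched as fitting a two-component mixture to the grey-level histogram and placing the threshold between the component means. The synthetic intensities and the midpoint rule below are assumptions; the paper's actual method additionally handles colour and applies edge-based post-processing:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic grey levels: a large dark background population plus a small
# bright, exudate-like population.
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(90, 10, 5000),    # background
                         rng.normal(190, 8, 300)])    # bright lesions

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(pixels.reshape(-1, 1))

# Dynamic threshold: midpoint between the two component means (a simple
# stand-in for the decision boundary).
lo, hi = sorted(gmm.means_.ravel())
threshold = (lo + hi) / 2.0
candidate_mask = pixels > threshold   # candidate exudate pixels
```

Because the threshold is derived from each image's own histogram, it adapts to the variable brightness and quality mentioned in the abstract, unlike a fixed global cutoff.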

  19. ICA of fMRI based on a convolutive mixture model

    DEFF Research Database (Denmark)

    Hansen, Lars Kai


    The fMRI signal has many sources: stimulus-induced activation, other brain activations, and confounds including several physiological signal components, the most prominent being the cardiac pulsation at about 1 Hz and breathing-induced motion (0.2-1 Hz). Most fMRI data ... processing strategies. Global linear dependencies can be probed by independent component analysis (ICA) based on higher-order statistics or spatio-temporal properties. With ICA we separate the different sources of the fMRI signal. ICA can be performed assuming either spatial or temporal independency. A major ... The mixing is represented by "mixture coefficient images" quantifying the local response to a given source at a certain time lag. This is the first communication to address this important issue in the context of fMRI ICA. Data: a single slice holding 128x128 pixels and passing through primary visual cortex ...

  20. Na+ Cl- ion pair association in water-DMSO mixtures: Effect of ion pair model potentials

    Indian Academy of Sciences (India)



    Potentials of Mean Force (PMF) for the Na+ Cl- ion pair in water-dimethyl sulfoxide (DMSO) mixtures for three DMSO mole fractions have been computed using constrained Molecular Dynamics (MD) simulations and confirmed by dynamical trajectories and residence times of the ion pair at various inter-ionic separations. The three ion-ion direct potentials used are 12-6-1, exp-6-1 and exp-8-6-1. The physical picture that emerges is that there is a strong contact ion pair (CIP) and a strong to moderate solvent-separated ion pair (SSIP) in these solutions. Analysis of local ion clusters shows that the ions are predominantly solvated by water molecules. The 12-6-1 potential model predicts running coordination numbers closest to experimental data.
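The three direct ion-ion potentials named in the abstract have simple functional forms whose labels give the powers of r in each term. The sketch below uses reduced units and illustrative parameters, not the simulation's force-field values:

```python
import math

def u_12_6_1(r, A=1.0, B=1.0, q=-1.0):
    """12-6-1: r^-12 repulsion, r^-6 dispersion, plus a Coulomb term
    (q < 0 here because Na+ and Cl- attract)."""
    return A / r**12 - B / r**6 + q / r

def u_exp_6_1(r, A=1.0, alpha=2.0, B=1.0, q=-1.0):
    """exp-6-1: Buckingham-style exponential repulsion variant."""
    return A * math.exp(-alpha * r) - B / r**6 + q / r

def u_exp_8_6_1(r, A=1.0, alpha=2.0, C=0.5, B=1.0, q=-1.0):
    """exp-8-6-1: adds an r^-8 dispersion term to the exp-6-1 form."""
    return A * math.exp(-alpha * r) - C / r**8 - B / r**6 + q / r
```

In the constrained-MD setup, one of these direct potentials is combined with the solvent-averaged contribution to give the full PMF along the inter-ionic separation, whose minima correspond to the CIP and SSIP states.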


    Directory of Open Access Journals (Sweden)

    G. Dharanibai


    Full Text Available In this paper we propose a Gaussian mixture model (GMM) integrated level set method for automated segmentation of the left ventricle (LV), right ventricle (RV), and myocardium from short axis views of cardiac magnetic resonance images. By fitting a GMM to the image histogram, the global pixel intensity characteristics of the blood pool, myocardium, and background are estimated. The GMM provides an initial segmentation, and the segmentation solution is regularized using a level set. Parameters for controlling the level set evolution are automatically estimated from the Bayesian classification of pixels. We propose a new speed function that combines edge and region information and stops the evolving level set at the myocardial boundary. Segmentation efficacy is analyzed qualitatively via visual inspection. Results show the improved performance of our proposed speed function over the conventional Bayesian driven adaptive speed function in automatic segmentation of the myocardium.
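    The histogram-fitting step of such a method can be sketched as follows. The intensities below are synthetic stand-ins for the three tissue classes (not data from the paper); a three-component GMM recovers per-class intensity statistics, whose posterior (Bayesian) pixel classification would then seed the level-set evolution.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Hypothetical intensity populations: background, myocardium, and blood pool
# in a short-axis MR slice (illustrative values only).
background = rng.normal(30, 8, 4000)
myocardium = rng.normal(100, 12, 2000)
blood_pool = rng.normal(180, 15, 1500)
intensities = np.concatenate([background, myocardium, blood_pool]).reshape(-1, 1)

# Fit a three-component GMM to the pooled intensities (i.e. to the histogram).
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
means = np.sort(gmm.means_.ravel())
print(np.round(means))
```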

  2. Optical property characterization of molten salt mixtures for thermal modeling of volumetrically absorbing solar receiver applications (United States)

    Tetreault-Friend, Melanie; McKrell, Thomas; Baglietto, Emilio; Gil, Antoni; Slocum, Alexander H.; Calvet, Nicolas


    A method for experimentally determining the attenuation coefficient of high temperature semi-transparent liquids for volumetrically absorbing solar receiver applications was developed. The method was used to measure the attenuation coefficient over a broad spectral range in a 40 wt. % KNO3:60 wt. % NaNO3 binary nitrate molten salt mixture (solar salt). The measured absorption bands extend over 98% of the re-emission spectrum of the salt, indicating that thermal redistribution within the salt itself via radiative participating media effects is negligible. In addition, the effects of the salt's purity and thermal decomposition on the optical properties were also investigated and the light penetration depth is shown to vary significantly in the presence of impurities. The implications of these results for solar receiver design and modeling are discussed.

  3. Color-texture segmentation using JSEG based on Gaussian mixture modeling

    Institute of Scientific and Technical Information of China (English)

    Wang Yuzhong; Yang Jie; Zhou Yue


    An improved approach to J-value segmentation (JSEG) is presented for unsupervised color image segmentation. Instead of a color quantization algorithm, an automatic classification method based on adaptive mean shift (AMS) clustering is used for nonparametric clustering of the image data set. The clustering results are used to construct a Gaussian mixture model (GMM) of the image data for the calculation of soft J values. The region growing algorithm used in JSEG is then applied to segment the image based on the multiscale soft J-images. Experiments show that the synergism of JSEG and the soft classification based on AMS clustering and the GMM successfully overcomes the limitations of JSEG and is more robust.

  4. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin


    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells during acquisition can cause spots to overlap, leading to discordance between observed and true spot counts. To address counting errors arising from overlapping spots, a Gaussian mixture model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for classification. Using a random forest classifier, the proposed method is able to detect closely overlapping spots that cannot be separated by existing image segmentation based spot detection methods. Experimental results show that the proposed method achieves a significant improvement in spot counting accuracy.
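    A minimal sketch of the core idea, on hypothetical data (not the FISH-IS pipeline itself): fit GMMs with k = 1..3 components to the 2-D coordinates of bright pixels, collect the AIC/BIC values as a feature vector for a downstream classifier, and note that the smallest BIC already suggests the spot count.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 2-D coordinates of bright pixels from a nucleus containing two
# closely overlapping spots (illustrative values only).
spot_a = rng.normal([0.0, 0.0], 0.3, size=(150, 2))
spot_b = rng.normal([1.0, 0.0], 0.3, size=(150, 2))
pixels = np.vstack([spot_a, spot_b])

# AIC/BIC of GMMs with k = 1..3 components serve as global image features that
# a classifier such as a random forest could use to infer the spot count.
features = []
for k in (1, 2, 3):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
    features.extend([gmm.aic(pixels), gmm.bic(pixels)])

# As a sanity check, the k with the smallest BIC recovers the true spot count.
best_k = 1 + min(range(3), key=lambda i: features[2 * i + 1])
print(best_k)
```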

  5. A Surface Tension Model for Liquid Mixtures Based on NRTL Equation

    Institute of Scientific and Technical Information of China (English)


    A new equation for predicting surface tension is proposed based on the thermodynamic definition of surface tension and the expression of the Gibbs free energy of the system. Using the NRTL equation to represent the excess Gibbs free energy, a two-parameter surface tension equation is derived. The feasibility of the new equation has been tested in terms of 124 binary and 16 multicomponent systems (13 ternary and 3 quaternary) with absolute relative deviations of 0.59% and 1.55%, respectively. This model is also predictive for the temperature dependence of surface tension of liquid mixtures. It is shown that, with good accuracy, this equation is simple and reliable for practical use.

  6. A Dynamic Model of a Vapor Compression Refrigeration Cycle using Zeotropic Refrigerant Mixtures (United States)

    Unezaki, Fumitake; Matsuoka, Fumio

    In order to prove the effectiveness of the model developed in the first report for the dynamics of a vapor compression refrigeration cycle with zeotropic refrigerant mixtures, simulation results are compared with experimental results obtained for R-407C (R-32/R-125/R-134a = 23/25/52 wt.%). The simulation results agree well with the experimental results. Numerical analysis of the dynamic characteristics of composition change shows that the variation of compositions in the refrigeration cycle is caused by the variation of the composition held in the accumulator. The time constant of the composition is approximately equal to the time constants of the pressure and the mass distribution.

  7. Infrared small target tracking by discriminative classification based on Gaussian mixture model in compressive sensing domain (United States)

    Wang, Chuanyun; Song, Fei; Qin, Shiyin


    Addressing the problems of infrared small target tracking in forward looking infrared (FLIR) systems, a new infrared small target tracking method is presented, in which the binding of target gray-intensity and spatial-relationship features is implemented by compressive sensing so as to construct a Gaussian mixture model of the compressive appearance distribution. Subsequently, naive Bayesian classification is carried out over testing samples acquired with non-uniform sampling probability to identify the most credible target location within the background scene. A series of experiments carried out over four infrared small target image sequences, each with more than 200 images, demonstrates the effectiveness and advantages of the proposed method in both success rate and precision rate.

  8. On selecting a prior for the precision parameter of Dirichlet process mixture models (United States)

    Dorazio, R.M.


    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
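    A standard way to connect the precision parameter to prior beliefs about clustering (a general Dirichlet-process identity, not code from this paper) is the expected number of clusters among n observations, E[K] = Σ_{i=1}^{n} α/(α + i − 1); the sketch below evaluates it for a few values of α.

```python
import numpy as np

def expected_clusters(alpha, n):
    """E[K] for a Dirichlet process with precision alpha and n observations:
    E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    i = np.arange(1, n + 1)
    return float(np.sum(alpha / (alpha + i - 1)))

# Larger alpha implies a stronger prior tendency toward many clusters.
n = 100
for alpha in (0.5, 1.0, 5.0):
    print(alpha, round(expected_clusters(alpha, n), 2))
```

    Inverting this relationship is one simple route to choosing a prior for α when prior information about the level of clustering is available.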

  9. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto


    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g., genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry to belong to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry to belong to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.

  10. Modeling Change in the Presence of Non-Randomly Missing Data: Evaluating A Shared Parameter Mixture Model. (United States)

    Gottfredson, Nisha C; Bauer, Daniel J; Baldwin, Scott A


    In longitudinal research, interest often centers on individual trajectories of change over time. When there is missing data, a concern is whether data are systematically missing as a function of the individual trajectories. Such a missing data process, termed random coefficient-dependent missingness, is statistically non-ignorable and can bias parameter estimates obtained from conventional growth models that assume missing data are missing at random. This paper describes a shared-parameter mixture model (SPMM) for testing the sensitivity of growth model parameter estimates to a random coefficient-dependent missingness mechanism. Simulations show that the SPMM recovers trajectory estimates as well as or better than a standard growth model across a range of missing data conditions. The paper concludes with practical advice for longitudinal data analysts.

  11. Constitutive modeling of the behavior of a sand-bentonite mixture

    Energy Technology Data Exchange (ETDEWEB)

    Saadat, F.


    The Canadian concept for disposal of nuclear fuel waste proposes a compacted mixture of sand and bentonite (known as buffer) as one of several barriers limiting radionuclide escape to the biosphere. To ensure acceptable performance of the buffer, it is necessary to understand its stress-strain-time behavior under rising groundwater pressure up to 10 MPa in the vault. High pressure triaxial laboratory tests have been performed at mean effective pressures up to 9 MPa, and porewater pressures up to 7 MPa at ambient temperatures. The results indicate that the strength of the buffer is dominated by the bentonite, and the material exhibits strain-softening behavior in shear. Three different approaches for constitutive modeling of the buffer behavior are examined in this thesis. A three-modulus anisotropic hyperelastic model is proposed for the small strain range. This model accounts for the anisotropic nature of the buffer and permits coupling of mean pressures with shear strains, or deviator stresses with volume strains. A second three-function hypoelastic model is also developed to describe constitutive relationships for straining-to-failure. The third elastic-plastic model (belonging to the Cam clay family) accounts for non-reversibility, non-linearity and dilatancy in the plastic range. In addition to these predictive models, a conceptual model is proposed based on critical state soil mechanics to provide a coherent framework for describing the behavior of buffer compacted to different densities. Finally, the interactions between buffer, container, rock and backfill are examined in the non-linear finite element analyses using the proposed elastic-plastic model for the buffer. The preliminary results suggest that swelling of the buffer against compressive backfill could potentially produce large shear strains in the buffer.

  12. Poisson-Helmholtz-Boltzmann model of the electric double layer: analysis of monovalent ionic mixtures. (United States)

    Bohinc, Klemen; Shrestha, Ahis; Brumen, Milan; May, Sylvio


    In the classical mean-field description of the electric double layer, known as the Poisson-Boltzmann model, ions interact exclusively through their Coulomb potential. Ion specificity can arise through solvent-mediated, nonelectrostatic interactions between ions. We employ the Yukawa pair potential to model the presence of nonelectrostatic interactions. The combination of Yukawa and Coulomb potential on the mean-field level leads to the Poisson-Helmholtz-Boltzmann model, which employs two auxiliary potentials: one electrostatic and the other nonelectrostatic. In the present work we apply the Poisson-Helmholtz-Boltzmann model to ionic mixtures, consisting of monovalent cations and anions that exhibit different Yukawa interaction strengths. As a specific example we consider a single charged surface in contact with a symmetric monovalent electrolyte. From the minimization of the mean-field free energy we derive the Poisson-Boltzmann and Helmholtz-Boltzmann equations. These nonlinear equations can be solved analytically in the weak perturbation limit. This together with numerical solutions in the nonlinear regime suggests an intricate interplay between electrostatic and nonelectrostatic interactions. The structure and free energy of the electric double layer depends sensitively on the Yukawa interaction strengths between the different ion types and on the nonelectrostatic interactions of the mobile ions with the surface.

  13. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model (United States)

    Ellefsen, Karl J.; Smith, David


    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
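    The recursive two-cluster partitioning described above can be sketched as follows. This illustration uses maximum-likelihood GMM fitting on synthetic 1-D "concentrations" (the paper uses Bayesian estimation via Hamiltonian Monte Carlo on multivariate geochemical data; the data and depth here are hypothetical).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bisect(samples, depth):
    """Recursively split samples into two clusters per level, mirroring the
    manual hierarchical procedure (a two-component mixture at each step)."""
    if depth == 0 or len(samples) < 4:
        return [samples]
    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
    labels = gmm.predict(samples)
    leaves = []
    for c in (0, 1):
        leaves.extend(bisect(samples[labels == c], depth - 1))
    return leaves

rng = np.random.default_rng(2)
# Hypothetical 1-D "element concentrations" with four latent groups.
data = np.concatenate([rng.normal(m, 0.2, 200) for m in (0, 2, 5, 7)]).reshape(-1, 1)
leaves = bisect(data, depth=2)
print(len(leaves), sorted(len(l) for l in leaves))
```

    Two levels of bisection yield four subclusters; in the paper, each split would additionally be checked against independent geologic knowledge before being accepted.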

  14. A novel combined model of discrete and mixture phases for nanoparticles in convective turbulent flow (United States)

    Mahdavi, Mostafa; Sharifpur, Mohsen; Meyer, Josua P.


    In this study, a new combined model is presented to study the flow and discrete phase features of nano-size particles for turbulent convection in a horizontal tube. Due to the complexity and many phenomena involved in particle-liquid turbulent flows, the conventional models are not able to properly predict some hidden aspects of the flow. Therefore, a new form of the Brownian force is implemented in the discrete phase model to predict the migration of the particles, and the energy equation is modified for the particles. The final results are then exported to the mixture equations of the flow. The effects of the mass diffusion due to thermophoresis, Brownian motion, and turbulent dispersion are implemented as source terms in the equations. The results are compared with experimental measurements from the literature and are adequately validated. The accuracy of the predicted heat transfer and friction coefficients is also discussed against measurements. The migration of the particles toward the centre of the tube is properly captured. The results show a non-uniform distribution of particles in the turbulent flow due to strong turbulent dispersion. The proposed combined model can open new viewpoints on particle-fluid interaction flows.

  15. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad


    As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
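    The cost-function idea can be sketched as follows. The logistic map stands in for a chaotic biological system (an assumption for brevity; the paper uses neuron and pacemaker models): a GMM is fitted to the delay-embedded "observed" attractor, and a candidate parameter is scored by the negative mean log-likelihood of its own attractor under that GMM, which is insensitive to initial conditions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def trajectory(r, n=2000, x0=0.3):
    """Logistic map x_{k+1} = r x_k (1 - x_k), a chaotic stand-in system."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def embed(x):
    """Two-dimensional delay embedding of a scalar time series."""
    return np.column_stack([x[:-1], x[1:]])

# Fit a GMM to the attractor of the "real" system (r = 3.9).
observed = trajectory(3.9)
gmm = GaussianMixture(n_components=8, random_state=0).fit(embed(observed))

def cost(r):
    """Negative mean log-likelihood of a candidate model's attractor under the
    GMM learned from the observed attractor; different initial condition."""
    return -gmm.score(embed(trajectory(r, x0=0.31)))

candidates = [3.7, 3.8, 3.9]
best = min(candidates, key=cost)
print(best)
```

    The true parameter minimizes the cost even though its trajectory starts from a different initial condition, which is the limitation of point-wise error measures that this cost function avoids.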

  16. Short-term traffic safety forecasting using Gaussian mixture model and Kalman filter

    Institute of Scientific and Technical Information of China (English)

    Sheng JIN; Dian-hai WANG; Cheng XU; Dong-fang MA


    In this paper, a prediction model is developed that combines a Gaussian mixture model (GMM) and a Kalman filter for online forecasting of traffic safety on expressways. Raw time-to-collision (TTC) samples are divided into two categories: those representing vehicles in risky situations and those in safe situations. Then, the GMM is used to model the bimodal distribution of the TTC samples, and the maximum likelihood (ML) estimates of the TTC distribution parameters are obtained using the expectation-maximization (EM) algorithm. We propose a new traffic safety indicator, named the proportion of exposure to traffic conflicts (PETTC), for assessing risk and predicting the safety of expressway traffic. A Kalman filter is applied to forecast the short-term safety indicator, PETTC, solving the online safety prediction problem. A dataset collected from four different expressway locations is used for performance estimation. The test results demonstrate the precision and robustness of the prediction model under different traffic conditions and using different datasets. These results could help decision-makers improve online traffic safety forecasting and enable optimal operation of expressway traffic management systems.
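    A minimal sketch of the two stages on synthetic data (the TTC distributions, the PETTC definition as posterior mass of the risky component, and the filter settings are all illustrative assumptions, not the paper's specification):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic TTC samples (seconds): a risky low-TTC mode and a safe high-TTC mode.
ttc = np.concatenate([rng.normal(1.5, 0.4, 300), rng.normal(6.0, 1.0, 700)])
gmm = GaussianMixture(n_components=2, random_state=0).fit(ttc.reshape(-1, 1))

# Indicator: average posterior mass of the low-mean (risky) component.
risky = int(np.argmin(gmm.means_.ravel()))
pettc = gmm.predict_proba(ttc.reshape(-1, 1))[:, risky].mean()

def kalman(z, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a random-walk state model."""
    x, p, out = z[0], 1.0, []
    for zi in z:
        p += q                # predict: state variance grows by process noise
        k = p / (p + r)       # Kalman gain
        x += k * (zi - x)     # update toward the measurement
        p *= 1 - k
        out.append(x)
    return np.array(out)

# Noisy per-interval PETTC readings, smoothed/forecast by the filter.
series = pettc + rng.normal(0, 0.05, 50)
smoothed = kalman(series)
print(round(float(pettc), 2), round(float(smoothed[-1]), 2))
```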

  17. Phase equilibrium of liquid mixtures: Experimental and modeled data using statistical associating fluid theory for potential of variable range approach (United States)

    Giner, Beatriz; Bandrés, Isabel; Carmen López, M.; Lafuente, Carlos; Galindo, Amparo


    A study of the phase equilibrium (experimental and modeled) of mixtures formed by a cyclic ether and haloalkanes has been carried out. Experimental data for the isothermal vapor-liquid equilibrium of mixtures formed by tetrahydrofuran and tetrahydropyran with isomeric chlorobutanes at temperatures of 298.15, 313.15, and 328.15 K are presented. Experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. The statistical associating fluid theory for potential of variable range (SAFT-VR) approach, together with standard combining rules without adjustable parameters, has been used to model the phase equilibrium. Good agreement between experiment and prediction is found with such a model: mean absolute deviations are of the order of 1 kPa for pressure and less than 0.013 mole fraction for vapor phase compositions. In order to improve these results, a new modeling has been carried out by introducing a single transferable parameter kij, which modifies the strength of the dispersion interaction between unlike components in the mixtures and is valid for all the studied mixtures, being neither temperature nor pressure dependent. This parameter, together with the SAFT-VR approach, provides a description of the vapor-liquid equilibrium of the mixtures that is in excellent agreement with the experimental data for most cases. The absolute deviations are of the order of 0.005 mole fraction for vapor phase compositions and less than 0.3 kPa for pressure, except for mixtures containing 2-chloro-2-methylpropane, for which the deviations in pressure are larger. Results obtained in this work in the modeling of the phase equilibrium with the SAFT-VR equation of state have been compared with those obtained in a previous study in which the approach was used to model similar mixtures with clear differences in thermodynamic behavior. We

  18. A Two-Factor Model of Temperament


    Evans, David E.; Rothbart, Mary K.


    The higher order structure of temperament was examined in two studies using the Adult Temperament Questionnaire. Because previous research showed robust levels of convergence between Rothbart’s constructs of temperament and the Big Five factors, we hypothesized a higher order two-factor model of temperament based on Digman’s higher order two-factor model of personality traits derived from factor analysis of the Big Five factors. Study 1 included 258 undergraduates. Digman’s model did not fit ...

  19. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA


    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach has a better performance over the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.

  20. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists. (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel


    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often