WorldWideScience

Sample records for linear mixture model

  1. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  2. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of…

  4. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  5. Modelling time course gene expression data with finite mixtures of linear additive models.

    Science.gov (United States)

    Grün, Bettina; Scharl, Theresa; Leisch, Friedrich

    2012-01-15

    A model class of finite mixtures of linear additive models is presented. The component-specific parameters in the regression models are estimated using regularized likelihood methods. The advantages of the regularization are that (i) the pre-specified maximum degrees of freedom for the splines are less crucial than for unregularized estimation and (ii) a suitable degree of freedom is selected automatically for each component individually. The performance is evaluated in a simulation study with artificial data as well as on a yeast cell cycle dataset of gene expression levels over time. The latest release version of the R package flexmix is available from CRAN (http://cran.r-project.org/).
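
    The abstract describes finite mixtures of regression models fitted by (regularized) likelihood methods. As a rough illustration of the underlying idea, here is a minimal, unregularized EM loop for a two-component mixture of linear regressions in NumPy; all data and starting values are synthetic, and this is a sketch of the model class rather than the flexmix algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from two linear regimes: y = 2x + noise and y = -x + 3 + noise
n = 200
x = rng.uniform(0, 4, n)
z = rng.random(n) < 0.5
y = np.where(z, 2.0 * x, -x + 3.0) + rng.normal(0.0, 0.3, n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept

K = 2
beta = np.array([[0.0, 1.5], [4.0, -0.5]])    # deliberately distinct starting lines
sigma = np.ones(K)
pi = np.full(K, 1.0 / K)

for _ in range(100):
    # E-step: responsibility of each component for each observation
    mu = X @ beta.T                           # (n, K) component-wise fitted means
    dens = pi * np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and variance update per component
    for k in range(K):
        w = r[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma[k] = np.sqrt((w * (y - X @ beta[k]) ** 2).sum() / w.sum())
    pi = r.mean(axis=0)

slopes = sorted(beta[:, 1])
print([round(s, 2) for s in slopes])          # one slope near -1, one near 2
```

    The regularized spline terms of the actual method would replace the plain least-squares M-step; the E-step/M-step alternation is the same.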

  6. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression

    Directory of Open Access Journals (Sweden)

    Lemieux Sébastien

    2006-08-01

    Background: The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results: On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion: The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.

  7. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
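
    The EM strategy sketched in the abstract can be illustrated with a deliberately simplified version: a Gaussian (identity-link) outcome model in place of a general GLM, with externally predicted exposure probabilities `p_hat` held fixed as prior class-membership weights. All names, values, and the data-generating setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x_true = rng.random(n) < 0.4                 # unobserved true exposure class
# Predicted probability of exposure, e.g. from an external prediction model
p_hat = np.where(x_true, rng.uniform(0.6, 0.95, n), rng.uniform(0.05, 0.4, n))
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 1.0, n)   # outcome, true effect = 2

b0, b1, sig = 0.0, 0.5, 1.0
for _ in range(50):
    # E-step: posterior probability of exposure given y and the prior p_hat
    d1 = p_hat * np.exp(-0.5 * ((y - b0 - b1) / sig) ** 2)
    d0 = (1 - p_hat) * np.exp(-0.5 * ((y - b0) / sig) ** 2)
    r = d1 / (d1 + d0)
    # M-step: weighted least squares over both possible exposure states
    Xa = np.column_stack([np.ones(2 * n), np.r_[np.ones(n), np.zeros(n)]])
    wa = np.r_[r, 1 - r]
    ya = np.r_[y, y]
    Xw = Xa * wa[:, None]
    b0, b1 = np.linalg.solve(Xw.T @ Xa, Xw.T @ ya)
    sig = np.sqrt((wa * (ya - Xa @ np.array([b0, b1])) ** 2).sum() / n)

print(round(b1, 2))   # association estimate, close to the true effect of 2
```

    In contrast, simply dichotomizing `p_hat` at 0.5 and regressing on the resulting proxy would attenuate the estimate, which is the bias the mixture approach avoids.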

  8. Latent log-linear models for handwritten digit classification.

    Science.gov (United States)

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  9. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
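
    A minimal sketch of the model class the abstract compares by BIC, assuming an intercept-only two-component Poisson mixture fitted by EM; the data and parameter values are synthetic and unrelated to the heart-disease study.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

# Counts drawn from two latent risk groups with different Poisson rates
n = 400
z = rng.random(n) < 0.3
y = np.where(z, rng.poisson(6.0, n), rng.poisson(1.0, n))

lam = np.array([0.5, 5.0])                 # starting rates
pi = np.array([0.5, 0.5])                  # starting mixing proportions
logfact = np.array([lgamma(v + 1.0) for v in y])

for _ in range(200):
    # E-step: responsibilities from component-wise Poisson log-likelihoods
    ll = y[:, None] * np.log(lam) - lam - logfact[:, None] + np.log(pi)
    w = np.exp(ll - ll.max(axis=1, keepdims=True))
    r = w / w.sum(axis=1, keepdims=True)
    # M-step: weighted rate and mixing-proportion updates
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
    pi = r.mean(axis=0)

# BIC = -2 log L + k log n, with k = 3 free parameters (two rates, one weight)
ll = y[:, None] * np.log(lam) - lam - logfact[:, None] + np.log(pi)
mix_ll = np.log(np.exp(ll).sum(axis=1)).sum()
bic = -2.0 * mix_ll + 3.0 * np.log(n)
rates = sorted(lam)
print([round(v, 2) for v in rates], round(bic, 1))
```

    Concomitant-variable and zero-inflated variants extend this by letting the mixing proportions depend on covariates or by adding a structural zero component; model choice then proceeds by comparing BIC values as in the paper.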

  11. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
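
    The idea of representing a non-stationary series by a set of local linear regressive models can be sketched with fixed segments and least-squares AR(1) fits; this toy omits the self-organising network, neural gas, and the similarity measure that SOMAR actually uses.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy non-stationary series: the AR(1) coefficient switches halfway through
n = 600
phi_true = np.r_[np.full(n // 2, 0.9), np.full(n // 2, -0.5)]
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t] * x[t - 1] + rng.normal(0.0, 0.5)

def fit_ar1(seg):
    """Least-squares AR(1) coefficient for one segment."""
    past, cur = seg[:-1], seg[1:]
    return float(past @ cur / (past @ past))

# Represent the global series by local AR(1) models on fixed-length segments
seg_len = 100
phis = [fit_ar1(x[i:i + seg_len]) for i in range(0, n, seg_len)]
print([round(p, 2) for p in phis])   # early segments near 0.9, late ones near -0.5
```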

  12. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    International Nuclear Information System (INIS)

    Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.

    2007-01-01

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy which exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities, direct input to the remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute a major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model…

  13. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthén & Muthén, 2000; B. O. Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  14. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  15. Non-linearity parameter of binary liquid mixtures at elevated pressures

    Indian Academy of Sciences (India)

    Ultrasonic studies in liquid mixtures provide valuable information about structure and interaction in such systems. The present investigation comprises a theoretical evaluation of the acoustic non-linearity parameter B/A of four binary liquid ...

  16. Linear kinetic theory and particle transport in stochastic mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Pomraning, G.C. [Univ. of California, Los Angeles, CA (United States)

    1995-12-31

    We consider the formulation of linear transport and kinetic theory describing energy and particle flow in a random mixture of two or more immiscible materials. Following an introduction, we summarize early and fundamental work in this area, and we conclude with a brief discussion of recent results.

  17. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé's quadratic model, Darroch-Waller model, log-contrast model, mixture-amount models, random coefficient models and multi-response model. Robust mixture designs and mixture designs in blocks have also been reviewed. Moreover, some applications of mixture desig...

  18. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
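
    The notion of importance (a density ratio between test and training distributions) can be illustrated with single Gaussians fitted in closed form; GM-KLIEP itself fits full Gaussian mixtures under the KLIEP objective, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training and test samples from two different 1-D Gaussians
x_tr = rng.normal(0.0, 1.0, 5000)
x_te = rng.normal(1.0, 1.0, 5000)

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Closed-form MLE fits (a one-component stand-in for a Gaussian mixture)
mu_tr, var_tr = x_tr.mean(), x_tr.var()
mu_te, var_te = x_te.mean(), x_te.var()

def importance(x):
    """Estimated density ratio w(x) = p_test(x) / p_train(x)."""
    return gauss_pdf(x, mu_te, var_te) / gauss_pdf(x, mu_tr, var_tr)

# The ratio up-weights points that are likely under the test distribution
print(round(float(importance(2.0)), 2), round(float(importance(-2.0)), 2))
```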

  19. On the I-V characteristic in the non-linear ternary mixture model for polycrystalline semiconductors

    International Nuclear Information System (INIS)

    Nguyen Van Lien; Nguyen Hoai Nam

    2000-08-01

    A simple expression for the voltage dependence of grain-boundary potential barrier heights is proposed and the Effective Medium Approximation is extended for calculating the I-V characteristic in ternary mixtures of highly non-linear circuit elements. Numerical calculations are performed for the case of polycrystalline semiconductors, such as ZnO-based varistors, where thermionic emission is believed to be the dominant mechanism for electric conduction across double Schottky barriers at room temperature. (author)

  20. On Two Mixture-Based Clustering Approaches Used in Modeling an Insurance Portfolio

    Directory of Open Access Journals (Sweden)

    Tatjana Miljkovic

    2018-05-01

    We review two complementary mixture-based clustering approaches for modeling unobserved heterogeneity in an insurance portfolio: the generalized linear mixed cluster-weighted model (CWM) and mixture-based clustering for an ordered stereotype model (OSM). The latter is for modeling ordinal variables, and the former is for modeling losses as a function of mixed-type covariates. The article extends the idea of mixture modeling to a multivariate classification for the purpose of testing unobserved heterogeneity in an insurance portfolio. The application of both methods is illustrated on a well-known French automobile portfolio, in which the model fitting is performed using the expectation-maximization (EM) algorithm. Our findings show that these mixture-based clustering methods can be used to further test unobserved heterogeneity in an insurance portfolio and as such may be considered in insurance pricing, underwriting, and risk management.

  1. Technical note: A linear model for predicting δ13Cprotein.

    Science.gov (United States)

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high r value of 0.93 (r² = 0.86, P analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
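
    The reported two-term model can be evaluated directly. The coefficients (0.78, −0.58) and the intercept (−4.7) come from the abstract; the input values below are purely illustrative, not data from the study.

```python
def predict_d13c_protein(d13c_co, delta13c_ap_co):
    """Two-term linear model from the abstract:
    d13C_protein (permil) = 0.78 * d13C_collagen - 0.58 * Delta13C_ap-co - 4.7
    """
    return 0.78 * d13c_co - 0.58 * delta13c_ap_co - 4.7

# Hypothetical inputs: collagen value of -19.0 permil, apatite-collagen spacing of 4.5
print(round(predict_d13c_protein(-19.0, 4.5), 2))  # → -22.13
```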

  2. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.

  3. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
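
    Modelling a density as a linear combination of Gaussians can be sketched as follows; the weights, means, and standard deviations are made up for illustration and are not the parameters fitted to HRV data in the paper.

```python
import numpy as np

def gmm_density(x, weights, means, sds):
    """Density of a linear combination of Gaussians (weights must sum to 1)."""
    x = np.asarray(x, dtype=float)[..., None]
    comp = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2.0 * np.pi))
    return (weights * comp).sum(axis=-1)

# Illustrative three-component model of an RR-interval-like quantity (seconds)
w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.8, 1.0, 1.2])
sd = np.array([0.05, 0.08, 0.10])

# Trapezoidal check that the three-component mixture integrates to one
grid = np.linspace(0.0, 2.0, 20001)
f = gmm_density(grid, w, mu, sd)
area = float(((f[1:] + f[:-1]) * 0.5 * np.diff(grid)).sum())
print(round(area, 4))  # ≈ 1.0
```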

  4. A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media

    Science.gov (United States)

    Martin, C. J.; Lee, Y. M.

    1972-01-01

    A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.

  5. Separation of a multicomponent mixture by gaseous diffusion: modelization of the enrichment in a capillary - application to a pilot cascade

    International Nuclear Information System (INIS)

    Doneddu, F.

    1982-01-01

    Starting from the modelization of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for an isotopic linear mixture, to a multicomponent mixture. A generalization is given of the notion of separation yields and characteristic pressure classically used for separations of isotopic linear mixtures. We present formulas for diagonalizing the diffusion operator, modelization of a multistage gaseous diffusion cascade and comparison with the experimental results of a drain cascade (N2-SF6-UF6 mixture). [fr]

  6. Linear kinetic theory and particle transport in stochastic mixtures

    International Nuclear Information System (INIS)

    Pomraning, G.C.

    1994-03-01

    The primary goal in this research is to develop a comprehensive theory of linear transport/kinetic theory in a stochastic mixture of solids and immiscible fluids. The statistics considered correspond to N-state discrete random variables for the interaction coefficients and sources, with N denoting the number of components of the mixture. The mixing statistics studied are Markovian as well as more general statistics, such as renewal processes. A further goal of this work is to demonstrate the applicability of the formalism to real world engineering problems. This three year program was initiated June 15, 1993 and has been underway nine months. Many significant results have been obtained, both in the formalism development and in representative applications. These results are summarized by listing the archival publications resulting from this grant, including the abstracts taken directly from the papers

  7. Differential expression among tissues in morbidly obese individuals using a finite mixture model under BLUP approach

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Trabzuni, Daniah; Bonder, Marc Jan

    …effects of the interactions between tissues and probes using BLUP (Best Linear Unbiased Prediction) linear models correcting for gender, which were subsequently used in a finite mixture model to detect DE genes in each tissue. This approach evades the multiple-testing problem and is able to detect…

  8. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  9. A turbulence model in mixtures. First part: Statistical description of mixture

    International Nuclear Information System (INIS)

    Besnard, D.

    1987-03-01

    Classical theory of mixtures gives a model for molecular mixtures. This kind of model is based on a small-gradient approximation for concentration, temperature, and pressure. We present here a mixture model allowing for large gradients in the flow. We also show that, with a local balance assumption between material diffusion and flow-gradient evolution, we obtain a model similar to those mentioned above [fr]

  10. Spinodal decomposition in a food colloid-biopolymer mixture: evidence for a linear regime

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Suresh [Department of Physics and Fribourg Center for Nanomaterials, University of Fribourg, 1700 Fribourg (Switzerland); Tuinier, Remco [Forschungszentrum Juelich, Institut fuer Festkoerperforschung, 52425 Juelich (Germany); Schurtenberger, Peter [Department of Physics and Fribourg Center for Nanomaterials, University of Fribourg, 1700 Fribourg (Switzerland)

    2006-07-05

    We investigate phase separation and structural evolution in a complex food colloid (casein micelles) and biopolymer (xanthan) mixture using small-angle light scattering. We demonstrate that phase separation is induced by a depletion mechanism, and that the resulting coexistence curve can be described by osmotic equilibrium theory for mixtures of colloids and polymer chains in a background solvent, taking into account interactions between the polymer chains in the excluded volume limit. We show that the light scattering pattern of an unstable mixture exhibits the typical behaviour of spinodal decomposition, and we are able to confirm the validity of dynamic similarity scaling. We find three distinct regimes (initial or linear, intermediate and transition stage) for the decomposition kinetics that differ in the time dependence of the peak position of the structure factor. In particular we find clear evidence for the existence of an initial linear regime, where the peak position remains constant and the amplitude grows. The existence of spinodal-like decomposition and the validity of universal scaling in the intermediate and transition stages have been found in previous studies of phase separation in attractive colloidal suspensions. However, to our knowledge the initial linear regime has never been observed in colloidal suspensions, and we attribute this at least partly to the effect of hydrodynamic interactions which are efficiently screened in our system due to the fact that the measurements were performed at high polymer concentrations, i.e. in the semi-dilute regime. (letter to the editor)

  11. Augmenting Scheffe Linear Mixture Models With Squared and/or Crossproduct Terms

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Szychowski, Jeffrey M.; Loeppky, Jason L.

    2001-01-01

    A glass composition variation study (CVS) for high-level waste (HLW) stored at the Idaho National Engineering and Environmental Laboratory (INEEL) is being statistically designed and performed in phases over several years. The purpose of the CVS is to investigate and model how HLW-glass properties depend on glass composition within a glass composition region compatible with the expected range of INEEL HLW. The resulting glass property-composition models will be used to develop desirable glass formulations and for other purposes. Phases 1 and 2 of the CVS have been completed so far, and are briefly described. The main focus of this paper is the CVS Phase 3 experimental design (test matrix). The Phase 3 experimental design was chosen to augment the Phase 1 and 2 data with additional data points, as well as to account for additional glass components of interest not studied in Phases 1 and/or 2. The paper describes how these Phase 3 experimental design augmentation challenges were addressed using the previous data, preliminary property-composition models, and statistical mixture experiment and optimal experimental design methods and software. The resulting Phase 3 experimental design of 30 simulated HLW glasses is presented and discussed.

  12. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, making it possible in particular to follow both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)

  13. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations, which place restrictions on the possible material response during the motion and phase change. In modeling structured multiphase mixtures, the intrinsic motion of grains or particles is modeled by adjoining additional balance equations to the basic set of field equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures, and they are usually derived through very special assumptions about the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, the Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations needed to model the structural characteristics of a mixture. Utilizing a volume averaging procedure it is possible not only to derive the basic set of field equations discussed above, but also a very general set of additional balance equations for modeling the structural properties of the mixture.

  14. Superlattice configurations in linear chain hydrocarbon binary mixtures

    Indian Academy of Sciences (India)

    Unknown

    Keywords: long-chain alkanes; binary mixtures; superlattices; discrete orientational changes.

  15. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulation model experiment was generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0, using a MATLAB program. Random noise was intentionally added to the pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error was computed for each pixel value for three images generated with SPOT data. Each pixel can then either be correctly classified or misclassified. The results showed a large improvement in classification accuracy; for example, in image 1, 41% of the pixels were misclassified due to atmospheric noise. After degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that the accuracy of classification can be improved by degradation of atmospheric noise.
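
The simulation scheme this abstract describes (random class cover proportions normalized to sum to 1.0, linear mixing of endmember spectra, intentionally added noise) can be sketched in a few lines. The sketch below uses NumPy rather than MATLAB, and the endmember values, image size and noise level are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_classes, n_bands = 1000, 3, 4

# Hypothetical endmember spectra (rows: classes, columns: spectral bands).
endmembers = rng.uniform(50.0, 200.0, size=(n_classes, n_bands))

# Class cover proportions (CCP): random, then normalized so that the
# class proportions of every pixel sum to 1.0.
ccp = rng.random((n_pixels, n_classes))
ccp /= ccp.sum(axis=1, keepdims=True)

# Linear mixture model: each pixel is a proportion-weighted sum of endmembers.
pixels = ccp @ endmembers

# Intentionally add random noise (here 5% of the signal spread) to mimic
# atmospheric and sensor effects.
noisy = pixels + rng.normal(0.0, 0.05 * pixels.std(), size=pixels.shape)
```

Classification accuracy can then be assessed by unmixing `noisy` with and without a noise-correction step and comparing the recovered proportions to the known `ccp`.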

  16. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    Science.gov (United States)

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text]. In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.

  17. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  18. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  19. Probabilistic mixture-based image modelling

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 482-500 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:CESNET(CZ) 387/2010; GA MŠk(CZ) 2C06019; GA ČR(CZ) GA103/11/0335 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF texture modelling * discrete distribution mixtures * Bernoulli mixture * Gaussian mixture * multi-spectral texture modelling Subject RIV: BD - Theory of Information Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/haindl-0360244.pdf

  20. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  1. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Carlo simulations evaluate the properties of the estimation procedures. Results show that the proposed model is viable and flexible for purposes of forecasting volatility. Model uncertainty is accounted for by employing Bayesian model averaging. Bayesian model averaging provides very competitive forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications.

  2. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  3. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  4. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  5. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
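
The contrast between classical and Berkson error that motivates this work can be illustrated with a small simulation. This is a deliberately simplified i.i.d. sketch (no mixed effects, no autocorrelation, unit variances assumed throughout), not the paper's bias analysis: classical error attenuates the slope by the reliability ratio, while pure Berkson error leaves it unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 200_000, 2.0

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)

x_true = rng.normal(0.0, 1.0, n)                 # true exposure, variance 1
y = beta * x_true + rng.normal(0.0, 1.0, n)      # outcome

# Classical error: observed = truth + noise. The slope is attenuated by the
# reliability ratio var(x) / (var(x) + var(u)) = 1/2 here.
x_classical = x_true + rng.normal(0.0, 1.0, n)
b_classical = slope(x_classical, y)              # close to beta/2 = 1.0

# Berkson error: truth = assigned value + noise (e.g. a fixed-site monitor
# value assigned to each subject). The slope on the assigned value is unbiased.
x_assigned = rng.normal(0.0, 1.0, n)
y_berkson = (beta * (x_assigned + rng.normal(0.0, 1.0, n))
             + rng.normal(0.0, 1.0, n))
b_berkson = slope(x_assigned, y_berkson)         # close to beta = 2.0
```

A mixture of the two error types, as in the study, attenuates the slope by an amount between these two extremes, which is what the method-of-moments correction has to account for.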

  6. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  7. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
    • the number of different ratios,
    • the number of points belonging to each ratio group,
    • the ratios (i.e. slopes) of each group.
    Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
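
The fitting idea described above, several zero-intercept regression lines fitted to subgroups of the data by EM, with each slope estimating an isotope ratio, can be sketched as follows. This is a hypothetical NumPy illustration with a fixed number of groups and made-up ratio values; the published method additionally drops small groups via a control parameter to determine the number of ratios.

```python
import numpy as np

def fit_ratio_mixture(x, y, n_groups=2, n_iter=300):
    """EM for a mixture of zero-intercept lines y = slope * x.
    Each fitted slope estimates one isotope ratio (illustrative sketch)."""
    # Initialise the slopes from quantiles of the per-point ratios y/x.
    q = (np.arange(n_groups) + 0.5) / n_groups
    slopes = np.quantile(y / x, q)
    weights = np.full(n_groups, 1.0 / n_groups)
    sigma = np.std(y) + 1e-12
    for _ in range(n_iter):
        # E-step: responsibilities under Gaussian residuals.
        resid = y[:, None] - x[:, None] * slopes[None, :]
        logp = -0.5 * (resid / sigma) ** 2 + np.log(weights)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope per group.
        slopes = ((r * x[:, None] * y[:, None]).sum(0)
                  / (r * x[:, None] ** 2).sum(0))
        weights = r.mean(axis=0)
        resid = y[:, None] - x[:, None] * slopes[None, :]
        sigma = np.sqrt((r * resid ** 2).sum() / len(x)) + 1e-12
    return np.sort(slopes), weights

# Synthetic transient signals from two hypothetical ratio groups.
rng = np.random.default_rng(3)
x = rng.uniform(1.0, 10.0, 400)
true_ratio = np.where(rng.random(400) < 0.5, 0.0072, 0.0500)
y = true_ratio * x + rng.normal(0.0, 0.005, 400)
ratios, _ = fit_ratio_mixture(x, y)
```

The two entries of `ratios` should recover the two generating slopes, without the analyst having to assign points to groups by hand.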

  8. Non-linear models for the detection of impaired cerebral blood flow autoregulation.

    Science.gov (United States)

    Chacón, Max; Jara, José Luis; Miranda, Rodrigo; Katsogridakis, Emmanuel; Panerai, Ronney B

    2018-01-01

    The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output, were implemented with support vector machines (SVM) using separate recordings for learning and validation. Dynamic SVM implementations used either moving average or autoregressive structures. The efficiency of dynamic CA was estimated from the model's derived CBFV response to a step change in BP as an autoregulation index for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia, and 2.5 ± 1.2 for hypercapnia with an area under the receiver-operator curve of 0.955. The high performance achieved by non-linear SVM models to detect deterioration of dynamic CA should encourage further assessment of its applicability to clinical conditions where CA might be impaired.

  9. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  10. Application of fuzzy logic to determine the odour intensity of model gas mixtures using electronic nose

    Science.gov (United States)

    Szulczyński, Bartosz; Gębicki, Jacek; Namieśnik, Jacek

    2018-01-01

    The paper presents the possibility of applying fuzzy logic to determine the odour intensity of model ternary gas mixtures (α-pinene, toluene and triethylamine) using an electronic nose prototype. The results obtained using fuzzy logic algorithms were compared with the values obtained using a multiple linear regression (MLR) model and sensory analysis. As a result of the studies, it was found that the electronic nose prototype, together with the fuzzy logic pattern recognition system, can be successfully used to estimate the odour intensity of the tested gas mixtures. The correctness of the results obtained using fuzzy logic was equal to 68%.
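
As a toy illustration of the fuzzy-logic step, the sketch below maps a single concentration reading to an odour-intensity score using shoulder/triangular membership functions and weighted-average defuzzification. All breakpoints, rule levels and the single-input design are invented for illustration; the prototype described above works on multi-sensor electronic-nose signals, not a single concentration.

```python
def ramp_down(x, a, b):
    """Membership 1 below a, falling linearly to 0 at b."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def ramp_up(x, a, b):
    """Membership 0 below a, rising linearly to 1 at b."""
    return 1.0 - ramp_down(x, a, b)

def tri(x, a, b, c):
    """Triangular membership peaking at b."""
    return min(ramp_up(x, a, b), ramp_down(x, b, c))

def odour_intensity(conc):
    """Map one concentration value to an intensity score on a 1-5 scale
    via three made-up fuzzy rules and weighted-average defuzzification."""
    rules = [(ramp_down(conc, 0.2, 1.0), 1.0),  # low concentration -> weak
             (tri(conc, 0.5, 2.0, 5.0), 3.0),   # medium -> noticeable
             (ramp_up(conc, 3.0, 8.0), 5.0)]    # high -> strong
    den = sum(mu for mu, _ in rules)
    return sum(mu * lvl for mu, lvl in rules) / den if den else 0.0
```

With these breakpoints, `odour_intensity(0.1)` returns 1.0 and `odour_intensity(10.0)` returns 5.0, with smooth interpolation in between.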

  11. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  12. mixtools: An R Package for Analyzing Mixture Models

    Directory of Open Access Journals (Sweden)

    Tatiana Benaglia

    2009-10-01

    The mixtools package for R provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.
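
As a rough Python analogue of the univariate normal-mixture EM that mixtools implements (its `normalmixEM` function), the following hand-rolled sketch fits a k-component normal mixture; the initial values and the synthetic data are illustrative assumptions.

```python
import numpy as np

def normal_mix_em(x, k=2, n_iter=200):
    """EM for a univariate k-component normal mixture
    (a NumPy sketch in the spirit of mixtools::normalmixEM)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out initial means
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior component probabilities for each point.
        logp = (-0.5 * ((x[:, None] - mu) / sd) ** 2
                - np.log(sd) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, standard deviations, mixing proportions.
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(0) / n_k
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / n_k) + 1e-12
        w = n_k / len(x)
    order = np.argsort(mu)
    return mu[order], sd[order], w[order]

# Two well-separated synthetic components.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(5.0, 1.0, 400)])
mu, sd, w = normal_mix_em(x)
```

The fitted `mu` and `w` should recover the generating means and the 60/40 mixing proportions.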

  13. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
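
The equivalence stated above can be checked numerically for a two-component Poisson mixture: the truncated mixture equals the mixture of truncated densities once the mixing weights are re-scaled proportionally to pi_j * (1 - p_j(0)). The parameter values below are arbitrary.

```python
import math

def pois_pmf(x, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

lams = [1.0, 4.0]   # component means (arbitrary)
pis = [0.3, 0.7]    # mixing weights of the untruncated mixture
xs = range(1, 30)   # zero-truncated support (tail beyond 29 is negligible)

# (a) Truncated mixture: mix first, then condition on x > 0.
p0 = sum(p * pois_pmf(0, l) for p, l in zip(pis, lams))
trunc_mixture = [sum(p * pois_pmf(x, l) for p, l in zip(pis, lams)) / (1 - p0)
                 for x in xs]

# (b) Mixture of truncated densities, with mixing weights re-scaled
#     proportionally to pi_j * (1 - p_j(0)).
total = sum(p * (1 - pois_pmf(0, l)) for p, l in zip(pis, lams))
w = [p * (1 - pois_pmf(0, l)) / total for p, l in zip(pis, lams)]
mix_trunc = [sum(wj * pois_pmf(x, l) / (1 - pois_pmf(0, l))
                 for wj, l in zip(w, lams)) for x in xs]
```

The two probability vectors agree term by term, which is the explicitly given mapping between the two model classes that the article exploits.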

  14. A linear dynamic model for rotor-spun composite yarn spinning process

    International Nuclear Information System (INIS)

    Yang, R H; Wang, S Y

    2008-01-01

    A linear dynamic model is established for the stable rotor-spun composite yarn spinning process. Approximate oscillating frequencies in the vertical and horizontal directions are obtained. By suitable choice of certain processing parameters, the mixture construction after the convergent point can be optimally matched. The presented study is expected to provide a general pathway to understand the motion of the rotor-spun composite yarn spinning process

  15. Shear viscosity of liquid mixtures: Mass dependence

    International Nuclear Information System (INIS)

    Kaushal, Rohan; Tankeshwar, K.

    2002-06-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. It has been found that the theoretical result is in good agreement with the computer simulation result for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios, the shear viscosity of the mixture is not explained by any of the empirical models. (author)

  16. Shear viscosity of liquid mixtures: Mass dependence

    CERN Document Server

    Kaushal, R

    2002-01-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. It has been found that the theoretical result is in good agreement with the computer simulation result for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios, the shear viscosity of the mixture is not explained by any of the empirical models.

  17. Deposition behaviour of model biofuel ash in mixtures with quartz sand. Part 1: Experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Mischa Theis; Christian Mueller; Bengt-Johan Skrifvars; Mikko Hupa; Honghi Tran [Aabo Akademi Process Chemistry Centre, Aabo (Finland). Combustion and Materials Chemistry

    2006-10-15

    Model biofuel ash of well-defined size and melting properties was fed into an entrained flow reactor (EFR) to simulate the deposition behaviour of commercially applied biofuel mixtures in large-scale boilers. The aim was to obtain consistent experimental data that can be used for validation of computational fluid dynamics (CFD)-based deposition models. The results showed that while up to 80 wt% of the feed was lost to the EFR wall, the composition of the model ash particles collected at the reactor exit did not change. When model ashes were fed into the reactor individually, the ash particles were found to be sticky when they contained more than 15 wt% molten phase. When model ashes were fed in mixtures with silica sand, it was found that only a small amount of sand particles was captured in the deposits; the majority rebounded upon impact. The presence of sand in the feed mixture reduced the deposit buildup by more than could be expected from linear interpolation between the model ash and the sand. The results suggested that sand addition to model ash may prevent deposit buildup through erosion. 22 refs., 6 figs., 3 tabs.

  18. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  19. A Robust Non-Gaussian Data Assimilation Method for Highly Non-Linear Models

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2018-03-01

    In this paper, we propose an efficient EnKF implementation for non-Gaussian data assimilation based on Gaussian mixture models and Markov chain Monte Carlo (MCMC) methods. The proposed method works as follows: based on an ensemble of model realizations, prior errors are estimated via a Gaussian mixture density whose parameters are approximated by means of an expectation-maximization method. Then, by using an iterative method, observation operators are linearized about current solutions and posterior modes are estimated via an MCMC implementation. The acceptance/rejection criterion is similar to that of the Metropolis-Hastings rule. Experimental tests are performed on the Lorenz 96 model. The results show that the proposed method can decrease prior errors by several orders of magnitude in a root-mean-square-error sense for nearly sparse or dense observational networks.
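
A one-dimensional caricature of the sampling step may help: with prior errors described by a Gaussian mixture (hard-coded here; in the method it would be fitted to the ensemble via EM) and a Gaussian observation model, a random-walk Metropolis-Hastings chain targets the posterior. This sketch omits the ensemble, the EM fit and the linearized observation operator of the actual method, and all numbers are illustrative.

```python
import math
import random

random.seed(7)

def gmm_prior(x):
    """Two-component Gaussian mixture prior (weights, means, sds made up)."""
    comps = [(0.6, -1.0, 0.5), (0.4, 2.0, 1.0)]
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / s
               for w, m, s in comps)

def likelihood(x, obs=1.0, obs_sd=0.8):
    """Gaussian observation model (unnormalized)."""
    return math.exp(-0.5 * ((obs - x) / obs_sd) ** 2)

# Random-walk Metropolis-Hastings targeting posterior ∝ prior × likelihood.
x, chain = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)
    accept = (gmm_prior(prop) * likelihood(prop)) / (gmm_prior(x) * likelihood(x))
    if random.random() < accept:
        x = prop
    chain.append(x)

# Discard burn-in, then summarize the posterior.
post_mean = sum(chain[5000:]) / len(chain[5000:])
```

Because the prior is bimodal, the posterior mean lies between the two prior modes, pulled toward the observation; a Gaussian-prior EnKF update could not represent this bimodality.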

  20. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² and r lower for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  1. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for

  2. A quasi-linear formulation for chemically reacting compressible mixtures of imperfect gases

    Science.gov (United States)

    Lentini, D.

    2008-01-01

A quasi-linear formulation is proposed for high-speed finite-rate chemically reacting mixtures of imperfect gases, i.e., thermally perfect gases whose specific heats vary with temperature. It retains the same formalism as a well-tried counterpart formulation for perfect gases, which has been proven to be suited to the application of accurate and fast algorithms. Equations for both quasi-monodimensional flows and axisymmetric viscous flows are presented. The approach is based on the definition of an appropriate function F of temperature and concentration, which allows one to identify Riemann variables for the flow under consideration; the formulation also includes equations for the entropy and the mass fractions of the N chemical species present in the reacting mixture. The key function F must be computed by numerical quadrature, together with its derivatives with respect to the individual species mass fractions. An example of the computation of these quantities is reported, with reference to conditions in the combustion chamber of the Vulcain engine powering the first stage of the Ariane 5 launcher. This computation is demonstrated to be both economical and accurate, thus proving the workability of the proposed approach. Further, an estimate of the variation of the mixture specific heat ratio with temperature is given, in order to underline the importance of the effect under consideration.

  3. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  4. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
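A two-component normal mixture can be fitted by maximum likelihood with the EM algorithm; the sketch below uses synthetic data in place of the stock/rubber price series (a generic EM implementation, not the paper's code, with invented means and weights):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic two-component data standing in for the economic series.
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 200)])

# EM for a two-component normal mixture (maximum likelihood).
w = np.array([0.5, 0.5])           # mixing weights
mu = np.array([-1.0, 1.0])         # component means (rough init)
sd = np.array([1.0, 1.0])          # component standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of the parameters.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
```

After convergence the estimated means, weights and standard deviations recover the simulated components.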

  5. Linear models with R

    CERN Document Server

    Faraway, Julian J

    2014-01-01

A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  6. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    Science.gov (United States)

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise

  7. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
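Constrained least squares unmixing with a sum-to-one constraint on the fractions can be written as a small KKT linear system; the endmember spectra below are invented three-band values, not AVHRR data:

```python
import numpy as np

# Endmember spectra as columns (vegetation, soil, shade) over 3 bands.
# These reflectance values are invented for illustration.
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.02],
              [0.30, 0.35, 0.02]])
true_f = np.array([0.6, 0.3, 0.1])   # true subpixel fractions
pixel = E @ true_f                   # observed mixed-pixel spectrum

# Sum-to-one constrained least squares via the KKT system:
#   minimise ||E f - pixel||^2  subject to  1^T f = 1.
n = E.shape[1]
KKT = np.block([[2.0 * E.T @ E, np.ones((n, 1))],
                [np.ones((1, n)), np.zeros((1, 1))]])
rhs = np.concatenate([2.0 * E.T @ pixel, [1.0]])
sol = np.linalg.solve(KKT, rhs)
fractions = sol[:n]                  # recovered vegetation/soil/shade fractions
```

Because the synthetic pixel is an exact mixture, the solver recovers the true fractions; with real data a non-negativity constraint would typically be added as well.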

  8. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  9. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  10. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    Science.gov (United States)

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
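The composite simplex centroid-simplex centroid design can be generated mechanically: enumerate the centroids of every subset of components for each mixture system, then cross the two designs. The sketch below covers only this design-generation step, not the Scheffé modelling:

```python
from itertools import combinations

def simplex_centroid(k):
    # All centroids of non-empty subsets of k mixture components:
    # pure blends, binary 50:50 blends, ..., the overall centroid.
    pts = []
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            p = [0.0] * k
            for i in subset:
                p[i] = 1.0 / r
            pts.append(tuple(p))
    return pts

mobile = simplex_centroid(3)    # mobile-phase mixture (3 components)
extract = simplex_centroid(3)   # extraction-solution mixture (3 components)

# Composite design: every pairing of a mobile-phase point with an
# extraction-solution point (49 runs before blocking).
composite = [(m, e) for m in mobile for e in extract]
```

Each three-component simplex centroid design has 2³ − 1 = 7 points, so the crossed design has 7 × 7 = 49 candidate runs, which the authors then block to obtain the split-plot structure.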

  11. Spatially adaptive mixture modeling for analysis of FMRI time series.

    Science.gov (United States)

    Vincent, Thomas; Risser, Laurent; Ciuciu, Philippe

    2010-04-01

Within-subject analysis in fMRI essentially addresses two problems, the detection of brain regions eliciting evoked activity and the estimation of the underlying dynamics. In Makni et al. (2005) and Makni et al. (2008), a detection-estimation framework was proposed to tackle these problems jointly, since they are connected to one another. In the Bayesian formalism, detection is achieved by modeling activating and nonactivating voxels through independent mixture models (IMM) within each region, while hemodynamic response estimation is performed at a regional scale in a nonparametric way. Instead of IMMs, in this paper we take advantage of spatial mixture models (SMM) for their nonlinear spatial regularizing properties. The proposed method is unsupervised and spatially adaptive in the sense that the amount of spatial correlation is automatically tuned from the data, and this setting varies across brain regions. In addition, the level of regularization is specific to each experimental condition, since both the signal-to-noise ratio and the activation pattern may vary across stimulus types in a given brain region. These aspects require the precise estimation of multiple partition functions of underlying Ising fields. This is addressed efficiently by first using path sampling for a small subset of fields and then a recently developed fast extrapolation technique for the large remaining set. Simulation results emphasize that detection relying on supervised SMM outperforms its IMM counterpart and that unsupervised spatial mixture models achieve similar results without any hand-tuning of the correlation parameter. On real datasets, the gain is illustrated in a localizer fMRI experiment: brain activations appear more spatially resolved using SMM in comparison with classical general linear model (GLM)-based approaches, while estimating a specific parcel-based HRF shape. Our approach therefore validates the treatment of unsmoothed fMRI data without fixed GLM

  12. Spatial generalised linear mixed models based on distances.

    Science.gov (United States)

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  13. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  14. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria ... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples....

  15. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  16. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
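One common way to use a missing-data pattern in a model, as described above, is to enter a dropout-pattern indicator and its interaction with time as fixed effects. The sketch below does this with ordinary least squares on simulated data (an illustration of the pattern-mixture idea only, not an SPSS linear mixed model; all effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated longitudinal data: completers improve over 6 visits,
# dropouts decline and leave after 3 visits (informative missingness).
rows = []
for _ in range(200):
    dropout = rng.random() < 0.4
    slope = -1.0 if dropout else 0.5
    n_visits = 3 if dropout else 6
    for t in range(n_visits):
        y = 10.0 + slope * t + rng.normal(scale=0.5)
        rows.append((float(t), float(dropout), y))

t = np.array([r[0] for r in rows])
d = np.array([r[1] for r in rows])
y = np.array([r[2] for r in rows])

# Pattern-mixture fixed effects: intercept, time, pattern, pattern x time.
X = np.column_stack([np.ones_like(t), t, d, d * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates how much the dropout pattern changes the time slope,
# i.e. the contribution of informative missingness to the trajectory.
```

A clearly non-zero pattern-by-time coefficient is evidence that the missingness is informative and should be retained in the longitudinal model.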

  17. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  18. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    Science.gov (United States)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing them to indicate the inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm, GA). To project the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures, as a case study, in the presence of their reported impurities and degradation products (summing up to six components) in raw materials and pharmaceutical dosage form via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR without GA, and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results illustrate the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the six-component mixtures, while using cheap and easy-to-handle instruments such as the UV spectrophotometer.
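A linear epsilon-insensitive SVR can be fitted from scratch by subgradient descent, which makes the underlying algorithm concrete (a toy regression with invented data and hyperparameters, not the spectral calibration itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))          # invented "spectral" predictors
true_w = np.array([1.5, -2.0, 0.5])   # invented true coefficients
y = X @ true_w + rng.normal(scale=0.05, size=80)

# Linear epsilon-insensitive SVR fitted by subgradient descent on
#   (1 / 2C) ||w||^2 + mean eps-insensitive loss,
# a from-scratch stand-in for an off-the-shelf SVR solver.
w, b = np.zeros(3), 0.0
C, eps, lr = 100.0, 0.1, 0.01
for _ in range(3000):
    resid = X @ w + b - y
    # Subgradient of the eps-insensitive loss: 0 inside the tube,
    # +/-1 outside it.
    g = np.where(resid > eps, 1.0, np.where(resid < -eps, -1.0, 0.0))
    w -= lr * (w / C + X.T @ g / len(y))
    b -= lr * g.mean()
```

Residuals smaller than eps contribute nothing to the loss, which is what gives SVR its robustness to small measurement noise.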

  19. Ion swarm data for electrical discharge modeling in air and flue gas mixtures

    International Nuclear Information System (INIS)

    Nelson, D.; Benhenni, M.; Eichwald, O.; Yousfi, M.

    2003-01-01

The first step of this work is the determination of the elastic and inelastic ion-molecule collision cross sections for the main ions (N2+, O2+, CO2+, H2O+ and O-) usually present either in air or in flue gas discharges. The obtained cross section sets, given for ion kinetic energies not exceeding 100 eV, correspond to the interactions of each ion with its parent molecule (symmetric case) or a nonparent molecule (asymmetric case). Then, by using these different cross section sets, it is possible to obtain the ion swarm data for the different gas mixtures involving N2, CO2, H2O and O2 molecules, whatever their relative proportions. These ion swarm data are obtained from an optimized Monte Carlo method well adapted to ion transport in gas mixtures. This also allows us to show clearly that the classical linear approximations usually applied to ion swarm data in mixtures, such as Blanc's law, are far from valid. The ion swarm data are then given for three gas mixtures: dry air (80% N2, 20% O2), a ternary gas mixture (82% N2, 12% CO2, 6% O2) and a typical flue gas (76% N2, 12% CO2, 6% O2, 6% H2O). From these reliable ion swarm data, electrical discharge modeling for a wire-to-plane electrode configuration has been carried out in these three mixtures at atmospheric pressure for different applied voltages. Under the same discharge conditions, large discrepancies in streamer formation and propagation have been observed between the three mixtures. They are due to the deviations existing not only between the different effective electron-molecule ionization rates but also between the ion transport properties, mainly because of the presence of a highly polar molecule such as H2O. This emphasizes the necessity to properly consider ion transport in discharge modeling

  20. Model and measurements of linear mixing in thermal IR ground leaving radiance spectra

    Science.gov (United States)

    Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna

    2007-10-01

    Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
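The abundance-weighted linear mixing of emissivity-scaled Planck radiances described above can be written directly; the two-material pixel below (abundances, emissivities, temperatures) is illustrative, not a measured scene:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength_m, temp_k):
    # Blackbody spectral radiance (W / m^2 / sr / m) at one wavelength.
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def mixed_radiance(wavelength_m, components):
    # Linear mixing of subpixel ground-leaving radiance:
    # abundance-weighted sum of emissivity x Planck radiance.
    return sum(f * e * planck(wavelength_m, t) for f, e, t in components)

wl = 10e-6  # 10 micron, in the thermal IR window
# (abundance, emissivity, temperature) for two subpixel materials.
pixel = mixed_radiance(wl, [(0.7, 0.95, 300.0), (0.3, 0.85, 320.0)])
```

Because the Planck curve is strongly nonlinear in temperature, the mixed radiance is not the radiance of the abundance-weighted mean temperature, which is exactly why subpixel temperature distributions matter.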

  1. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data-analytic results from three regression…

  2. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a
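The core of an N-mixture model is a likelihood that marginalises the latent abundance N out of repeated counts at a site; a minimal Poisson-Binomial version might look like this (a generic sketch of the standard model, not the authors' zero-inflated variants; the counts and candidate lambdas are invented):

```python
import math

def nmixture_loglik(counts, lam, p, n_max=100):
    # N-mixture likelihood for repeated counts at one site:
    #   N ~ Poisson(lam),  each count y_j ~ Binomial(N, p),
    # marginalising the latent abundance N up to a truncation n_max.
    total = 0.0
    for n in range(max(counts), n_max + 1):
        prior = math.exp(-lam) * lam ** n / math.factorial(n)
        like = 1.0
        for y in counts:
            like *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += prior * like
    return math.log(total)

# Profile the likelihood over candidate lambdas for one site's counts,
# holding detection probability fixed at 0.5.
counts = [3, 2, 4]
lls = {lam: nmixture_loglik(counts, lam, 0.5) for lam in (2, 6, 20)}
```

With a mean count of 3 and detection probability 0.5, abundance near 6 should be most plausible, and the profiled log-likelihood reflects that.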

  3. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  4. Modeling of Multicomponent Mixture Separation Processes Using Hollow fiber Membrane

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sin-Ah; Kim, Jin-Kuk; Lee, Young Moo; Yeo, Yeong-Koo [Hanyang University, Seoul (Korea, Republic of)

    2015-02-15

So far, most research on the modeling of membrane separation processes has focused on binary feed mixtures. In actual separation operations, however, binary feeds are rare and most separation processes involve multicomponent feed mixtures. In this work, models for membrane separation processes treating multicomponent feed mixtures are developed. Various model types are investigated and the validity of the proposed models is analysed based on experimental data obtained using hollow-fiber membranes. The proposed separation models show quick convergence and exhibit good tracking performance.

  5. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne Schoenmakers

    2015-01-01

    Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular, we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore, the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  6. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Generalized linear models (GLMs) are a generalization of linear regression models that allow fitting regression models to response data, in all the sciences and especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data-generating processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present a framework for evaluating the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
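    As a hedged illustration of the model comparison described above, the sketch below fits a plain Poisson and a ZIP to simulated zero-heavy counts by maximum likelihood and compares AIC; the data are synthetic, not the dental caries set.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulated caries-like counts: ~40% structural zeros, otherwise Poisson(3)
y = np.where(rng.random(500) < 0.4, 0, rng.poisson(3.0, 500))

def nll_poisson(p):
    return -poisson.logpmf(y, p[0]).sum()

def nll_zip(p):
    lam, psi = p
    pm = (1.0 - psi) * poisson.pmf(y, lam)
    pm = np.where(y == 0, psi + pm, pm)
    return -np.log(pm).sum()

fit_p = minimize(nll_poisson, [1.0], bounds=[(1e-6, None)])
fit_z = minimize(nll_zip, [1.0, 0.1], bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
aic_p = 2 * 1 + 2 * fit_p.fun   # one parameter (lambda)
aic_z = 2 * 2 + 2 * fit_z.fun   # two parameters (lambda, psi)
print(aic_p, aic_z)             # the ZIP accommodates the excess zeros
```

    With this data-generating process the ZIP's AIC is substantially lower, mirroring the kind of comparison the authors make (they additionally use the Vuong test for non-nested comparisons).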

  7. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data that are commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  8. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
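    A minimal LN-cascade sketch, assuming a simple exponential filter and a softplus-style static nonlinearity; both are illustrative stand-ins, not the parameter-free forms derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 0.001, 0.02                     # 1 ms steps, 20 ms filter timescale
t = np.arange(0.0, 0.2, dt)
lin_filter = np.exp(-t / tau) * dt / tau  # normalized exponential kernel

stimulus = rng.standard_normal(2000)      # time-varying input current (a.u.)
# Linear stage: temporal filtering of the stimulus
drive = np.convolve(stimulus, lin_filter)[: stimulus.size]

def static_nl(u, r_max=100.0, gain=5.0):
    """Illustrative static nonlinearity: a softplus-like rate function,
    scaled so that static_nl(1.0) == r_max."""
    return r_max * np.log1p(np.exp(gain * u)) / np.log1p(np.exp(gain))

# Nonlinear stage: map filtered drive to an instantaneous firing rate (Hz)
rate = static_nl(drive)
print(rate.min(), rate.max())
```

    Real applications estimate both stages from data (e.g., by reverse correlation, as the abstract notes); here they are fixed by hand purely to show the two-stage structure.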

  9. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
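    The flexibility of the Dirichlet process prior comes from its induced partition, the Chinese restaurant process, which lets the number of clusters (tissue classes) grow with the data rather than being fixed a priori; a minimal sketch, with alpha and the sample size chosen purely for illustration.

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Draw cluster labels for n items from a Chinese restaurant process,
    the partition prior underlying a Dirichlet process mixture."""
    labels = [0]
    counts = [1]                   # occupancy of each existing cluster
    for _ in range(1, n):
        # Join cluster k w.p. counts[k]/(i+alpha); open a new one w.p. alpha/(i+alpha)
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)       # a new cluster is opened
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)

rng = np.random.default_rng(42)
z = crp_assignments(1000, alpha=2.0, rng=rng)
print(z.max() + 1)   # number of clusters grows roughly like alpha * log(n)
```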

  10. Linear models for sound from supersonic reacting mixing layers

    Science.gov (United States)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  11. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
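    For a simple Gaussian mixture, the conditional predicted values described above reduce to posterior-weighted class means; a sketch with made-up mixture parameters.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical fitted 2-class Gaussian mixture: weights, means, and sds
w = np.array([0.6, 0.4])
mu = np.array([0.0, 3.0])
sd = np.array([1.0, 1.0])

def posterior_class_probs(y):
    """P(class k | y): the individual-level conditional class weights."""
    dens = w * norm.pdf(y, mu, sd)
    return dens / dens.sum()

def conditional_prediction(y):
    """Individual predicted value: class means weighted by posterior probs
    (the marginal prediction would instead use the prior weights w)."""
    return posterior_class_probs(y) @ mu

print(posterior_class_probs(3.0))     # mass shifts to the second class
print(conditional_prediction(-2.0))   # pulled toward the first class mean
```

    The contrast between `w @ mu` (marginal) and `conditional_prediction(y)` (conditional on the observed case) is exactly the distinction the article draws between population-level and individual-level prediction.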

  12. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  13. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of the reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
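    The two mixture-based encodings can be illustrated with toy descriptor vectors; the values below are invented (real SiRMS descriptors are fragment counts over simplexes).

```python
import numpy as np

# Summed descriptor vectors for the reactant mixture and the product mixture
reactants = np.array([3, 0, 2, 1])
products = np.array([2, 1, 2, 0])

# Encoding 1: concatenation of the two mixture descriptors
concatenated = np.concatenate([reactants, products])
# Encoding 2: product-minus-reactant difference, which highlights the
# fragments created or destroyed by the reaction
difference = products - reactants

print(concatenated)   # [3 0 2 1 2 1 2 0]
print(difference)     # [-1  1  0 -1]
```

    Neither encoding needs an atom-mapped reaction center: the difference vector localizes change implicitly, which is the point made in the abstract.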

  14. Dynamic Linear Models with R

    CERN Document Server

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.

  15. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) conditions using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
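    The mean absolute percentage error used as the comparison metric above is straightforward to compute; the start times below are hypothetical, not the study's data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical 5 m start times (s) and model predictions:
t_obs = np.array([2.10, 2.25, 1.98, 2.40])
t_ann = np.array([2.11, 2.24, 1.99, 2.41])
print(mape(t_obs, t_ann))   # sub-1% error for near-perfect predictions
```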

  16. Microwave dielectric characterization of binary mixture of formamide ...

    Indian Academy of Sciences (India)

    are fitted to the three different relaxation models [24–27] by the non-linear least-squares fit method. It is observed that the Davidson–Cole model is adequate to describe the major dispersion of the various solute and solvent mixtures over this frequency range. Static dielectric constant and dielectric relaxation time could be.

  17. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
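    A minimal sketch of the binomial N-mixture likelihood, marginalizing the latent abundance up to a truncation bound; the simulation settings are made up, and this is not the author's screening code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(7)
lam_true, p_true, J, K = 3.0, 0.5, 3, 50       # J visits, truncation bound K
N = rng.poisson(lam_true, 200)                 # latent site abundances
y = rng.binomial(N[:, None], p_true, (200, J)) # repeated counts per site

def nll(theta):
    """Negative log marginal likelihood with lambda and p on
    unconstrained scales (log and logit)."""
    lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
    n = np.arange(K + 1)
    prior = poisson.pmf(n, lam)                              # P(N = n)
    # P(y_i | N = n) for each site: product over the J visits
    lik = np.prod(binom.pmf(y[:, None, :], n[None, :, None], p), axis=2)
    return -np.log((lik * prior).sum(axis=1)).sum()

fit = minimize(nll, [0.0, 0.0])
lam_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)   # estimates of lambda and p (true values: 3.0, 0.5)
```

    With a Poisson mixing distribution this likelihood is typically well behaved, consistent with the screening result above; the reported problems concentrate in the negative-binomial variant.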

  18. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative (density and bubble point) endpoints using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  19. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
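    The idea of coupling an ODE model to a mixture likelihood can be sketched with a toy decay model; all rates, weights, and noise levels below are illustrative, not values from the study.

```python
import numpy as np
from scipy.stats import norm

# Two subpopulations share the mechanistic model y(t) = y0 * exp(-k * t)
# but differ in the kinetic rate k, so single-cell measurements at time t
# follow a mixture of Gaussians centred on the two ODE solutions.

def ode_solution(t, y0, k):
    return y0 * np.exp(-k * t)

def mixture_density(x, t, weights=(0.7, 0.3), rates=(0.5, 2.0),
                    y0=10.0, sigma=0.4):
    means = [ode_solution(t, y0, k) for k in rates]
    return sum(w * norm.pdf(x, m, sigma) for w, m in zip(weights, means))

x = np.linspace(0.0, 10.0, 2001)
dens = mixture_density(x, t=1.0)
dx = x[1] - x[0]
print(dens.sum() * dx)   # integrates to ~1, i.e., a proper mixture density
```

    Fitting such a model means estimating the kinetic rates and mixture weights jointly from the measured distributions, which is how the approach attributes cell-to-cell variability to mechanistic sources.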

  20. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  1. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians.

  2. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    excellent agreement between the predicted and experimental IFTs at high and moderate levels of IFTs, while the agreement is reasonably accurate in the near-critical region as the used equations of state reveal classical scaling behavior. To predict accurately low IFTs (sigma ... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  3. Models for the computation of opacity of mixtures

    International Nuclear Information System (INIS)

    Klapisch, Marcel; Busquet, Michel

    2013-01-01

    We compare four models for the partial densities of the components of mixtures. These models yield different opacities as shown on polystyrene, acrylic and polyimide in local thermodynamical equilibrium (LTE). Two of these models, the ‘whole volume partial pressure’ model (M1) and its modification (M2) are not thermodynamically consistent (TC). The other two models are TC and minimize free energy. M3, the ‘partial volume equal pressure’ model, uses equality of chemical potential. M4 uses commonality of free electron density. The latter two give essentially identical results in LTE, but M4’s convergence is slower. M4 is easily generalized to non-LTE conditions. Non-LTE effects are shown by the variation of the Planck mean opacity of the mixtures with temperature and density. (paper)

  4. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

    Potential applications of ionic liquids (ILs) require knowledge of the physicochemical properties of IL mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model contains only one new characteristic parameter, which can be determined using one experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified against a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute relative deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and are applicable over a wider range of conditions.
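    The one-parameter, one-data-point calibration strategy can be illustrated with a generic excess-property mixing rule; the functional form and all numbers below are assumptions for illustration, not the authors' published correlations.

```python
# Ideal linear blending plus a single binary interaction parameter k,
# fitted from one measurement of the mixture property.

def mixture_property(x1, prop1, prop2, k):
    """Property of a binary mixture: linear blend + k * x1 * (1 - x1) excess."""
    return x1 * prop1 + (1.0 - x1) * prop2 + k * x1 * (1.0 - x1)

def fit_k(x1, prop1, prop2, measured):
    """Back out the characteristic parameter from a single data point."""
    ideal = x1 * prop1 + (1.0 - x1) * prop2
    return (measured - ideal) / (x1 * (1.0 - x1))

# Hypothetical densities (kg/m^3) of an IL and a molecular solvent:
rho_il, rho_solv = 1200.0, 790.0
k = fit_k(0.5, rho_il, rho_solv, measured=1010.0)
print(k)                                            # fitted excess parameter
print(mixture_property(0.5, rho_il, rho_solv, k))   # reproduces the data point
```

    Once k is fixed from a single composition, the model predicts the property across the whole composition range, which is the appeal of a one-parameter correlation.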

  5. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    International Nuclear Information System (INIS)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio

    2013-01-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of

  6. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  7. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
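    The Plackett-Luce model assigns a ranking probability via a sequence of choices among the remaining items; a minimal sketch with illustrative "worth" parameters.

```python
import numpy as np
from itertools import permutations

def plackett_luce_logprob(ranking, support):
    """Log-probability of a full ranking (best to worst, by item index)
    under a Plackett-Luce model with positive 'worth' parameters."""
    support = np.asarray(support, dtype=float)
    remaining = list(range(len(support)))
    logp = 0.0
    for item in ranking:
        # Each position is a choice among the items not yet ranked
        logp += np.log(support[item]) - np.log(support[remaining].sum())
        remaining.remove(item)
    return logp

worth = [2.0, 1.0, 1.0]
# Top choice is item 0 w.p. 2/(2+1+1) = 0.5, then item 1 w.p. 1/2, etc.
print(np.exp(plackett_luce_logprob([0, 1, 2], worth)))   # 0.5 * 0.5 * 1 = 0.25
total = sum(np.exp(plackett_luce_logprob(r, worth)) for r in permutations(range(3)))
print(total)   # probabilities over all 6 rankings sum to 1
```

    A finite mixture, as in the article, would average several such models with component-specific worth vectors and mixture weights.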

  8. Modeling the effects of binary mixtures on survival in time.

    NARCIS (Netherlands)

    Baas, J.; van Houte, B.P.P.; van Gestel, C.A.M.; Kooijman, S.A.L.M.

    2007-01-01

    In general, effects of mixtures are difficult to describe, and most of the models in use are descriptive in nature and lack a strong mechanistic basis. The aim of this experiment was to develop a process-based model for the interpretation of mixture toxicity measurements, with effects of binary

  9. Modeling the formation of ordered nano-assemblies comprised by dendrimers and linear polyelectrolytes: The role of Coulombic interactions

    Science.gov (United States)

    Eleftheriou, E.; Karatasos, K.

    2012-10-01

    Models of mixtures of peripherally charged dendrimers with oppositely charged linear polyelectrolytes in the presence of explicit solvent are studied by means of molecular dynamics simulations. Under the influence of electrostatic interactions of varying strength, these systems appear to form dynamically arrested, film-like interconnected structures in the polymer-rich phase. Acting like a pseudo-thermodynamic inverse temperature, the increase of the strength of the Coulombic interactions drives the polymeric constituents of the mixture to a gradual dynamic freezing-in. The timescale of the average density fluctuations of the formed complexes initially increases in the weak electrostatic regime, reaching a finite limit as the strength of electrostatic interactions grows. Although the models are overall electrically neutral, during this process the dendrimer/linear complexes develop a polar character, with an excess charge mainly close to the periphery of the dendrimers. The morphological characteristics of the resulting pattern are found to depend on the size of the polymer chains, on account of the distinct conformational features assumed by the complexed linear polyelectrolytes of different length. In addition, the length of the polymer chain appears to affect the dynamics of the counterions, thus affecting the ionic transport properties of the system. It appears, therefore, that the strength of electrostatic interactions together with the length of the linear polyelectrolytes are parameters to which these systems are particularly responsive, thus offering the possibility of better control over the resulting structure and the electric properties of these soft-colloidal systems.

  10. Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing

    Science.gov (United States)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2017-04-01

    The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which significantly contribute to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and, mainly, at large distance from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] performs labeling of radar sampling volumes by using as a criterion the Euclidean distance with respect to five-dimensional centroids, depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied to a selected representative set of radar observations, and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1], in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability p_i that the observation corresponds to the hydrometeor type i (i = 1, …, 9). The probability is derived from the Euclidean distance d_i of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced
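    The distance-to-probability transform and the entropy estimate can be illustrated with a softmax-type mapping (the exponential form and the scale parameter beta are assumptions made here for illustration; the paper parametrizes its own d → p transform on synthetic data):

```python
import math

def class_probabilities(distances, beta=1.0):
    """Map Euclidean distances d_i to the nine class centroids into
    probabilities p_i; closer centroids get higher probability
    (softmax-type form assumed, not the paper's parametrization)."""
    w = [math.exp(-beta * d) for d in distances]
    s = sum(w)
    return [wi / s for wi in w]

def normalized_entropy(p):
    """Entropy of the class probabilities, scaled to [0, 1]; values near 1
    suggest a hydrometeor mixture, values near 0 a clear dominant type."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(len(p))

pure = normalized_entropy(class_probabilities([0.1] + [10.0] * 8))
mixed = normalized_entropy(class_probabilities([1.0] * 9))
```

    A volume close to one centroid and far from the rest gets near-zero entropy, while equidistant volumes get entropy one, which is the behavior the identification of mixtures relies on.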

  11. ESTIMATOR CONVERGENCE IN MIXTURE MODELS BASED ON MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available Mixture models can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model is developed for cure rate analysis based on missing data. Several methods are available for analyzing missing data; one that can be applied is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values of the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that, in the EM algorithm, the log-likelihood for the missing data increases after each iteration; hence the sequence of likelihood values converges whenever the likelihood is bounded below.
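    The four EM steps and the guaranteed non-decrease of the log-likelihood across iterations are easy to check on a generic two-component Gaussian mixture (an illustrative sketch, not the cure-rate model of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])

def em_gmm(x, iters=30):
    """EM for a two-component univariate Gaussian mixture, following the
    four steps above: initialize, compute expectations, update parameters,
    repeat until convergence."""
    mu = np.array([x.min(), x.max()])              # step 1: initial parameters
    sigma = np.array([1.0, 1.0])
    w = np.array([0.5, 0.5])
    loglik = []
    for _ in range(iters):
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                 / (sigma * np.sqrt(2 * np.pi))
        loglik.append(np.log(dens.sum(axis=1)).sum())
        r = dens / dens.sum(axis=1, keepdims=True)  # step 2: expectations
        nk = r.sum(axis=0)                          # step 3: new parameters
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / len(x)                             # step 4: iterate
    return mu, loglik

mu, ll = em_gmm(x)
```

    The recorded log-likelihood sequence is monotonically non-decreasing, which is exactly the convergence property the abstract establishes.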

  12. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models; then each of the eight measures is treated. The results are illustrated by examples.

  13. Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers

    Energy Technology Data Exchange (ETDEWEB)

    Theodorakis, P E [Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina (Greece); Avgeropoulos, A [Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina (Greece); Freire, J J [Departamento de Ciencias y Tecnicas FisicoquImicas, Universidad Nacional de Educacion a Distancia, Facultad de Ciencias, Senda del Rey 9, 28040 Madrid (Spain); Kosmas, M [Department of Chemistry, University of Ioannina, 45110 Ioannina (Greece); Vlahos, C [Department of Chemistry, University of Ioannina, 45110 Ioannina (Greece)

    2007-11-21

    The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.

  14. Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers

    Science.gov (United States)

    Theodorakis, P. E.; Avgeropoulos, A.; Freire, J. J.; Kosmas, M.; Vlahos, C.

    2007-11-01

    The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.

  15. Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers

    International Nuclear Information System (INIS)

    Theodorakis, P E; Avgeropoulos, A; Freire, J J; Kosmas, M; Vlahos, C

    2007-01-01

    The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.

  16. Ordinal Log-Linear Models for Contingency Tables

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Log-linear analysis is a method providing a comprehensive scheme to describe the association between categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for the cells of the table and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association model, the row effect model, the column effect model and Goodman's RC model. The algorithms, advantages and disadvantages of these models are discussed in the paper. An empirical analysis is conducted with the use of R.
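    The simplest log-linear model for a two-way table, independence, has closed-form expected counts; the ordinal linear-by-linear association model then adds a term β·u_i·v_j with fixed row and column scores to the log expected counts. A sketch of the independence fit and its likelihood-ratio statistic G² (the table below is hypothetical):

```python
import numpy as np

table = np.array([[20, 15,  5],
                  [10, 20, 10],
                  [ 5, 15, 20]], dtype=float)  # hypothetical ordinal table

n = table.sum()
# Independence model: expected count = row total * column total / n
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
# Likelihood-ratio statistic comparing observed to independence-expected counts
g2 = 2 * (table * np.log(table / expected)).sum()
df = (table.shape[0] - 1) * (table.shape[1] - 1)  # degrees of freedom, here 4
```

    Fitting the linear-by-linear model itself requires iterative proportional fitting or Newton-type optimization, which R's `glm`/`gnm` machinery handles in the paper's empirical analysis.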

  17. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.
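    The within-class autocorrelation structure discussed here is the familiar first-order autoregressive covariance for repeated measures; a minimal sketch of what that structure looks like:

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """Covariance matrix of repeated measures with AR(1) autocorrelation:
    Cov(y_s, y_t) = sigma2 * rho**|s - t|."""
    t = np.arange(n_times)
    return sigma2 * rho ** np.abs(t[:, None] - t[None, :])

cov = ar1_cov(4, sigma2=2.0, rho=0.6)  # illustrative variance and correlation
```

    Correlation decays geometrically with the gap between occasions, so when within-person change is gradual relative to between-person differences, ignoring this structure pushes the misfit into the latent classes.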

  18. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  19. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  20. Linear and non-linear autoregressive models for short-term wind speed forecasting

    International Nuclear Information System (INIS)

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
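    A linear AR(p) model and the three error metrics can be sketched with ordinary least squares (the papers fit richer ARMA models with exogenous variables, estimated by Gauss–Newton, and non-linear variants via data-mining algorithms; the series below is synthetic):

```python
import numpy as np

def fit_ar(y, p):
    """Fit y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p} by least squares."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k: len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef, X @ coef, y[p:]

def mae(a, f):  return np.mean(np.abs(a - f))
def rmse(a, f): return np.sqrt(np.mean((a - f) ** 2))
def mape(a, f): return 100 * np.mean(np.abs((a - f) / a))

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):            # synthetic AR(1) with coefficient 0.8
    y[t] = 5.0 + 0.8 * y[t - 1] + rng.normal(0, 0.5)

coef, pred, actual = fit_ar(y, p=1)
```

    On this synthetic series the lag coefficient is recovered close to 0.8, and the same MAE/RMSE/MAPE functions serve as the performance metrics named in the abstract.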

  1. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results. In addition, the Bayesian method also shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
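    The Bayesian Information Criterion comparison used to pick the number of mixture components can be sketched as follows (the two-component parameters are fixed at illustrative values rather than EM- or MCMC-fitted, and the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def bic(loglik, n_params, n):
    """Schwarz's criterion: lower is better."""
    return n_params * np.log(n) - 2 * loglik

ll1 = np.log(norm_pdf(x, x.mean(), x.std())).sum()     # one component (MLE)
ll2 = np.log(0.5 * norm_pdf(x, -3.0, 1.0)
             + 0.5 * norm_pdf(x, 3.0, 1.0)).sum()      # two components
bic1 = bic(ll1, 2, len(x))   # mean + sd
bic2 = bic(ll2, 5, len(x))   # 2 means + 2 sds + 1 mixing weight
```

    On clearly bimodal data the two-component model wins despite its extra parameters, which is the kind of comparison BIC formalizes when choosing k.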

  2. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
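    The robustness claim rests on the Student's-t density's heavier tails, which keep outliers from dominating the mixture responsibilities. A quick comparison of standard densities (ν = 3 is an arbitrary illustrative choice):

```python
import math

def t_pdf(x, nu):
    """Standard Student's-t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

tail_t, tail_n = t_pdf(5.0, 3), normal_pdf(5.0)  # density 5 sd out
```

    Five standard deviations out, the t density is orders of magnitude larger than the Gaussian, so an outlier there retains non-negligible likelihood under a t component instead of distorting the fit.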

  3. Hydrogenic ionization model for mixtures in non-LTE plasmas

    International Nuclear Information System (INIS)

    Djaoui, A.

    1999-01-01

    The Hydrogenic Ionization Model for Mixtures (HIMM) is a non-Local Thermodynamic Equilibrium (non-LTE), time-dependent ionization model for laser-produced plasmas containing mixtures of elements (species). In this version, both collisional and radiative rates are taken into account. An ionization distribution for each species which is consistent with the ambient electron density is obtained by use of an iterative procedure in a single calculation for all species. Energy levels for each shell having a given principal quantum number and for each ion stage of each species in the mixture are calculated using screening constants. Steady-state non-LTE as well as LTE solutions are also provided. The non-LTE rate equations converge to the LTE solution at sufficiently high densities or as the radiation temperature approaches the electron temperature. The model is particularly useful at low temperatures where convergence problems are usually encountered in our previous models. We apply our model to typical situations in x-ray laser research, laser-produced plasmas and inertial confinement fusion. Our results compare well with previously published results for a selenium plasma. (author)

  4. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
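    For the single-predictor probit case the derived correlation has a simple closed form: with latent model y* = βx + e, e ~ N(0, 1), corr(y*, x) = βσ_x / √(β²σ_x² + 1). A sketch of that special case (the paper's treatment covers more general models and adds significance tests):

```python
import math

def latent_correlation(beta, sd_x):
    """Correlation between the latent outcome y* = beta*x + e (e ~ N(0, 1))
    and the predictor x, implied by a single-predictor probit coefficient."""
    return beta * sd_x / math.sqrt(beta ** 2 * sd_x ** 2 + 1.0)

rho = latent_correlation(beta=1.0, sd_x=1.0)  # illustrative values
```

    Because the latent error variance is fixed at 1 in both samples, this correlation is comparable across samples in a way the raw probit coefficients are not.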

  5. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  6. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower and upper bounds on the α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order of increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.

  7. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    Science.gov (United States)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  8. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
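    Standard Gauss–Hermite quadrature, the non-adaptive variant mentioned above, integrates the random intercept out of a cluster's likelihood; a minimal random-intercept logistic sketch (model and data are illustrative, not from the book or Sabre):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

nodes, wts = hermgauss(20)  # rule for integrals against exp(-t^2)

def cluster_loglik(y, x, beta, sigma_u):
    """log of the marginal likelihood of one cluster's binary responses in a
    random-intercept logistic model, logit(p_j) = beta*x_j + u with
    u ~ N(0, sigma_u^2), via the substitution u = sqrt(2)*sigma_u*t."""
    total = 0.0
    for w, t in zip(wts, nodes):
        u = np.sqrt(2) * sigma_u * t
        p = 1.0 / (1.0 + np.exp(-(beta * x + u)))
        total += w * np.prod(np.where(y == 1, p, 1 - p))
    return np.log(total / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 1.5, 0.0])
ll = cluster_loglik(y, x, beta=0.8, sigma_u=1.2)
```

    Setting sigma_u to zero collapses the integral to the ordinary logistic likelihood, a useful check; adaptive quadrature recenters the nodes per cluster but follows the same pattern.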

  9. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography reaction model are coupled. The bed load transport rate and suspended load entrainment rate are determined by the sea bed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurred on the sandbar slope against the wave propagation direction, while deposition dominated on the slope towards wave propagation, indicating an onshore migration tendency. The computation results also show that

  10. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
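    The one-covariate case, including time-trend analysis, reduces to ordinary least squares; a minimal sketch (the report's examples use operational plant data and SAS, whereas the line below is a noise-free illustration):

```python
import numpy as np

def fit_time_trend(t, y):
    """OLS fit of y = a + b*t; b is the estimated time trend."""
    X = np.column_stack([np.ones_like(t, dtype=float), t])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

t = np.arange(12, dtype=float)
a, b = fit_time_trend(t, 3.0 + 0.25 * t)  # exact line, recovered exactly
```

    The same design-matrix formulation extends directly to weighted regression and to the general linear model discussed later in the report.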

  11. A nonparametric random coefficient approach for life expectancy growth using a hierarchical mixture likelihood model with application to regional data from North Rhine-Westphalia (Germany).

    Science.gov (United States)

    Böhning, Dankmar; Karasek, Sarah; Terschüren, Claudia; Annuß, Rolf; Fehr, Rainer

    2013-03-09

    Life expectancy is of increasing prime interest for a variety of reasons. In many countries, life expectancy is growing linearly, without any indication of reaching a limit. The state of North Rhine-Westphalia (NRW) in Germany with its 54 districts is considered here, where the above-mentioned growth in life expectancy is occurring as well. However, there is also empirical evidence that life expectancy is not growing linearly at the same level for different regions. To explore this situation further, a likelihood-based cluster analysis is suggested and performed. The modelling uses a nonparametric mixture approach for the latent random effect. Maximum likelihood estimates are determined by means of the EM algorithm, and the number of components in the mixture model is found on the basis of the Bayesian Information Criterion. Regions are classified into the mixture components (clusters) using the maximum posterior allocation rule. For the data analyzed here, 7 components are found, with a spatial concentration of lower life expectancy levels in a centre of NRW, formerly an enormous conglomerate of heavy industry and still the most densely populated area, with Gelsenkirchen having the lowest level of life expectancy growth for both genders. The paper offers some explanations for this fact, including demographic and socio-economic sources. This case study shows that life expectancy growth is widely linear, but it might occur on different levels.

  12. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  13. Sound speed models for a noncondensible gas-steam-water mixture

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1984-01-01

An analytical expression is derived for the homogeneous equilibrium speed of sound in a mixture of noncondensible gas, steam, and water. The expression is based on the Gibbs free energy interphase equilibrium condition for a Gibbs-Dalton mixture in contact with a pure liquid phase. Several simplified models are discussed, including the homogeneous frozen model. These idealized models can be used as a reference for data comparison and also serve as a basis for empirically corrected nonhomogeneous and nonequilibrium models.

  14. Linear kinetic theory and particle transport in stochastic mixtures. Third year and final report, June 15, 1993--December 14, 1996

    International Nuclear Information System (INIS)

    Pomraning, G.C.

    1997-05-01

The goal in this research was to continue the development of a comprehensive theory of linear transport/kinetic theory in a stochastic mixture of solids and immiscible fluids. Such a theory should predict the ensemble average and higher moments, such as the variance, of the particle or energy density described by the underlying transport/kinetic equation. The statistics studied correspond to N-state discrete random variables for the interaction coefficients and sources, with N denoting the number of components in the mixture. The mixing statistics considered were Markovian as well as more general statistics. In the absence of time dependence and scattering, the theory is well developed and described exactly by the master (Liouville) equation for Markovian mixing, and by renewal equations for non-Markovian mixing. The intent of this research was to generalize these treatments to include both time dependence and scattering. A further goal of this research was to develop approximate, but simpler, models from any comprehensive theory. In particular, a specific goal was to formulate a renormalized transport/kinetic theory of the usual nonstochastic form, but with effective interaction coefficients and sources to account for the stochastic nature of the problem. In the three and one-half year period of research summarized in this final report, the investigators made substantial progress in the development of a comprehensive theory of kinetic processes in stochastic mixtures. This progress is summarized in 16 archival journal articles, 7 published proceedings papers, and 2 comprehensive review articles. In addition, 17 oral presentations were made describing these research results.

  15. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Science.gov (United States)

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN: correlations r = 0.89, RMSE: 1.07-1.08 METs; linear models: correlations r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN: correlation r = 0.88, RMSE: 1.12 METs; linear models: correlations r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs outperformed the linear models for the wrist-worn accelerometers (ANN: correlations r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear models: correlations r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05), suggesting that ANN models offer a significant improvement in EE prediction accuracy over linear models at the wrist. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers.
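The model comparison in this record (linear regression versus a small neural network, scored by RMSE on held-out data) can be sketched as below. The features, the nonlinearity, and the split are invented stand-ins, not the study's accelerometer pipeline.

```python
# Sketch comparing linear regression with a small ANN for predicting
# energy expenditure (METs) from accelerometer-style features.
# Data are simulated; the feature construction is an assumption.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))                  # e.g. mean/sd/percentile features
y = 1.5 + 2.0 * X[:, 0] + 3.0 * X[:, 1] ** 2 + rng.normal(0, 0.2, 300)  # METs

X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

def rmse(model):
    """Root mean square error on the held-out split."""
    return float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))

rmse_lin, rmse_ann = rmse(lin), rmse(ann)
```

The same scoring loop extends directly to one model per accelerometer placement, as in the study's design.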

  16. Spectral mixture analyses of hyperspectral data acquired using a tethered balloon

    Science.gov (United States)

    Chen, Xuexia; Vierling, Lee

    2006-01-01

    Tethered balloon remote sensing platforms can be used to study radiometric issues in terrestrial ecosystems by effectively bridging the spatial gap between measurements made on the ground and those acquired via airplane or satellite. In this study, the Short Wave Aerostat-Mounted Imager (SWAMI) tethered balloon-mounted platform was utilized to evaluate linear and nonlinear spectral mixture analysis (SMA) for a grassland-conifer forest ecotone during the summer of 2003. Hyperspectral measurement of a 74-m diameter ground instantaneous field of view (GIFOV) attained by the SWAMI was studied. Hyperspectral spectra of four common endmembers, bare soil, grass, tree, and shadow, were collected in situ, and images captured via video camera were interpreted into accurate areal ground cover fractions for evaluating the mixture models. The comparison between the SWAMI spectrum and the spectrum derived by combining in situ spectral data with video-derived areal fractions indicated that nonlinear effects occurred in the near infrared (NIR) region, while nonlinear influences were minimal in the visible region. The evaluation of hyperspectral and multispectral mixture models indicated that nonlinear mixture model-derived areal fractions were sensitive to the model input data, while the linear mixture model performed more stably. Areal fractions of bare soil were overestimated in all models due to the increased radiance of bare soil resulting from side scattering of NIR radiation by adjacent grass and trees. Unmixing errors occurred mainly due to multiple scattering as well as close endmember spectral correlation. In addition, though an apparent endmember assemblage could be derived using linear approaches to yield low residual error, the tree and shade endmember fractions calculated using this technique were erroneous and therefore separate treatment of endmembers subject to high amounts of multiple scattering (i.e. shadows and trees) must be done with caution. Including the
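The linear mixture model evaluated in this record can be sketched as fully constrained unmixing: solve for non-negative endmember fractions that sum to one, here via non-negative least squares with a weighted sum-to-one row. The five-band endmember spectra and fractions below are synthetic, not the SWAMI measurements.

```python
# Sketch of fully constrained linear spectral unmixing: a mixed pixel is
# modeled as a non-negative, sum-to-one combination of endmember spectra.
# Spectra are synthetic and illustrative only.
import numpy as np
from scipy.optimize import nnls

# Columns: endmember spectra (soil, grass, tree) over 5 hypothetical bands
E = np.array([[0.30, 0.05, 0.04],
              [0.35, 0.08, 0.06],
              [0.40, 0.45, 0.30],
              [0.45, 0.50, 0.35],
              [0.50, 0.40, 0.25]])
true_f = np.array([0.5, 0.3, 0.2])
pixel = E @ true_f                       # noise-free mixed-pixel spectrum

delta = 100.0                            # weight enforcing the sum-to-one constraint
A = np.vstack([E, delta * np.ones(3)])   # append weighted sum-to-one row
b = np.append(pixel, delta)
fractions, _ = nnls(A, b)                # non-negative areal fractions
```

With noise or multiple scattering (the nonlinear effects the record discusses in the NIR), the recovered fractions degrade, which is what the balloon experiment quantified.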

  17. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liqu...

  18. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

Roč. 37, č. 24 (2013), s. 9970-9984 ISSN 0307-904X R&D Projects: GA TA ČR TA01030123 Institutional support: RVO:67985556 Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy Subject RIV: BC - Control Systems Theory Impact factor: 2.158, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  19. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  20. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab); and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two variants: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelization (finite-element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time-history of the displacements and efforts on the supports, but the damping (probably greater than 2%) and the fluid-structure interaction remain to be specified. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here.

  1. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

Przemysław Biecek

    2012-04-01

Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we equipped the package also with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model fitting in all modeling variants. The package can also be applied to selecting the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  2. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  3. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients’ proportions sum to one. The thesis begins by introducing mixture

  4. On Bayesian Inference under Sampling from Scale Mixtures of Normals

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1996-01-01

This paper considers a Bayesian analysis of the linear regression model under independent sampling from general scale mixtures of Normals. Using a common reference prior, we investigate the validity of Bayesian inference and the existence of posterior moments of the regression and precision
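A concrete instance of the sampling scheme this record studies: mixing a normal over a Gamma-distributed precision scale gives the Student-t, the classic heavy-tailed scale mixture of normals used in robust regression. The degrees of freedom and sample size below are illustrative.

```python
# Sketch: drawing from a scale mixture of normals. With mixing variable
# lam ~ Gamma(nu/2, rate nu/2) and x | lam ~ N(0, 1/lam), the marginal of
# x is Student-t with nu degrees of freedom.
import numpy as np

rng = np.random.default_rng(1)
nu, n = 10.0, 400_000
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # E[lam] = 1
x = rng.standard_normal(n) / np.sqrt(lam)             # marginally t with nu d.o.f.

# The t variance nu/(nu-2) should emerge from the mixture
sample_var = x.var()
```

Other mixing distributions for `lam` yield other members of the family (e.g. exponential mixing gives the Laplace distribution).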

  5. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
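Since this record applies its graphic procedures to Ridge Regression, a minimal closed-form sketch of ridge versus ordinary least squares may help: the ridge estimator shrinks the coefficient norm, which is exactly the behaviour such diagnostic plots trace out. Data and the penalty value are simulated and illustrative.

```python
# Sketch of OLS versus Ridge Regression in closed form,
# beta_ridge = (X'X + k I)^{-1} X'y, illustrating coefficient shrinkage.
# Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = X @ beta_true + rng.normal(0, 0.5, 100)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary least squares
k = 10.0                                              # ridge penalty (illustrative)
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(5), X.T @ y)
```

Sweeping `k` and plotting each coefficient of `beta_ridge` gives the familiar ridge trace.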

  6. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  7. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully ... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable ...

  8. Equilibrium moisture content of waste mixtures from post-consumer carton packaging.

    Science.gov (United States)

    Bacelos, M S; Freire, J T

    2012-01-01

    The manufacturing of boards and roof tiles is one of the routes to reuse waste from the recycled-carton-packaging process. Such a process requires knowledge of the hygroscopic behaviour of these carton-packaging waste mixtures in order to guarantee the quality of the final product (e.g. boards and roof tiles). Thus, with four carton-packaging waste mixtures of selected compositions (A, B, C and D), the sorption isotherms were obtained at air temperature of 20, 40 and 60 degrees C by using the static method. This permits one to investigate which model can relate the equilibrium moisture content of the mixture with that of a pure component through the mass fraction of each component in the mixtures. The results show that the experimental data can be well described by the weighted harmonic mean model. This suggests that the mean equilibrium moisture content of the carton-packaging mixture presents a non-linear relationship with each single, pure compound.
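The weighted harmonic mean model that this record finds to describe the data can be written as M_mix = 1 / Σ(w_i / M_i), with w_i the mass fraction and M_i the equilibrium moisture content of component i. A minimal sketch, with hypothetical component values rather than the measured carton-packaging data:

```python
# Sketch of the weighted harmonic mean model for the equilibrium moisture
# content (EMC) of a mixture from pure-component values.
# Component fractions and EMCs below are illustrative assumptions.
def mixture_emc(mass_fractions, component_emc):
    """Weighted harmonic mean: M_mix = 1 / sum(w_i / M_i)."""
    return 1.0 / sum(w / m for w, m in zip(mass_fractions, component_emc))

# e.g. paper, polyethylene, aluminium fractions with assumed EMCs (kg/kg)
w = [0.75, 0.20, 0.05]
m = [0.12, 0.01, 0.004]          # hypothetical pure-component EMCs
emc = mixture_emc(w, m)
```

The harmonic form makes the mixture EMC a non-linear function of composition, dominated by the low-sorption components, which matches the non-linear relationship the record reports.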

  9. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

    International Nuclear Information System (INIS)

    Griffond, J; Soulard, O; Souffland, D

    2010-01-01

    To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

  10. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary and three quaternary mixtures are considered. It is shown that overall excellent correlation for binary mixtures and satisfactory prediction results for multicomponent systems are obtained. There are significant differences between the various modeling approaches and the best results are obtained when...

  11. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  12. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling, to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs.

  13. Solution thermodynamics and preferential solvation of sulfamethazine in (methanol + water) mixtures

    International Nuclear Information System (INIS)

    Delgado, Daniel R.; Almanza, Ovidio A.; Martínez, Fleming; Peña, María A.; Jouyban, Abolghasem; Acree, William E.

    2016-01-01

Highlights: • Solubility of sulfamethazine (SMT) was measured in (methanol + water) mixtures. • SMT solubility was correlated with the Jouyban–Acree model. • Gibbs energy, enthalpy, and entropy of dissolution of SMT were calculated. • A non-linear enthalpy–entropy relationship was observed for SMT. • Preferential solvation of SMT by methanol was analyzed by using the IKBI method. - Abstract: The solubility of sulfamethazine (SMT) in {methanol (1) + water (2)} co-solvent mixtures was determined at five different temperatures from (293.15 to 313.15) K. The sulfonamide exhibited its highest mole fraction solubility in pure methanol (δ1 = 29.6 MPa^1/2) and its lowest mole fraction solubility in water (δ2 = 47.8 MPa^1/2) at each of the five temperatures studied. The Jouyban–Acree model was used to correlate/predict the solubility values. The respective apparent thermodynamic functions Gibbs energy, enthalpy, and entropy of solution were obtained from the solubility data through the van’t Hoff and Gibbs equations. Apparent thermodynamic quantities of mixing were also calculated for this drug using values of the ideal solubility reported in the literature. A non-linear enthalpy–entropy relationship was noted for SMT in plots of both the enthalpy vs. Gibbs energy of mixing and the enthalpy vs. entropy of mixing. These plots suggest two different trends according to the slopes obtained when the composition of the mixtures changes. Accordingly, the mechanism for SMT transfer processes in water-rich mixtures from water to the mixture with 0.70 in mass fraction of methanol is entropy driven. Conversely, the mechanism is enthalpy driven in mixtures whenever the methanol composition exceeds 0.70 mol fraction. An inverse Kirkwood–Buff integral analysis of the preferential solvation of SMT indicated that the drug is preferentially solvated by water in water-rich mixtures but is preferentially solvated by methanol in methanol-rich mixtures.
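The van't Hoff step in this record (apparent dissolution enthalpy from the slope of ln x versus 1/T) can be sketched as a simple linear fit. The solubility values below are generated from an assumed enthalpy, not the measured sulfamethazine data.

```python
# Sketch of a van't Hoff analysis: the apparent enthalpy of solution from
# the slope of ln(x) versus 1/T. Data are synthetic, generated from an
# assumed enthalpy, purely to show the regression step.
import numpy as np

R = 8.314             # J mol^-1 K^-1
dH_true = 25_000.0    # assumed apparent dissolution enthalpy, J mol^-1
T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])   # K, as in the record
ln_x = -dH_true / R / T + 2.0      # ideal van't Hoff line (arbitrary intercept)

slope, intercept = np.polyfit(1.0 / T, ln_x, 1)
dH_est = -slope * R                # recovered apparent enthalpy of solution
```

With real data the fit also yields the Gibbs energy and, by difference, the entropy of solution, which is how the enthalpy-entropy plots in the record are constructed.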

  14. Measuring two-phase and two-component mixtures by radiometric technique

    International Nuclear Information System (INIS)

    Mackuliak, D.; Rajniak, I.

    1984-01-01

The possibility of applying the radiometric method to measuring the water content of steam was tested. The experiments were carried out under model conditions where steam was replaced with a two-component mixture of water and air. The beta radiation source was the isotope ²⁰⁴Tl (E_max = 0.765 MeV) with an activity of 19.35 MBq. Measurements were carried out within the range of the surface density of the mixture from 0.119 kg·m⁻² to 0.130 kg·m⁻². Mixture speed was 5.1 m·s⁻¹ to 7.1 m·s⁻¹. The observed dependence of relative pulse frequency on the specific water content in the mixture was approximated by a linear regression. (B.S.)

  15. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Science.gov (United States)

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. The isometric activation has been used for the simulation. The equivalent linear damping and the time constant of each model were extracted by using the results obtained from the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.

  16. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
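A stripped-down version of the N-mixture idea in this record: repeated counts at each site follow a binomial with latent abundance drawn from a Poisson, the latent abundances are marginalized out, and (lambda, p) are fitted by maximum likelihood, here with a simple grid search on simulated counts. The covariates and the hierarchical group-response layer of the lion analysis are omitted.

```python
# Sketch of a basic binomial N-mixture model: counts y_it ~ Binomial(N_i, p)
# with latent site abundance N_i ~ Poisson(lambda). The latent N_i are
# summed out and (lambda, p) estimated by a grid-search MLE.
# All data are simulated.
import numpy as np
from scipy.stats import poisson, binom
from scipy.special import logsumexp

rng = np.random.default_rng(3)
lam_true, p_true, n_sites, n_visits = 5.0, 0.5, 80, 3
N = rng.poisson(lam_true, n_sites)
counts = rng.binomial(N[:, None], p_true, size=(n_sites, n_visits))

def loglik(lam, p, counts, n_max=60):
    """Marginal log-likelihood, summing the latent abundance out to n_max."""
    Ns = np.arange(n_max + 1)
    log_prior = poisson.logpmf(Ns, lam)[:, None]                     # (n_max+1, 1)
    # log P(y_i | N) per site and candidate N (binom.logpmf is -inf for N < y)
    log_obs = binom.logpmf(counts[None, :, :], Ns[:, None, None], p).sum(axis=2)
    return logsumexp(log_prior + log_obs, axis=0).sum()

lams = np.linspace(1.0, 12.0, 23)
ps = np.linspace(0.05, 0.95, 19)
grid = np.array([[loglik(l, p, counts) for p in ps] for l in lams])
i, j = np.unravel_index(grid.argmax(), grid.shape)
lam_hat, p_hat = lams[i], ps[j]
```

Abundance and detection are only weakly separable with few repeat visits, but their product (the expected count) is well identified, which is the practical caveat behind N-mixture estimates.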

  17. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  18. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    International Nuclear Information System (INIS)

    Almasi, Mohammad

    2014-01-01

Graphical abstract: experimental and calculated partial molar volumes (V̄_m,1) of MIK with 2-PrOH, 2-BuOH and 2-PenOH at T = 298.15 K; lines show the PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between experimental data and values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with polar solvents, namely 2-propanol, 2-butanol and 2-pentanol, were measured at 7 temperatures (293.15–323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed Chain Statistical Associating Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.
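The excess molar volume this record computes from measured densities follows from V_m^E = V_m,real - V_m,ideal. A minimal sketch, using the molar masses of MIK and 2-propanol but illustrative (assumed) densities rather than the paper's measurements:

```python
# Sketch: excess molar volume V_m^E of a binary liquid mixture from
# densities. Molar masses are for MIK and 2-propanol; the density values
# are assumed for illustration, not taken from the paper.
M1, M2 = 100.16, 60.10          # g/mol: MIK, 2-propanol
rho1, rho2 = 0.796, 0.781       # g/cm^3, pure-component densities (assumed)
x1 = 0.5                        # mole fraction of MIK
rho_mix = 0.792                 # g/cm^3, mixture density (assumed)

V_ideal = x1 * M1 / rho1 + (1 - x1) * M2 / rho2     # ideal molar volume, cm^3/mol
V_real = (x1 * M1 + (1 - x1) * M2) / rho_mix        # real molar volume, cm^3/mol
V_E = V_real - V_ideal                              # excess molar volume
```

A negative V_E, as produced by these assumed densities, is the signature of the attractive packing interactions the record discusses.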

  19. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    association between prices that can be attributed, among others, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled in both a global and a local....... The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that submarkets are recovered by the housing prices data compared to submarkets imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo...

  20. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  1. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control of false positives and the sensitivity.
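
    The random-intercept case underlying scenarios such as (1) above can be sketched with `statsmodels` on synthetic data; this is an illustrative sketch, not the authors' FMRI implementation, and all variable names are hypothetical:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_obs = 20, 15
    subj = np.repeat(np.arange(n_subj), n_obs)
    x = rng.normal(size=n_subj * n_obs)
    # Per-subject random intercepts plus a common fixed effect of x
    subj_intercept = rng.normal(scale=1.0, size=n_subj)[subj]
    y = 2.0 + 0.5 * x + subj_intercept + rng.normal(scale=0.5, size=n_subj * n_obs)
    df = pd.DataFrame({"y": y, "x": x, "subj": subj})

    # Random-intercept LME: y ~ x with a random intercept for each subject
    result = smf.mixedlm("y ~ x", df, groups=df["subj"]).fit()
    print(result.params["x"])  # estimated fixed-effect slope
    ```

    The `groups` argument is what separates within-subject variability from residual noise, which is the point of scenario (1) in the abstract.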

  2. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model...... a very big flexibility. Support Vector Machines (SVM's) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors...... on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....

  3. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  4. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

    Full Text Available This study shows the development of a predictive natural gas mixture combustion model for conventional combustion (ignition) engines. The model was based on resolving two areas: one having unburned combustion mixture and another having combustion products. Energy and matter conservation equations were solved for each crankshaft turn angle for each area. Nonlinear differential equations for each phase’s energy (considering compression, combustion and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enabled studying different natural gas components’ composition and evaluating combustion in the presence of dry and humid air. Validation results are shown with experimental data, demonstrating the software’s precision and accuracy in the results so produced. The results showed cylinder pressure, unburned and burned mixture temperature, burned mass fraction and combustion reaction heat for the engine being modelled using a natural gas mixture.
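
    The fourth-order Runge-Kutta scheme mentioned above can be sketched generically; this is a minimal single-step integrator applied to a toy ODE, not the paper's combustion equations:

    ```python
    import math

    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Toy problem dy/dt = -y with exact solution exp(-t)
    f = lambda t, y: -y
    y, t, h = 1.0, 0.0, 0.01
    for _ in range(100):
        y = rk4_step(f, t, y, h)
        t += h
    print(y)  # close to exp(-1)
    ```

    In the paper the same step would be applied per crankshaft angle to the coupled energy equations of the burned and unburned zones.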

  5. Non-linear finite element modeling

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  6. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers more powerful and robust performance than classic finite mixtures.
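
    The same infinite-mixture idea, with Gaussian components in place of the paper's generalized Dirichlet ones, is available in scikit-learn's truncated Dirichlet-process mixture; a minimal sketch on synthetic data:

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(1)
    # Three well-separated clusters; the model is given more components than needed
    X = np.vstack([
        rng.normal(loc=[0, 0], scale=0.3, size=(100, 2)),
        rng.normal(loc=[5, 5], scale=0.3, size=(100, 2)),
        rng.normal(loc=[0, 5], scale=0.3, size=(100, 2)),
    ])

    # Truncated Dirichlet-process mixture: unused components receive near-zero weight,
    # so the effective number of clusters is inferred rather than fixed in advance
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)

    effective = int(np.sum(dpgmm.weights_ > 0.05))
    print(effective)
    ```

    The key point mirrored from the abstract is that `n_components` is only a truncation level, not a model choice: the posterior concentrates weight on as many components as the data support.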

  7. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
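
    The LOD comparison described above can be sketched for a single genome location with a plain two-component normal mixture fitted by EM; this illustrates the standard statistic the paper critiques, not the authors' corrected model, and the data are synthetic:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    # Phenotypes from two QTL genotype classes with different means
    y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])

    def loglik_single(y):
        # Single-normal fit (the null model in the LOD ratio)
        return norm.logpdf(y, y.mean(), y.std()).sum()

    def loglik_mixture(y, n_iter=50):
        # Plain EM for a two-component normal mixture with common variance
        w, m1, m2, s = 0.5, y.min(), y.max(), y.std()
        for _ in range(n_iter):
            p1 = w * norm.pdf(y, m1, s)
            p2 = (1 - w) * norm.pdf(y, m2, s)
            r = p1 / (p1 + p2)          # responsibilities (E-step)
            w = r.mean()                 # M-step updates
            m1 = (r * y).sum() / r.sum()
            m2 = ((1 - r) * y).sum() / (1 - r).sum()
            s = np.sqrt((r * (y - m1) ** 2 + (1 - r) * (y - m2) ** 2).mean())
        return np.log(w * norm.pdf(y, m1, s) + (1 - w) * norm.pdf(y, m2, s)).sum()

    lod = (loglik_mixture(y) - loglik_single(y)) / np.log(10)
    print(lod)
    ```

    Because the single normal is a boundary case of the mixture, the LOD is non-negative by construction, which is exactly why low-information regions can show spurious peaks.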

  8. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  9. Measurement and Evaluation of Finger Tapping Movements Using Log-linearized Gaussian Mixture Networks

    Directory of Open Access Journals (Sweden)

    Masaru Yokoe

    2009-03-01

    Full Text Available This paper proposes a method to quantitatively measure and evaluate finger tapping movements for the assessment of motor function using log-linearized Gaussian mixture networks (LLGMNs). First, finger tapping movements are measured using magnetic sensors, and eleven indices are computed for evaluation. After standardizing these indices based on those of normal subjects, they are input to LLGMNs to assess motor function. Then, motor ability is probabilistically discriminated to determine whether it is normal or not using a classifier combined with the output of multiple LLGMNs based on bagging and entropy. This paper reports on evaluation and discrimination experiments performed on finger tapping movements in 33 Parkinson’s disease (PD) patients and 32 normal elderly subjects. The results showed that the patients could be classified correctly in terms of their impairment status with a high degree of accuracy (average rate: 93.1 ± 3.69%) using 12 LLGMNs, which was about 5% higher than the results obtained using a single LLGMN.

  10. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  11. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of constitutive asphalt mixture model with viscoelastic properties...... modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted......

  12. PhyloBayes MPI: phylogenetic reconstruction with infinite mixtures of profiles in a parallel environment.

    Science.gov (United States)

    Lartillot, Nicolas; Rodrigue, Nicolas; Stubbs, Daniel; Richer, Jacques

    2013-07-01

    Modeling across site variation of the substitution process is increasingly recognized as important for obtaining more accurate phylogenetic reconstructions. Both finite and infinite mixture models have been proposed and have been shown to significantly improve on classical single-matrix models. Compared with their finite counterparts, infinite mixtures have a greater expressivity. However, they are computationally more challenging. This has resulted in practical compromises in the design of infinite mixture models. In particular, a fast but simplified version of a Dirichlet process model over equilibrium frequency profiles implemented in PhyloBayes has often been used in recent phylogenomics studies, while more refined model structures, more realistic and empirically more fit, have been practically out of reach. We introduce a message passing interface version of PhyloBayes, implementing the Dirichlet process mixture models as well as more classical empirical matrices and finite mixtures. The parallelization is made efficient thanks to the combination of two algorithmic strategies: a partial Gibbs sampling update of the tree topology and the use of a truncated stick-breaking representation for the Dirichlet process prior. The implementation shows close to linear gains in computational speed for up to 64 cores, thus allowing faster phylogenetic reconstruction under complex mixture models. PhyloBayes MPI is freely available from our website www.phylobayes.org.
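
    The truncated stick-breaking representation mentioned in the abstract can be sketched in a few lines; this is an illustrative prior draw of mixture weights, whereas PhyloBayes applies the construction to equilibrium frequency profiles:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def stick_breaking(alpha, truncation):
        """Truncated stick-breaking weights for a Dirichlet process prior.

        Each weight is the fraction beta_k broken off the stick that remains
        after the first k-1 breaks; truncation bounds the number of components.
        """
        betas = rng.beta(1.0, alpha, size=truncation)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
        return betas * remaining

    w = stick_breaking(alpha=1.0, truncation=50)
    print(w.sum())  # close to, but strictly below, 1
    ```

    Truncation is what makes the infinite mixture tractable in a parallel Gibbs sampler: the leftover stick mass decays geometrically, so a modest truncation loses essentially nothing.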

  13. Experimental and modeling study on effects of N2 and CO2 on ignition characteristics of methane/air mixture

    Directory of Open Access Journals (Sweden)

    Wen Zeng

    2015-03-01

    Full Text Available The ignition delay times of methane/air mixture diluted by N2 and CO2 were experimentally measured in a chemical shock tube. The experiments were performed over the temperature range of 1300–2100 K, the pressure range of 0.1–1.0 MPa, the equivalence ratio range of 0.5–2.0, and for dilution coefficients of 0%, 20% and 50%. The results suggest that a linear relationship exists between the reciprocal of temperature and the logarithm of the ignition delay times. The measured ignition delay times of the methane/air mixture decrease with increasing ignition temperature and pressure. Furthermore, an increase in the dilution coefficient of N2 or CO2 results in increasing ignition delays, and the inhibition effect of CO2 on methane/air mixture ignition is stronger than that of N2. Simulated ignition delays of the methane/air mixture using three kinetic models were compared to the experimental data. Results show that the GRI_3.0 mechanism gives the best prediction of ignition delays of the methane/air mixture, and it was selected to identify the effects of N2 and CO2 on ignition delays and the key elementary reactions in the ignition chemistry of the methane/air mixture. Comparisons of the calculated ignition delays with the experimental data of the methane/air mixture diluted by N2 and CO2 show excellent agreement, and the sensitivity coefficients of chain branching reactions which promote mixture ignition decrease with an increasing dilution coefficient of N2 or CO2.
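
    The reported linear relation between the logarithm of the ignition delay and the reciprocal of temperature is the usual Arrhenius-type correlation; a sketch with assumed (hypothetical) parameters rather than the paper's measured values:

    ```python
    import numpy as np

    R = 8.314          # J/(mol K)
    A, Ea = 1e-4, 1.8e5  # assumed pre-exponential factor and activation energy

    # Synthetic Arrhenius-type ignition delays over the reported 1300-2100 K range
    T = np.linspace(1300.0, 2100.0, 9)
    tau = A * np.exp(Ea / (R * T))

    # The linear relation from the abstract: log(tau) versus 1/T
    slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
    Ea_fit = slope * R  # slope of the Arrhenius plot recovers Ea
    print(Ea_fit)
    ```

    The slope of such a fit is how a global activation energy is usually extracted from shock-tube delay data.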

  14. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
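
    The conventional pipeline the authors contrast with (fit a spatially invariant mixture, then label each pixel by its maximum-posterior component) can be sketched on synthetic one-dimensional intensities; class means and counts here are hypothetical:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    # Synthetic pixel intensities from two tissue classes
    true = rng.random(2000) < 0.4
    pixels = np.where(true, rng.normal(100, 10, 2000), rng.normal(160, 12, 2000))

    # Fit a two-component mixture, then label each pixel by its
    # maximum-posterior component (the Bayes-classifier step)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels.reshape(-1, 1))
    labels = gmm.predict(pixels.reshape(-1, 1))

    # Component order is arbitrary, so score up to label permutation
    acc = max(np.mean(labels == true), np.mean(labels != true))
    print(acc)
    ```

    The paper's point is that its spatially variant model estimates the labels directly in the EM iterations, removing this separate classification step.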

  15. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.
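
    A parametric baseline for the models above, a two-component mixture of linear regressions fitted by EM, can be sketched as follows; the paper replaces these linear component fits with kernel regression, and the data and initial values here are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400
    x = rng.uniform(-1, 1, n)
    z = rng.random(n) < 0.5
    # Two regression regimes: y = 1 + 2x and y = -1 - 2x, plus noise
    y = np.where(z, 1.0 + 2.0 * x, -1.0 - 2.0 * x) + rng.normal(0, 0.2, n)
    X = np.column_stack([np.ones(n), x])

    w = 0.5                                        # mixing proportion
    beta = np.array([[0.5, 1.0], [-0.5, -1.0]])    # initial coefficients
    s2 = np.array([1.0, 1.0])                      # component variances
    for _ in range(100):
        # E-step: responsibility of component 0 for each observation
        d = np.stack([y - X @ beta[k] for k in range(2)])
        p = (np.array([w, 1 - w])[:, None]
             * np.exp(-d**2 / (2 * s2[:, None]))
             / np.sqrt(2 * np.pi * s2[:, None]))
        r = p[0] / p.sum(0)
        # M-step: weighted least squares for each component
        for k, rk in enumerate([r, 1 - r]):
            beta[k] = np.linalg.solve(X.T @ (rk[:, None] * X), X.T @ (rk * y))
            s2[k] = (rk * (y - X @ beta[k]) ** 2).sum() / rk.sum()
        w = r.mean()
    print(beta)
    ```

    Replacing the weighted least-squares M-step with a kernel-weighted local fit gives the nonparametric variant the abstract describes.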

  16. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    Energy Technology Data Exchange (ETDEWEB)

    Almasi, Mohammad, E-mail: m.almasi@khouzestan.srbiau.ac.ir

    2014-09-10

    Experimental and calculated partial molar volumes (V̄_m,1) of MIK with (♦) 2-PrOH, (♢) 2-BuOH, (●) 2-PenOH at T = 298.15 K. (—) PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • PC-SAFT model was applied to correlate the volumetric properties of binary mixtures. • Agreement between experimental data and values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with polar solvents, namely 2-propanol, 2-butanol and 2-pentanol, were measured at 7 temperatures (293.15–323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivity α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed Chain Statistical Association Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.

  17. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  18. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  19. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  20. Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region

    Science.gov (United States)

    Mazurek, Grzegorz; Iwański, Marek

    2017-10-01

    Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves under rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers’ model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder coarse layer and for traffic category KR3 (5×10⁵ controlled strain mode. The fixed strain level was set at 25 με to guarantee that the stiffness modulus of the asphalt concrete would be tested in the linear viscoelasticity range. The master curve was formed using the time-temperature superposition principle (TTSP). The stiffness modulus of asphalt concrete was determined at temperatures of 10°C, 20°C and 40°C and at loading frequencies of 0.1, 0.3, 1, 3, 10, 20 Hz. The model parameters were fitted to the rheological models using original programs based on the nonlinear least squares method. All the rheological models under analysis were found to be capable of predicting changes in the stiffness modulus of the reference asphalt concrete to satisfactory accuracy. In the cases of the fractional model and the generalized Maxwell model, the accuracy depends on the number of elements in series. The best fit was registered for Bahia and co-workers’ model, the generalized Maxwell model and the fractional model. As for predicting the phase angle parameter, the largest discrepancies between experimental and modelled results were obtained using the fractional model. Except for the Burgers model, the model matching quality was
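
    The MEPDG-style sigmoidal master curve mentioned above can be fitted with ordinary nonlinear least squares; a sketch on synthetic, noise-free data with assumed parameters, not the paper's measured AC16W data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid_master_curve(log_fr, delta, alpha, beta, gamma):
        """MEPDG-style sigmoid: log|E*| as a function of log reduced frequency."""
        return delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))

    # Generate synthetic stiffness data from assumed parameters, then refit
    true_p = (1.0, 3.0, -1.0, -0.5)
    log_fr = np.linspace(-6, 6, 25)       # log reduced frequency after TTSP shifting
    log_E = sigmoid_master_curve(log_fr, *true_p)

    fit_p, _ = curve_fit(sigmoid_master_curve, log_fr, log_E, p0=(0.5, 2.0, 0.0, -1.0))
    print(fit_p)
    ```

    In practice the reduced frequencies would come from time-temperature superposition shifting of the 10°C, 20°C and 40°C sweeps before the sigmoid is fitted.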

  1. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with a paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
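
    For reference, the standard LQ-based BED/EQD2 conversion can be sketched as below; note this is the plain linear-quadratic form, whereas the LQL model used in the paper modifies the response at high doses per fraction:

    ```python
    def bed(total_dose, dose_per_fraction, alpha_beta):
        """Biologically effective dose under the standard LQ model (all in Gy)."""
        return total_dose * (1.0 + dose_per_fraction / alpha_beta)

    def eqd2(total_dose, dose_per_fraction, alpha_beta):
        """Equivalent dose in 2 Gy fractions: BED rescaled by the 2 Gy effect."""
        return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

    # Example: 30 Gy in 10 fractions with alpha/beta = 10 Gy (typical tumour value)
    print(eqd2(30.0, 3.0, 10.0))
    ```

    Applying such a conversion voxel by voxel to the extracted physical dose is what turns a physical dose distribution into the biological one the abstract describes.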

  2. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  3. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).

  4. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  5. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for primal-dual interior-point quadratic programming problem solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that the computational burden scales with prediction...... horizon length in a linear way rather than cubic, which would be the case if the structure was not exploited. It is also shown how models used for design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  6. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two of the non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.

  7. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    Normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two component univariate normal mixture distributions model. First, we present the application of normal mixture distributions model in empirical finance where we fit our real data. Second, we present the application of normal mixture distributions model in risk analysis where we apply the normal mixture distributions model to evaluate the value at risk (VaR) and conditional value at risk (CVaR) with model validation for both risk measures. The empirical results provide evidence that using the two components normal mixture distributions model can fit the data well and can perform better in estimating value at risk (VaR) and conditional value at risk (CVaR) where it can capture the stylized facts of non-normality and leptokurtosis in returns distribution.
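
    VaR and CVaR for a two-component normal mixture follow directly from its closed-form CDF; a sketch with illustrative parameters (not the fitted FBMKLCI values):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq
    from scipy.integrate import quad

    # Two-component normal mixture for returns: a calm and a turbulent regime
    w = np.array([0.8, 0.2])
    mu = np.array([0.01, -0.03])
    sigma = np.array([0.02, 0.06])

    def cdf(x):
        # Mixture CDF is the weighted sum of component CDFs
        return float(np.sum(w * norm.cdf(x, mu, sigma)))

    def var(alpha):
        # Value at risk: the alpha-quantile of the return distribution
        return brentq(lambda x: cdf(x) - alpha, -1.0, 1.0)

    def cvar(alpha):
        # Conditional VaR: expected return given the return is at or below VaR
        q = var(alpha)
        pdf = lambda x: float(np.sum(w * norm.pdf(x, mu, sigma)))
        tail_mean, _ = quad(lambda x: x * pdf(x), -1.0, q)
        return tail_mean / alpha
    ```

    The heavier left tail contributed by the low-weight, high-variance component is what lets the mixture capture the leptokurtosis the abstract mentions, which a single normal would understate.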

  8. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    Science.gov (United States)

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5%, and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
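Constrained linear spectral unmixing commonly enforces that the per-pixel fractions sum to one (often with nonnegativity as well). The sketch below enforces only the sum-to-one constraint via a heavily weighted augmented least-squares row; it is a simplified stand-in, not the authors' implementation, and the endmember spectra in the test are hypothetical.

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Sum-to-one constrained linear spectral unmixing.

    pixel: (n_bands,) observed spectrum.
    endmembers: (n_classes, n_bands) pure-class spectra.
    The extra heavily weighted row softly enforces sum(fractions) == 1.
    """
    A = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```

A full implementation would additionally clip or re-solve for nonnegative fractions (e.g. with a nonnegative least-squares solver).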

  9. Applicability of linear and non-linear potential flow models on a Wavestar float

    DEFF Research Database (Denmark)

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    as a model based on non-linear potential flow theory and the weak-scatterer hypothesis are successively considered. Simple tests, such as dip tests, decay tests and captive tests, make it possible to highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave action and without...... control action, limited to small amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.......Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion, non-linear hydrostatic and Froude-Krylov forces as well...

  10. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Directory of Open Access Journals (Sweden)

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is essential for an investor deciding to purchase shares from the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among linear models of stock index volatility, the moving average model ranks first under the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five nonlinear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH. We find that the nonlinear models fail to dominate the linear models under the different error measurement criteria, and the moving average model appears to be the best. We then forecast the next two months of stock index price volatility using the best (moving average) model.
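The moving average model that ranks first here can be sketched as a rolling standard deviation of recent returns used as the next-period volatility forecast; the window length below is an illustrative choice, not the study's setting.

```python
import numpy as np

def ma_volatility_forecast(returns, window=20):
    """Forecast next-period volatility as the sample standard deviation
    of the most recent `window` returns (the simple moving-average model)."""
    r = np.asarray(returns, dtype=float)
    if r.size < window:
        raise ValueError("need at least `window` observations")
    return r[-window:].std(ddof=1)
```

Iterating this over the sample and scoring the forecasts with RMSE or a Linex loss reproduces the kind of comparison the study performs.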

  11. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  12. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    International Nuclear Information System (INIS)

    Liu, Yen; Vinokur, Marcel; Panesi, Marco; Sahai, Amal

    2015-01-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations

  13. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate......This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more...... these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons....

  14. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model based on the cause-specific hazard or the subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Preferential solvation of fluorenone and 4-hydroxyfluorenone in binary solvent mixtures

    International Nuclear Information System (INIS)

    Jozefowicz, Marek; Heldt, Janina R.

    2003-01-01

    Preferential solvation of fluorenone and 4-hydroxyfluorenone in binary solvent mixtures has been studied using steady-state spectroscopic measurements. This study concerns the solvent-induced shift of the absorption and fluorescence spectra of both molecules in two solvent mixtures, i.e., cyclohexane-tetrahydrofuran and cyclohexane-ethanol. The first system contains polar solute molecules, fluorenone and 4-hydroxyfluorenone, in a mixture of polar aprotic (tetrahydrofuran) and non-polar (cyclohexane) solvents. In the second solvent mixture, hydrogen bonding between the solute molecules and ethanol may occur. The results of the spectroscopic measurements are analysed using the theoretical models of Bakshiev, Mazurenko and Suppan, which describe preferential solvation phenomena. In the case of cyclohexane-tetrahydrofuran mixtures, the deviation from linearity in the absorption and fluorescence solvatochromic shifts vs. the solution polarity is due to non-specific dipolar solvent-solute interactions. For cyclohexane-ethanol binary mixtures, both non-specific and specific (hydrogen bond and proton-relay tautomerization) interactions contribute to the observed solvatochromism

  16. The Modified Enskog Equation for Mixtures

    NARCIS (Netherlands)

    Beijeren, H. van; Ernst, M.H.

    1973-01-01

    In a previous paper it was shown that a modified form of the Enskog equation, applied to mixtures of hard spheres, should be considered as the correct extension of the usual Enskog equation to the case of mixtures. The main argument was that the modified Enskog equation leads to linear transport

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  18. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...... normal and efficient estimator....

  19. An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Wenke Huang

    2016-01-01

    Full Text Available This paper presents an image-based micromechanical modeling approach to predict the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the OTSU thresholding operation was employed to reduce the beam hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of asphalt mixture from the processed CT images. In this 3D model, the aggregate phase and air voids were considered as elastic materials while the asphalt mastic phase was considered as a linear viscoelastic material. The viscoelastic constitutive model of asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT. An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameters, comparisons between the numerical predictions and the observed laboratory results of bending and compression tests were conducted. Finally, the verified digital sample of asphalt mixture was used to predict its viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture once the mechanical properties of the different phases become available.

  20. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  1. Model-based experimental design for assessing effects of mixtures of chemicals

    NARCIS (Netherlands)

    Baas, J.; Stefanowicz, A.M.; Klimek, B.; Laskowski, R.; Kooijman, S.A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for

  2. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will also contribute to the development of control design methods for those areas, as shown by three given examples.

  3. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models...... error, and robustness in low and medium signal-to-noise ratio regimes....
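As a concrete instance of the Gaussian scale mixtures mentioned here, the Laplace prior (whose negative log-density gives an l1-type penalty) arises from a Gaussian whose variance is exponentially distributed. The short simulation below, an illustration rather than anything from the paper, checks this by sampling.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
b = 1.0  # Laplace scale parameter

# Gaussian scale mixture: v ~ Exponential(mean 2*b^2), then x | v ~ N(0, v).
v = rng.exponential(scale=2.0 * b * b, size=n)
x = rng.normal(0.0, np.sqrt(v))

# Moments should match Laplace(b): E|x| = b and Var(x) = 2*b^2,
# with heavier tails than any single Gaussian component.
print(np.mean(np.abs(x)), np.var(x))
```

The same construction with other mixing densities on the variance yields the broader family of concave penalties the abstract refers to.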

  4. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  5. Composite Linear Models | Division of Cancer Prevention

    Science.gov (United States)

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  6. Quantiles for Finite Mixtures of Normal Distributions

    Science.gov (United States)

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
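The distinction this note emphasizes can be seen numerically: a linear combination of independent normal random variables is again normal, while a mixture of the corresponding densities is generally bimodal, so their quantiles differ sharply even when their means agree. A small illustrative simulation (the component parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Linear combination of independent normals: itself normal, hence unimodal.
combo = 0.5 * rng.normal(-3.0, 1.0, n) + 0.5 * rng.normal(3.0, 1.0, n)

# Mixture of the same two densities: each draw comes from one component (bimodal).
pick = rng.random(n) < 0.5
mixture = np.where(pick, rng.normal(-3.0, 1.0, n), rng.normal(3.0, 1.0, n))

# Both have mean 0, but the mixture's quartiles sit near the two modes.
print(np.quantile(combo, [0.25, 0.75]))
print(np.quantile(mixture, [0.25, 0.75]))
```

Here the combination is N(0, 1/2), with quartiles near ±0.48, while the mixture's quartiles fall near ±3, at the component centers.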

  7. Modelling of phase equilibria for associating mixtures using an equation of state

    International Nuclear Information System (INIS)

    Ferreira, Olga; Brignole, Esteban A.; Macedo, Eugenia A.

    2004-01-01

    In the present work, the group contribution with association equation of state (GCA-EoS) is extended to represent phase equilibria in mixtures containing acids, esters, and ketones, with water, alcohols, and any number of inert components. Association effects are represented by a group-contribution approach. Self- and cross-association between the associating groups present in these mixtures are considered. The GCA-EoS model is compared to the group-contribution method MHV2, which does not explicitly take association effects into account. The results obtained with the GCA-EoS model are, in general, more accurate than those achieved by the MHV2 equation, while using fewer parameters. Model predictions are presented for binary self- and cross-associating mixtures

  8. Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes

    DEFF Research Database (Denmark)

    Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo

    2012-01-01

    In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations....... The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess of how many mixtures the FGMM should use. In this work, we generalize this approach...... our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation, we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human...

  9. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and considers the rest of the volume as that of the large particle. This simple approach has been commonly accepted and validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and one in which it is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation
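The final prediction step can be sketched with the standard Kozeny-Carman relation; the constant 180 below is the value commonly used for granular packs and is an assumption on our part, not a parameter reported in this abstract.

```python
def kozeny_carman(phi_e, d_eff, c=180.0):
    """Permeability (m^2) from effective porosity phi_e (fraction) and
    effective grain size d_eff (m) via the Kozeny-Carman equation:
        k = d_eff^2 * phi_e^3 / (c * (1 - phi_e)^2)
    """
    return d_eff ** 2 * phi_e ** 3 / (c * (1.0 - phi_e) ** 2)

# Example: 40% effective porosity, 100-micron effective grain size.
k = kozeny_carman(0.4, 1e-4)
```

In the proposed model, `phi_e` and `d_eff` would come from the gravel-sand-clay volume fractions as described above.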

  10. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study we propose an entropy regularized mixture model (called EMM), which is capable of simultaneously inferring the number of communities and identifying the network structure contained in a network. In the model, by minimizing the entropy of the mixing coefficients of NMM within an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic and real networks shows that the proposed EMM is superior to the state-of-the-art methods.

  11. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  12. Model-based experimental design for assessing effects of mixtures of chemicals

    International Nuclear Information System (INIS)

    Baas, Jan; Stefanowicz, Anna M.; Klimek, Beata; Laskowski, Ryszard; Kooijman, Sebastiaan A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  13. Prediction of the solubility in lipidic solvent mixture: Investigation of the modeling approach and thermodynamic analysis of solubility.

    Science.gov (United States)

    Patel, Shruti V; Patel, Sarsvatkumar

    2015-09-18

    Self-micro emulsifying drug delivery systems (SMEDDS) are one method to improve the solubility and bioavailability of poorly soluble drug(s). Knowledge of the solubility of pharmaceuticals in pure lipidic solvents and solvent mixtures is crucial for designing SMEDDS for poorly soluble drug substances. Since experiments are very time consuming, a model that allows solubility predictions in solvent mixtures from less experimental data is desirable for efficiency. The solvents employed were Labrafil® M1944CS and Labrasol® as lipidic solvents; Capryol-90®, Capryol-PGMC® and Tween®-80 as surfactants; and Transcutol® and PEG-400 as co-solvents. Solubilities of both drugs were determined in single-solvent systems over the temperature (T) range 283-333 K. In the present study, we investigated the applicability of a thermodynamic model to understand the solubility behavior of drugs in the lipidic solvents. Using the Van't Hoff equation and general solubility theory, the thermodynamic functions, i.e. the Gibbs free energy, enthalpy and entropy of solution, mixing and solvation, for each drug in single and mixed solvents were obtained. The thermodynamic parameters were interpreted in the framework of drug-solvent interactions based on chemical similarity and dissimilarity. Clotrimazole and Fluconazole were used as the active ingredients; their solubility was measured in single solvents as a function of temperature, and the data obtained were used to derive mathematical models that can predict solubility in multi-component solvent mixtures. Model-dependent parameters for each drug were calculated at each temperature. The experimental solubility data of the solutes in the mixed solvent systems were measured and correlated with the calculated values obtained from the exponent model and the log-linear model of Yalkowsky. Good correlation was observed between experimental and predicted solubility. Copyright © 2015 Elsevier B.V. All rights reserved.
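The log-linear model of Yalkowsky mentioned above predicts mixture solubility as the solvent-fraction-weighted sum of the logarithms of the pure-solvent solubilities. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import math

def yalkowsky_log_linear(fractions, solubilities):
    """Yalkowsky log-linear rule: log of the mixture solubility equals the
    fraction-weighted sum of the logs of the pure-solvent solubilities."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "solvent fractions must sum to 1"
    return math.exp(sum(f * math.log(s) for f, s in zip(fractions, solubilities)))
```

For a 50:50 mixture of solvents in which a drug's solubilities differ by a factor of 100, the rule predicts the geometric mean of the two pure-solvent values.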

  14. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.

  15. Determining of migraine prognosis using latent growth mixture models.

    Science.gov (United States)

    Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan

    2011-04-01

    This paper presents a retrospective study classifying patients into treatment subtypes according to baseline and longitudinally observed values, accounting for heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed over a certain time period. The latent growth mixture model, however, is the most suitable method, because it accounts for population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Nausea, vomiting and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  16. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). The IoT is now used widely across a variety of domains, such as transportation and logistics and healthcare. However, missing values are very common in the IoT for a variety of reasons, leaving experimental data incomplete; work that depends on these data cannot proceed normally, and the accuracy and reliability of data analysis results are reduced. Based on the characteristics of the data itself and the features of missingness in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve these problems: a model of missing value imputation based on context and linear mean (MCL), a model based on binary search (MBS), and a model based on a Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
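The MGI idea summarized above (fit a Gaussian mixture to completed data, then re-impute the missing cells from the fitted components) can be sketched roughly as follows. This is a hedged illustration, not the paper's algorithm: it uses scikit-learn's `GaussianMixture`, initializes missing entries with column means, and replaces each missing value with the responsibility-weighted component means rather than the full conditional expectation given the observed features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_impute(X, n_components=2, n_iter=10, seed=0):
    """Iterative Gaussian-mixture imputation of NaN entries (illustrative).

    Missing cells start at their column means; each round fits a GMM on
    the completed matrix and overwrites missing cells with the
    responsibility-weighted component means for that feature (a
    simplification of the true conditional expectation).
    """
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    filled = X.copy()
    col_means = np.nanmean(X, axis=0)
    filled[missing] = col_means[np.where(missing)[1]]

    for _ in range(n_iter):
        gmm = GaussianMixture(n_components=n_components,
                              random_state=seed).fit(filled)
        resp = gmm.predict_proba(filled)          # (n_samples, k) responsibilities
        for i, j in zip(*np.where(missing)):
            filled[i, j] = resp[i] @ gmm.means_[:, j]
    return filled
```

With clustered data, the imputed value is pulled toward the mean of whichever component the rest of the row belongs to, which is the property that makes mixture-based imputation preferable to a plain column mean.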

  17. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

Thermodiffusion in Multicomponent Mixtures presents the computational approaches employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol mixtures, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of artificial neural networks. The book will serve as a single, complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion gives a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  18. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  19. Coupling Modified Linear Spectral Mixture Analysis and Soil Conservation Service Curve Number (SCS-CN) Models to Simulate Surface Runoff: Application to the Main Urban Area of Guangzhou, China

    Directory of Open Access Journals (Sweden)

    Jianhui Xu

    2016-11-01

Full Text Available Land surface characteristics, including soil type, terrain slope, and antecedent soil moisture, have significant impacts on surface runoff during heavy precipitation in highly urbanized areas. In this study, a Linear Spectral Mixture Analysis (LSMA) method is modified to extract high-precision impervious surface, vegetation, and soil fractions. In the modified LSMA method, representative endmembers are first selected with the aid of a high-resolution image from Google Earth; the unmixing results of the LSMA are then post-processed with the Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI) to reduce misclassification errors. The modified LSMA is applied to a Landsat 8 Operational Land Imager (OLI) image from 18 October 2015 of the main urban area of Guangzhou city. The experimental results indicate that the modified LSMA improves extraction performance compared with the conventional LSMA, as it significantly reduces bias and root-mean-square error (RMSE). The improved impervious surface, vegetation, and soil fractions are used to calculate the composite curve number (CN) for each pixel according to the Soil Conservation Service curve number (SCS-CN) model. The composite CN is then adjusted with regional data on terrain slope and total 5-day antecedent precipitation. Finally, surface runoff is simulated with the SCS-CN model by combining the adjusted CN and real precipitation data from 1 p.m., 4 May 2015.
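The composite-CN and runoff steps described above follow the standard SCS-CN formulation: an area-weighted CN per pixel, retention S = 25400/CN - 254 (in mm), and direct runoff Q = (P - Ia)²/(P - Ia + S) once precipitation P exceeds the initial abstraction Ia = 0.2·S. A minimal sketch (the cover-type CN values used below are illustrative assumptions, not the paper's calibrated tables):

```python
def composite_cn(fractions, cn_values):
    """Area-weighted composite curve number for one pixel.

    fractions: cover fractions (e.g. impervious, vegetation, soil) summing to 1.
    cn_values: CN assigned to each cover type (assumed values here; real
    applications use SCS lookup tables per hydrologic soil group).
    """
    assert abs(sum(fractions) - 1.0) < 1e-6
    return sum(f * cn for f, cn in zip(fractions, cn_values))

def scs_runoff(P, CN, lam=0.2):
    """SCS-CN direct runoff depth (mm) for storm precipitation P (mm)."""
    S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
    Ia = lam * S                    # initial abstraction
    if P <= Ia:
        return 0.0                  # all precipitation abstracted
    return (P - Ia) ** 2 / (P - Ia + S)

# e.g. a pixel that is 50% impervious, 30% vegetation, 20% bare soil
cn = composite_cn([0.5, 0.3, 0.2], [98, 60, 80])
q = scs_runoff(50.0, cn)
```

As expected, runoff grows with CN: a fully impervious pixel (CN = 100) returns all precipitation as runoff, while low-CN pixels absorb small storms entirely.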

  20. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

The most comprehensive, single-volume guide to conducting experiments with mixtures. "If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work." -Short Book Reviews (Publication of the International Statistical Institute) "The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments." -Journal of the Royal S

  1. Verifying reciprocal relations for experimental diffusion coefficients in multicomponent mixtures

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2003-01-01

The goal of the present study is to verify the agreement of the available data on diffusion in ternary mixtures with the theoretical requirement of linear non-equilibrium thermodynamics, consisting in the symmetry of the matrix of the phenomenological coefficients. A common set of measured diffusion coefficients for a three-component mixture consists of four Fickian diffusion coefficients, each being reported separately. However, the Onsager theory predicts the existence of only three independent coefficients, as one of them disappears due to the symmetry requirement. Re-calculation of the Fickian … extended sets of experimental data and reliable thermodynamic models were available. The sensitivity of the symmetry property to different thermodynamic parameters of the models was also checked. (C) 2003 Elsevier Science B.V. All rights reserved.

  2. Natural convection in ternary mixtures

    International Nuclear Information System (INIS)

    Kremer, G.M.; Kai, L.

    1981-01-01

The field equations for a mixture of a viscous fluid, a deformable solid, and a non-viscous fluid are studied, based on a linearized theory proposed by Bowen. The fields of density of each constituent, temperature, velocity of each fluid, and displacement of the solid are determined for steady-state flow of the mixture between two parallel planes and between two concentric cylinders maintained at different temperatures. (Author) [pt

  3. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
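The classification step in a two-component mixture of this kind reduces to the posterior probability that each protein belongs to the "consistent" component. A minimal sketch with assumed Gaussian component densities and an assumed null weight (the paper estimates its components semi-nonparametrically; everything numeric below is illustrative):

```python
import numpy as np
from scipy.stats import norm

def posterior_consistent(x, pi0, f0, f1):
    """Posterior probability that each observation comes from the
    'consistent' component of a two-component mixture.

    pi0: prior weight of the inconsistent (null) component.
    f0, f1: component densities (callables); Gaussians here for
    illustration only.
    """
    x = np.asarray(x, dtype=float)
    d0 = pi0 * f0(x)
    d1 = (1.0 - pi0) * f1(x)
    return d1 / (d0 + d1)

# illustrative densities: null centred at 0, 'consistent' signal at 3
post = posterior_consistent(
    [0.1, 2.9, 5.0], pi0=0.8,
    f0=lambda x: norm.pdf(x, 0, 1),
    f1=lambda x: norm.pdf(x, 3, 1),
)
```

Thresholding this posterior (e.g. declaring proteins with posterior above some cutoff "consistent") is what connects the mixture fit to false discovery rate control.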

  4. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

Zeotropic mixtures never have the same liquid and vapor composition at liquid-vapor equilibrium. In addition, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles: zeotropic mixtures as the working fluid in JT cycles improve their performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design is a critical point in these cycles; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained with this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.

  5. Text document classification based on mixture models

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Roč. 40, č. 3 (2004), s. 293-304 ISSN 0023-5954 R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : text classification * text categorization * multinomial mixture model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.224, year: 2004

  6. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  7. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    Science.gov (United States)

    Fomin, P. A.

    2018-03-01

Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene, or kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  8. Heterotic sigma models and non-linear strings

    International Nuclear Information System (INIS)

    Hull, C.M.

    1986-01-01

The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with a Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, superconformal invariance, and the supersymmetric string are all discussed. (U.K.)

  9. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Science.gov (United States)

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

The three widely used isotherms, Langmuir, Freundlich, and Temkin, were examined for fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures using linear and non-linear models. A comparison of linear and non-linear regression models is given for selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. The Langmuir isotherm parameters obtained from the four linearized equations differed, but they were identical under the non-linear model. Among the linear forms, Langmuir-2 had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3, and 4), whereas for the non-linear fits Langmuir-4 fitted best among all the isotherms, also with the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic studies reveal that limonite can be used as an adsorbent for fluoride removal. In the future, large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly, and easily available in the study area.
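The linear-versus-non-linear comparison can be reproduced generically: fit the Langmuir isotherm qe = qmax·KL·Ce/(1 + KL·Ce) directly with non-linear least squares, and via the Langmuir-2 linearization 1/qe = [1/(qmax·KL)]·(1/Ce) + 1/qmax. A sketch on synthetic data (all numbers below are illustrative assumptions, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# synthetic equilibrium data with mild multiplicative noise (assumed)
rng = np.random.default_rng(1)
Ce = np.linspace(0.5, 20, 15)
qe = langmuir(Ce, qmax=2.0, KL=0.8) * (1 + 0.02 * rng.standard_normal(Ce.size))

# non-linear fit of both parameters at once
(p_qmax, p_KL), _ = curve_fit(langmuir, Ce, qe, p0=(1.0, 0.5))

# Langmuir-2 linearization: regress 1/qe on 1/Ce
slope, intercept = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
l_qmax = 1.0 / intercept          # intercept = 1/qmax
l_KL = intercept / slope          # slope = 1/(qmax*KL)  =>  KL = intercept/slope
```

Both routes recover the generating parameters here, but the linearization reweights the errors (small qe values dominate 1/qe), which is exactly why the abstract's linear and non-linear estimates can disagree on real data.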

  10. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo ePark

    2016-02-01

Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to evaluate IPD within a mixture IRT framework to understand its effect on item parameters and examinee ability.

  11. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  12. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

Full Text Available This paper presents research on the effect of high-power ultrasound on the rheological and foaming properties of ice cream model mixtures. Ice cream model mixtures were prepared according to specific recipes and then subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe with a tip diameter of 12.7 mm was used for treatments lasting 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient, and apparent viscosity. From the results it can be concluded that all model mixtures exhibit non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, with significantly lower values of viscosity for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, foam stability index, and minimal viscosity. Ice cream model mixtures treated only with ultrasound showed a minimal increase in foam volume, while the highest increase in foam volume was observed for the ice cream mixture treated with the combination of mechanical and ultrasound treatment. Ice cream mixtures with a higher protein content also showed higher foam stability. The optimal treatment time was determined to be 10 minutes.

  13. Non linear viscoelastic models

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2011-01-01

Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid (SLS) model are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness, while the Kelvin-Voigt version does not.

  14. Large eddy simulation of the low temperature ignition and combustion processes on spray flame with the linear eddy model

    Science.gov (United States)

    Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn

    2018-03-01

    Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. The non-reacting case is firstly carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreements are observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index was introduced to distinguish between the premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated at the fuel-lean region, followed by the reactions in fuel-rich regions. Then high-temperature reaction occurs mainly at the places with mixture concentration around stoichiometric mixture fraction. While at an initial temperature of 1000 K, the first-stage ignition occurs at the fuel-rich region first, then it moves towards fuel-richer region. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated at the regions richer than stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer

  15. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Science.gov (United States)

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

Heritability, repeatability, and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, using a logarithmic scale for racing time and a fourth-root scale for earnings to correct for non-normality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, ranging from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to a normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale: 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except that correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  16. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov model; Multi-state model

  17. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  18. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Linear causal modeling with structural equations

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  20. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  1. Investigating Individual Differences in Toddler Search with Mixture Models

    Science.gov (United States)

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…
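A binomial mixture of the kind described can be fitted with a short EM loop. The sketch below is a generic two-component "passer"/"failer" model, not the authors' exact specification: each child contributes a success count out of a fixed number of search trials, and classification follows the component with the higher posterior responsibility.

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability mass function P(K = k | n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def em_binomial_mixture(counts, n_trials, iters=200):
    """Two-component binomial mixture fitted by EM.

    counts: successes per child out of n_trials search trials.
    Returns the 'passer' mixing weight, the two success probabilities,
    and each child's posterior probability of being a passer.
    """
    w, p_fail, p_pass = 0.5, 0.2, 0.8          # crude initial guesses
    r = [0.5] * len(counts)
    for _ in range(iters):
        # E-step: posterior probability that each child is a 'passer'
        for idx, k in enumerate(counts):
            a = (1 - w) * binom_pmf(k, n_trials, p_fail)
            b = w * binom_pmf(k, n_trials, p_pass)
            r[idx] = b / (a + b)
        # M-step: update the weight and both success probabilities
        s = sum(r)
        w = s / len(counts)
        p_pass = sum(ri * k for ri, k in zip(r, counts)) / (n_trials * s)
        p_fail = (sum((1 - ri) * k for ri, k in zip(r, counts))
                  / (n_trials * (len(counts) - s)))
    return w, p_fail, p_pass, r
```

When the data really are bimodal (most children either near-ceiling or near-floor), the responsibilities land close to 0 or 1, which is what justifies the categorical pass/fail description in the abstract.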

  2. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

Two association models, the CPA and sPC-SAFT equations of state, are applied to binary mixtures containing alkanolamines and hydrocarbons or water. CPA is applied to mixtures of MEA and DEA, while sPC-SAFT is applied to MEA–n-heptane liquid–liquid equilibria and MEA–water vapor–liquid equilibria. T...

  3. On D-branes from gauged linear sigma models

    International Nuclear Information System (INIS)

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.

    2001-01-01

We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations

  4. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
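For reference, the bivariate Frank copula underlying this family has the closed form C_theta(u, v) = -(1/theta) ln(1 + (e^(-theta*u) - 1)(e^(-theta*v) - 1)/(e^(-theta) - 1)). A minimal sketch of this building block only (the paper's mixture construction for higher-order correlations is not reproduced here):

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula CDF C_theta(u, v).

    theta -> 0 recovers independence (C = u*v); large |theta| gives
    strong positive or negative dependence, which is the property that
    distinguishes the Frank family from Farlie-Gumbel-Morgenstern.
    """
    if theta == 0:
        return u * v
    # expm1/log1p keep the formula numerically stable for small theta
    num = math.expm1(-theta * u) * math.expm1(-theta * v)
    return -math.log1p(num / math.expm1(-theta)) / theta
```

Since theta is unbounded, the Frank copula can approach the comonotone bound min(u, v), unlike the Farlie-Gumbel-Morgenstern family, whose dependence parameter is confined to a narrow interval.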

  5. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  6. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  7. Non-linearity parameter of binary liquid mixtures at elevated pressures

    Indian Academy of Sciences (India)

    parameter B/A of four binary liquid mixtures using Tong and Dong equation at high pressures and .... in general as regular or ideal as no recognized association takes place between the unlike molecules. In this case ... Using the definition and.

  8. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  9. Decomposable log-linear models

    DEFF Research Database (Denmark)

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event.

  10. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities.

  11. Modeling digital switching circuits with linear algebra

    CERN Document Server

    Thornton, Mitchell A

    2014-01-01

    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf

  12. Linear Regression on Sparse Features for Single-Channel Speech Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard

    2007-01-01

    In this work we address the problem of separating multiple speakers from a single microphone recording. We formulate a linear regression model for estimating each speaker based on features derived from the mixture. The employed feature representation is a sparse, non-negative encoding of the speech mixture in terms of pre-learned speaker-dependent dictionaries. Previous work has shown that this feature representation by itself provides some degree of separation. We show that the performance is significantly improved when regression analysis is performed on the sparse, non-negative features, both ...
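
    The sparse non-negative encoding step can be sketched with non-negative least squares against stacked per-speaker dictionaries. Here random synthetic "dictionaries" stand in for the pre-learned ones, and the trained regression stage of the actual system is omitted; this only shows the encoding and per-source reconstruction.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Hypothetical pre-learned non-negative dictionaries (columns = spectral atoms)
D1 = np.abs(rng.standard_normal((30, 8)))  # speaker 1
D2 = np.abs(rng.standard_normal((30, 8)))  # speaker 2
D = np.hstack([D1, D2])

# Synthetic mixture spectrum built from non-negative activations of both speakers
h_true = np.concatenate([rng.random(8), rng.random(8)])
x = D @ h_true

# Non-negative encoding of the mixture against the stacked dictionaries
h, resid = nnls(D, x)

# Each source is reconstructed from its own dictionary's coefficients
s1_hat = D1 @ h[:8]
s2_hat = D2 @ h[8:]
```

    Since the mixture here is an exact non-negative combination of the atoms, the residual is essentially zero and the two reconstructions sum back to the mixture.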

  13. Non-linear Growth Models in Mplus and SAS

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
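
    A logistic growth curve of the kind fit via Mplus or NLMIXED can be sketched in miniature with nonlinear least squares. The three-parameter logistic (asymptote, rate, midpoint) and the simulated achievement-like data below are illustrative only, not the study's data or software.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asymptote, rate, midpoint):
    """Three-parameter logistic growth function."""
    return asymptote / (1.0 + np.exp(-rate * (t - midpoint)))

# Simulated longitudinal data: true asymptote 100, rate 1.2, midpoint 5
t = np.linspace(0, 10, 50)
rng = np.random.default_rng(1)
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(0.0, 1.0, t.size)

# Rough starting values matter for non-linear fits
p0 = [y.max(), 1.0, np.median(t)]
params, cov = curve_fit(logistic, t, y, p0=p0)
```

    The same pattern extends to Gompertz or Richards functions by swapping the model function; the practical issues noted in the paper (starting values, identifiability) apply in exactly the same way here.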

  14. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  15. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  16. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible, using the same equation of state, to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict multicomponent adsorption equilibria on the basis of single-component adsorption data. We compare the MPTA and IAST models to a large set of experimental data, obtaining reasonably good agreement with experimental data and a high degree of predictability. Some limitations of both models are also discussed.

  17. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    International Nuclear Information System (INIS)

    Teng, S.; Tebby, C.; Barcellini-Couget, S.; De Sousa, G.; Brochot, C.; Rahmani, R.; Pery, A.R.R.

    2016-01-01

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  18. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    Energy Technology Data Exchange (ETDEWEB)

    Teng, S.; Tebby, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Barcellini-Couget, S. [ODESIA Neosciences, Sophia Antipolis, 400 route des chappes, 06903 Sophia Antipolis (France); De Sousa, G. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Brochot, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Rahmani, R. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Pery, A.R.R., E-mail: alexandre.pery@agroparistech.fr [AgroParisTech, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France); INRA, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France)

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  19. Linear minimax estimation for random vectors with parametric uncertainty

    KAUST Repository

    Bitar, E

    2010-06-01

    In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
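
    The structure of the problem under model PA can be illustrated in the scalar case. The MSE of a linear estimate x̂ = k·y is linear in the joint covariance, so the worst case over the convex hull is attained at a vertex, and the minimax gain can be found by minimizing the vertex-wise maximum. This brute-force one-dimensional search is a stand-in for the SDP formulation in the paper, and the two covariance vertices are made-up illustrative values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two candidate joint covariances of (X, Y); the uncertainty set is their convex hull
covs = [np.array([[1.0, 0.8], [0.8, 1.0]]),
        np.array([[1.0, 0.2], [0.2, 1.0]])]

def mse(k, cov):
    """MSE of the linear estimate x_hat = k * y under one joint covariance."""
    sxx, sxy, syy = cov[0, 0], cov[0, 1], cov[1, 1]
    return sxx - 2.0 * k * sxy + k ** 2 * syy

def worst(k):
    """Worst-case MSE: since MSE is linear in the covariance, the max over the
    convex hull is attained at one of the vertices."""
    return max(mse(k, c) for c in covs)

res = minimize_scalar(worst, bounds=(-3.0, 3.0), method="bounded")
k_star = res.x  # minimax linear gain
```

    With these two vertices the minimax gain coincides with the optimal gain for the weakly correlated model, since that model dominates the worst case for every positive k.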

  20. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    Science.gov (United States)

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersing conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed by using grey correlation method, and the APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have bigger rutting depth ratios of air-dry to immersing conditions, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air void, and immersing conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy, and the correlation coefficient between the predicted and the measured rutting depths is 96.3%.

  2. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation ...

  3. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Background: The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find is always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  4. Linear programming model can explain respiration of fermentation products

    Science.gov (United States)

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells tend to ferment glucose or another energy source into lactate even in the presence of sufficient oxygen to support oxidative phosphorylation, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the “Warburg effect”. The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited. PMID:29415045

  5. Linear programming model can explain respiration of fermentation products.

    Science.gov (United States)

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan; Boley, Daniel

    2018-01-01

    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating energy in the form of ATP needed for cellular metabolism. In contrast, most tumor cells instead rely on aerobic glycolysis leading to lactate to about the same extent as on respiration. Warburg found that cancer cells tend to ferment glucose or another energy source into lactate even in the presence of sufficient oxygen to support oxidative phosphorylation, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several mammalian cell types, a phenomenon termed the "Warburg effect". The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration and the energy demands would be better met by pure oxidative phosphorylation. We tackle this question by building a minimal model including three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution and model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and limitation of the substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited.
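
    The enzyme-allocation argument can be sketched as a small linear program: maximize ATP production from fermentation and respiration fluxes under a shared glucose supply and a protein (enzyme) budget. The yields and costs below are illustrative round numbers, not the paper's fitted values; with these numbers the optimum is a respiration/fermentation mixture, the regime the abstract describes.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: v = [v_ferm, v_resp], glucose flux through each pathway
atp_yield = np.array([2.0, 32.0])    # ATP per glucose (fermentation, respiration)
enzyme_cost = np.array([1.0, 20.0])  # protein cost per unit flux (respiration is costly)

glucose_limit = 10.0   # glucose uptake constraint
enzyme_budget = 120.0  # total protein available for both pathways

res = linprog(
    c=-atp_yield,                                # linprog minimizes, so negate
    A_ub=[[1.0, 1.0], enzyme_cost.tolist()],     # glucose row, enzyme row
    b_ub=[glucose_limit, enzyme_budget],
    bounds=[(0, None), (0, None)],
    method="highs",
)
v_ferm, v_resp = res.x
atp_rate = -res.fun
```

    Tightening the enzyme budget pushes the solution toward pure fermentation, while a generous budget with scarce glucose yields pure respiration, mirroring the regimes identified in the model.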

  6. Electron thermalization in rare gases and their mixtures

    International Nuclear Information System (INIS)

    Bronic, I.K.; Kimura, M.

    1996-01-01

    The time evolution and temperature dependence of electron energy distribution functions (EDFs) are studied in pure rare gases (He, Ne, Ar, Kr, Xe) as well as in their mixtures by using solutions of the Boltzmann equation. A clear difference between the gases having the Ramsauer–Townsend (RT) minimum in the momentum-transfer cross section (RT gases: Ar, Kr, and Xe), and those without the RT minimum (non-RT gases: He and Ne) is pointed out. The influence of the position and the depth of the RT minimum on the EDF and time evolution is studied for three different initial electron energies. A formula proposed for describing thermalization time in a mixture is tested on (i) a non-RT–non-RT gas mixture, (ii) an RT–non-RT mixture and (iii) an RT–RT gas mixture. The linear combination of the reciprocal thermalization times in a gas mixture with the component concentrations as weighting factors is found to be valid for gases with a similar energy dependence of the momentum-transfer cross section, σ_m, and also for all rare-gas binary mixtures if the initial electron energy is sufficiently below the RT minimum. Conspicuous deviations from the linear relationship are observed in mixtures of gases whose energy dependences of σ_m (or the stopping cross section) are different, and theoretical rationales for these findings are provided. copyright 1996 American Institute of Physics

  7. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro.

    Directory of Open Access Journals (Sweden)

    Niels Hadrup

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot

  8. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
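
    For orientation, plain Gaussian mixture density estimation, without the per-point noise deconvolution that XDGMM adds, looks like this with the scikit-learn estimator it builds on. The two-blob data set is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-component data: blobs centred at (0, 0) and (4, 4)
a = rng.normal([0.0, 0.0], 0.5, size=(300, 2))
b = rng.normal([4.0, 4.0], 0.5, size=(300, 2))
X = np.vstack([a, b])

# Fit a two-component Gaussian mixture by EM
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
means = gmm.means_
```

    XDGMM's contribution is to replace this EM step with extreme deconvolution, which additionally takes a per-sample noise covariance so that the fitted mixture describes the underlying (noise-free) density.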

  9. Excess Properties of Aqueous Mixtures of Methanol: Simple Models Versus Experiment

    Czech Academy of Sciences Publication Activity Database

    Vlček, Lukáš; Nezbeda, Ivo

    roč. 131-132, - (2007), s. 158-162 ISSN 0167-7322. [International Conference on Solution Chemistry /29./. Portorož, 21.08.2005-25.08.2005] R&D Projects: GA AV ČR(CZ) IAA4072303; GA AV ČR(CZ) 1ET400720409 Institutional research plan: CEZ:AV0Z40720504 Keywords : aqueous mixtures * primitive models * water-alcohol mixtures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.982, year: 2007

  10. Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization

    OpenAIRE

    Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.

    2015-01-01

    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation on the maximum number of components or end-members. Hence this work presents an interactive software which imple...

  11. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    Science.gov (United States)

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly on the additive and potential synergistic actions of environmental mixtures. We investigated the 24h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and 96h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models with different assumptions in the toxic action mode in toxicodynamic processes through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
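
    The two prediction rules compared in this record can be written down directly for chemicals with (hypothetical) Hill concentration-response curves: concentration addition solves sum_i c_i / EC_i(E) = 1 for the mixture effect E, while independent action multiplies survival probabilities, E = 1 - prod_i (1 - E_i). A sketch with made-up EC50s and slopes, not the metal data from the study:

```python
def hill_effect(c, ec50, n):
    """Fractional effect (0-1) of a single chemical under a Hill model."""
    return c ** n / (c ** n + ec50 ** n)

def ec_for_effect(e, ec50, n):
    """Inverse Hill curve: concentration producing fractional effect e."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / n)

def ca_mixture_effect(concs, ec50s, ns, tol=1e-10):
    """Concentration addition: bisect for E with sum_i c_i / EC_i(E) = 1."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        toxic_units = sum(c / ec_for_effect(mid, e50, n)
                          for c, e50, n in zip(concs, ec50s, ns))
        if toxic_units > 1.0:  # mixture exceeds an EC_mid dose -> effect is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ia_mixture_effect(concs, ec50s, ns):
    """Independent action: E = 1 - prod_i (1 - E_i)."""
    p = 1.0
    for c, e50, n in zip(concs, ec50s, ns):
        p *= 1.0 - hill_effect(c, e50, n)
    return 1.0 - p
```

    As a sanity check, two chemicals with identical curves each dosed at half their EC50 give exactly a half-maximal effect under CA, whereas IA predicts a larger effect for the same mixture; comparing such predictions against observed survival is what identifies the toxic action mode.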

  12. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  13. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    Science.gov (United States)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by the sum of the standard spectra of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and the derivative ratio spectra-zero-crossing method (DRSZ). Linearity, accuracy, precision, limits of detection and quantitation, and other aspects of analytical validation are included in the text.

  14. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.
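A minimal numerical sketch of such a mixture: in an F2 cross the Mendelian proportions 1:2:1 fix the mixing weights, so only the genotype means and variance need estimating. The means, variance, and sample size below are invented for illustration; the paper's model is more general.

```python
import numpy as np
from scipy.stats import norm

# In an F2 cross, QTL genotypes QQ:Qq:qq segregate 1:2:1, so the trait follows
# a normal mixture with *fixed* Mendelian mixing proportions.
props = np.array([0.25, 0.50, 0.25])
means = np.array([8.0, 10.0, 12.0])   # genotype means (illustrative)
sd = 1.0

rng = np.random.default_rng(1)
genotype = rng.choice(3, size=500, p=props)
y = rng.normal(means[genotype], sd)

# Mixture log-likelihood of the trait data, marginalizing over genotypes.
dens = props * norm.pdf(y[:, None], loc=means, scale=sd)   # shape (n, 3)
loglik = float(np.log(dens.sum(axis=1)).sum())

# Posterior genotype probabilities per individual (the E-step of an EM fit).
post = dens / dens.sum(axis=1, keepdims=True)
print(round(loglik, 1), post.shape)
```

Maximizing this marginal likelihood over the means and variance (e.g. by EM) is the core of mixture-model QTL mapping; marker data then sharpen the mixing proportions individual by individual.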

  15. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10⁻⁴, and 13% of all participants had risk levels above 10⁻². Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. 
DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture
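The distribution comparison in Specific Aim 1 can be sketched with simulated data: fit a three-parameter GEV and a lognormal to right-skewed "exposure" values and compare their upper-tail quantile estimates. The simulated data below are a stand-in, not RIOPA measurements.

```python
import numpy as np
from scipy import stats

# Simulated right-skewed "exposure" data (a stand-in for RIOPA VOC measurements).
rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.2, size=2000)

# Fit a three-parameter GEV and a (two-parameter) lognormal, then compare
# their estimates of an upper-tail quantile.
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(x)
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(x, floc=0)

q = 0.99
tail_gev = float(stats.genextreme.ppf(q, gev_shape, gev_loc, gev_scale))
tail_ln = float(stats.lognorm.ppf(q, ln_shape, ln_loc, ln_scale))
print(round(tail_gev, 2), round(tail_ln, 2))
```

In the RIOPA analysis the comparison ran the other way (lognormal fits underestimated real extreme exposures); the sketch only shows the mechanics of fitting and comparing the two tail models.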

  16. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...

  17. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  18. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    Thermodynamic properties and phase equilibria of associating mixtures cannot often be satisfactorily modelled using conventional models such as cubic equations of state. CPA (cubic-plus-association) is an equation of state (EoS), which combines the SRK EoS with the association term of SAFT. For non......-alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... thermodynamic models especially those combining cubic EoS with local composition activity coefficient models are included. (C) 2007 Elsevier B.V. All rights reserved....

  19. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and the antioxidant response were determined. Metal mixture treatment induced increases in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affected transforming capacity: a diminution was found in the initiation phase, while in the promotion phase transforming capacity was completely blocked. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, where it promotes transforming capacity.

  20. X-ray-induced thinning of 3He and 3He/4He mixture films

    International Nuclear Information System (INIS)

    Penanen, Konstantin; Fukuto, Masafumi; Silvera, Isaac F.; Pershan, Peter

    2000-01-01

    Films of isotopic mixtures of helium have been studied using x-ray specular reflectivity techniques. In contrast with superfluid 4He films, x-ray exposure causes a reduction in the thickness of 4He films above the superfluid transition as well as films of pure 3He and 3He/4He mixtures. One proposed model that could account for this effect is a charging model, in which thinning is caused by the electrostatic pressure of free charges that accumulate on the helium surface. Unfortunately, this model is not fully consistent with all of the experimental observations. A localized heating model, in which indirect heating of the film causes it to thin, would explain the data if there were dissipative film flow in the 3He/4He mixtures at temperatures where the bulk is superfluid. We argue that various published experimental results suggest such an effect. In this model, film thinning data for dilute 3He/4He films indicates dissipation that is linear in the 3He content of the film over two orders of magnitude

  1. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.
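The additive one-factor structure behind these copulas can be illustrated in its simplest special case: each variable is a common factor plus an independent component, and the copula is read off the ranks. This sketch is only the Gaussian submodel, not the general construction of the paper.

```python
import numpy as np

# Simulate the additive one-factor structure V_j = Z0 + eps_j (Gaussian case):
# Z0 is the common factor, eps_j are independent idiosyncratic terms.
rng = np.random.default_rng(4)
n, d = 5000, 3
z0 = rng.standard_normal(n)
v = z0[:, None] + rng.standard_normal((n, d))

# Pseudo-observations (ranks scaled to (0,1)) carry the copula of V.
u = (np.argsort(np.argsort(v, axis=0), axis=0) + 1) / (n + 1)

# Rank correlation induced purely by the common factor (about 0.48 here).
rho = np.corrcoef(u, rowvar=False)
print(np.round(rho, 2))
```

Replacing the Gaussian factor or components with heavy-tailed distributions is what produces the tail dependence the abstract refers to; the O(d) parameter count comes from the one-factor structure.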

  2. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel

    2018-04-25

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  3. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    Science.gov (United States)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  4. Thermodynamics of (1-alkanol + linear monoether) systems

    International Nuclear Information System (INIS)

    Gonzalez, Juan Antonio; Mozo, Ismael; Garcia de la Fuente, Isaias; Cobos, Jose Carlos; Riesco, Nicolas

    2008-01-01

    Densities, ρ, and speeds of sound, u, of systems formed by 1-heptanol, or 1-octanol, or 1-decanol and dibutyl ether have been measured at temperatures of (293.15, 298.15, and 303.15) K and atmospheric pressure using a vibrating-tube densimeter and sound analyser, Anton Paar model DSA-5000. The ρ and u values were used to calculate excess molar volumes, V^E, and deviations from ideal behaviour of the thermal expansion coefficient, Δα_p, and of the isentropic compressibility, Δκ_S. The available database on molar excess enthalpies, H^E, and V^E for (1-alkanol + linear monoether) systems was used to investigate interactional and structural effects in such mixtures. The enthalpy of the OH...O bonds is lower for methanol solutions; for the remaining systems, it is practically independent of the mixture compounds. The V^E variation with the chain length of the 1-alkanol points out the existence of structural effects for systems including longer 1-alkanols. The ERAS model is applied to the studied mixtures. ERAS represents H^E and V^E data quite accurately using parameters which depend consistently on the molecular structure
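The excess molar volume follows directly from the measured densities: V^E = Σ x_i M_i / ρ_mix − Σ x_i M_i / ρ_i. A sketch of the arithmetic with illustrative numbers (the densities below are assumptions, not the paper's measurements):

```python
# Excess molar volume from measured densities:
#   V^E = sum_i x_i M_i / rho_mix - sum_i x_i M_i / rho_i
# All numbers below are illustrative, not the paper's measurements.
x = [0.5, 0.5]               # mole fractions: 1-heptanol + dibutyl ether
M = [116.20, 130.23]         # molar masses, g/mol
rho_pure = [0.8219, 0.7641]  # assumed pure-component densities, g/cm^3
rho_mix = 0.7921             # assumed mixture density, g/cm^3

mass = sum(xi * Mi for xi, Mi in zip(x, M))                        # g per mol of mixture
v_ideal = sum(xi * Mi / ri for xi, Mi, ri in zip(x, M, rho_pure))  # ideal molar volume
v_excess = mass / rho_mix - v_ideal                                # cm^3/mol
print(round(v_excess, 3))   # negative here: the mixture packs tighter than ideal
```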

  5. Modelling Loudspeaker Non-Linearities

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of the original data, but also for their ability to work reasonably outside that range, and it is demonstrated...... that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying...
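The extrapolation problem described here is easy to reproduce: fit a bell-shaped Bl(x)-like curve with a polynomial and with a Gaussian, then evaluate both outside the fitted range. The curve shape and displacement ranges below are synthetic, not measured driver parameters.

```python
import numpy as np

# Synthetic bell-shaped "Bl(x)" curve (illustrative, not a measured driver).
x_fit = np.linspace(-4.0, 4.0, 41)                   # mm, range of the "data"
bl_true = lambda x: 6.0 * np.exp(-(x / 3.5) ** 2)    # idealized Bl-factor, T*m
y = bl_true(x_fit)

# 4th-order polynomial fit.
poly = np.polynomial.Polynomial.fit(x_fit, y, deg=4)

# Gaussian fit via linear regression on log(y): log Bl = a + b * x^2.
coef = np.polyfit(x_fit ** 2, np.log(y), deg=1)
gauss = lambda x: np.exp(coef[1] + coef[0] * x ** 2)

# Evaluate well outside the fitted range.
x_out = 7.0
print(round(float(poly(x_out)), 2),
      round(float(gauss(x_out)), 2),
      round(float(bl_true(x_out)), 2))
```

The Gaussian, being the right local shape, extrapolates cleanly, while the polynomial diverges from the true curve outside the data range, which is the paper's point about polynomial expansions.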

  6. Solvatochromism of naringenin in aqueous alcoholic mixtures

    Directory of Open Access Journals (Sweden)

    Faraji Mohammad

    2016-01-01

    Full Text Available The spectral change of naringenin was studied by UV-vis spectrophotometry in binary mixtures of water with methanol, ethanol and 1-propanol at 25°C. The solvent effect was investigated by analysing the electron transition energy at the maximum absorption wavelength as a function of the Kamlet-Taft parameters of the mixtures by means of linear solvation energy relationships. The nonlinear solvatochromic response was explained on the basis of solute-solvent and solvent-solvent interactions. The possible preferential solvation of naringenin by each of the solvents was studied through a modified preferential solvation model which accounts for hydrogen-bonding interactions between the solvents arising from solvent-solvent interactions. The preferential solvation parameters and the local mole fraction distribution around the solute were calculated. Results indicate that naringenin is preferentially solvated by the complex solvating species and the organic solvents rather than by water.
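A linear solvation energy relationship of this kind is an ordinary least-squares fit of the transition energy on the Kamlet-Taft parameters, E_T = c0 + a·α + b·β + s·π*. The solvent parameters below are typical literature-style values and the transition energies are hypothetical, so the fitted coefficients are illustrative only.

```python
import numpy as np

# Kamlet-Taft LSER: E_T = c0 + a*alpha + b*beta + s*pi_star.
# Solvent parameters (alpha, beta, pi*) are literature-style values;
# the E_T responses are hypothetical.
solvents = np.array([
    [1.17, 0.47, 1.09],   # water
    [0.98, 0.66, 0.60],   # methanol
    [0.86, 0.75, 0.54],   # ethanol
    [0.84, 0.90, 0.52],   # 1-propanol
])
E_T = np.array([52.1, 50.3, 49.8, 49.5])  # hypothetical kcal/mol

# Least-squares fit of [c0, a, b, s].
X = np.column_stack([np.ones(len(E_T)), solvents])
coeffs, *_ = np.linalg.lstsq(X, E_T, rcond=None)
pred = X @ coeffs
print(np.round(coeffs, 3))
```

With only four solvents the fit is exact (four coefficients, four equations); real LSER studies use many more solvent compositions so the coefficients carry uncertainty.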

  7. Modelagem não linear da extração de zinco de um solo tratado com lodo de esgoto = Non-linear modeling of zinc extracted from a sewage sludge-treated soil

    Directory of Open Access Journals (Sweden)

    Edcarlos Miranda de Souza

    2010-07-01

    Full Text Available The prediction of trace element availability to plants using chemical extractants is hard to achieve. For this reason, a successive extraction method that simulated the rhizosphere solution with mixtures of synthetic organic acids was previously proposed in order to access the long-term potential availability of trace elements to plants. Using this method, data from successive zinc extractions with the organic acid mixtures, as well as with routine extractants (Mehlich I, DTPA, and NH4OAc), were fitted with a non-linear model based on first-order kinetic mechanisms. Building on this fit, the present work carried out a rigorous study of the statistical properties of the estimators, outlining the non-linearity of the model with curvature measures as a reference. The objective was to verify whether the asymptotic properties of the estimators would be valid, given the sample size used, for fitting the model to the successive zinc extraction data. Classical inference, based on the linear approximation, was considered, with the bootstrap methodology as an alternative technique. Overall, the linear approximation methodology, the bootstrap technique, and the curvature measures validated the precision of the inferential results obtained in predicting zinc availability.
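The first-order kinetic fit and the bootstrap comparison described above can be sketched as follows; the model form q(n) = q_max(1 − exp(−k·n)), the parameter values, and the noise level are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order kinetic model for cumulative zinc extracted after n successive
# extractions: q(n) = q_max * (1 - exp(-k * n)). Synthetic data.
def kinetic(n, q_max, k):
    return q_max * (1.0 - np.exp(-k * n))

rng = np.random.default_rng(3)
n = np.arange(1, 11)
q_obs = kinetic(n, 120.0, 0.35) + rng.normal(0.0, 2.0, n.size)

popt, pcov = curve_fit(kinetic, n, q_obs, p0=(100.0, 0.5))

# Residual bootstrap for parameter uncertainty (the alternative to
# linear-approximation inference discussed in the abstract).
resid = q_obs - kinetic(n, *popt)
boot = []
for _ in range(200):
    q_star = kinetic(n, *popt) + rng.choice(resid, size=n.size, replace=True)
    p_star, _ = curve_fit(kinetic, n, q_star, p0=popt)
    boot.append(p_star)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(np.round(popt, 2), np.round(lo, 2), np.round(hi, 2))
```

Comparing these bootstrap percentile intervals against the standard errors implied by `pcov` (the linear approximation) is exactly the kind of check the study performs.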

  8. Data Requirements and Modeling for Gas Hydrate-Related Mixtures and a Comparison of Two Association Models

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Aloupis, Georgios; Kontogeorgis, Georgios M.

    2017-01-01

    the performance of the CPA and sPC-SAFT EOS for modeling the fluid-phase equilibria of gas hydrate-related systems and will try to explore how the models can help in suggesting experimental measurements. These systems contain water, hydrocarbon (alkane or aromatic), and either methanol or monoethylene glycol...... parameter sets have been chosen for the sPC-SAFT EOS for a fair comparison. The comparisons are made for pure fluid properties, vapor liquid-equilibria, and liquid liquid equilibria of binary and ternary mixtures as well as vapor liquid liquid equilibria of quaternary mixtures. The results show, from...

  9. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
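The "explosion" step the authors mention can be sketched directly: each survival record is split into one row per baseline-hazard piece, with an event indicator and a log(exposure time) offset, after which any Poisson GLMM routine can be applied. The cut points and records below are invented for illustration.

```python
import math

# "Explode" survival records into one row per (subject, time piece) so a
# piecewise-exponential model can be fitted as a Poisson GLM(M) with
# offset log(exposure time). Cut points and records are illustrative.
cuts = [0.0, 1.0, 2.0, 4.0]                  # piece boundaries
records = [                                   # (id, cluster, time, event)
    (1, "A", 0.7, 1),
    (2, "A", 3.0, 0),
    (3, "B", 2.5, 1),
]

rows = []
for sid, cluster, time, event in records:
    for j in range(len(cuts) - 1):
        start, stop = cuts[j], cuts[j + 1]
        if time <= start:                     # subject already left the risk set
            break
        exposure = min(time, stop) - start    # time at risk within this piece
        died_here = int(event == 1 and start < time <= stop)
        rows.append({
            "id": sid, "cluster": cluster, "piece": j,
            "y": died_here, "log_exposure": math.log(exposure),
        })

for r in rows:
    print(r)
```

A Poisson model with `y` as response, piece indicators as fixed effects, a random intercept per cluster, and `log_exposure` as offset then reproduces the log-normal shared frailty model, at the price (noted in the abstract) of fixing the pieces in advance and enlarging the data set.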

  10. Linearized models for a new magnetic control in MAST

    International Nuclear Information System (INIS)

    Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.

    2013-01-01

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model has been used for dynamic simulations of the experimental data. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes, with only a few mismatches, probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops

  11. Linearized models for a new magnetic control in MAST

    Energy Technology Data Exchange (ETDEWEB)

    Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current-driven model has been used for dynamic simulations of the experimental data. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes, with only a few mismatches, probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current-driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops.

  12. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2 Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture’s components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3 Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). RESULTS Specific Aim 1 Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10⁻⁴, and 13% of all participants had risk levels above 10⁻². Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated

  13. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
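A sketch of the Szyszkowski-Langmuir (S-L) form and a mole-fraction-weighted mixture version: the single-solute expression follows the S-L equation, but the weighting scheme shown is a simplified stand-in for the Henning-style weighted model, and all parameter values are invented.

```python
import numpy as np

# Szyszkowski-Langmuir (S-L) surface tension depression for one solute:
#   sigma(C) = sigma_w - a * ln(1 + b*C)
# and a simple mole-fraction-weighted mixture version. All parameter
# values here are invented, not fitted aerosol data.
sigma_w = 72.0  # surface tension of pure water, mN/m, near 25 C

def szyszkowski(C, a, b):
    return sigma_w - a * np.log1p(b * C)

params = [(5.0, 2.0), (8.0, 0.5)]   # hypothetical (a [mN/m], b [L/mol]) per organic
conc = np.array([0.1, 0.3])         # molar concentrations in the mixture

# Weighted S-L: weight each compound's depression term by its mole fraction
# among the organics, evaluated at the total organic concentration.
C_total = conc.sum()
w = conc / C_total
depression = sum(wi * a * np.log1p(b * C_total) for wi, (a, b) in zip(w, params))
sigma_mix = sigma_w - depression

print(round(szyszkowski(0.1, 5.0, 2.0), 2), round(float(sigma_mix), 2))
# prints: 71.09 70.17
```

The paper's implicit salt treatment amounts to refitting the (a, b) parameters from surface tension data measured for each organic in the presence of salt, then reusing the same weighted form.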

  14. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2018-02-26

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down the theoretical foundations for subsequent inference with this model. In particular, we study linear transformations, marginal distributions, selection representations, stochastic representations and hierarchical representations. We also describe an EM-type algorithm for maximum likelihood estimation of the parameters of the model and demonstrate its implementation on a wind dataset. Our family of multivariate distributions unifies and extends many existing models of the literature that can be seen as submodels of our proposal.

  15. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank; Sun, Ke

    2017-01-01

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified

  16. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    Science.gov (United States)

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  17. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
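The mixture cure model underlying this line of work writes the population survival function as S(t) = π + (1 − π)·S_u(t), where π is the cured fraction and S_u the survival function of the uncured. A minimal sketch, using an exponential S_u purely for illustration (the paper itself leaves S_u unspecified and models covariate effects through proportional hazards):

```python
import math

def population_survival(t, cure_fraction, hazard):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t).
    S_u(t) = exp(-hazard * t) is an illustrative parametric choice,
    not the semiparametric form used in the paper."""
    return cure_fraction + (1.0 - cure_fraction) * math.exp(-hazard * t)

# Survival starts at 1 and plateaus at the cured proportion pi.
print(round(population_survival(0.0, 0.3, 0.5), 3))   # 1.0
print(round(population_survival(50.0, 0.3, 0.5), 3))  # 0.3
```

The plateau at π is what distinguishes cure models from the ordinary Cox model, where survival decays to zero.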

  18. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available (presentation slides; the extracted text is fragmentary). The presentation, given at ISRSE-37, Tshwane, South Africa, 10 May 2017, applies finite mixture models to sub-pixel coastal land cover classification over a study area in False Bay (Strand and Gordon's Bay), South Africa, using WorldView-2 imagery. Classes include built-up/urban, herbaceous vegetation, shadow, sparse vegetation, water and woody vegetation; the classifiers compared include Maximum Likelihood Classification (MLC), Gaussian Mixture Discriminant Analysis (GMDA) and t-distribution Mixture Discriminant Analysis.

  19. Modeling Non-Linear Material Properties in Composite Materials

    Science.gov (United States)

    2016-06-28

    Technical Report ARWSB-TR-16013 MODELING NON-LINEAR MATERIAL PROPERTIES IN COMPOSITE MATERIALS Michael F. Macri Andrew G...REPORT TYPE Technical 3. DATES COVERED (From - To) 4. TITLE AND SUBTITLE MODELING NON-LINEAR MATERIAL PROPERTIES IN COMPOSITE MATERIALS ...systems are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions

  20. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB models for classification of fetal heart rate (FHR recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP and the Chinese restaurant process with finite capacity (CRFC. Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of the performance of (FHR recordings in a real-time setting.

  1. Non-linear Loudspeaker Unit Modelling

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde; Agerkvist, Finn T.

    2008-01-01

    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of three frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed.

  2. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part V: Multicomponent mixtures containing CO2 and alcohols

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2015-01-01

    of CPA for ternary and multicomponent CO2 mixtures containing alcohols (methanol, ethanol or propanol), water and hydrocarbons. This work belongs to a series of studies aiming to arrive at a single "engineering approach" for applying CPA to acid gas mixtures, without introducing significant changes to the model. In this direction, CPA results were obtained using various approaches, i.e. different association schemes for pure CO2 (assuming that it is a non-associating compound, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling mixtures of CO2 with water and alcohols (use of only one interaction parameter kij, or assuming cross-association interactions and obtaining the relevant parameters either via a combining rule or using an experimental value for the cross-association energy). It is concluded that CPA is a powerful model...

  3. Detecting Math Anxiety with a Mixture Partial Credit Model

    Science.gov (United States)

    Ölmez, Ibrahim Burak; Cohen, Allan S.

    2017-01-01

    The purpose of this study was to investigate a new methodology for detection of differences in middle grades students' math anxiety. A mixture partial credit model analysis revealed two distinct latent classes based on homogeneities in response patterns within each latent class. Students in Class 1 had less anxiety about apprehension of math…

  4. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose, were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose factor concepts varied by 1.6% from that of the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept which has been in use for more than a decade as its results are within ±2% as compared to that predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs

  5. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  6. Monaural ICA of white noise mixtures is hard

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Petersen, Kaare Brandt

    2003-01-01

    Separation of monaural linear mixtures of `white' source signals is fundamentally ill-posed. In some situations it is not possible to find the mixing coefficients for the full `blind' problem. If the mixing coefficients are known, the structure of the source prior distribution determines the source... of white noise signals and give a set of `no go' cases.

  7. A non-linear state space approach to model groundwater fluctuations

    NARCIS (Netherlands)

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2006-01-01

    A non-linear state space model is developed for describing groundwater fluctuations. Non-linearity is introduced by modeling the (unobserved) degree of water saturation of the root zone. The non-linear relations are based on physical concepts describing the dependence of both the actual

  8. A general mixture theory. I. Mixtures of spherical molecules

    Science.gov (United States)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the composition and the unlike interactions dependence of mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike interaction parameters. In this first part of the work, details are presented for the first and second levels of approximations for spherical molecules. The second order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter, gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Anderson perturbation theory.

  9. A globally accurate theory for a class of binary mixture models

    Science.gov (United States)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  10. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Science.gov (United States)

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830 and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p model comparison between linear and non-linear models using BMS prefers non-linear connection (r = 0.9160, ϕ = 1.000) from which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself.
We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with

  11. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    Science.gov (United States)

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  12. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    International Nuclear Information System (INIS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-01-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders. -- Highlights: ► New superposition T-matrix code is applied to soot aerosols. ► Quasi-Rayleigh side-scattering peak in linear depolarization (LD) is explained. ► LD measurements can be used for morphological characterization of soot aerosols

  13. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD. The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verification of a possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide. These mixtures contained volatile organic compounds (VOCs such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as the components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers. Concentration of the COPD markers in the mixtures was from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA. The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture and for a limited number of basic mixture components.
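The interpretation step in this record relies on principal component analysis of the sensor-array responses. The following generic sketch extracts the first principal component of hypothetical two-sensor data via power iteration on the covariance matrix; the data are synthetic and are not the FIGARO sensor measurements from the paper.

```python
import random

def principal_component(rows, iters=200):
    """First principal component of mean-centered data via power
    iteration on the sample covariance matrix (pure-Python sketch)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i] * x[j] for x in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical 2-sensor responses dominated by one common direction (1, 2)
rng = random.Random(0)
data = [[t + rng.gauss(0, 0.1), 2 * t + rng.gauss(0, 0.1)]
        for t in [rng.uniform(-1, 1) for _ in range(200)]]
pc = principal_component(data)
print(round(abs(pc[1] / pc[0]), 1))  # ratio of the two loadings, near 2.0
```

Projecting each sample onto the leading components is what produces the PCA score plots used in the paper to separate mixtures with and without the COPD markers.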

  14. A numerical study of blood flow using mixture theory

    OpenAIRE

    Wu, Wei-Tao; Aubry, Nadine; Massoudi, Mehrdad; Kim, Jeongho; Antaki, James F.

    2014-01-01

    In this paper, we consider the two dimensional flow of blood in a rectangular microfluidic channel. We use Mixture Theory to treat this problem as a two-component system: One component is the red blood cells (RBCs) modeled as a generalized Reiner–Rivlin type fluid, which considers the effects of volume fraction (hematocrit) and influence of shear rate upon viscosity. The other component, plasma, is assumed to behave as a linear viscous fluid. A CFD solver based on OpenFOAM® was developed and ...

  15. Recent Updates to the GEOS-5 Linear Model

    Science.gov (United States)

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  16. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part IV. Applications to mixtures of CO2 with alkanes

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios

    2015-01-01

    The thermodynamic properties of pure gaseous, liquid or supercritical CO2 and CO2 mixtures with hydrocarbons and other compounds such as water, alcohols, and glycols are very important in many processes in the oil and gas industry. Design of such processes requires use of accurate thermodynamic...... models, capable of predicting the complex phase behavior of multicomponent mixtures as well as their volumetric properties. In this direction, over the last several years, the cubic-plus-association (CPA) thermodynamic model has been successfully used for describing volumetric properties and phase...

  17. I-optimal mixture designs

    OpenAIRE

    GOOS, Peter; JONES, Bradley; SYAFITRI, Utami

    2013-01-01

    In mixture experiments, the factors under study are proportions of the ingredients of a mixture. The special nature of the factors in a mixture experiment necessitates specific types of regression models, and specific types of experimental designs. Although mixture experiments usually are intended to predict the response(s) for all possible formulations of the mixture and to identify optimal proportions for each of the ingredients, little research has been done concerning their I-optimal desi...

  18. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
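A common closed-form special case of such post-ignition estimates is the ideal-gas, constant-volume (AICC, adiabatic isochoric complete combustion) pressure rise, P2 = P1·(n2·T2)/(n1·T1). The sketch below is not one of the paper's two thermo-chemical models; it only illustrates the ideal-gas relation, and the temperature and mole values are made-up placeholders.

```python
def aicc_pressure(p1, t1, t2, n1, n2):
    """Ideal-gas, constant-volume pressure after complete combustion:
    P2 = P1 * (n2 * T2) / (n1 * T1), with n1/n2 the pre/post-burn
    mole counts and T1/T2 the pre/post-burn temperatures."""
    return p1 * (n2 * t2) / (n1 * t1)

# Illustrative numbers only: 1 atm, 300 K initial state, a hypothetical
# 2400 K flame temperature, and a 5% reduction in mole count on burning.
print(aicc_pressure(101325.0, 300.0, 2400.0, 1.0, 0.95))  # ~7.7e5 Pa
```

Dissociation, which the paper emphasizes, lowers T2 and raises n2 relative to these complete-combustion values, which is why the equilibrium model is needed near stoichiometric compositions.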

  19. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    International Nuclear Information System (INIS)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar

    2016-01-01

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  20. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling shock wave loading of mixtures with a SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the mixture. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with experimental data derived by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is described.

  1. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    Science.gov (United States)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.

  2. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    Science.gov (United States)

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
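The core idea of modeling response times as a mixture of Gaussians can be sketched with a plain EM fit of a two-component univariate mixture. This is a generic illustration of the technique, not the authors' cultural mixture modeling code, and the response-time-like data are simulated.

```python
import math
import random

def em_two_gaussians(xs, iters=100):
    """EM for a two-component univariate Gaussian mixture.
    Returns (weights, means, std devs)."""
    xs = sorted(xs)
    half = len(xs) // 2  # crude init: split the sorted data in half
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            d = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
                 for k in range(2)]
            s = d[0] + d[1]
            resp.append([d[0] / s, d[1] / s])
        # M-step: responsibility-weighted parameter updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(max(var, 1e-6))
    return w, mu, sd

# Simulated "fast, strong-attitude" vs "slow, weak-attitude" response times
rng = random.Random(1)
data = ([rng.gauss(0.6, 0.05) for _ in range(300)]
        + [rng.gauss(1.2, 0.10) for _ in range(300)])
w, mu, sd = em_two_gaussians(data)
print([round(m, 2) for m in sorted(mu)])  # recovered means, near 0.6 and 1.2
```

In the paper's setting, recovering two such components from response times is what signals the presence of distinct latent groups whose overt answers do not differ.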

  3. Non-linear calibration models for near infrared spectroscopy

    DEFF Research Database (Denmark)

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    This study compares non-linear calibration models for near infrared (NIR) spectroscopy: least squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN) and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life NIR data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting of the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small.

  4. Effective dielectric mixture model for characterization of diesel contaminated soil

    International Nuclear Information System (INIS)

    Al-Mattarneh, H.M.A.

    2007-01-01

    Human exposure to soil contaminated by diesel isomers can have serious health consequences such as neurological diseases or cancer. The potential of dielectric measuring techniques for the electromagnetic characterization of contaminated soils was investigated in this paper. The purpose of the research was to develop an empirical dielectric mixture model for soil hydrocarbon contamination applications. The paper described the basic theory and elaborated on dielectric mixture theory. The analytical and empirical models were explained in simple algebraic formulas. The experimental study was then described with reference to materials, properties and experimental results. The results of the analytical models were also mathematically explained. The proposed semi-empirical model was also presented. According to the results for the electromagnetic properties of dry soil contaminated with diesel, the presence of diesel had no significant effect on the electromagnetic properties of dry soil. It was concluded that diesel made no contribution to the soil electrical conductivity, which confirmed its nonconductive character. The results for diesel-contaminated soil at saturation indicated that both the dielectric constant and the loss factor of the soil decreased with increasing diesel content. 15 refs., 2 tabs., 9 figs

  5. [From clinical judgment to linear regression model.

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
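The slope and intercept of the regression line Y = a + bx follow directly from least squares, with b = cov(x, y)/var(x) and a = mean(y) − b·mean(x); a minimal sketch, including the coefficient of determination:

```python
def linear_regression(xs, ys):
    """Ordinary least squares for the line Y = a + b*x:
    b = cov(x, y) / var(x),  a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# A perfect line y = 2 + 3x recovers a = 2, b = 3 and R^2 = 1.
xs, ys = [0, 1, 2, 3], [2, 5, 8, 11]
a, b = linear_regression(xs, ys)
print(a, b, r_squared(xs, ys, a, b))  # 2.0 3.0 1.0
```

With clinical data the points scatter around the line, and R² quantifies how much of that scatter the predictor explains.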

  6. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  7. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.
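
    At the core of an N-mixture model is a site-level likelihood that sums the detection process over the unobserved abundance N. A minimal Python sketch of that sum, with hypothetical counts and illustrative parameter values (none of the packages above are used here):

```python
import math

def site_likelihood(counts, lam, p, n_max=60):
    """Likelihood of repeated counts y_1..y_J at one site under an
    N-mixture model: latent abundance N ~ Poisson(lam), and each
    survey count y_j ~ Binomial(N, p) given N."""
    total = 0.0
    pois = math.exp(-lam)              # Poisson pmf at N = 0, updated iteratively
    for n in range(n_max + 1):
        if n >= max(counts):           # Binomial(y | N, p) is zero whenever y > N
            binom = 1.0
            for y in counts:
                binom *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            total += pois * binom
        pois *= lam / (n + 1)          # advance Poisson pmf to N = n + 1
    return total

# Hypothetical repeated counts from three visits to one site
counts = [3, 2, 4]
print(site_likelihood(counts, lam=5.0, p=0.6))
```

    Packages such as unmarked, JAGS or R-INLA work with likelihoods of this form, extended with covariates and combined across many sites and occasions.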

  8. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate code simulations need uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distributions is performed using finite mixture models. • Two methods to reconstruct the output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks' method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of models), or surrogate models, that approximate or emulate complex computer codes. In this context, different techniques exist to reconstruct the probability distribution using the information provided by a sample of values, for example finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
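
    The Expectation Maximization step for a finite mixture of Gaussians can be sketched in a few lines; the bimodal sample below is synthetic and merely stands in for a multimodal code output (the RELAP-5 data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic bimodal sample standing in for a multimodal code output variable
x = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(6.0, 0.8, 600)])

# Initial guesses: weights, means, standard deviations of two components
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 7.0]), np.array([2.0, 2.0])

def normal_pdf(v, mu, sd):
    return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

for _ in range(200):
    # E-step: responsibility of each component for each observation
    dens = w * normal_pdf(x[:, None], mu, sd)          # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w.round(2), "means:", mu.round(2), "sds:", sd.round(2))
```

    Each iteration alternates the E-step (responsibilities) and M-step (re-estimated weights, means and spreads); k-means can supply the initial guesses instead of the fixed values used here.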

  9. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers who consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is partly based on the sorbent's capacity to reduce hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  10. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    International Nuclear Information System (INIS)

    Liu, Chang; Fiol, Núria; Villaescusa, Isabel; Poch, Jordi

    2016-01-01

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers who consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption is partly based on the sorbent's capacity to reduce hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  11. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    Science.gov (United States)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  12. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    DEFF Research Database (Denmark)

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  13. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  14. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  15. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although

  16. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult…

  17. Modeling bioaccumulation in humans using poly-parameter linear free energy relationships (PPLFERS)

    Energy Technology Data Exchange (ETDEWEB)

    Undeman, Emma, E-mail: emma.undeman@itm.su.se; Czub, Gertje; McLachlan, Michael S.

    2011-04-01

    Chemical partition coefficients between environmental media and biological tissues are a key component of bioaccumulation models. The single-parameter linear free energy relationships (spLFERs) commonly used for predicting partitioning are often derived using apolar chemicals and may not accurately capture polar chemicals. In this study, a poly-parameter LFER (ppLFER) based model of organic chemical bioaccumulation in humans is presented. Chemical partitioning was described by an air-body partition coefficient that was a volume-weighted average of ppLFER-based partition coefficients for the major organs and tissues constituting the human body. This model was compared to a spLFER model treating the body as a mixture of lipid (≈ octanol) and water. Although model agreement was good for hydrophobic chemicals (average difference 15% for log K_OW > 4 and log K_OA > 8), the ppLFER model predicted ≈ 90% lower body burdens for hydrophilic chemicals (log K_OW < 0). This was mainly due to lower predictions of muscle and adipose tissue sorption capacity for these chemicals. A comparison of the predicted muscle and adipose tissue sorption capacities of hydrophilic chemicals with measurements indicated that the ppLFER and spLFER models' uncertainties were similar. Consequently, little benefit from the implementation of ppLFERs in this model was identified. - Research Highlights: • Implementation of ppLFERs resulted in on average 90% lower predicted body burdens. • Uncertainties in spLFER and ppLFER predictions were similar. • The benefit from implementation of ppLFERs in bioaccumulation models was limited.
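
    A ppLFER predicts a log partition coefficient as a linear combination of Abraham solute descriptors (E, S, A, B, V) with system-specific coefficients; an air-body coefficient is then a volume-weighted average over tissues. A schematic sketch in which both the tissue coefficients and the solute descriptors are made up for illustration (the study's fitted values are not reproduced):

```python
import math

# Abraham-type ppLFER: log K = c + e*E + s*S + a*A + b*B + v*V
def log_k(system, solute):
    c, e, s, a, b, v = system
    E, S, A, B, V = solute
    return c + e * E + s * S + a * A + b * B + v * V

# Hypothetical tissue-air system coefficients (c, e, s, a, b, v);
# illustrative numbers only, not the study's fitted values
systems = {
    "adipose": (-0.15, 0.3, 0.9, 2.0, 2.5, 3.5),
    "muscle":  (-0.25, 0.4, 1.5, 3.5, 3.0, 2.0),
}
volume_fraction = {"adipose": 0.25, "muscle": 0.75}   # toy two-tissue "body"

# Hypothetical solute descriptors (E, S, A, B, V)
solute = (0.6, 0.7, 0.0, 0.45, 1.0)

# Volume-weighted air-body partition coefficient (weights act on K, not log K)
k_body = sum(volume_fraction[t] * 10 ** log_k(systems[t], solute) for t in systems)
print(f"log K(body-air) = {math.log10(k_body):.2f}")
```

    Note that the volume weighting is applied to the partition coefficients themselves, not to their logarithms.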

  18. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision

  19. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into

  20. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  1. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data following a data-driven approach. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with the Markov Chain Monte Carlo (MCMC) concept and has been used by several researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for microarray data from Indonesia, where the number of mixture components is not known in advance.

  2. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e. excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Preisach hysteresis model for non-linear 2D heat diffusion

    International Nuclear Information System (INIS)

    Jancskar, Ildiko; Ivanyi, Amalia

    2006-01-01

    This paper analyzes a non-linear heat diffusion process in which the thermal diffusivity is a hysteretic function of the temperature. To model this temperature dependence, the discrete Preisach algorithm, as a general hysteresis model, has been integrated into a non-linear multigrid solver. The hysteretic diffusion shows a heating-cooling asymmetry in character. The presented type of hysteresis speeds up the thermal processes in the modelled systems in a distinctly non-linear way.

  4. Evaluation of a linear spectral mixture model and vegetation indices (NDVI and EVI) in a study of schistosomiasis mansoni and Biomphalaria glabrata distribution in the state of Minas Gerais, Brazil.

    Science.gov (United States)

    Guimarães, Ricardo J P S; Freitas, Corina C; Dutra, Luciano V; Scholte, Ronaldo G C; Amaral, Ronaldo S; Drummond, Sandra C; Shimabukuro, Yosio E; Oliveira, Guilherme C; Carvalho, Omar S

    2010-07-01

    This paper analyses the associations of the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) with the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor, and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and, second, a high correlation between the soil fraction image and NDVI. The results also indicate that there was a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002) and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
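
    An LSMM treats each pixel's reflectance as a linear combination of endmember spectra, so fraction images follow from a per-pixel least-squares solve, usually with a sum-to-one constraint. A small NumPy sketch with made-up endmember spectra (not the MODIS values used in the paper):

```python
import numpy as np

# Hypothetical endmember spectra; rows are bands (blue, red, near-infrared),
# columns are endmembers (vegetation, soil, shade). Illustrative values only.
E = np.array([[0.04, 0.06, 0.05],
              [0.05, 0.25, 0.04],
              [0.45, 0.30, 0.03]])

def unmix(pixel, endmembers, weight=1e3):
    """Per-pixel least-squares fractions with a soft sum-to-one constraint,
    enforced by appending one heavily weighted constraint row."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# A pixel that is 60% vegetation, 30% soil and 10% shade (model is exact here)
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f
print(unmix(pixel, E).round(3))
```

    Applying unmix to every pixel of a scene yields the vegetation, soil and shade fraction images.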

  5. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Science.gov (United States)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method based on Levenberg-Marquardt method conducts with a little prior information and no auxiliary system and it is convenient to identify the tip-tilt disturbance model on-line for real-time control. This identification method makes it easy that Linear Quadratic Gaussian control runs efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified by experimental data, which is conducted in replay mode by simulation.

  6. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values

    Science.gov (United States)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL⁻¹ for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM perform similarly in predicting each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  7. Theory of synergistic effects: Hill-type response surfaces as 'null-interaction' models for mixtures.

    Science.gov (United States)

    Schindler, Michael

    2017-08-02

    The classification of effects caused by mixtures of agents as synergistic, antagonistic or additive depends critically on the reference model of 'null interaction'. Two main approaches are currently in use, the Additive Dose (ADM) or concentration addition (CA) and the Multiplicative Survival (MSM) or independent action (IA) models. We compare several response surface models to a newly developed Hill response surface, obtained by solving a logistic partial differential equation (PDE). Assuming that a mixture of chemicals with individual Hill-type dose-response curves can be described by an n-dimensional logistic function, Hill's differential equation for pure agents is replaced by a PDE for mixtures whose solution provides Hill surfaces as 'null-interaction' models and relies neither on Bliss independence or Loewe additivity nor uses Chou's unified general theory. An n-dimensional logistic PDE describing the Hill-type response of n-component mixtures is solved. Appropriate boundary conditions ensure the correct asymptotic behaviour. Mathematica 11 (Wolfram, Mathematica Version 11.0, 2016) is used for the mathematics and graphics presented in this article. The Hill response surface ansatz can be applied to mixtures of compounds with arbitrary Hill parameters. Restrictions which are required when deriving analytical expressions for response surfaces from other principles are unnecessary. Many approaches based on Loewe additivity turn out to be special cases of the Hill approach, whose increased flexibility permits a better description of 'null-effect' responses. Missing sham-compliance of Bliss IA, known as Colby's model in agrochemistry, leads to incompatibility with the Hill surface ansatz. Examples of binary and ternary mixtures illustrate the differences between the approaches. For Hill slopes close to one and doses below the half-maximum effect doses, MSM (Colby, Bliss, Finney, Abbott) predicts synergistic effects where the Hill model indicates 'null
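
    The two classical 'null-interaction' references can be computed directly for two agents with Hill-type dose-response curves; a sketch with illustrative Hill parameters (this is the standard CA/IA arithmetic, not the paper's PDE-derived Hill surface):

```python
import math

def hill(d, ec50, n):
    """Fractional effect (0..1) of a single agent with a Hill dose-response curve."""
    return d ** n / (ec50 ** n + d ** n)

def bliss(d1, d2, ec50_1, n1, ec50_2, n2):
    """Bliss independence (MSM/IA) reference: E = E1 + E2 - E1 * E2."""
    e1, e2 = hill(d1, ec50_1, n1), hill(d2, ec50_2, n2)
    return e1 + e2 - e1 * e2

def loewe(d1, d2, ec50_1, n1, ec50_2, n2):
    """Loewe additivity (ADM/CA) reference: the effect E solving
    d1 / D1(E) + d2 / D2(E) = 1, where D_i(E) is the inverse Hill
    curve of agent i; solved here by bisection on E."""
    def dose_for(e, ec50, n):          # inverse Hill curve
        return ec50 * (e / (1.0 - e)) ** (1.0 / n)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        s = d1 / dose_for(mid, ec50_1, n1) + d2 / dose_for(mid, ec50_2, n2)
        if s > 1.0:                    # guessed effect too low -> move up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sham combination: one drug (EC50 = 2, slope 1) formally split into two halves
print(loewe(1.0, 1.0, 2.0, 1.0, 2.0, 1.0))   # Loewe recovers the single-drug effect
print(bliss(1.0, 1.0, 2.0, 1.0, 2.0, 1.0))   # Bliss predicts a higher effect
```

    In the sham experiment above, Loewe additivity returns exactly the single-drug effect of 0.5, while the Bliss value for the same doses is higher (5/9), illustrating the missing sham-compliance of MSM noted in the abstract.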

  8. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator

  9. Damage Detection of Refractory Based on Principle Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Acoustic emission (AE) is a common technique for identifying damage in refractories; however, classification is complicated by the fact that as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce this complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method was developed that merges parameters describing similar aspects of the damage and reduces the dimensionality. By means of principal component analysis (PCA), the fifteen related parameters were reduced to two. These two parameters are linear combinations of the fifteen original parameters and were taken as the indexes for damage classification. Based on the proposed approach, a Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electron microscope scanning of the refractories verified the two types of damage.
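
    The dimension-reduction step can be sketched with a standard PCA via SVD: center the parameter matrix and project onto the two leading principal directions (the data here are a synthetic stand-in for the fifteen AE parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: 300 AE hits described by 15 correlated parameters
latent = rng.normal(size=(300, 2))            # two underlying damage factors
mixing = rng.normal(size=(2, 15))
X = latent @ mixing + 0.1 * rng.normal(size=(300, 15))

# PCA by SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                        # the two new index parameters
explained = (S[:2] ** 2).sum() / (S ** 2).sum()

print(f"variance explained by 2 components: {explained:.1%}")
```

    The two score columns play the role of the classification indexes; a Gaussian mixture model with BIC-based model selection would then be fitted to them.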

  10. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step models, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat the multi-step task as an independent model; both generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models, and an ensemble of them, are proposed to further enhance the prediction performance. The effectiveness of the proposed methods, and their marked improvements over existing methods, are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
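
    The iterative and independent (direct) strategies mentioned above can be contrasted on a toy AR(1) series (a generic illustration, not the VLM model itself):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic AR(1) series: x_t = 0.9 * x_{t-1} + noise
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)

train, h = x[:400], 5                      # fit on 400 points, forecast 5 steps

# One-step model by least squares through the origin: x_t ≈ phi * x_{t-1}
phi = train[1:] @ train[:-1] / (train[:-1] @ train[:-1])

# Iterative strategy: apply the one-step model recursively h times
iterative = train[-1] * phi ** h

# Direct (independent) strategy: regress x_{t+h} directly on x_t
phi_h = train[h:] @ train[:-h] / (train[:-h] @ train[:-h])
direct = train[-1] * phi_h

print(f"phi = {phi:.3f}, phi^h = {phi ** h:.3f}, direct coefficient = {phi_h:.3f}")
```

    The iterative forecast compounds the one-step coefficient, so its errors accumulate over the horizon; the direct model fits each horizon separately and ignores dependencies between intermediate points, which is the gap the VLM models aim to close.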

  11. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Science.gov (United States)

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed, but they were the same when using the non-linear method. The type 1 pseudo-second-order linear kinetic model had the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
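    The type 1 linearization referred to above rewrites the pseudo-second-order model q_t = k·qe²·t/(1 + k·qe·t) as t/q_t = 1/(k·qe²) + t/qe, so qe and k follow from a straight-line fit. A sketch on noise-free synthetic data (parameter values are illustrative, not the paper's):

    ```python
    import numpy as np

    # Generate exact pseudo-second-order uptake data q_t for assumed parameters.
    qe_true, k_true = 2.5, 0.3
    t = np.linspace(1.0, 60.0, 30)
    qt = k_true * qe_true**2 * t / (1.0 + k_true * qe_true * t)

    # Type 1 linear method: regress t/qt on t.
    # slope = 1/qe, intercept = 1/(k * qe^2)
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe_lin = 1.0 / slope
    k_lin = 1.0 / (intercept * qe_lin**2)
    ```

    With experimental noise, the four linearizations weight errors differently, which is why they disagree while direct non-linear fitting does not.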

  12. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

    This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear… are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF modeling a 10-storey 3-bay concrete structure subjected to amplitude modulated Gaussian white noise filtered through a Kanai…

  13. On-line control models for the Stanford Linear Collider

    International Nuclear Information System (INIS)

    Sheppard, J.C.; Helm, R.H.; Lee, M.J.; Woodley, M.D.

    1983-03-01

    Models for computer control of the SLAC three-kilometer linear accelerator and damping rings have been developed as part of the control system for the Stanford Linear Collider. Some of these models have been tested experimentally and implemented in the control program for routine linac operations. This paper will describe the development and implementation of these models, as well as some of the operational results.

  14. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions-in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the [Formula: see text]- and [Formula: see text]-mixtures. The [Formula: see text]-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the [Formula: see text]-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The [Formula: see text]-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for a nonparametric modeling of the [Formula: see text]-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  15. Vortices, semi-local vortices in gauged linear sigma model

    International Nuclear Information System (INIS)

    Kim, Namkwon

    1998-11-01

    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of energy finite vortices under a partially broken symmetry on R² with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of energy finite semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Besides, with a special choice of a representation, we show that the O(3) sigma model of which target space is nonlinear is a singular limit of the gauged linear sigma model of which target space is linear. (author)

  16. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

    A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, original parameters of the corresponding formulas were reasonably replaced by perceptual measures. By these substitutions, it was possible to relate a mixture's odor intensity to the individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The obtained results showed that the PDE model provided an easily interpretable method relating individual components to their joint odor intensity. Both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, and many disadvantages (e.g., the expense of a fixed number of odor assessors) will also be avoided. Thus, the PDE model is expected to be helpful for the monitoring and management of odor pollution.

  17. Swelling characteristics of sand-bentonite mixtures under one-dimensional stress

    International Nuclear Information System (INIS)

    Cui, Hongbin; Sun, De'an; Matsuoka, Hajime; Xu Yongfu

    2004-01-01

    Based on the concept that the maximum water volume absorbed by a unit volume of montmorillonite is constant, the swelling deformation of sand-bentonite mixtures is uniquely characterized using the montmorillonite void ratio, defined as the ratio of water volume to montmorillonite volume. The relationship between the montmorillonite void ratio and overburden pressure at full swelling is independent of the initial compaction condition and the sand-bentonite mixing ratio, and is linear in log-log scale. When the overburden pressure is large enough and/or the bentonite content of the mixture is small, the measured points deviate from this line. A method for predicting the limiting overburden pressure, which is linearly correlated with the montmorillonite void ratio, is proposed and verified using the concept of the skeleton void ratio. (author)

  18. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  19. Measurement of linear attenuation coefficient of different materials

    International Nuclear Information System (INIS)

    Ali, M. M.

    2013-07-01

    In this research we study the linear attenuation coefficients of concrete, brick, and a concrete-iron mixture. Measurements were carried out in the secondary standard dosimetry laboratory (Atomic Energy) at different distances, using a Cs-137 source, a Farmer chamber 2675 A-600 cc (S/N 0511), and an electrometer 2670 A (S/N 114). The linear attenuation coefficient was found to be approximately 0.167 cm⁻¹ for concrete, 0.063 cm⁻¹ for brick, and 0.253 cm⁻¹ for the concrete-iron mixture. (Author)
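    The reported coefficients follow from the exponential attenuation law I = I₀·exp(−μx). A sketch of recovering μ from transmission measurements; thicknesses and counts below are made up for illustration, not the record's data.

    ```python
    import math

    # Simulated transmission data for the concrete value reported in the record.
    mu_true = 0.167                      # cm^-1
    thickness = [1.0, 2.0, 4.0, 8.0]     # absorber thicknesses in cm
    I0 = 1000.0                          # unattenuated count rate
    counts = [I0 * math.exp(-mu_true * x) for x in thickness]

    # Estimate mu as the least-squares slope of -ln(I/I0) versus thickness x.
    xs = thickness
    ys = [-math.log(c / I0) for c in counts]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    mu_est = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ```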

  20. Evaluation of Physical Properties of Wax Mixtures Obtained From Recycling of Patterns Used in Precision Casting

    Directory of Open Access Journals (Sweden)

    Biernacki R.

    2015-04-01

    Full Text Available The study investigated the properties of selected certified mixtures used to make wax patterns for the production of precision castings for the aerospace industry. In addition, an assessment was carried out of recycled mixtures consisting of certified wax materials recovered during autoclaving. Hardness was tested via a proposed penetration-based method; creep-related deformation, bending strength and linear contraction were also evaluated. Hardness was studied on laboratory specimens and on patterns made with injection molding equipment. For these patterns, linear contraction was estimated at variable pressure and for different injection temperature parameters. Deformations connected with creep, and creep resistance, were evaluated on cylindrical specimens. Differences in creep resistance in relation to hardness were observed depending on the type of pattern mixture. The recycled mixture has greater creep resistance and smaller linear contraction than the certified mixtures used for making sprues, risers and other parts of the filling system.

  1. Understanding Effect of Constraint Release Environment on End-to-End Vector Relaxation of Linear Polymer Chains

    KAUST Repository

    Shivokhin, Maksim E.; Read, Daniel J.; Kouloumasis, Dimitris; Kocen, Rok; Zhuge, Flanco; Bailly, Christian; Hadjichristidis, Nikolaos; Likhtman, Alexei E.

    2017-01-01

    …of a linear probe chain. For this purpose we first validate the ability of the model to consistently predict both the viscoelastic and dielectric response of monodisperse and binary mixtures of type A polymers, based on published experimental data. We…

  2. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    Science.gov (United States)

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  3. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  4. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only f...

  5. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...

  6. Application of linearized model to the stability analysis of the pressurized water reactor

    International Nuclear Information System (INIS)

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through linearization of the nonlinear model. Simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's first method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that applying linearization to stability analysis is both feasible and convenient. (authors)
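    Lyapunov's first (indirect) method, as applied in the record, reduces to linearizing at an equilibrium and checking the eigenvalues of the Jacobian. The two-state system below is a generic toy example, not the PWR model itself.

    ```python
    import numpy as np

    def f(x):
        # Toy nonlinear dynamics dx/dt = f(x) with an equilibrium at the origin.
        return np.array([-x[0] + x[1] ** 2, -2.0 * x[1] + x[0] * x[1]])

    def jacobian(f, x0, eps=1e-6):
        # Forward-difference numerical Jacobian of f at x0.
        n = len(x0)
        J = np.zeros((n, n))
        fx = f(x0)
        for j in range(n):
            xp = x0.copy()
            xp[j] += eps
            J[:, j] = (f(xp) - fx) / eps
        return J

    # Linearize at the equilibrium and test asymptotic stability:
    # all eigenvalues of the Jacobian must have negative real parts.
    A = jacobian(f, np.zeros(2))
    stable = bool(np.all(np.linalg.eigvals(A).real < 0))
    ```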

  7. A Non-linear Stochastic Model for an Office Building with Air Infiltration

    DEFF Research Database (Denmark)

    Thavlov, Anders; Madsen, Henrik

    2015-01-01

    This paper presents a non-linear heat dynamic model for a multi-room office building with air infiltration. Several linear and non-linear models, with and without air infiltration, are investigated and compared. The models are formulated using stochastic differential equations and the model...

  8. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  9. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  10. Extraction of trivalent lanthanides and actinides by a synergistic mixture of thenoyltrifluoroacetone and a linear polyether

    International Nuclear Information System (INIS)

    Ensor, D.D.; Shah, A.H.

    1984-01-01

    Mixtures of a two component system, a linear polyether, 1,13-bis[8-quinolyl]-1,4,7,10,13-pentaoxatridecane (K-5), and thenoyltrifluoroacetone (HTTA), have been shown to exhibit synergistic character in the extraction of trivalent lanthanides and actinides. The effect of the addition of K-5 to the organic phase on the extraction of Ce(III), Eu(III), Tm(III), Am(III), Cm(III), Bk(III), and Cf(III) by HTTA in chloroform from 0.5 M NaNO₃ at 25 °C has been measured. These results indicate the extraction is enhanced by the formation of an M(TTA)₃·K-5 adduct in the organic phase. The organic phase stability constants for the formation of these synergistic species have been calculated for all the metals studied. The magnitudes of these organic phase stability constants for K-5 are similar to those of other common neutral donors. The order of stability does not follow the normal trend based on charge-to-radius ratio, but follows a pattern based on size, with Am(III) being the most stable.

  11. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions… The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated…

  12. Modelling a linear PM motor including magnetic saturation

    NARCIS (Netherlands)

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  13. Reduced-order modelling for high-pressure transient flow of hydrogen-natural gas mixture

    Science.gov (United States)

    Agaie, Baba G.; Khan, Ilyas; Alshomrani, Ali Saleh; Alqahtani, Aisha M.

    2017-05-01

    In this paper the transient flow of a hydrogen compressed-natural gas (HCNG) mixture, also referred to as a hydrogen-natural gas mixture, in a pipeline is numerically computed using the reduced-order modelling technique. The study of transient conditions is important because pipeline flows are normally in an unsteady state due to the sudden opening and closure of control valves, yet most existing studies analyse only steady-state conditions. The mathematical model consists of a set of non-linear conservation-form partial differential equations. The objective of this paper is to improve the accuracy of predicting the HCNG transient flow parameters using Reduced-Order Modelling (ROM). The ROM technique has been successfully used in single-gas and aerodynamic flow problems, but it has not previously been applied to gas mixtures. The study is based on the velocity change created by the operation of the valves upstream and downstream of the pipeline. Results on the flow characteristics, namely the pressure, density, celerity and mass flux, are based on variations of the mixing ratio and valve reaction and actuation time; the computational time advantage of the ROM is also presented.
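    Reduced-order modelling of this kind typically projects the PDE state onto a few dominant modes. The record does not specify its basis construction, so the sketch below shows the standard proper orthogonal decomposition (POD) via SVD on synthetic, exactly rank-2 snapshot data.

    ```python
    import numpy as np

    # Snapshot matrix: each column is a "flow state" built from two spatial modes,
    # standing in for pressure/density profiles along the pipeline.
    x = np.linspace(0.0, 1.0, 100)
    snapshots = np.column_stack([
        np.sin(np.pi * x) * np.cos(0.1 * k)
        + 0.5 * np.sin(2 * np.pi * x) * np.sin(0.1 * k)
        for k in range(50)
    ])

    # POD basis from the left singular vectors; keep r dominant modes.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    r = 2
    basis = U[:, :r]

    # Project to reduced coordinates and reconstruct.
    reduced = basis.T @ snapshots            # r x 50 reduced-order states
    reconstructed = basis @ reduced
    err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
    ```

    In a full ROM the governing PDEs are then evolved in the r-dimensional coordinate space instead of on the full grid, which is where the computational saving comes from.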

  14. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
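    The linear-versus-RBF-kernel comparison in the record can be illustrated with ridge versus kernel ridge regression. The data, kernel bandwidth and regularization below are arbitrary illustrative choices, and only in-sample fit is compared; the paper's GBLUP and validation setup are not reproduced.

    ```python
    import numpy as np

    # Synthetic "genotype-like" features with a nonlinear signal in the response.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

    lam = 1e-2
    # Linear ridge regression: beta = (X'X + lam*I)^-1 X'y
    beta = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
    lin_pred = X @ beta

    # RBF kernel ridge regression: alpha = (K + lam*I)^-1 y, bandwidth = 1.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / 2.0)
    alpha = np.linalg.solve(K + lam * np.eye(100), y)
    rbf_pred = K @ alpha

    lin_mse = float(((lin_pred - y) ** 2).mean())
    rbf_mse = float(((rbf_pred - y) ** 2).mean())
    ```

    The kernel model fits the nonlinear signal more closely in-sample; as the record notes, whether that flexibility translates into better out-of-sample prediction depends on the data.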

  15. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  16. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  17. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
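    A minimal sketch of the scale-mixture-of-two-normals density and the local false discovery rate it implies: null z-scores ~ N(0, s0²), non-null ~ N(0, s1²) with s1 > s0, mixed with weight pi0 on the null. The parameter values are illustrative, not the fitted GWAS values from the paper.

    ```python
    import math

    pi0, s0, s1 = 0.95, 1.0, 3.0   # assumed null weight and component scales

    def norm_pdf(z, s):
        # density of N(0, s^2) at z
        return math.exp(-0.5 * (z / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

    def mixture_pdf(z):
        return pi0 * norm_pdf(z, s0) + (1.0 - pi0) * norm_pdf(z, s1)

    def local_fdr(z):
        # posterior probability that a statistic z came from the null component
        return pi0 * norm_pdf(z, s0) / mixture_pdf(z)

    fdr_small = local_fdr(0.5)   # near zero: almost certainly null
    fdr_large = local_fdr(4.0)   # far in the tail: likely non-null
    ```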

  18. General mirror pairs for gauged linear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  19. General mirror pairs for gauged linear sigma models

    International Nuclear Information System (INIS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  20. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  1. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2011-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  2. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    Science.gov (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often
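    The simplest instance of the latent-clustering idea above is a two-component Gaussian mixture fitted by EM (no covariates, so not a full mixture regression). The data are synthetic, and a shared fixed standard deviation is a simplification for brevity.

    ```python
    import math
    import random

    # Two latent clusters, like two life-history tactics within one population.
    random.seed(2)
    data = [random.gauss(-2.0, 0.5) for _ in range(150)] + \
           [random.gauss(3.0, 0.5) for _ in range(150)]

    def pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

    w, mu1, mu2, sd = 0.5, -1.0, 1.0, 1.0   # initial guesses; sd held fixed
    for _ in range(50):
        # E-step: responsibility of component 1 for each observation.
        r = [w * pdf(x, mu1, sd)
             / (w * pdf(x, mu1, sd) + (1.0 - w) * pdf(x, mu2, sd))
             for x in data]
        # M-step: update the mixing weight and component means.
        w = sum(r) / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    ```

    A mixture regression model extends this by letting each component have its own regression of the trait on covariates (e.g. age), with cluster membership still latent.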

  3. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    Science.gov (United States)

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-04

    Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Density of molten salt Mixtures of eutectic LiCl-KCl containing UCl{sub 3}, CeCl{sub 3}, or LaCl{sub 3}

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, C.; Simpson, M. F. [Dept. of Metallurgical Engineering, University of Utah, Salt Lake City (United States)

    2017-06-15

Densities of molten salt mixtures of eutectic LiCl-KCl with UCl{sub 3}, CeCl{sub 3}, or LaCl{sub 3} at various concentrations (up to 13 wt%) were measured using a liquid surface displacement probe. Linear relationships between the mixture density and the concentration of the added salt were observed. For LaCl{sub 3} and CeCl{sub 3}, the measured densities were significantly higher than those previously reported from Archimedes’ method. In the case of LiCl-KCl-UCl{sub 3}, the data fit the ideal mixture density model very well. For the other salts, the measured densities exceeded the ideal model prediction by about 2%.
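An "ideal mixture density model" of the kind referred to above is commonly taken to mean volume-additive mixing: 1/rho_mix = sum_i w_i/rho_i for mass fractions w_i. The sketch below uses rough illustrative densities, not the paper's measured values.

```python
def ideal_mixture_density(mass_fracs, densities):
    """Ideal (volume-additive) mixture density from mass fractions w_i and
    pure-component densities rho_i: 1/rho_mix = sum_i w_i / rho_i."""
    assert abs(sum(mass_fracs) - 1.0) < 1e-9, "mass fractions must sum to 1"
    return 1.0 / sum(w / rho for w, rho in zip(mass_fracs, densities))

# Hypothetical illustrative values (g/cm^3), not the paper's data:
rho_eutectic = 1.62   # assumed molten LiCl-KCl eutectic density
rho_ucl3 = 5.5        # assumed pure-component density for UCl3
rho_mix = ideal_mixture_density([0.90, 0.10], [rho_eutectic, rho_ucl3])
print(round(rho_mix, 3))
```

A measured density above this prediction, as reported for the lanthanide chlorides, indicates a negative excess volume of mixing.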

  5. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  6. Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by

  7. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties. The dynamic modulus of the asphalt mixtures was predicted by conducting Discrete Element simulations under dynamic strain control loading. In order to reduce the calculation time, a method based on the frequency–temperature superposition principle has been implemented, and its reliability has been validated. The ball density effect on the internal stress distribution of the asphalt mixture model has been studied when using this method. Furthermore, the internal stresses under dynamic loading have been studied. The agreement between the predicted and the laboratory test results of the complex modulus shows…

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation, as it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity. This illustrates that maximum likelihood estimation is an unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.

  9. A penalized framework for distributed lag non-linear models.

    Science.gov (United States)

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  10. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

    Science.gov (United States)

    Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired the focus of attention. We calculated the power and coherence value and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in limited electrode condition, non-linear SVR model showed significantly better precision than linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  11. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Issaku Kawashima

    2017-07-01

Full Text Available Mind-wandering (MW), task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, using numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classification. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired the focus of attention. We calculated the power and coherence value and prepared 35 patterns of variable combinations and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in limited electrode condition, non-linear SVR model showed significantly better precision than linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  12. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work to examine the geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011. They are the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) criteria were used to compare and present the preliminary results; GOF criteria are commonly used in statistical modelling to compare fitted models. The main results presented in this study show that the Poisson-gamma model, the BYM model, and the Mixture model can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across a range of models. Creative Commons Attribution License
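The contrast between the SMR and the Poisson-gamma model can be made concrete with a few lines of code. The district counts and the Gamma prior parameters below are hypothetical; the point is that the empirical-Bayes estimate stays informative in a district with zero observed cases, exactly where the SMR collapses to zero.

```python
# Observed and expected lung cancer counts per district (hypothetical numbers).
districts = {"A": (12, 8.0), "B": (0, 2.5), "C": (30, 26.0)}

def smr(obs, exp):
    # Standardized Morbidity Ratio: observed over expected counts.
    # Degenerates to 0 whenever a district records no cases.
    return obs / exp

def poisson_gamma_rr(obs, exp, alpha=2.0, beta=2.0):
    """Relative risk under a Poisson likelihood with a Gamma(alpha, beta)
    prior: the posterior is Gamma(alpha + obs, beta + exp), so the
    posterior mean is (obs + alpha) / (exp + beta). Prior values assumed."""
    return (obs + alpha) / (exp + beta)

for name, (o, e) in districts.items():
    print(name, round(smr(o, e), 3), round(poisson_gamma_rr(o, e), 3))
```

District B illustrates the abstract's point: its SMR is exactly 0, while the Poisson-gamma estimate is shrunk toward the prior mean rather than collapsing.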

  13. Game Theory and its Relationship with Linear Programming Models ...

    African Journals Online (AJOL)

Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and linear programming problem are closely related subjects since any computing method devised for ...

  14. CONSOLIDATION AND COMPACTION OF POWDER MIXTURES .2. BINARY-MIXTURES OF DIFFERENT PARTICLE-SIZE FRACTIONS OF ALPHA-LACTOSE MONOHYDRATE

    NARCIS (Netherlands)

    RIEPMA, KA; VEENSTRA, J; DEBOER, AH; BOLHUIS, GK; ZUURMAN, K; LERK, CF; VROMANS, H

    1991-01-01

    Binary mixtures of different particle size fractions of alpha-lactose monohydrate were compacted into tablets. The results showed decreased crushing strengths and decreased internal specific surface areas of the tablets as compared with the values calculated by linear interpolation of the data

  15. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope. Residual autocorrelation was modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19…
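The fixed-effects part of the modelling strategy above can be sketched with a truncated-power cubic spline basis fitted by ordinary least squares (no random effects or autocorrelation here, and simulated rather than real heights; knot positions and the growth curve are invented for illustration).

```python
import random

def cubic_spline_basis(t, knots):
    # Truncated power basis for a cubic regression spline:
    # 1, t, t^2, t^3, plus (t - k)_+^3 for each interior knot k.
    return [1.0, t, t ** 2, t ** 3] + [max(0.0, t - k) ** 3 for k in knots]

def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gaussian elimination
    # with partial pivoting, then back substitution.
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

random.seed(0)
# Simulated ages (years) and heights (cm): smooth growth curve plus noise (sd = 1).
ages = [i / 25 for i in range(1, 101)]           # 0.04 .. 4.0 years
heights = [50 + 30 * t - 4 * t ** 2 + random.gauss(0, 1) for t in ages]
knots = [1.0, 2.0, 3.0]                          # assumed interior knots
X = [cubic_spline_basis(t, knots) for t in ages]
beta = ols(X, heights)
fit = [sum(b * x for b, x in zip(beta, row)) for row in X]
rmse = (sum((f - y) ** 2 for f, y in zip(fit, heights)) / len(heights)) ** 0.5
print(round(rmse, 2))  # close to the noise standard deviation of 1
```

Because the spline basis contains the quadratic truth exactly, the residual RMSE lands near the simulated noise level; the full model in the paper adds subject-specific random intercepts and slopes on top of such a basis.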

  16. Improved models for the prediction of activity coefficients in nearly athermal mixtures: Part I. Empirical modifications of free-volume models

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios M.; Coutsikos, Philipos; Tassios, Dimitrios

    1994-01-01

Mixtures containing exclusively normal, branched and cyclic alkanes, as well as saturated hydrocarbon polymers (e.g. poly(ethylene) and poly(isobutylene)), are known to exhibit almost athermal behavior. Several new activity coefficient models containing both combinatorial and free-volume contributions…

  17. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution… We discuss the implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurance.

  18. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite…

  19. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  20. Interfacial tensions of binary mixtures of ethanol with octane, decane, dodecane, and tetradecane

    International Nuclear Information System (INIS)

    Mejia, Andres; Cartes, Marcela; Segura, Hugo

    2011-01-01

Highlights: → Experimental interfacial tensions in binary mixtures with aneotropic behavior. → Experimental interfacial tensions for ethanol + hydrocarbon mixtures. → Aneotropic displacement in ethanol mixtures. - Abstract: This contribution is devoted to the experimental characterization of interfacial tensions of a representative group of binary mixtures pertaining to the (ethanol + linear hydrocarbon) series (i.e. octane, decane, dodecane, and tetradecane). Experimental measurements were performed isothermally using a maximum differential bubble pressure technique, which was applied over the whole mole fraction range and over the temperature range 298.15 < T/K < 318.15. Experimental results show that the interfacial tensions of (ethanol + octane or decane) deviate negatively from linear behavior and that sharp minimum points in concentration, or aneotropes, are observed for each isotherm. The interfacial tensions of (ethanol + dodecane or tetradecane), in turn, are characterized by combined deviations from linear behavior, with inflecting behavior in concentration observed for each isotherm. The experimental evidence also shows that these latter mixtures are close to exhibiting aneotropy. For the case of (ethanol + octane or decane) mixtures, aneotropy was clearly induced by the similarity of the interfacial tension values of the constituents. The inflecting behavior of the interfacial tensions of (ethanol + dodecane or tetradecane), in turn, was observed in the vicinity of the coordinates of the critical point of these mixtures, thus pointing to the fact that the quasi-aneotropic singularity affecting these mixtures was provoked by the proximity of an immiscibility gap in the liquid phase. Finally, the experimental interfacial tension data were smoothed with the Scott-Myers expansion, from which it is possible to conclude that the observed aneotropic concentrations depend only weakly on temperature for all the analyzed mixtures.

  1. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
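The linear dependency at the heart of the age-period-cohort identification problem (cohort = period − age) can be checked numerically: a fixed-effects design matrix containing all three terms plus an intercept is rank-deficient. The small rank routine below is a generic sketch, not from the paper.

```python
def matrix_rank(M, tol=1e-6):
    # Numerical rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        piv = max(range(rank, rows), key=lambda r: abs(M[r][col]), default=None)
        if piv is None or abs(M[piv][col]) < tol:
            continue  # no usable pivot in this column
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, rows):
            f = M[r][col] / M[rank][col]
            for c in range(col, cols):
                M[r][c] -= f * M[rank][c]
        rank += 1
    return rank

# One row per observation: intercept, age, period, cohort (= period - age).
data = [(a, p) for a in range(20, 60, 10) for p in range(1990, 2020, 10)]
X = [[1.0, float(a), float(p), float(p - a)] for a, p in data]
print(matrix_rank(X))  # 3, not 4: the cohort column is a linear combination of the others
```

Dropping the cohort column (or imposing any single just-identifying constraint) restores full column rank, which is exactly the arbitrariness the paper discusses; treating some effects as random sidesteps the need for such a constraint.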

  2. A fuzzy Bi-linear management model in reverse logistic chains

    Directory of Open Access Journals (Sweden)

    Tadić Danijela

    2016-01-01

Full Text Available The management of the electrical and electronic waste (WEEE) problem in an uncertain environment has a critical effect on the economy and environmental protection of each region. The considered problem can be stated as a fuzzy non-convex optimization problem with a linear objective function and a set of linear and non-linear constraints. The original problem is reformulated by using linear relaxation into a fuzzy linear programming problem. The fuzzy ratings of collection point capacities and fixed costs of recycling centers are modeled by triangular fuzzy numbers. The optimal solution of the reformulated model is found by using an optimality concept. The proposed model is verified through an illustrative example with real-life data. The obtained results represent an input for future research, which should include a good benchmark base for tested reverse logistic chains and their continuous improvement. [Project of the Ministry of Science of the Republic of Serbia, no. 035033: Sustainable development technology and equipment for the recycling of motor vehicles]

  3. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.

  4. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    Directory of Open Access Journals (Sweden)

    Orlov Alexey

    2016-01-01

Full Text Available This article presents results of the development of a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used for the calculation of parameters of a gas centrifuge cascade for separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge cascades for separation of multicomponent isotope mixtures.

  5. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality…

6. Matrix model and time-like linear dilaton matter

    International Nuclear Information System (INIS)

    Takayanagi, Tadashi

    2004-01-01

We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  7. A mixture model for robust registration in Kinect sensor

    Science.gov (United States)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  8. Statistical analysis of differential potentiometric precipitation titration data for a mixture of three heterovalent ions using linear characteristics

    International Nuclear Information System (INIS)

    Mar'yanov, B.M.; Zarubin, A.G.; Shumar, S.V.

    2003-01-01

A computer-based method is proposed for data processing in the successive potentiometric precipitation titration of a three-component mixture of differently charged ions. The method was tested on data from the titration of an artificial solution containing a mixture of Cd(II), Hg(II), and Te(IV) with diethyldithiocarbamate using a chalcogenide Cd-selective or a glassy-carbon indicator electrode, and was used for an analysis of the main components in a semiconductor material, namely, a solid solution of mercury and cadmium tellurides. For concentrations of the test ions in the model solution from 7.5 x 10{sup -3} to 3 x 10{sup -4} M, the RSD varies from 0.01 to 0.3; the determination error does not exceed 6%. The RSD for the semiconductor alloy lies in the range from 0.002 to 0.01.

  9. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/

  10. Amplification of DNA mixtures--Missing data approach

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2008-01-01

This paper presents a model for the interpretation of results of STR typing of DNA mixtures based on a multivariate normal distribution of peak areas. From previous analyses of controlled experiments with mixed DNA samples, we exploit the linear relationship between peak heights and peak areas. For mixed DNA samples, it is only possible to observe the cumulative peak heights and areas. Complying with this latent structure, we use the EM-algorithm to impute the missing variables based on a compound symmetry model; that is, the measurements are subject to intra- and inter-locus correlations not depending on the actual alleles of the DNA profiles. Due to factorization of the likelihood, properties of the normal distribution and use of auxiliary variables, an ordinary implementation of the EM-algorithm solves the missing data problem. We estimate the parameters in the model based on a training data set. In order…

  11. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  12. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    International Nuclear Information System (INIS)

    Chang, Chong

    2016-01-01

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  13. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, Andrey; Dall' Anese, Emiliano

    2017-05-26

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
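
    The fixed-point interpretation mentioned above can be illustrated on a toy single-phase, two-bus feeder (not the multiphase model of the paper; the impedance and load values are made up): the exact voltage solves a fixed-point equation, and the linear model is its first-order evaluation at the no-load voltage.

```python
import numpy as np

# Toy two-bus feeder: slack bus at V1 = 1 pu, series impedance z (pu),
# constant-power load S = P + jQ (pu) at bus 2. Values are illustrative.
V1 = 1.0 + 0.0j
z = 0.01 + 0.02j
S = 0.5 + 0.2j

# Exact bus-2 voltage: fixed point of V2 = V1 - z * conj(S) / conj(V2),
# found by simple fixed-point iteration.
V2 = V1
for _ in range(50):
    V2 = V1 - z * np.conj(S) / np.conj(V2)

# Linear model: evaluate the load current at the no-load voltage V1, so the
# voltage becomes an affine function of the net power injection.
V2_lin = V1 - z * np.conj(S) / np.conj(V1)

gap = abs(V2 - V2_lin)   # linearization error, well below 1e-3 pu here
```

For realistic distribution-level impedances the affine model tracks the exact solution closely, which is what makes such linearizations usable inside optimization and control loops.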

  14. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang

    2014-02-01

X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high-resolution continuous cross-section images capturing the meso-structure. According to the theory of three-dimensional (3D) reconstruction, the 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of the voxel positions and of the relationship between pixel elements and nodes. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distribution in the asphalt mixture and verify the rationality of the 3D model.

  15. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  16. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    Science.gov (United States)

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers Maxwell-Stefan diffusion model in the continuum limit, without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It comes out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed, in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to Fickian cases. Moreover, (2) some tests based on the Stefan diffusion tube are reported for proving the complete capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  17. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
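
    For contrast with the Bayesian fit studied above, the basic N-mixture likelihood (Royle's 2004 formulation) can also be maximized directly: marginalize the latent site abundances over a finite grid and optimize over abundance and detection parameters. The sketch below uses invented simulation settings, not the paper's designs.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(1)
lam_true, p_true = 5.0, 0.5
R, T, K = 150, 4, 60                            # sites, replicates, N truncation
N = rng.poisson(lam_true, R)                    # latent abundance per site
y = rng.binomial(N[:, None], p_true, (R, T))    # replicated counts

def neg_loglik(theta):
    lam = np.exp(theta[0])                      # abundance rate > 0
    p = 1.0 / (1.0 + np.exp(-theta[1]))         # detection in (0, 1)
    n = np.arange(K + 1)
    prior = poisson.pmf(n, lam)                 # P(N = n), truncated at K
    lik = binom.pmf(y[:, :, None], n[None, None, :], p)  # (site, rep, n)
    site_lik = lik.prod(axis=1) @ prior         # marginalize latent N per site
    return -np.log(site_lik).sum()

res = minimize(neg_loglik, x0=[np.log(3.0), 0.0], method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
```

The product of abundance and detection is pinned down by the mean count, while separating the two factors is what requires the replicated design discussed in the record above.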

  18. Equation of state of strongly coupled plasma mixtures

    International Nuclear Information System (INIS)

    DeWitt, H.E.

    1984-01-01

    Thermodynamic properties of strongly coupled (high density) plasmas of mixtures of light elements have been obtained by Monte Carlo simulations. For an assumed uniform charge background the equation of state of ionic mixtures is a simple extension of the one-component plasma EOS. More realistic electron screening effects are treated in linear response theory and with an appropriate electron dielectric function. Results have been obtained for the ionic pair distribution functions, and for the electric microfield distribution

  19. Measurement and modelling of high pressure density and interfacial tension of (gas + n-alkane) binary mixtures

    International Nuclear Information System (INIS)

    Pereira, Luís M.C.; Chapoy, Antonin; Burgass, Rod; Tohidi, Bahman

    2016-01-01

Highlights: • (Density + IFT) measurements are performed in synthetic reservoir fluids. • Measured systems include CO_2, CH_4 and N_2 with n-decane. • Novel data are reported for temperatures up to 443 K and pressures up to 69 MPa. • Predictive models are tested in 16 (gas + n-alkane) systems. • Best modelling results are achieved with the Density Gradient Theory. - Abstract: The deployment of more efficient and economical extraction methods and processing facilities of oil and gas requires the accurate knowledge of the interfacial tension (IFT) of fluid phases in contact. In this work, the capillary constant a of binary mixtures containing n-decane and common gases such as carbon dioxide, methane and nitrogen was measured. Experimental measurements were carried out at four temperatures (313, 343, 393 and 442 K) and pressures up to 69 MPa, or near the complete vaporisation of the organic phase into the gas-rich phase. To determine accurate IFT values, the capillary constants were combined with saturated phase density data measured with an Anton Paar densitometer and correlated with a model based on the Peng–Robinson 1978 equation of state (PR78 EoS). Correlated density showed an overall percentage absolute deviation (%AAD) from measured data of (0.2 to 0.5)% for the liquid phase and (1.5 to 2.5)% for the vapour phase of the studied systems and P–T conditions. The predictive capability of models to accurately describe both the temperature and pressure dependence of the saturated phase density and IFT of 16 (gas + n-alkane) binary mixtures was assessed in this work by comparison with data gathered from the literature and measured in this work. The IFT models considered include the Parachor, the Linear Gradient Theory (LGT) and the Density Gradient Theory (DGT) approaches combined with the Volume-Translated Predictive Peng–Robinson 1978 EoS (VT-PPR78 EoS). With no adjustable parameters, the VT-PPR78 EoS allowed a good description of both solubility and

  20. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
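
    The shrinkage idea behind such methods can be sketched outside the cure-model setting with a plain LASSO linear regression solved by coordinate descent (the cure-model likelihood is not reproduced here; the data and penalty below are a generic, invented example):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate-descent LASSO: minimize 0.5*||y - X b||^2 + lam*||b||_1.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed, then soft-threshold.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)   # only two active predictors
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat = lasso_cd(X, y, lam=50.0)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-6)
```

The soft-thresholding step zeroes out the noise predictors exactly, which is the variable-selection behaviour the paper transfers to cure-rate likelihoods.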

  1. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  2. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  3. A new decomposition-based computer-aided molecular/mixture design methodology for the design of optimal solvents and solvent mixtures

    DEFF Research Database (Denmark)

    Karunanithi, A.T.; Achenie, L.E.K.; Gani, Rafiqul

    2005-01-01

    This paper presents a novel computer-aided molecular/mixture design (CAMD) methodology for the design of optimal solvents and solvent mixtures. The molecular/mixture design problem is formulated as a mixed integer nonlinear programming (MINLP) model in which a performance objective is to be optim......This paper presents a novel computer-aided molecular/mixture design (CAMD) methodology for the design of optimal solvents and solvent mixtures. The molecular/mixture design problem is formulated as a mixed integer nonlinear programming (MINLP) model in which a performance objective...... is to be optimized subject to structural, property, and process constraints. The general molecular/mixture design problem is divided into two parts. For optimal single-compound design, the first part is solved. For mixture design, the single-compound design is first carried out to identify candidates...... and then the second part is solved to determine the optimal mixture. The decomposition of the CAMD MINLP model into relatively easy to solve subproblems is essentially a partitioning of the constraints from the original set. This approach is illustrated through two case studies. The first case study involves...

  4. Easy and flexible mixture distributions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Mabit, Stefan L.

    2013-01-01

    We propose a method to generate flexible mixture distributions that are useful for estimating models such as the mixed logit model using simulation. The method is easy to implement, yet it can approximate essentially any mixture distribution. We test it with good results in a simulation study...

  5. Renormalization a la BRS of the non-linear σ-model

    International Nuclear Information System (INIS)

    Blasi, A.; Collina, R.

    1987-01-01

We characterize the non-linear O(N+1) σ-model in an arbitrary parametrization with a nilpotent BRS operator obtained from the symmetry transformation by the use of anticommuting parameters. The identity can be made compatible with the presence of a mass term in the model, so we can analyze its stability and prove that the model is anomaly free. This procedure avoids many problems encountered in the conventional analysis; in particular the introduction of an infinite number of sources coupled to the successive variations of the field is not necessary, and the linear O(N) symmetry is respected as a consequence of the identity. The approach may prove useful in discussing the renormalizability of a wider class of models with non-linear symmetries. (orig.)

  6. Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control

    DEFF Research Database (Denmark)

    Zhou, Jianjun; Conrad, Finn

    1989-01-01

The Generalized Predictive Controller (GPC) is extended to systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures...... to constitute a GPC self-tuner. Different linear model structures commonly used are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input-output behaviour of self-tuning controllers....

  7. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

Based on the developed model, a calculation method was obtained and used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each section of the pipeline as temperature, pressure and hydraulic conditions change.

  8. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
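
    At a single level, the standard way to quantify the problem discussed above is the variance inflation factor, VIF_j = 1/(1 - R^2_j), where R^2_j comes from regressing predictor j on the remaining predictors; the paper's HLM-specific top-down diagnostics are not reproduced here. A minimal sketch with an invented, nearly collinear design:

```python
import numpy as np

def vif(X):
    # Variance inflation factor for each column of the design matrix X.
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])       # intercept + others
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1.0 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2, x3])
v = vif(X)   # v[0] and v[2] blow up; v[1] stays near 1
```

Inflated VIFs flag exactly the coefficient-variance inflation the simulation study above measures, and point at which predictors to drop or re-specify.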

  9. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years, and supplies are running out. This phenomenon has encouraged illegal logging of trees and destruction of natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
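
    The estimation step described above can be sketched on synthetic data by maximizing the two-component normal mixture likelihood directly. The sketch uses scipy's BFGS quasi-Newton optimizer in place of the paper's Newton-Raphson iteration, and the data are simulated, not the palm oil/exchange-rate series.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
# Synthetic data: a 60/40 mixture of two normals with different spreads,
# mimicking the asymmetric shape the paper models.
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(3.0, 0.7, 400)])

def neg_loglik(theta):
    w = 1.0 / (1.0 + np.exp(-theta[0]))          # mixing weight in (0, 1)
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])  # positive scales
    dens = w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)
    return -np.log(dens + 1e-300).sum()          # floor guards against underflow

res = minimize(neg_loglik, x0=[0.0, -1.0, 2.0, 0.0, 0.0], method="BFGS")
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
mu1_hat, mu2_hat = res.x[1], res.x[2]
```

Reparametrizing the weight and scales keeps the optimizer unconstrained, which is the usual trick when applying Newton-type methods to mixture likelihoods.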

  10. Rheology of petrolatum-paraffin oil mixtures : Applications to analogue modelling of geological processes

    NARCIS (Netherlands)

    Duarte, João C.; Schellart, Wouter P.; Cruden, Alexander R.

    2014-01-01

    Paraffins have been widely used in analogue modelling of geological processes. Petrolatum and paraffin oil are commonly used to lubricate model boundaries and to simulate weak layers. In this paper, we present rheological tests of petrolatum, paraffin oil and several homogeneous mixtures of the two.

  11. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    -wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via...... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local...

  12. Determination of phenacetin and salophen analgetics in solid binary mixtures with caffeine by infrared linear dichroic and Raman spectroscopy.

    Science.gov (United States)

    Koleva, Bojidarka B; Kolev, Tsonko M; Tsalev, Dimiter L; Spiteller, Michael

    2008-01-22

A quantitative infrared (IR) and Raman spectroscopic approach for the determination of phenacetin (Phen) and salophen (Salo) in binary solid mixtures with caffeine, phenacetin/caffeine (System 1) and salophen/caffeine (System 2), is presented. Absorbance ratios of the 746 cm(-1) or 721 cm(-1) peaks (characteristic of each determined compound in Systems 1 and 2) to the 1509 cm(-1) and 1616 cm(-1) peaks (attributed to Phen and Salo, respectively) were used. IR spectroscopy gives a confidence of 98.9% (System 1) and 98.3% (System 2), while the Raman spectroscopic data have a slightly higher confidence of 99.1% for both systems. The limits of detection for the compounds studied were 0.013 and 0.012 mole fraction for the IR and Raman methods, respectively. Solid-state linear dichroic infrared (IR-LD) spectral analysis of the solid mixtures was carried out with a view to obtaining experimental IR spectroscopic assignment of the characteristic IR bands of both determined compounds. The orientation technique of a nematic liquid crystal suspension was used, combined with the so-called reducing-difference procedure for polarized spectra interpretation. The possibility of obtaining supramolecular stereo-structural information for Phen and Salo by comparing spectroscopic and crystallographic data has also been shown. An independent high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) analysis was performed for comparison and validation of the vibrational spectroscopy data. Applications to 10 tablets of the commercial products APC and Sedalgin are given.

  13. Low-energy limit of the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L x SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f = 2 flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  14. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  15. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    Science.gov (United States)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-07-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  16. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  17. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures.

    Science.gov (United States)

    Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P; Alamdari, Houshang

    2016-05-04

Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range 0.297-0.595 mm (-30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
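
    The frequency response of the Burgers model underlying the contact law above (a Maxwell element in series with a Kelvin-Voigt element, so complex compliances add) can be sketched directly; the parameter values below are hypothetical placeholders, not the calibrated pitch values from the paper.

```python
import numpy as np

def burgers_G_star(omega, G_m, eta_m, G_k, eta_k):
    # Complex shear modulus of the Burgers model: a Maxwell element
    # (G_m, eta_m) in series with a Kelvin-Voigt element (G_k, eta_k).
    # In series, the complex compliances add:
    J = 1.0 / G_m + 1.0 / (1j * omega * eta_m) + 1.0 / (G_k + 1j * omega * eta_k)
    return 1.0 / J

omega = 2 * np.pi * 60.0   # 60 Hz, the frequency quoted in the abstract
# Hypothetical parameters for illustration only (Pa and Pa*s):
G = burgers_G_star(omega, G_m=2000.0, eta_m=50.0, G_k=1500.0, eta_k=10.0)
storage, loss = G.real, G.imag                       # G' and G''
phase_deg = np.degrees(np.arctan2(loss, storage))    # phase angle delta
```

Fitting the four parameters to measured storage and loss moduli at several frequencies is what calibrates the DEM contact model to the DSR data.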

  18. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than one; binomial mixture models can account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated parameter estimates similar to those of standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given that an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase the reliability of population parameter estimates.
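
    The two-component detectability structure described above can be sketched with a toy simulation (all numbers hypothetical): a latent abundance is first thinned by availability (temporary emigration) and then by conditional detection, so the expected count is λ·φ·p:

```python
import math
import random

random.seed(42)

def rpois(lam):
    """Poisson sampler (Knuth's multiplicative method), stdlib only."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_counts(n_sites, n_surveys, lam, phi, p):
    """Binomial mixture with partitioned detectability:
    N ~ Poisson(lam) is the state process; each survey count thins N
    first by availability phi, then by conditional detection p."""
    counts = []
    for _ in range(n_sites):
        N = rpois(lam)
        for _ in range(n_surveys):
            available = sum(random.random() < phi for _ in range(N))
            counts.append(sum(random.random() < p for _ in range(available)))
    return counts

y = simulate_counts(2000, 3, lam=10.0, phi=0.6, p=0.5)
mean_count = sum(y) / len(y)  # expectation is lam * phi * p = 3.0
```

    Fitting the hierarchical model then amounts to recovering λ, φ, and p from counts like these, which a standard binomial mixture model would confound into a single detection probability.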

  19. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    Science.gov (United States)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibria (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.

  20. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  1. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  2. Collective dynamics in dense fluid mixtures

    International Nuclear Information System (INIS)

    Sinha, S.

    1992-01-01

    This thesis deals with the short-wavelength collective dynamics of dense binary fluid mixtures. The analysis shows that, at the level of linearized generalized hydrodynamics, the longitudinal modes of the system separate essentially into two parts: one involves the coupling of the partial density fluctuations of the two species, and the other involves the coupling of longitudinal momentum and temperature fluctuations. It is shown that the coupling of longitudinal momentum and temperature fluctuations leads to an adequate description of sound propagation in such systems. In particular, structural disorder is shown to control the trapping of sound waves in dense mixtures. The coupling of the partial density fluctuations of the two species leads to a simple description of the partial dynamic structure factors. The results are in agreement with molecular dynamics simulations of soft-sphere mixtures. The partial density fluctuations are the slowest-decaying fluctuations on molecular length scales, and it turns out that nonlinear coupling of these slow modes leads to important corrections to the long-time behavior of the time correlation functions determining the shear viscosity in dense mixtures.

  3. Fluorescence quenching of newly synthesized biologically active coumarin derivative by aniline in binary solvent mixtures

    International Nuclear Information System (INIS)

    Evale, Basavaraj G.; Hanagodimath, S.M.

    2009-01-01

    The fluorescence quenching of the newly synthesized coumarin (chromen-2-one) derivative, 4-(5-methyl-3-furan-2-yl-benzofuran-2-yl)-7-methyl-chromen-2-one (MFBMC), by aniline in different solvent mixtures of benzene and acetonitrile was determined at room temperature (296 K) by steady-state fluorescence measurements. The quenching is found to be appreciable, and positive deviation from linearity was observed in the Stern-Volmer (S-V) plots in all the solvent mixtures. This could be explained by static and dynamic quenching models. The positive deviation in the S-V plot is interpreted in terms of the ground-state complex formation model and the sphere-of-action static quenching model. Various rate parameters for the fluorescence quenching process have been determined by using the modified Stern-Volmer equation. The sphere-of-action static quenching model agrees very well with the experimental results. The dependence of the Stern-Volmer constant K_SV on the dielectric constant ε of the solvent mixture suggests that the fluorescence quenching is diffusion-limited. Further, with the use of the finite sink approximation model, it is concluded that these bimolecular quenching reactions are diffusion-limited. Using lifetime (τ₀) data, the distance parameter R′ and the mutual diffusion coefficient D are estimated independently.
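
    The sphere-of-action treatment mentioned above multiplies the linear Stern-Volmer term by an exponential static factor. A minimal sketch (coefficient values hypothetical, not the paper's fitted constants) that reproduces the positive deviation from linearity:

```python
import math

def sv_ratio(q, K_sv, V=0.0):
    """F0/F in the sphere-of-action static quenching model:
    dynamic quenching (1 + K_sv*[Q]) times the static factor exp(V*[Q]).
    V = 0 recovers the ordinary (linear) Stern-Volmer relation."""
    return (1.0 + K_sv * q) * math.exp(V * q)

# Illustrative values only (units of M^-1 assumed for K_sv and V).
linear = sv_ratio(0.10, K_sv=12.0)          # 1 + K_sv*[Q] = 2.2
upward = sv_ratio(0.10, K_sv=12.0, V=4.0)   # exceeds the linear prediction
```

    Plotting F0/F against [Q] for V > 0 gives exactly the upward-curving S-V plot the abstract describes; fitting V then characterizes the static quenching sphere.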

  4. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
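
    The assembly idea can be sketched on a toy scalar function (the function, sampling states, and RBF width below are our own illustrative choices, not the authors' aerodynamic models): local first-order Taylor models are blended with normalized Gaussian radial basis function weights:

```python
import math

def blend(x, centers, local_models, width=0.5):
    """Weighted sum of local linear models y_k(x) = a_k + b_k*(x - c_k),
    with normalized Gaussian RBF weights centered at the sampling states."""
    w = [math.exp(-((x - c) / width) ** 2) for c in centers]
    total = sum(w)
    return sum(wi / total * (a + b * (x - c))
               for wi, c, (a, b) in zip(w, centers, local_models))

# Local linearizations of f(x) = x**2 about sampling states c: a = c**2, b = 2c.
centers = [0.0, 1.0, 2.0]
models = [(c * c, 2.0 * c) for c in centers]
y1 = blend(1.0, centers, models)  # near f(1) = 1, dominated by the local model at c = 1
```

    Near each sampling state the nearest local model dominates, so the blend recovers the full nonlinear response there while transitioning smoothly in between.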

  5. Metal mixture toxicity to aquatic biota in laboratory experiments: Application of the WHAM-F_TOX model

    Energy Technology Data Exchange (ETDEWEB)

    Tipping, E., E-mail: et@ceh.ac.uk; Lofts, S.

    2013-10-15

    Highlights: •Metal accumulation by living organisms is successfully simulated with WHAM. •Modelled organism-bound metal provides a measure of toxic exposure. •The toxic potency of individual bound metals is quantified by fitting toxicity data. •Eleven laboratory mixture toxicity data sets were parameterised. •Relatively little variability amongst individual test organisms is indicated. -- Abstract: The WHAM-F_TOX model describes the combined toxic effects of protons and metal cations towards aquatic organisms through the toxicity function (F_TOX), a linear combination of the products of organism-bound cation and a toxic potency coefficient (α_i) for each cation. Organism-bound, metabolically-active, cation is quantified by the proxy variable, amount bound by humic acid (HA), as predicted by the WHAM chemical speciation model. We compared published measured accumulations of metals by living organisms (bacteria, algae, invertebrates) in different solutions, with WHAM predictions of metal binding to humic acid in the same solutions. After adjustment for differences in binding site density, the predictions were in reasonable line with observations (for logarithmic variables, r² = 0.89, root mean squared deviation = 0.44), supporting the use of HA binding as a proxy. Calculated loadings of H⁺, Al, Cu, Zn, Cd, Pb and UO₂ were used to fit observed toxic effects in 11 published mixture toxicity experiments involving bacteria, macrophytes, invertebrates and fish. Overall, WHAM-F_TOX gave slightly better fits than a conventional additive model based on solution concentrations. From the derived values of α_i, the toxicity of bound cations can tentatively be ranked in the order: H < Al < (Zn–Cu–Pb–UO₂) < Cd. The WHAM-F_TOX analysis indicates much narrower ranges of differences amongst individual organisms in metal toxicity tests than was previously thought. The model potentially provides a means to

  6. Spontaneous symmetry breaking of Bose-Fermi mixtures in double-well potentials

    International Nuclear Information System (INIS)

    Adhikari, S. K.; Malomed, B. A.; Salasnich, L.; Toigo, F.

    2010-01-01

    We study the spontaneous symmetry breaking (SSB) of a superfluid Bose-Fermi (BF) mixture in a double-well potential (DWP). The mixture is described by the Gross-Pitaevskii equation (GPE) for the bosons, coupled to an equation for the order parameter of the Fermi superfluid, which is derived from the respective density functional in the unitarity limit (a similar model applies to the BCS regime, too). Straightforward SSB in the degenerate Fermi gas loaded into a DWP is impossible, as it requires an attractive self-interaction, and the intrinsic nonlinearity in the Fermi gas is repulsive. Nonetheless, we demonstrate that the symmetry breaking is possible in the mixture with attraction between fermions and bosons, like ⁴⁰K and ⁸⁷Rb. Numerical results are represented by dependencies of asymmetry parameters for both components on the particle numbers of the mixture, N_F and N_B, and by phase diagrams in the (N_F, N_B) plane, which display regions of symmetric and asymmetric ground states. The dynamical picture of the SSB, induced by a gradual transformation of the single-well potential into the DWP, is reported too. An analytical approximation is proposed for the case when the GPE for the boson wave function may be treated by means of the Thomas-Fermi (TF) approximation. Under a special linear relationship between N_F and N_B, the TF approximation allows us to reduce the model to a single equation for the fermionic function, which includes competing repulsive and attractive nonlinear terms. The latter one directly displays the mechanism of the generation of the effective attraction in the Fermi superfluid, mediated by the bosonic component of the mixture.

  7. Portfolio optimization by using linear programming models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. The optimization problem is formulated as a linear programming model, and the optimum solution is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than optimization performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining the optimal investment portfolio, particularly when using linear programming models.
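
    A compact sketch of this kind of approach (scenario returns, risk tolerance, and GA settings below are invented for illustration, not the paper's data): portfolio weights are evolved by a genetic algorithm to maximize mean return penalized by mean absolute deviation, with the budget constraint enforced by renormalization:

```python
import random

random.seed(0)

# Hypothetical scenario returns for 3 assets over 5 periods (not real market data).
R = [[0.020, 0.010, 0.030],
     [0.010, 0.020, -0.010],
     [-0.010, 0.015, 0.020],
     [0.030, 0.005, 0.010],
     [0.000, 0.010, 0.020]]

def fitness(w, risk_tol=0.5):
    """Mean portfolio return minus risk_tol times the mean absolute
    deviation, the risk measure used in absolute-deviation models."""
    rets = [sum(wi * r for wi, r in zip(w, row)) for row in R]
    mu = sum(rets) / len(rets)
    mad = sum(abs(r - mu) for r in rets) / len(rets)
    return mu - risk_tol * mad

def normalize(w):
    """Crude simplex projection: clip negatives, rescale to sum to 1."""
    w = [max(wi, 1e-9) for wi in w]
    s = sum(w)
    return [wi / s for wi in w]

def evolve(pop_size=30, gens=40, n_assets=3):
    pop = [normalize([random.random() for _ in range(n_assets)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]            # elitist selection
        while len(elite) < pop_size:
            a, b = random.sample(elite[:pop_size // 2], 2)
            child = [(x + y) / 2 + random.gauss(0.0, 0.05)
                     for x, y in zip(a, b)]    # crossover + mutation
            elite.append(normalize(child))
        pop = elite
    return max(pop, key=fitness)

w_opt = evolve()
```

    Because the objective is linear in the scenario returns, the same problem can be posed as a linear program; the GA simply searches the simplex directly, which is the comparison the abstract draws.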

  8. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    Science.gov (United States)

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, we consider a liquid with dissolved salt, as encountered in salt-distillation processes for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point with the addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  9. Quantitative analysis of the near-wall mixture formation process in a passenger car direct-injection diesel engine by using linear Raman spectroscopy.

    Science.gov (United States)

    Taschek, Marco; Egermann, Jan; Schwarz, Sabrina; Leipertz, Alfred

    2005-11-01

    Optimum fuel preparation and mixture formation are core issues in the development of modern direct-injection (DI) Diesel engines, as these are crucial for defining the boundary conditions for the subsequent combustion and pollutant formation process. The local fuel/air ratio can be seen as one of the key parameters for this optimization process, as it allows the characterization and comparison of the mixture formation quality. For the first time, to the best of our knowledge, linear Raman spectroscopy is used to detect the fuel/air ratio and its change along a line of a few millimeters directly and nonintrusively inside the combustion bowl of a DI Diesel engine. By a careful optimization of the measurement setup, the weak Raman signals could be separated successfully from disturbing interferences. A simultaneous measurement of the densities of air and fuel was possible along a line of about 10 mm length, allowing a time- and space-resolved measurement of the local fuel/air ratio. This could be performed in a nonreacting atmosphere as well as during fired operating conditions. The positioning of the measurement volume next to the interaction point of one of the spray jets with the wall of the combustion bowl allowed a near-wall analysis of the mixture formation process for a six-hole nozzle under varying injection and engine conditions. The results clearly show the influence of the nozzle geometry and preinjection on the mixing process. In contrast, modulation of the intake air temperature merely led to minor changes of the fuel concentration in the measurement volume.

  10. A Mixture Model of Consumers' Intended Purchase Decisions for Genetically Modified Foods

    OpenAIRE

    Kristine M. Grimsrud; Robert P. Berrens; Ron C. Mittelhammer

    2006-01-01

    A finite probability mixture model is used to analyze the existence of multiple market segments for a pre-market good. The approach has at least two principal benefits. First, the model is capable of identifying likely market segments and their differentiating characteristics. Second, the model can be used to estimate the discount different consumer groups require to purchase the good. The model is illustrated using stated preference survey data collected on consumer responses to the potentia...

  11. Calculation of the interfacial tension of the methane-water system with the linear gradient theory

    DEFF Research Database (Denmark)

    Schmidt, Kurt A. G.; Folas, Georgios; Kvamme, Bjørn

    2007-01-01

    The linear gradient theory (LGT) combined with the Soave-Redlich-Kwong (SRK EoS) and the Peng-Robinson (PR EoS) equations of state has been used to correlate the interfacial tension data of the methane-water system. The pure component influence parameters and the binary interaction coefficient... for the mixture influence parameter have been obtained for this system. The model was successfully applied to correlate the interfacial tension data set to within 2.3% for the linear gradient theory and the SRK EoS (LGT-SRK) and 2.5% for the linear gradient theory and PR EoS (LGT-PR). A posteriori comparison... of data not used in the parameterisation were to within 3.2% for the LGT-SRK model and 2.7% for the LGT-PR model. An exhaustive literature review resulted in a large database for the investigation which covers a wide range of temperatures and pressures. The results support the success of the linear...

  12. A binary mixture operated heat pump

    International Nuclear Information System (INIS)

    Hihara, E.; Saito, T.

    1991-01-01

    This paper evaluates the performance of possible binary mixtures as working fluids in high-temperature heat pump applications. The binary mixtures, which are potential alternatives to fully halogenated hydrocarbons, include HCFC142b/HCFC22, HFC152a/HCFC22, and HFC134a/HCFC22. The performance of the mixtures is estimated by a thermodynamic model and a practical model in which the heat transfer in heat exchangers is considered. One of the advantages of binary mixtures is a higher coefficient of performance, which is caused by the small temperature difference between the heat-sink/-source fluid and the refrigerant. The mixture HCFC142b/HCFC22 is promising from the standpoint of thermodynamic performance.

  13. Plane answers to complex questions the theory of linear models

    CERN Document Server

    Christensen, Ronald

    1987-01-01

    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  14. Induced polarization of clay-sand mixtures: experiments and modeling

    International Nuclear Information System (INIS)

    Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.

    2012-01-01

    Spectral induced polarization measurements were performed with a cylindrical four-electrode sample holder (a PVC cylinder 30 cm in length and 19 cm in diameter) associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO₄ electrodes. These electrodes were installed 10 cm from the base of the sample holder and regularly spaced (every 90°). The results illustrate the strong impact of the Cationic Exchange Capacity (CEC) of the clay minerals upon the complex conductivity. The amplitude of the in-phase conductivity of the kaolinite-clay samples is strongly dependent on the saturating fluid salinity for all volumetric clay fractions, whereas the in-phase conductivity of the smectite-clay samples is quite independent of the salinity, except at low clay content (5% and 1% of clay in volume). This is due to the strong and constant surface conductivity of smectite associated with its very high CEC. The quadrature conductivity increases steadily with the CEC and the clay content. We observe that the dependence on frequency of the quadrature conductivity of sand-kaolinite mixtures is more important than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents (1% in volume of kaolinite-clay). This is due to the constant surface site density of Na counter-ions in the Stern layer of clay materials. At the lowest clay content (1%), the magnitude of the quadrature conductivity increases with the salinity, as expected for silica sands. In this case, the surface site density of Na counter-ions in the Stern layer increases with salinity. The experimental data show good agreement with predicted values given by our Spectral Induced Polarization (SIP) model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex

  15. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
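
    The Box-Cox transformation at the core of the proposal is simple to state; a minimal scalar sketch (the example values are our own) showing its log limit and special cases:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform for x > 0: (x**lam - 1)/lam,
    with the log(x) limit as lam -> 0. In the mixture model a
    transformation parameter is estimated alongside the t parameters
    to induce (or remove) skewness."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x ** lam - 1.0) / lam

v1 = box_cox(1.0, 0.5)     # 0.0 for any lam, since x = 1
v2 = box_cox(math.e, 0.0)  # log branch: 1.0
v3 = box_cox(5.0, 1.0)     # identity shift: 4.0
```

    Applying this transform componentwise before fitting a symmetric t mixture is what lets one model handle both skewed clusters and heavy-tailed outliers.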

  16. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder; nevertheless, a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, M_k, k = 1,…,K, and to help identify which model is most supported by the observed data, Ỹ = (ỹ₁,…,ỹₙ). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ|M_k), which acts as the normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ|M_k) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ|M_k) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
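
    GAME proper uses bridge sampling of a fitted Gaussian mixture; the plain importance-sampling variant below (a toy one-dimensional posterior, a single Gaussian component, all values invented) conveys the core idea of estimating the normalizing constant from a distribution fitted to posterior samples:

```python
import math
import random

random.seed(7)

def unnorm_post(theta):
    """Toy unnormalized posterior p(Y|theta)p(theta); its true evidence
    (normalizing constant) is sqrt(2*pi)."""
    return math.exp(-0.5 * theta * theta)

# Stand-in for MCMC output: samples from the posterior.
samples = [random.gauss(0.0, 1.0) for _ in range(4000)]

# Fit the importance distribution q to the samples (GAME fits a full
# Gaussian mixture; one Gaussian component suffices for this toy case).
mu = sum(samples) / len(samples)
sd = math.sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))

def q_pdf(x):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Importance-sampling estimate of the evidence Z = ∫ unnorm_post dθ.
draws = [random.gauss(mu, sd) for _ in range(4000)]
Z_hat = sum(unnorm_post(x) / q_pdf(x) for x in draws) / len(draws)
```

    Because q closely matches the posterior here, the importance weights are nearly constant and the estimate is low-variance; in higher dimensions a multi-component mixture and bridge sampling are what keep such an estimator stable.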

  17. The phase behavior of a hard sphere chain model of a binary n-alkane mixture

    International Nuclear Information System (INIS)

    Malanoski, A. P.; Monson, P. A.

    2000-01-01

    Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united atom, hard sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials for the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. (c) 2000 American Institute of Physics

  18. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  19. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  20. Modeling of the flame propagation in coal-dust- methane air mixture in an enclosed sphere volume

    International Nuclear Information System (INIS)

    Krainov, A Yu; Moiseeva, K M

    2016-01-01

    The results of the numerical simulation of flame front propagation in a coal-dust-methane-air mixture in an enclosed volume with the ignition source at the center of the volume are presented. The mathematical model is based on a dual-velocity two-phase model of the reacting gas-dispersed medium. The system of equations includes the mass-conservation equation, the momentum-conservation equation, the total-energy-conservation equation for the gas and particles taking into account thermal conductivity and chemical reactions in the gas and on the particle surface, the mass-conservation equation for the gas-mixture components considering diffusion and burn-out, and the particle burn-out equation. The influence of the coal particle mass on the pressure in the volume after the mixture burns out and on the burn-out time has been investigated. It has been shown that the burning rate of coal-dust-methane-air mixtures depends on the coal particle size. (paper)

  1. Generalization of two-phase model with topology microstructure of mixture to Lagrange-Euler methodology

    International Nuclear Information System (INIS)

    Vladimir V Chudanov; Alexei A Leonov

    2005-01-01

    Full text of publication follows: One of the mathematical models (of hyperbolic type) for describing the evolution of compressible two-phase mixtures was proposed in [1] to deal with the following applications: interfaces between compressible materials; shock waves in multiphase mixtures; evolution of homogeneous two-phase flows; and cavitation in liquids. The basic difficulty of this model was connected to the discretization of the non-conservative equation terms. As a result, the class of problems concerned with the passage of shock waves through fields with a discontinuous volume-fraction profile could not be described by this model. A class of schemes able to converge to the correct solution of such problems was obtained in [2] through a deeper analysis of the two-phase model. The technique offered in [2] was implemented on an Eulerian grid via the Godunov scheme. In the present paper an additional analysis of the two-phase model, taking into account the microstructure of the mixture topology, is carried out in Lagrangian mass coordinates. As a result, equations averaged over the set of all possible realizations of the two-phase mixture are obtained. The numerical solution is carried out with the PPM method [3] in two steps: first, the equations averaged over the mass variable are solved; second, the solution found at the previous step is re-mapped to a fixed Eulerian grid. This approach allows the proposed technique to be extended to the two-dimensional (three-dimensional) case, since in Lagrangian variables the Euler equation system splits into two (three) identical subsystems, each of which describes the evolution of the medium in a given direction. The accuracy and robustness of the described procedure are demonstrated on a sequence of numerical problems. References: (1) R. Saurel, R. Abgrall, A multiphase Godunov method for compressible multi-fluid and multiphase flows, J. Comput. Phys. 150 (1999) 425-467; (2) R. Saurel, R. Abgrall, Discrete equations for physical and

  2. Finiteness of Ricci flat supersymmetric non-linear sigma-models

    International Nuclear Information System (INIS)

    Alvarez-Gaume, L.; Ginsparg, P.

    1985-01-01

    Combining the constraints of Kaehler differential geometry with the universality of the normal coordinate expansion in the background field method, we study the ultraviolet behavior of 2-dimensional supersymmetric non-linear sigma-models with target space an arbitrary Riemannian manifold M. We show that the constraint of N=2 supersymmetry requires that all counterterms to the metric beyond one-loop order are cohomologically trivial. It follows that such supersymmetric non-linear sigma-models defined on locally symmetric spaces are super-renormalizable and that N=4 models are on-shell ultraviolet finite to all orders of perturbation theory. (orig.)

  3. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
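
    As a rough illustration of the non-linear-on-markers idea, RKHS regression with a Gaussian kernel reduces to kernel ridge regression. Below is a minimal sketch on synthetic 0/1 marker data; this is a simplification for illustration only (the paper's models are Bayesian and use real DArT genotypes), and the bandwidth and penalty values are assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dist / bandwidth)

def rkhs_fit_predict(X_train, y_train, X_test, lam=1.0, bandwidth=None):
    """Kernel ridge regression: alpha = (K + lam*I)^-1 y, yhat = K_test @ alpha."""
    if bandwidth is None:
        bandwidth = X_train.shape[1]              # scale with marker count
    K = gaussian_kernel(X_train, X_train, bandwidth)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return gaussian_kernel(X_test, X_train, bandwidth) @ alpha

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(120, 50)).astype(float)    # toy binary markers
y = np.sin(X[:, :5].sum(axis=1)) + 0.1 * rng.normal(size=120)  # non-linear signal
pred = rkhs_fit_predict(X[:100], y[:100], X[100:])      # 20 held-out lines
```

    The only tuning quantities are the kernel bandwidth and the ridge penalty, which in practice would be chosen by cross-validation rather than fixed as here.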

  4. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
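
    A common robust alternative to ordinary least squares of the kind advocated above is Huber M-estimation fitted by iteratively reweighted least squares (IRLS). The sketch below illustrates the general approach on synthetic dosage data with heavy-tailed noise; it is not the paper's exact estimator, and the tuning constant and simulated effect sizes are assumptions.

```python
import numpy as np

def robust_lm(X, y, n_iter=50, c=1.345):
    """Huber M-estimation via IRLS: downweight large residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting values
    for _ in range(n_iter):
        resid = y - X @ beta
        scale = np.median(np.abs(resid - np.median(resid))) / 0.6745  # MAD scale
        r = np.abs(resid) / scale
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))  # Huber weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted LS update
    return beta

rng = np.random.default_rng(42)
g = rng.integers(0, 3, size=200).astype(float)     # toy SNP dosages 0/1/2
y = 1.0 + 0.5 * g + rng.standard_t(2, size=200)    # heavy-tailed (t, df=2) errors
X = np.column_stack([np.ones(200), g])             # intercept + dosage
beta = robust_lm(X, y)                             # robust intercept and slope
```

    Because the t(2) errors have no finite variance, OLS estimates here can be dragged around by single extreme draws, while the Huber weights cap each observation's influence.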

  5. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 2

    International Nuclear Information System (INIS)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-01-01

    Part 1 (Cooley and Piepel, 2003a) describes the first stage of a two-stage experimental design to support property-composition modeling for high-level waste (HLW) glass to be produced at the Hanford Site in Washington state. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. However, the design variables and constraints defining the layers of the experimental glass composition region (EGCR) were defined differently for the second stage than for the first. The first-stage initial design involved 15 components, all treated as mixture variables. The second-stage augmentation design involved 19 components, with 14 treated as mixture variables and 5 treated as non-mixture variables. For each second-stage layer, vertices were generated and optimal design software was used to select alternative subsets of vertices for the design and calculate design optimality measures. A model containing 29 partial quadratic mixture terms plus 5 linear terms for the non-mixture variables was the basis for the optimal design calculations. Predicted property values were plotted for the alternative subsets of second-stage vertices and the first-stage design points. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer of the second stage to augment the first-stage design.

  6. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
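
    The Laplace approximation at the heart of the algorithm replaces an intractable integral over random effects with a Gaussian integral around the integrand's mode. A one-dimensional sketch follows, using a toy Poisson-log-normal integrand rather than the paper's spatial model; the count value and prior variance are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def laplace_log_integral(log_f, h=1e-5):
    """Laplace approximation to log ∫ exp(log_f(x)) dx:
    log f(x_hat) + 0.5*log(2*pi / -log_f''(x_hat)), x_hat the mode."""
    x_hat = minimize_scalar(lambda x: -log_f(x)).x
    # second derivative at the mode by central differences
    d2 = (log_f(x_hat + h) - 2 * log_f(x_hat) + log_f(x_hat - h)) / h**2
    return log_f(x_hat) + 0.5 * np.log(2 * np.pi / -d2)

# toy random-effect integrand: Poisson(y | exp(x)) kernel times N(x | 0, 1)
y = 3
log_f = lambda x: y * x - np.exp(x) - 0.5 * x**2 - 0.5 * np.log(2 * np.pi)

approx = laplace_log_integral(log_f)
exact = np.log(quad(lambda x: np.exp(log_f(x)), -10, 10)[0])
```

    For a spatial GLMM the same idea is applied to a high-dimensional random-effect vector, with the Hessian at the mode replacing the scalar second derivative.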

  7. Phase equilibria for mixtures containing nonionic surfactant systems: Modeling and experiments

    International Nuclear Information System (INIS)

    Shin, Moon Sam; Kim, Hwayong

    2008-01-01

    Surfactants are important materials with numerous applications in the cosmetic, pharmaceutical, and food industries due to their inter- and intra-associating bonds. We present a lattice fluid equation-of-state that combines the quasi-chemical nonrandom lattice fluid model with Veytsman statistics for (intra + inter) molecular association to calculate phase behavior for mixtures containing nonionic surfactants. We also measured binary (vapor + liquid) equilibrium data for {2-butoxyethanol (C4E1) + n-hexane} and {2-butoxyethanol (C4E1) + n-heptane} systems at temperatures ranging from (303.15 to 323.15) K. A static apparatus was used in this study. The presented equation-of-state correlated well with the measured and published data for mixtures containing nonionic surfactant systems.

  8. A linear model of ductile plastic damage

    International Nuclear Information System (INIS)

    Lemaitre, J.

    1983-01-01

    A three-dimensional model of isotropic ductile plastic damage based on a continuum damage variable on the effective stress concept and on thermodynamics is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality [fr

  9. Sphaleron in a non-linear sigma model

    International Nuclear Information System (INIS)

    Sogo, Kiyoshi; Fujimoto, Yasushi.

    1989-08-01

    We present an exact classical saddle point solution in a non-linear sigma model. It has a topological charge 1/2 and mediates the vacuum transition. The quantum fluctuations and the transition rate are also examined. (author)

  10. A Generic Model for Prediction of Separation Performance of Olefin/Paraffin Mixture by Glassy Polymer Membranes

    Directory of Open Access Journals (Sweden)

    A.A. Ghoreyshi

    2008-02-01

    The separation of olefin/paraffin mixtures is an important process in the petrochemical industry, traditionally performed by low-temperature distillation with high energy consumption, or by complex extractive distillation and adsorption techniques. Membrane separation is emerging as an alternative to these traditional processes, offering low energy demand and simple operation. Investigations by various researchers on polymeric membranes have found that certain glassy polymers are suitable materials for olefin/paraffin separation. In this regard, knowledge of the possible transport mechanisms of these processes plays a significant role in their design and application. In this study, the separation behavior of olefin/paraffin mixtures through glassy polymers was modeled by three different approaches: the so-called dual transport model, the basic adsorption-diffusion theory, and the general Maxwell-Stefan formulation. The systems chosen to validate the developed transport models are the separation of an ethane-ethylene mixture by a 6FDA-6FpDA polyimide membrane and of a propane-propylene mixture by a 6FDA-TrMPD polyimide membrane, for which individual sorption and permeation data are available in the literature. A critical examination of the dual transport model shows that it clearly fails to predict even the proper trend for selectivities. Adjusting the permeabilities to account for the contribution of non-selective bulk flow introduced no improvement in the model's predictive ability. The modeling results based on the basic adsorption-diffusion theory revealed that an acceptable result is attainable in this approach only when mixed permeability data are used, which negates the advantage of predicting multicomponent separation performance from pure-component data. Finally, the results obtained from the model developed based on the Maxwell-Stefan formulation approach show a

  11. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior, conditional on the selected partition, is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  12. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    International Nuclear Information System (INIS)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-01

    The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as extreme waves form, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum, which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
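
    Wave-group evolution of the kind discussed above is often explored numerically with a split-step Fourier scheme for the NLSE. The sketch below assumes the normalized form i A_t + A_xx + |A|^2 A = 0 (the paper itself uses the modified NLSE and fully non-linear solvers), alternating an exact dispersive step in Fourier space with a nonlinear phase rotation in physical space.

```python
import numpy as np

def nlse_split_step(a0, dx, dt, n_steps):
    """Split-step Fourier integration of i*A_t + A_xx + |A|^2 * A = 0."""
    n = len(a0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    lin = np.exp(-1j * k**2 * dt)                 # exact dispersive step
    a = a0.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(lin * np.fft.fft(a))      # linear half of the split
        a = a * np.exp(1j * np.abs(a)**2 * dt)    # nonlinear phase rotation
    return a

x = np.linspace(-20, 20, 256, endpoint=False)
a0 = np.exp(-x**2 / 4) * np.exp(1j * 0.1 * x)     # Gaussian wave packet
a = nlse_split_step(a0, dx=x[1] - x[0], dt=0.01, n_steps=200)
```

    Both sub-steps are unitary, so the discrete norm (the wave action) is conserved to machine precision, which is a convenient sanity check on any implementation.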

  13. Ground Motion Models for Future Linear Colliders

    International Nuclear Information System (INIS)

    Seryi, Andrei

    2000-01-01

    Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed

  14. Generalized concentration addition: a method for examining mixtures containing partial agonists.

    Science.gov (United States)

    Howard, Gregory J; Webster, Thomas F

    2009-08-07

    Environmentally relevant toxic exposures often consist of simultaneous exposure to multiple agents. Methods to predict the expected outcome of such combinations are critical both to risk assessment and to an accurate judgment of whether combinations are synergistic or antagonistic. Concentration addition (CA) has commonly been used to assess the presence of synergy or antagonism in combinations of similarly acting chemicals, and to predict effects of combinations of such agents. CA has the advantage of clear graphical interpretation: Curves of constant joint effect (isoboles) must be negatively sloped straight lines if the mixture is concentration additive. However, CA cannot be directly used to assess combinations that include partial agonists, although such agents are of considerable interest. Here, we propose a natural extension of CA to a functional form that may be applied to mixtures including full agonists and partial agonists. This extended definition, for which we suggest the term "generalized concentration addition," encompasses linear isoboles with slopes of any sign. We apply this approach to the simple example of agents with dose-response relationships described by Hill functions with slope parameter n=1. The resulting isoboles are in all cases linear, with negative, zero and positive slopes. Using simple mechanistic models of ligand-receptor systems, we show that the same isobole pattern and joint effects are generated by modeled combinations of full and partial agonists. Special cases include combinations of two full agonists and a full agonist plus a competitive antagonist.
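
    For Hill dose-response curves with slope n=1, the generalized concentration addition condition sum_i c_i / f_i^{-1}(E) = 1 is linear in the effect E and solvable in closed form. A small sketch under that assumption follows; the efficacies, EC50s, and concentrations are illustrative values, not data from the paper.

```python
def hill(c, alpha, K):
    """Hill dose-response with slope n = 1: effect = alpha*c / (K + c)."""
    return alpha * c / (K + c)

def gca_effect(concs, alphas, Ks):
    """Generalized concentration addition for Hill (n=1) agents.
    Solving sum_i c_i / f_i^{-1}(E) = 1 with f_i^{-1}(E) = K_i*E/(alpha_i - E)
    gives E = sum(c_i*alpha_i/K_i) / (1 + sum(c_i/K_i))."""
    num = sum(c * a / K for c, a, K in zip(concs, alphas, Ks))
    den = 1.0 + sum(c / K for c, K in zip(concs, Ks))
    return num / den

# a full agonist (alpha = 1) combined with a partial agonist (alpha = 0.4)
e_full_alone = hill(2.0, 1.0, 1.0)
e_mix = gca_effect([2.0, 5.0], [1.0, 0.4], [1.0, 1.0])
```

    In this example the mixture effect falls below the full agonist's single-agent effect: the partial agonist occupies receptors while producing less response, the partial-agonist antagonism the generalized framework is designed to capture.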

  15. Soot modeling of counterflow diffusion flames of ethylene-based binary mixture fuels

    KAUST Repository

    Wang, Yu; Raj, Abhijeet Dhayal; Chung, Suk-Ho

    2015-01-01

    of ethylene and its binary mixtures with methane, ethane and propane based on the method of moments. The soot model has 36 soot nucleation reactions from 8 PAH molecules including pyrene and larger PAHs. Soot surface growth reactions were based on a modified

  16. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on the Burgers model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burgers model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
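
    The Burgers element referenced above (a Maxwell unit in series with a Kelvin-Voigt unit) has a closed-form complex shear modulus, which is what a DSR measurement at a given frequency probes. A minimal sketch follows; the parameter values are illustrative assumptions, not the calibrated values from the paper.

```python
import numpy as np

def burgers_complex_modulus(omega, G_M, eta_M, G_K, eta_K):
    """Complex shear modulus of the Burgers model: G* = 1/J*, with the
    complex compliance J*(w) = 1/G_M + 1/(i*w*eta_M) + 1/(G_K + i*w*eta_K)."""
    J = 1 / G_M + 1 / (1j * omega * eta_M) + 1 / (G_K + 1j * omega * eta_K)
    return 1 / J

w = 2 * np.pi * 60                      # 60 Hz, as in the DSR tests above
G = burgers_complex_modulus(w, G_M=1e4, eta_M=50.0, G_K=5e3, eta_K=20.0)
storage, loss = G.real, G.imag          # G' and G''
phase_deg = np.degrees(np.arctan2(loss, storage))
```

    At high frequency the dashpot terms vanish and G* approaches the Maxwell spring stiffness G_M, while at low frequency the Maxwell dashpot dominates and the material flows; fitting these limits is one way to initialize a calibration against DSR data.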

  17. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  18. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We consider simulations of this model and compare with real data.

  20. Modeling exposure–lag–response associations with distributed lag non-linear models

    Science.gov (United States)

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
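
    The cross-basis described above is the tensor product of an exposure-response basis and a lag basis. The sketch below shows its construction; the particular basis choices (linear/quadratic in exposure, constant/linear in lag) are illustrative assumptions, not those of the paper.

```python
import numpy as np

def cross_basis(lagged_exposure, exposure_basis, lag_basis):
    """Build a DLNM cross-basis: column (j, k) holds
    sum_l f_j(x[i, l]) * g_k(l) over the lag dimension l."""
    n, L = lagged_exposure.shape
    F = [f(lagged_exposure) for f in exposure_basis]           # each n x L
    G = np.column_stack([g(np.arange(L)) for g in lag_basis])  # L x K
    cols = [Fj @ G[:, k] for Fj in F for k in range(G.shape[1])]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=(100, 8))          # 100 subjects, lags 0..7
exposure_basis = [lambda q: q, lambda q: q**2]               # linear + quadratic
lag_basis = [lambda l: np.ones_like(l, dtype=float),         # constant in lag
             lambda l: l.astype(float)]                      # linear in lag
Z = cross_basis(x, exposure_basis, lag_basis)  # 100 x 4 design matrix
```

    The resulting matrix Z enters an ordinary regression model as covariates; the fitted coefficients then describe a surface of risk over the exposure and lag dimensions jointly.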

  1. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    Science.gov (United States)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
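
    A GSM of the kind described is commonly estimated as G = W Wᵀ/m from column-standardized SNP dosages. A minimal sketch under that assumption follows (the software referenced in the abstract has its own construction, and real pipelines filter and select SNPs as the abstract discusses).

```python
import numpy as np

def genetic_similarity_matrix(genotypes):
    """GSM from an n x m matrix of 0/1/2 SNP dosages: standardize each
    SNP column, then G = W @ W.T / m (a realized-relationship estimate)."""
    W = genotypes - genotypes.mean(axis=0)
    sd = W.std(axis=0)
    sd[sd == 0] = 1.0                      # guard against monomorphic SNPs
    W = W / sd
    return W @ W.T / genotypes.shape[1]

rng = np.random.default_rng(7)
X = rng.integers(0, 3, size=(30, 500)).astype(float)   # 30 individuals, 500 SNPs
G = genetic_similarity_matrix(X)
```

    By construction G is symmetric positive semidefinite with trace equal to the number of individuals, which is what lets it serve as the covariance of the random effect in the LMM.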

  2. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply, and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried, and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogenous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  4. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
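
    The 1-1 relationship between the R2 statistic and the F statistic can be made concrete: inverting F = (R2/q) / ((1 - R2)/nu) gives R2 = q*F / (nu + q*F), with q and nu the numerator and denominator degrees of freedom. This sketch shows the standard inversion; the paper's exact statistic depends on its particular choice of denominator degrees of freedom for the mixed model.

```python
def r2_from_f(F, q, nu):
    """Model R2 as a 1-1 function of the F statistic for the fixed effects,
    with q numerator and nu denominator degrees of freedom:
    R2 = q*F / (nu + q*F), which inverts F = (R2/q) / ((1 - R2)/nu)."""
    return q * F / (nu + q * F)

# round trip: an R2 of 0.3 with q = 2, nu = 50 implies F = (0.3/2) / (0.7/50)
F = (0.3 / 2) / ((1 - 0.3) / 50)
r2 = r2_from_f(F, 2, 50)
```

    The same mapping explains the paper's blood-pressure example: a huge F (tiny p-value) with a very large nu can still correspond to a near-zero R2.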

  5. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  6. Are separate-phase thermal-hydraulic models better than mixture-fluid approaches? It depends. Rather not

    International Nuclear Information System (INIS)

    Hoeld, A.

    2004-01-01

    The thermal-hydraulic theory of single-phase and especially two-phase flow systems used for plant transient analysis is dominated by separate-phase models. The corresponding, mostly very comprehensive codes (TRAC, RELAP, CATHARE, ATHLET, etc.) are regarded as far more efficient than a 3-equation mixture-fluid approach and code, even though they show deficiencies in describing flow situations within inner loops, for example the distribution into parallel channels (and thus the simulation of 3D thermal-hydraulic phenomena). This may be justified in comparison with the very simple homogeneous equilibrium models (HEM), but not in comparison with the more refined non-homogeneous 'separate-region' mixture-fluid approaches based on appropriate drift-flux correlation packages, which can, on the contrary, have enormous advantages over such separate-phase models. The differences are remarkable especially when comparing the basic (starting) equations of both types of theoretical model. Single-phase and mixture-fluid models start from genuine conservation equations for mass, energy, and momentum, additionally demanding (in the case of two-phase flow) an adequate drift-flux package (in order to obtain a relation for a fourth independent variable), a heat transfer coefficient package (over the whole range of possible fields of application), and correlations for single- and two-phase friction. The other type of model treats each phase separately, with corresponding 'field' equations for each phase, connected by exchange (closure) terms which substitute for the classical constitutive packages for drift, heat transfer, and friction. That drift flux, heat transfer into a coolant channel, and friction along a wall and between the phases are described better by a separate-phase approach is at least doubtful.
    The corresponding mixture-fluid correlations are based over a wide range on a wealth of experience and measurements; their pseudo-stationary treatment can (due to their small time

  7. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of the moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moment time data.
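The recovery step described above can be sketched in a few lines: given the first two moments of the concentration distribution at some time, a gamma density is fitted by the method of moments. This is a minimal illustration of that single step, not the paper's actual code; the function name and the numbers are assumptions.

```python
def gamma_from_moments(m1, m2):
    """Method-of-moments fit of a gamma distribution.

    m1: first raw moment (mean), m2: second raw moment.
    Returns (shape, scale) such that mean = shape * scale and
    variance = shape * scale**2.
    """
    var = m2 - m1**2                 # central second moment
    shape = m1**2 / var
    scale = var / m1
    return shape, scale

# Example: moments of a gamma with shape 2 and scale 3
# (mean = 6, second raw moment = variance + mean^2 = 18 + 36 = 54)
shape, scale = gamma_from_moments(6.0, 54.0)
```

Repeating this fit at each time point of the moment data reproduces the "successive estimations" idea.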

  8. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    Directory of Open Access Journals (Sweden)

    M. F. Gayol

    2017-06-01

    Full Text Available A methodology is shown for predicting the thermodynamic and transport properties of a multi-component oily mixture in which the different mixture components are grouped into a small number of pseudo-components. This property prediction is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of transport phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors of less than 10% with respect to the experimental data in the temperature range in which the proposed method is applicable.

  9. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    International Nuclear Information System (INIS)

    Gayol, M.F.; Pramparo, M.C.; Miró Erdmann, S.M.

    2017-01-01

    A methodology is shown for predicting the thermodynamic and transport properties of a multi-component oily mixture in which the different mixture components are grouped into a small number of pseudo-components. This property prediction is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of transport phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors of less than 10% with respect to the experimental data in the temperature range in which the proposed method is applicable. [es]

  10. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group mean consistently, together with the corresponding variance estimation. Simulation showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
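The bias described above arises because a non-linear inverse link evaluated at the mean covariate differs from the average of the per-subject responses. A hypothetical logistic example (coefficients and covariate values invented purely for illustration) makes the gap concrete:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted logistic model: P(Y=1 | x) = sigmoid(-1 + 2x)
b0, b1 = -1.0, 2.0
xs = [0.0, 0.5, 1.0, 2.0]  # covariate values in one treatment group

# "Model-based" group mean: response evaluated at the mean covariate
x_bar = sum(xs) / len(xs)
at_mean_x = sigmoid(b0 + b1 * x_bar)

# Consistent group mean: average of the per-subject predicted responses
mean_response = sum(sigmoid(b0 + b1 * x) for x in xs) / len(xs)

# The two differ because the inverse link is non-linear (Jensen's
# inequality); the gap is what the proposed method corrects for.
```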

  11. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  12. The use of an MHV-2 equation of state for modeling the thermodynamic properties of refrigerant mixtures

    International Nuclear Information System (INIS)

    Morrison, J.D.; Barley, M.H.; Parker, I.B.

    1995-01-01

    This paper reports on the development and application of a thermodynamic model based on the second-order Modified Huron-Vidal equation of state (MHV-2) to predict the properties of ternary mixtures of the refrigerants R32, R125, and R134a. The mixing rules of this equation of state have been used to incorporate directly an activity-coefficient model for the excess Gibbs free energy. The parameters for the activity-coefficient model have been derived from experimental VLE data for binary mixtures. This methodology has enabled the production of a thermodynamically consistent model which can be used to predict the phase equilibria of R32/R125/R134a mixtures. The input data used in the model are presented in the paper and the predictions of the model are compared with available experimental data. The model has been used to predict the behavior of ternary refrigerant blends of R32/R125/R134a in fractionation scenarios, such as liquid charging and vapor leakage, which are of direct interest to the refrigeration industry. Details of these applications and comparisons with experimental data are discussed, along with other general uses of the thermodynamic model.

  13. A non-linear model of economic production processes

    Science.gov (United States)

    Ponzi, A.; Yasutomi, A.; Kaneko, K.

    2003-06-01

    We present a new two-phase model of economic production processes, a non-linear dynamical version of von Neumann's neoclassical model of production that includes a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into the observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
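The minimum-of-inputs production rule described above is straightforward to sketch as one simulation step; the goods, recipe, and quantities below are purely illustrative:

```python
# Toy step of a production process whose rate is limited by its
# scarcest input, as in the model summarized above.
def produce(stocks, recipe, output):
    """Run one production step.

    stocks: dict good -> available quantity
    recipe: dict input good -> units consumed per unit of output
    Returns updated stocks after producing as much as inputs allow.
    """
    # The rate is set by the most limiting input supply
    rate = min(stocks[g] / need for g, need in recipe.items())
    for g, need in recipe.items():
        stocks[g] -= rate * need
    stocks[output] = stocks.get(output, 0.0) + rate
    return stocks

# Illustrative numbers: iron is the limiting input (4/2 = 2 units out)
stocks = produce({"iron": 4.0, "coal": 9.0},
                 {"iron": 2.0, "coal": 1.0}, "steel")
```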

  14. Dirichlet Process Parsimonious Mixtures for clustering

    OpenAIRE

    Chamroukhi, Faicel; Bartcus, Marius; Glotin, Hervé

    2015-01-01

    The parsimonious Gaussian mixture models, which exploit an eigenvalue decomposition of the group covariance matrices of the Gaussian mixture, have proven successful, in particular in cluster analysis. Their estimation is in general performed by maximum likelihood estimation and has also been considered from a parametric Bayesian perspective. We propose new Dirichlet Process Parsimonious mixtures (DPPM) which represent a Bayesian nonparametric formulation of these parsimonious Gaussian mixtur...
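The eigenvalue decomposition these models exploit factors each group covariance into volume, shape, and orientation terms, Sigma_k = lambda_k * D_k A_k D_k^T. A minimal numpy sketch of that factorization (function name and example matrix are assumptions, not from the paper):

```python
import numpy as np

def decompose_covariance(sigma):
    """Volume/shape/orientation factorization used by parsimonious
    Gaussian mixtures: sigma = lam * D @ A @ D.T, where lam is the
    volume, A a diagonal shape matrix with det(A) = 1, and D the
    orthogonal matrix of eigenvectors (orientation)."""
    eigvals, D = np.linalg.eigh(sigma)
    d = len(eigvals)
    lam = np.prod(eigvals) ** (1.0 / d)   # volume: det(sigma)^(1/d)
    A = np.diag(eigvals / lam)            # shape: normalized eigenvalues
    return lam, A, D

# Illustrative covariance: axis-aligned, variances 4 and 1
sigma = np.array([[4.0, 0.0],
                  [0.0, 1.0]])
lam, A, D = decompose_covariance(sigma)
# lam * D @ A @ D.T reconstructs sigma exactly
```

Parsimony comes from constraining which of lam, A, D are shared across mixture components.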

  15. Validated HPTLC methods for determination of some selected antihypertensive mixtures in their combined dosage forms

    Directory of Open Access Journals (Sweden)

    Rasha A. Shaalan

    2014-12-01

    Full Text Available Simple and selective HPTLC methods were developed for the simultaneous determination of the antihypertensive drugs carvedilol and hydrochlorothiazide in their binary mixture (Mixture I) and amlodipine besylate, valsartan, and hydrochlorothiazide in their combined ternary formulation (Mixture II). Effective chromatographic separation was achieved on Fluka TLC plates (20 × 20 cm aluminum cards, 0.2 mm thickness) through linear ascending development. For Mixture I, the mobile phase was composed of chloroform–methanol in the ratio 8:2 v/v. Detection was performed at 254 nm for both carvedilol and hydrochlorothiazide. For Mixture II, the mobile phase was chloroform–methanol–ammonia in the volume ratio 8:2:0.1. Detection was performed at 254 nm for valsartan and hydrochlorothiazide, and at 365 nm for amlodipine. Quantification was based on spectrodensitometric analysis. The analytical performance of the proposed HPTLC procedures was statistically validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, and detection and quantification limits. The linearity ranges were 0.05–1.0 and 0.1–2.0 μg/spot for carvedilol and hydrochlorothiazide, respectively, in Mixture I, and 0.1–2.0, 0.1–2.0 and 0.2–4.0 μg/spot for amlodipine, hydrochlorothiazide and valsartan, respectively, in Mixture II, with correlation coefficients >0.9992. The validated HPTLC methods were applied to the analysis of the cited antihypertensive drugs in their combined pharmaceutical tablets. The proposed methods confirmed peak identity and purity.
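Detection and quantification limits in validations of this kind typically follow the ICH formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, with σ the residual standard deviation of the calibration line and S its slope. A sketch with invented calibration data (not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/spot) vs. peak area
conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0])
area = np.array([102.0, 498.0, 1003.0, 1495.0, 2001.0])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)      # residual standard deviation

# ICH-style detection and quantification limits
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

# Linearity is usually reported as the correlation coefficient
r = np.corrcoef(conc, area)[0, 1]
```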

  16. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    Science.gov (United States)

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
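In a binomial N-mixture model, the latent site abundance N is Poisson-distributed and each repeat-visit count is a binomial thinning of N, so the site likelihood marginalizes N out. A minimal sketch of that likelihood (truncating the infinite sum; all values illustrative, not from the Trinity River study):

```python
from math import comb, exp, factorial

def nmix_likelihood(counts, lam, p, n_max=100):
    """Likelihood of one site's repeat-visit counts under a binomial
    N-mixture model: N ~ Poisson(lam), each count y ~ Binomial(N, p).
    The latent abundance N is summed out up to n_max (sketch only)."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        pois = exp(-lam) * lam**n / factorial(n)
        binom = 1.0
        for y in counts:
            binom *= comb(n, y) * p**y * (1 - p)**(n - y)
        total += pois * binom
    return total

# Example: two visits at one site with imperfect detection p = 0.5
lik = nmix_likelihood([3, 2], lam=4.0, p=0.5)
```

In practice lam and p are modeled as functions of habitat and surveyor covariates and maximized jointly across sites.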

  17. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing work on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying the parsimonious functional parcellation required in the context of joint detection estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  18. Effect Displays in R for Generalised Linear Models

    Directory of Open Access Journals (Sweden)

    John Fox

    2003-07-01

    Full Text Available This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or to equal proportions in its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.
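The construction described, fitted values computed over the range of a focal term while other predictors are held at typical values, can be sketched outside R as well; the coefficients and typical value below are invented for illustration:

```python
import numpy as np

# Hypothetical fitted coefficients of a logit-link GLM with two
# predictors (these numbers are assumptions, not from the paper)
beta = {"intercept": 0.5, "x1": 2.0, "x2": -1.0}
x2_typical = 1.5            # e.g., the mean of x2 in the data

# Effect display for x1: let the focal predictor range over a grid
# while x2 stays fixed at its typical value
x1_grid = np.linspace(0.0, 4.0, 5)
eta = beta["intercept"] + beta["x1"] * x1_grid + beta["x2"] * x2_typical

# On the response scale, apply the inverse link (here: logistic)
fitted = 1.0 / (1.0 + np.exp(-eta))
```

Plotting `fitted` against `x1_grid` gives the effect display for the x1 term.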

  19. H∞/H2 model reduction through dilated linear matrix inequalities

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2012-01-01

    This paper presents sufficient dilated linear matrix inequality (LMI) conditions for the $H_{\infty}$ and $H_{2}$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r = n/s$, where $n, r, s \in \mathbb{N}$. Arb...

  20. DISQUAC Characterization of the Carbonyl-Chlorine Interactions in Binary Mixtures of Linear Ketone with Chloroalkane

    Czech Academy of Sciences Publication Activity Database

    Dragoescu, D.; Teodorescu, M.; Barhala, A.; Wichterle, Ivan

    2003-01-01

    Roč. 68, č. 7 (2003), s. 1175-1192 ISSN 0010-0765 R&D Projects: GA ČR GA104/03/1555 Institutional research plan: CEZ:AV0Z4072921 Keywords : group contribution model * thermodynamics * chloroalkanes-linear ketones Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.041, year: 2003