WorldWideScience

Sample records for models introduce mixtures

  1. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Programs and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, in order to identify unwanted program effects in time and deal with them, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  2. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  3. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  4. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among…

  5. Introducing and modeling inefficiency contributions

    DEFF Research Database (Denmark)

    Asmild, Mette; Kronborg, Dorte; Matthews, Kent

    2016-01-01

    Whilst Data Envelopment Analysis (DEA) is the most commonly used non-parametric benchmarking approach, the interpretation and application of DEA results can be limited by the fact that radial improvement potentials are identified across variables. In contrast, Multi-directional Efficiency Analysis (MEA) … so-called inefficiency contributions, which are defined as the relative contributions from specific variables to the overall levels of inefficiency. A statistical model for distinguishing the inefficiency contributions between subgroups is proposed and the method is illustrated on a data set on Chinese banks.

  6. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    Knowledge of the thermodynamic properties and phase equilibria of mixtures containing carbon dioxide (CO2) is important in several industrial processes such as enhanced oil recovery, carbon capture and storage, and supercritical extractions, where CO2 is used as a solvent. Despite this importance, accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be due to the fact that CO2 has a large quadrupole moment which the classical models … and with or without introducing an additional pure compound parameter. In the absence of quadrupolar compounds qCPA reduces to CPA, which itself reduces to SRK in the absence of association. As the number of adjustable parameters in a thermodynamic model increases, the parameter estimation problem becomes increasingly…

  7. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (a drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
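
    As a reading aid, the mixture and relative quantities such a model tracks can be written in standard two-phase notation (these are generic drift-flux definitions, not equations quoted from the paper; α is the volume fraction of phase 1):

    $$
    \rho = \alpha\rho_1 + (1-\alpha)\rho_2, \qquad
    \rho\,u = \alpha\rho_1 u_1 + (1-\alpha)\rho_2 u_2, \qquad
    u_r = u_1 - u_2,
    $$

    with the drift relation supplying u_r algebraically (from the drag balance) in place of a conservation law.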

  8. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.
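
    For orientation, the model class proposed here is a finite mixture of regressions whose weights, means and variances are all smooth functions of the covariate, estimated by kernel regression (standard notation):

    $$
    f(y \mid x) = \sum_{k=1}^{K} \pi_k(x)\,\phi\bigl(y;\, m_k(x),\, \sigma_k^2(x)\bigr),
    \qquad \sum_{k=1}^{K} \pi_k(x) = 1,
    $$

    where φ(·; m, σ²) denotes the normal density.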

  9. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  10. Introducing risk modeling in corporate finance

    Directory of Open Access Journals (Sweden)

    Domingo Castelo Joaquin

    2013-12-01

    This paper aims to introduce simulation modeling in the context of a simplified capital budgeting problem. It walks the reader from creating and running a simulation in a spreadsheet environment to interpreting simulation results to gain insight and understanding about the problem. The uncertainty lies primarily in the level of sales in the first year of the project and in the growth rate of sales thereafter, manufacturing cost as a percentage of sales, and the salvage value of fixed assets. The simulation is carried out within a spreadsheet environment using @Risk.
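
    A minimal sketch of the same exercise outside a spreadsheet (plain NumPy instead of @Risk; all input distributions and figures below are hypothetical illustration values, not numbers from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, years, rate = 100_000, 5, 0.10

    # Hypothetical uncertain inputs, mirroring the paper's setup:
    sales0 = rng.triangular(8_000, 10_000, 13_000, n)   # first-year sales
    growth = rng.normal(0.05, 0.02, (n, years - 1))     # yearly sales growth
    cost_pct = rng.uniform(0.55, 0.70, n)               # mfg cost as % of sales
    salvage = rng.normal(1_500, 300, n)                 # salvage value, year 5
    invest = 25_000                                     # fixed initial outlay

    # Build the sales paths and discount the resulting cash flows.
    sales = np.cumprod(np.hstack([sales0[:, None], 1 + growth]), axis=1)
    cash = sales * (1 - cost_pct[:, None])
    disc = (1 + rate) ** -np.arange(1, years + 1)
    npv = cash @ disc + salvage * disc[-1] - invest

    print("mean NPV:", npv.mean(), " P(NPV<0):", (npv < 0).mean())
    ```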

  11. Modeling amplitude SAR image with the Cauchy-Rayleigh mixture

    Science.gov (United States)

    Peng, Qiangqiang; Du, Qingyu; Yao, Yinwei; Huang, Huang

    2017-10-01

    In this paper, we introduce a novel mixture model for SAR amplitude images, proposed as an approximation to the heavy-tailed Rayleigh model. The limitation of the heavy-tailed Rayleigh model in SAR image applications is discussed. We also present an expectation-maximization (EM) algorithm based parameter estimation method for the Cauchy-Rayleigh mixture. We test the new model on simulated data in order to confirm that it approximates the heavy-tailed Rayleigh model appropriately; the performance is evaluated by statistic values (a cumulative square error (CSE) of 0.99 and the Kolmogorov-Smirnov (K-S) distance). The performance of the proposed mixture model is then tested on real SAR images and compared with other models, including the heavy-tailed Rayleigh and Nakagami mixture models. The result indicates that the proposed model can be an optional statistical model for amplitude SAR images.

  12. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    OpenAIRE

    Ruseckaite, Aiste

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients’ proportions sum to one. The thesis begins by introducing mixture models in the choice context and develops new optimal design construction algorithms for choice experiments involving mixtures. Building further, varying the total amount of a mixture, and not only its i...

  13. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  14. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients’ proportions sum to one. The thesis begins by introducing mixture models in the choice context and develops new optimal design construction algorithms for choice experiments involving mixtures. Building further, varying the total amount of a mixture, and not only its…

  15. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models…

  16. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. The suggested models are intended for incorporation into an existing analysis tool, CyNC, based on the MATLAB/Simulink framework for graphical system analysis and design.

  17. Probabilistic mixture-based image modelling

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2011-01-01

    Vol. 47, No. 3 (2011), pp. 482-500. ISSN 0023-5954. R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593. Grant - others: CESNET(CZ) 387/2010; GA MŠk(CZ) 2C06019; GA ČR(CZ) GA103/11/0335. Institutional research plan: CEZ:AV0Z10750506. Keywords: BTF texture modelling * discrete distribution mixtures * Bernoulli mixture * Gaussian mixture * multi-spectral texture modelling. Subject RIV: BD - Theory of Information. Impact factor: 0.454, year: 2011. http://library.utia.cas.cz/separaty/2011/RO/haindl-0360244.pdf

  18. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and the antioxidant response were determined. Metal mixture treatment induced an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while a total block of the transforming capacity was observed in the promotion phase. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  19. A Gaussian Mixture Model for Nulling Pulsars

    Science.gov (United States)

    Kaplan, D. L.; Swiggum, J. K.; Fichtenbauer, T. D. J.; Vallisneri, M.

    2018-03-01

    The phenomenon of pulsar nulling—where pulsars occasionally turn off for one or more pulses—provides insight into pulsar-emission mechanisms and the processes by which pulsars turn off when they cross the “death line.” However, while ever more pulsars are found that exhibit nulling behavior, the statistical techniques used to measure nulling are biased, with limited utility and precision. In this paper, we introduce an improved algorithm, based on Gaussian mixture models, for measuring pulsar nulling behavior. We demonstrate this algorithm on a number of pulsars observed as part of a larger sample of nulling pulsars, and show that it performs considerably better than existing techniques, yielding better precision and no bias. We further validate our algorithm on simulated data. Our algorithm is widely applicable to a large number of pulsars even if they do not show obvious nulls. Moreover, it can be used to derive nulling probabilities for individual pulses, which can be used for in-depth studies.
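
    As an illustration of the idea (not the authors' code; a minimal sketch assuming per-pulse on-pulse intensities are available as a NumPy array), a two-component Gaussian mixture yields a posterior null probability for each individual pulse:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical data: measured on-pulse intensities, one value per pulse.
    rng = np.random.default_rng(0)
    intensities = np.concatenate([
        rng.normal(0.0, 1.0, 300),   # nulls: noise around zero
        rng.normal(5.0, 1.5, 700),   # emission: positive intensities
    ]).reshape(-1, 1)

    # Fit a two-component mixture; the component with the lower mean
    # is interpreted as the "null" state.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    null_comp = int(np.argmin(gmm.means_.ravel()))

    # Posterior probability that each individual pulse is a null,
    # and the overall nulling fraction as the null component's weight.
    p_null_per_pulse = gmm.predict_proba(intensities)[:, null_comp]
    print("nulling fraction ~", gmm.weights_[null_comp])
    ```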

  20. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
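
    For context, the mixture cure model referred to here has the standard two-part form: an incidence part giving the probability π(z) that a subject is uncured, and a latency part S_u(t|x), the survival function of the uncured (standard notation, not specific to this paper):

    $$
    S(t \mid x, z) = 1 - \pi(z) + \pi(z)\, S_u(t \mid x),
    $$

    so 1 − π(z) is the cured fraction, and the residual-based diagnostics above target the latency part S_u.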

  1. RIM: A Random Item Mixture Model to Detect Differential Item Functioning

    Science.gov (United States)

    Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
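
    In standard notation, the RIM couples the Rasch model, with person ability θ_p and item difficulty b_i, to a mixture distribution on the item difficulties (one component per DIF class); this is a sketch of the structure, not the paper's exact parameterisation:

    $$
    P(X_{pi}=1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},
    \qquad
    b_i \sim \sum_{g=1}^{2} \pi_g\, N(\mu_g, \sigma_g^2).
    $$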

  2. RIM: A random item mixture model to detect Differential Item Functioning

    NARCIS (Netherlands)

    Frederickx, S.; Tuerlinckx, T.; de Boeck, P.; Magis, D.

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is…

  3. Exact Fit of Simple Finite Mixture Models

    Directory of Open Access Journals (Sweden)

    Dirk Tasche

    2014-11-01

    How to forecast next year’s portfolio-wide credit default rate based on last year’s default observations and the current score distribution? A classical approach to this problem consists of fitting a mixture of the conditional score distributions observed last year to the current score distribution. This is a special (simple) case of a finite mixture model where the mixture components are fixed and only the weights of the components are estimated. The optimum weights provide a forecast of next year’s portfolio-wide default rate. We point out that the maximum-likelihood (ML) approach to fitting the mixture distribution not only gives an optimum but even an exact fit if we allow the mixture components to vary but keep their density ratio fixed. From this observation we can conclude that the standard default rate forecast based on last year’s conditional default rates will always be located between last year’s portfolio-wide default rate and the ML forecast for next year. As an application example, cost quantification is then discussed. We also discuss how the mixture model based estimation methods can be used to forecast total loss. This involves the reinterpretation of an individual classification problem as a collective quantification problem.
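
    A minimal sketch of the weights-only fit described in this abstract (hypothetical score data; the two mixture components are kernel estimates of last year's score densities among defaulters and non-defaulters, held fixed, and only the default-rate weight w is estimated by maximum likelihood):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    # Hypothetical conditional score samples observed last year.
    scores_default = rng.normal(400, 60, 500)      # defaulters
    scores_good = rng.normal(600, 80, 9500)        # non-defaulters
    f_d = gaussian_kde(scores_default)             # fixed component densities
    f_g = gaussian_kde(scores_good)

    # Current portfolio scores whose mixture weight we want.
    scores_now = rng.normal(580, 90, 5000)

    def neg_loglik(w):
        # Mixture density w*f_d + (1-w)*f_g evaluated at current scores.
        return -np.sum(np.log(w * f_d(scores_now) + (1 - w) * f_g(scores_now)))

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print("ML forecast of next year's default rate:", res.x)
    ```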

  4. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss the relation between supervised and unsupervised learning in the test data. Finally, we implement a novelty detector based on the density model.

  5. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose

    2013-01-01

    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection...

  6. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
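
    For reference, the Plackett-Luce probability of a complete ranking σ of J items with positive support parameters p_1, …, p_J, and its finite mixture over G latent groups, take the standard form:

    $$
    P(\sigma \mid p) = \prod_{t=1}^{J} \frac{p_{\sigma(t)}}{\sum_{s=t}^{J} p_{\sigma(s)}},
    \qquad
    P(\sigma) = \sum_{g=1}^{G} w_g\, P\bigl(\sigma \mid p^{(g)}\bigr),
    $$

    with partial rankings handled by truncating the product at the number of ranked items.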

  7. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  8. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM

    1998-01-01

    We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample…

  9. Probabilistic Discrete Mixtures Colour Texture Models

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2008-01-01

    Vol. 2008, No. 5197 (2008), pp. 675-682. ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition /13./, Havana, 09.09.2008-12.09.2008]. R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594; GA ČR GA102/08/0593. Grant - others: GA MŠk(CZ) 2C06019. Institutional research plan: CEZ:AV0Z10750506. Keywords: discrete distribution mixtures * EM algorithm * texture modeling. Subject RIV: BD - Theory of Information. http://library.utia.cas.cz/separaty/2008/RO/haindl-havlicek-grim-probabilistic%20discrete%20mixtures%20colour%20texture%20models.pdf

  10. Texture modelling by discrete distribution mixtures

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Haindl, Michal

    2003-01-01

    Vol. 41, No. 3-4 (2003), pp. 603-615. ISSN 0167-9473. R&D Projects: GA ČR GA102/00/0030; GA AV ČR KSK1019101. Institutional research plan: CEZ:AV0Z1075907. Keywords: discrete distribution mixtures * EM algorithm * texture modelling. Subject RIV: JC - Computer Hardware; Software. Impact factor: 0.711, year: 2003

  11. Text document classification based on mixture models

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Vol. 40, No. 3 (2004), pp. 293-304. ISSN 0023-5954. R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101. Institutional research plan: CEZ:AV0Z1075907. Keywords: text classification * text categorization * multinomial mixture model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.224, year: 2004

  12. Computational aspects of N-mixture models.

    Science.gov (United States)

    Dennis, Emily B; Morgan, Byron J T; Ridout, Martin S

    2015-03-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105-115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. © 2014 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
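
    A minimal sketch of the truncated N-mixture likelihood discussed here (toy counts; the latent-abundance sum is cut at a bound K, and, as the abstract warns, too small a default K can bias the abundance estimate):

    ```python
    import numpy as np
    from scipy.stats import poisson, binom

    def nmixture_loglik(y, lam, p, K):
        """Log-likelihood of the N-mixture model with truncation bound K.

        y   : (n_sites, n_visits) array of counts
        lam : Poisson abundance mean
        p   : per-individual detection probability
        K   : upper bound for the latent-abundance summation
        """
        Ns = np.arange(K + 1)                         # latent abundances 0..K
        prior = poisson.pmf(Ns, lam)                  # P(N)
        ll = 0.0
        for site_counts in y:
            # P(y_1,...,y_T | N) for each candidate N, then mix over N.
            cond = np.prod(binom.pmf(site_counts[None, :], Ns[:, None], p), axis=1)
            ll += np.log(np.sum(prior * cond))
        return ll

    y = np.array([[1, 0, 2], [0, 0, 1], [3, 2, 2]])   # toy data
    for K in (10, 50, 200):                           # K must be large enough
        print(K, nmixture_loglik(y, lam=5.0, p=0.3, K=K))
    ```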

  13. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.

  14. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

    The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.

  15. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  16. Introducing Recovery Style for Modeling and Analyzing System Recovery

    NARCIS (Netherlands)

    Sözer, Hasan; Tekinerdogan, B.; Kruchten, P.; Garlan, D.; Woods, E.

    An analysis of the existing approaches for representing architectural views reveals that they focus mainly on functional concerns and are limited when considering quality concerns. We introduce the recovery style for modeling the structure of the system related to the recovery concern. The recovery…

  17. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    Science.gov (United States)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered…

  18. Modeling and analysis of personal exposures to VOC mixtures using copulas

    Science.gov (United States)

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs), with the aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models is evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver…
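
    A minimal sketch of the copula idea (hypothetical marginals and correlation; a Gaussian copula couples two lognormal VOC exposure distributions through a correlation parameter while preserving each marginal exactly):

    ```python
    import numpy as np
    from scipy.stats import norm, lognorm

    rng = np.random.default_rng(2)

    # Hypothetical lognormal marginals for two VOC exposures (ug/m3).
    marg_benzene = lognorm(s=1.0, scale=2.0)
    marg_toluene = lognorm(s=0.8, scale=10.0)

    # Gaussian copula: draw correlated normals, map through the normal CDF
    # to uniforms, then through the inverse marginal CDFs.
    rho = 0.6
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
    u = norm.cdf(z)
    benzene = marg_benzene.ppf(u[:, 0])
    toluene = marg_toluene.ppf(u[:, 1])

    # Mixture fraction: benzene's share of the total mixture concentration.
    mix_frac = benzene / (benzene + toluene)
    print("median benzene mixture fraction:", np.median(mix_frac))
    ```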

  19. Perfect posterior simulation for mixture and hidden Markov models

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Breyer, Laird A.; Roberts, Gareth O.

    2010-01-01

    In this paper we present an application of the read-once coupling from the past algorithm to problems in Bayesian inference for latent statistical models. We describe a method for perfect simulation from the posterior distribution of the unknown mixture weights in a mixture model. Our method...... is extended to a more general mixture problem, where unknown parameters exist for the mixture components, and to a hidden Markov model....

  20. Processing tree point clouds using Gaussian Mixture Models

    Directory of Open Access Journals (Sweden)

    D. Belton

    2013-10-01

    While traditionally used in surveying and photogrammetric fields, laser scanning is increasingly being used for a wider range of more general applications. In addition to the issues typically associated with processing point data, such applications raise a number of new complications, such as the complexity of the scenes scanned, along with the sheer volume of data. Consequently, automated procedures are required for processing and analysing such data. This paper introduces a method for modelling multi-modal, geometrically complex objects in terrestrial laser scanning point data; specifically, the modelling of trees. The modelling method combines a number of geometric features with a multi-modal machine learning technique. The model can then be used for contextually dependent region growing by separating the tree into its component parts at the point level. Subsequently, object analysis can be performed, for example volumetric analysis of a tree after removing points associated with leaves. The workflow for this process is as follows: isolate individual trees within the scanned scene, train a Gaussian mixture model (GMM), separate clusters within the mixture model according to exemplar points determined by the GMM, grow the structure of the tree, and then perform volumetric analysis on the structure.

  1. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform … The approach is compared with an existing implementation consisting of a distributed PI controller structure, both in terms of minimising the overall cost and in terms of the ability to minimise deviation, which is the classical objective.
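
    For context, a generic MPC formulation of such a balancing problem minimises, at every sample, a finite-horizon cost subject to linear portfolio dynamics (a textbook form, not the paper's exact controller):

    $$
    \min_{u_0,\dots,u_{N-1}} \sum_{k=0}^{N-1} \Bigl( \lVert x_k - r_k \rVert_Q^2 + \lVert u_k \rVert_R^2 \Bigr)
    \quad \text{s.t.} \quad x_{k+1} = A x_k + B u_k, \; u_k \in \mathcal{U},
    $$

    where x_k is the production/consumption imbalance, r_k the reference, and only the first move u_0 is applied before the problem is re-solved at the next sample.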

  2. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with a single index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the newly proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  3. Introducing A Hybrid Data Mining Model to Evaluate Customer Loyalty

    Directory of Open Access Journals (Sweden)

    H. Alizadeh

    2016-12-01

    The main aim of this study was to introduce a comprehensive model for evaluating bank customers' loyalty based on the assessment and comparison of different clustering methods' performance. This study also pursues the following specific objectives: (a) using different clustering methods and comparing them for customer classification, (b) finding the effective variables in determining customer loyalty, and (c) using different collective classification methods to increase the modeling accuracy and comparing the results with the basic methods. Since loyal customers generate more profit, this study aims at introducing a two-step model for the classification of customers and their loyalty. For this purpose, various methods of clustering such as K-medoids, X-means and K-means were used, the last of which outperformed the other two when compared with the Davies-Bouldin index. Customers were clustered using K-means, and the members of the resulting four clusters were analyzed and labeled. Then, a predictive model was run based on demographic variables of customers using various classification methods such as DT (Decision Tree), ANN (Artificial Neural Networks), NB (Naive Bayes), KNN (K-Nearest Neighbors) and SVM (Support Vector Machine), as well as their bagging and boosting variants, to predict the class of loyal customers. The results showed that bagging-ANN was the most accurate method in predicting loyal customers. This two-stage model can be used in banks and financial institutions with similar data to identify the type of future customers.

  4. GLIMMIX : Software for estimating mixtures and mixtures of generalized linear models

    NARCIS (Netherlands)

    Wedel, M

    2001-01-01

    GLIMMIX is a commercial WINDOWS-based computer program that implements the EM algorithm (Dempster, Laird and Rubin 1977) for the estimation of finite mixtures and mixtures of generalized linear models. The program allows for the specification of a number of distributions in the exponential family,…

  5. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  6. mixtools: An R Package for Analyzing Mixture Models

    Directory of Open Access Journals (Sweden)

    Tatiana Benaglia

    2009-10-01

    The mixtools package for R provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.
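
    For readers without R at hand, here is a from-scratch sketch, in Python rather than the mixtools API, of the two-component univariate normal-mixture EM that a function like mixtools' normalmixEM implements:

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_normal_mixture(x, n_iter=200):
        """EM for a two-component univariate Gaussian mixture."""
        # Crude initialisation from the data quantiles.
        w = 0.5
        mu = np.quantile(x, [0.25, 0.75])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            # E-step: posterior responsibility of component 0 per point.
            d0 = w * norm.pdf(x, mu[0], sd[0])
            d1 = (1 - w) * norm.pdf(x, mu[1], sd[1])
            r = d0 / (d0 + d1)
            # M-step: weighted updates of weight, means, std deviations.
            w = r.mean()
            mu = np.array([np.average(x, weights=r),
                           np.average(x, weights=1 - r)])
            sd = np.sqrt([np.average((x - mu[0])**2, weights=r),
                          np.average((x - mu[1])**2, weights=1 - r)])
        return w, mu, sd

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
    print(em_normal_mixture(x))
    ```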

  7. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…

  8. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs is presented, which includes topics like continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and the multi-response model. Robust mixture designs and mixture designs in blocks are also reviewed. Moreover, some applications of mixture desig...

  9. A mechanistic model for rational design of optimal cellulase mixtures.

    Science.gov (United States)

    Levine, Seth E; Fox, Jerome M; Clark, Douglas S; Blanch, Harvey W

    2011-11-01

    A model-based framework is described that permits the optimal composition of cellulase enzyme mixtures to be found for lignocellulose hydrolysis. The rates of hydrolysis are shown to be dependent on the nature of the substrate. For bacterial microcrystalline cellulose (BMCC) hydrolyzed by a ternary cellulase mixture of EG2, CBHI, and CBHII, the optimal predicted mixture was 1:0:1 EG2:CBHI:CBHII at 24 h and 1:1:0 at 72 h, at loadings of 10 mg enzyme per g substrate. The model was validated with measurements of soluble cello-oligosaccharide production from BMCC during both single enzyme and mixed enzyme hydrolysis. Three-dimensional diagrams illustrating cellulose conversion were developed for mixtures of EG2, CBHI, CBHII acting on BMCC and predicted for other substrates with a range of substrate properties. Model predictions agreed well with experimental values of conversion after 24 h for a variety of enzyme mixtures. The predicted mixture performances for substrates with varying properties demonstrated the effects of initial degree of polymerization (DP) and surface area on the performance of cellulase mixtures. For substrates with a higher initial DP, endoglucanase enzymes accounted for a larger fraction of the optimal mixture. Substrates with low surface areas showed significantly reduced hydrolysis rates regardless of mixture composition. These insights, along with the quantitative predictions, demonstrate the utility of this model-based framework for optimizing cellulase mixtures. Copyright © 2011 Wiley Periodicals, Inc.

  10. Aggregate crash prediction models: introducing crash generation concept.

    Science.gov (United States)

    Naderan, Ali; Shahi, Jalil

    2010-01-01

    Safety-conscious planning is a new proactive approach towards understanding crashes. It requires a planning-level decision-support tool to facilitate a proactive approach to assessing the safety effects of alternative urban planning scenarios. The objective of this research study is to develop a series of aggregate crash prediction models (ACPM) that are consistent with the trip generation step of the conventional four-step demand models. The concept of crash generation models (CGMs) is introduced, utilizing trip generation data in a generalized linear regression with the assumption of a negative binomial error structure. The relationship between crash frequencies in traffic analysis zones (TAZ) and the number of trips generated by purpose is investigated. This translates into immediate checking of the impact of future trip generations on crash frequencies in comprehensive transportation-planning studies (i.e. the ability to forecast crashes at each time step at which trips are forecast). A good relationship was observed between crash frequency and the number of trips produced/attracted by purpose per TAZ.

  11. Lattice Models of Amphiphile and Solvent Mixtures.

    Science.gov (United States)

    Brindle, David

    Available from UMI in association with The British Library. Materials based on amphiphilic molecules have a wide range of industrial applications and are of fundamental importance in the structure of many biological systems. Their importance derives from their behaviour as surface-active agents in solubilization applications and because of their ability to form systems with varying degrees of structural order such as micelles, bilayers and liquid crystal phases. The nature of the molecular ordering is of importance both during the processing of these materials and in their final application. A Monte Carlo simulation of a three dimensional lattice model of an amphiphile and solvent mixture has been developed as an extension of earlier work in two dimensions. In the earlier investigation the simulation was carried out with three segment amphiphiles on a two dimensional lattice and cluster size distributions were determined for a range of temperatures, amphiphile concentrations and intermolecular interaction energies. In the current work, a wider range of structures are observed including micelles, bilayers and a vesicle. The structures are studied as a function of temperature, chain length, amphiphile concentration and intermolecular interaction energies. Clusters are characterised according to their shape, size and surface roughness. A detailed temperature -concentration phase diagram is presented for a system with four segment amphiphiles. The phase diagram shows a critical micelle concentration (c.m.c.) at low amphiphile concentrations and a transition from a bicontinuous to lamellar region at amphiphile concentrations around 50%. At high amphiphile concentrations, there is some evidence for the formation of a gel. The results obtained question the validity of current models of the c.m.c. The Monte Carlo simulations require extensive computing power and the simulation was carried out on a transputer array, where the parallel architecture allows high speed. The

  12. Introducing Charge Hydration Asymmetry into the Generalized Born Model.

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Aguilar, Boris H; Tolokh, Igor S; Onufriev, Alexey V

    2014-04-08

    The effect of charge hydration asymmetry (CHA), the non-invariance of solvation free energy upon solute charge inversion, is missing from the standard linear response continuum electrostatics. The proposed charge hydration asymmetric generalized Born (CHA-GB) approximation introduces this effect into the popular generalized Born (GB) model. The CHA is added to the GB equation via an analytical correction that quantifies the specific propensity for CHA of a given water model; the latter is determined by the charge distribution within the water model. Significant variations in CHA seen in explicit water (TIP3P, TIP4P-Ew, and TIP5P-E) free energy calculations on charge-inverted "molecular bracelets" are closely reproduced by CHA-GB, with accuracy similar to models such as SEA and 3D-RISM that go beyond the linear response. Compared against reference explicit (TIP3P) electrostatic solvation free energies, CHA-GB shows about a 40% improvement in accuracy over the canonical GB, tested on a diverse set of 248 rigid small neutral molecules (root mean square error, rmse = 0.88 kcal/mol for CHA-GB vs 1.24 kcal/mol for GB) and 48 conformations of amino acid analogs (rmse = 0.81 kcal/mol vs 1.26 kcal/mol). CHA-GB employs a novel definition of the dielectric boundary that does not subsume the CHA effects into the intrinsic atomic radii. The strategy leads to finding a new set of intrinsic atomic radii optimized for CHA-GB; these radii show physically meaningful variation with the atom type, in contrast to the radii set optimized for GB. Compared to several popular radii sets used with the original GB model, the new radii set shows better transferability between different classes of molecules.
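
    For reference, the canonical GB approximation that CHA-GB corrects estimates the electrostatic solvation free energy in the standard Still-type form (R_i are effective Born radii, ε the solvent dielectric constant):

    $$
    \Delta G_{\mathrm{solv}} \approx -\frac{1}{2}\left(1 - \frac{1}{\epsilon}\right) \sum_{i,j} \frac{q_i q_j}{f_{ij}^{\mathrm{GB}}},
    \qquad
    f_{ij}^{\mathrm{GB}} = \sqrt{r_{ij}^2 + R_i R_j \exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right)},
    $$

    to which CHA-GB adds an analytical correction term parameterised by the water model's charge distribution.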

  13. Lattice Boltzmann model for thermal binary-mixture gas flows.

    Science.gov (United States)

    Kang, Jinfen; Prasianakis, Nikolaos I; Mantzaras, John

    2013-05-01

    A lattice Boltzmann model for thermal gas mixtures is derived. The kinetic model is designed in a way that combines properties of two previous literature models, namely, (a) a single-component thermal model and (b) a multicomponent isothermal model. A comprehensive platform for the study of various practical systems involving multicomponent mixture flows with large temperature differences is constructed. The governing thermohydrodynamic equations include the mass, momentum, energy conservation equations, and the multicomponent diffusion equation. The present model is able to simulate mixtures with adjustable Prandtl and Schmidt numbers. Validation in several flow configurations with temperature and species concentration ratios up to nine is presented.

  14. A Multilevel Mixture IRT Model with an Application to DIF

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.

    2010-01-01

    Mixture item response theory models have been suggested as a potentially useful methodology for identifying latent groups formed along secondary, possibly nuisance dimensions. In this article, we describe a multilevel mixture item response theory (IRT) model (MMixIRTM) that allows for the possibility that this nuisance dimensionality may function…

  15. Introducing Earth Sciences Students to Modeling Using MATLAB Exercises

    Science.gov (United States)

    Anderson, R. S.

    2003-12-01

    While we subject our students to math, physics, and chemistry courses to complement their geological studies, we rarely allow them to experience the joys of modeling earth systems. Given the degree to which modern earth science relies upon models of complex systems, it seems appropriate to allow our students to develop some experience with this activity. In addition, as modeling is an unforgivingly logical exercise, it demands that the student absorb the fundamental concepts, the assumptions behind them, and the means of constraining the relevant parameters in a problem. These concepts commonly include conservation of some quantity, the fluxes of that quantity, and careful prescription of the boundary and initial conditions. I have used MATLAB as an entrance to this world, and will illustrate the products of the exercises we have worked through. This software is platform-independent, and has a wonderful graphics package (including movies) that is embedded intimately as one-to-several-line calls. The exercises should follow a progression from simple to complex, and serve to introduce the many discrete tasks within modeling. I advocate full immersion in the first exercise. Example exercises include: growth of spatter cones (summation of parabolic trajectories of lava bombs); response of thermal profiles in the earth to varying surface temperature (thermal conduction); hillslope or fault scarp evolution (topographic diffusion); growth and subsidence of volcanoes (flexure); and coral growth on a subsiding platform in the face of sea-level fluctuations (coral biology and light extinction). These exercises can be motivated by reading a piece in the classical or modern literature that either describes a model or, better yet, serves to describe the system well but does not present a model. I have found that the generation of movies from even the early simulation exercises serves as an additional motivator for students. We discuss the models in each class meeting, and learn that there…

  16. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by highly efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
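
    In standard notation, the two model classes compared here differ only in their mixing weights: the standard mixture uses fixed weights π_k, while the concomitant-variable mixture lets the weights depend on covariates z (e.g. through a logistic link):

    $$
    P(y \mid x, z) = \sum_{k=1}^{K} \pi_k(z)\, \frac{\lambda_k(x)^{y}\, e^{-\lambda_k(x)}}{y!},
    \qquad
    \lambda_k(x) = \exp\bigl(x^{\top}\beta_k\bigr).
    $$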

  17. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    Science.gov (United States)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  18. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
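
    For reference, a Dirichlet process mixture replaces the finite mixing distribution with a random measure G ~ DP(α, G₀); in the standard stick-breaking construction (not specific to this paper),

    $$
    G = \sum_{k=1}^{\infty} \pi_k\, \delta_{\theta_k}, \qquad
    \pi_k = v_k \prod_{j<k} (1 - v_j), \quad v_k \sim \mathrm{Beta}(1, \alpha), \quad \theta_k \sim G_0,
    $$

    so the effective number of tissue classes is inferred from the data rather than fixed in advance.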

  19. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  20. Introducing Students to Gas Chromatography-Mass Spectrometry Analysis and Determination of Kerosene Components in a Complex Mixture

    Science.gov (United States)

    Pacot, Giselle Mae M.; Lee, Lyn May; Chin, Sung-Tong; Marriott, Philip J.

    2016-01-01

    Gas chromatography-mass spectrometry (GC-MS) and GC-tandem MS (GC-MS/MS) are useful in many separation and characterization procedures. GC-MS is now a common tool in industry and research, and increasingly, GC-MS/MS is applied to the measurement of trace components in complex mixtures. This report describes an upper-level undergraduate experiment…

  1. Extending Growth Mixture Models Using Continuous Non-Elliptical Distributions

    OpenAIRE

    Wei, Yuhong; Tang, Yang; Shireman, Emilie; McNicholas, Paul D.; Steinley, Douglas L.

    2017-01-01

    Growth mixture models (GMMs) incorporate both conventional random effects growth modeling and latent trajectory classes as in finite mixture modeling; therefore, they offer a way to handle the unobserved heterogeneity between subjects in their development. GMMs with Gaussian random effects dominate the literature. When the data are asymmetric and/or have heavier tails, more than one latent class is required to capture the observed variable distribution. Therefore, a GMM with continuous non-el...

  2. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Dynamic functional connectivity (dFC) has recently become a popular way of tracking the temporal evolution of the brain's functional integration. However, there does not seem to be a consensus on how to choose the complexity, i.e. the number of brain states, and the time-scale of the dynamics, i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and numbers of dynamic components and includes the static one-component model as a special case. We exploit the fact that the WMM framework provides model selection by quantifying the model's generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting for each window length the predictive...

  3. Cry-Based Classification of Healthy and Sick Infants Using Adapted Boosting Mixture Learning Method for Gaussian Mixture Models

    Directory of Open Access Journals (Sweden)

    Hesam Farsaie Alaie

    2012-01-01

    We make use of the information in an infant's cry signal in order to identify the infant's psychological condition. Gaussian mixture models (GMMs) are applied to distinguish between healthy full-term and premature infants, and those with specific medical problems available in our cry database. The cry pattern for each pathological condition is created by using the adapted boosting mixture learning (BML) method to estimate the mixture model parameters. In the first experiment, test results demonstrate that the introduced adapted BML method for learning GMMs performs better than the conventional EM-based re-estimation algorithm used as a reference system in the multi-pathological classification task. This newborn cry-based diagnostic system (NCDS) extracted Mel-frequency cepstral coefficients (MFCCs) as a feature vector for the cry patterns of newborn infants. In the binary classification experiment, the system discriminated a test infant's cry signal into one of two groups, namely healthy and pathological, based on MFCCs. The binary classifier achieved a true positive rate of 80.77% and a true negative rate of 86.96%, which shows the ability of the system to correctly identify healthy and diseased infants, respectively.
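    A minimal sketch of the MFCC-plus-per-class-GMM pipeline, with plain EM from scikit-learn standing in for the paper's adapted boosting mixture learning; file paths and component counts are placeholders.

```python
# One GMM per class, trained on MFCC frames; classification compares
# average frame log-likelihoods under the two class models.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13

def train_class_gmm(wav_paths, n_components=8):
    feats = np.vstack([mfcc_features(p) for p in wav_paths])
    return GaussianMixture(n_components, covariance_type="diag",
                           random_state=0).fit(feats)

def classify(wav_path, gmm_healthy, gmm_pathological):
    f = mfcc_features(wav_path)
    return ("healthy"
            if gmm_healthy.score(f) > gmm_pathological.score(f)
            else "pathological")
```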

  4. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for

  5. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  6. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  7. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures was presented with CPA. In this study the phase behavior of CO2-containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liqu...

  8. Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity

    Science.gov (United States)

    Christian, J. M.

    2017-09-01

    Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.
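    An illustrative numerical companion (not taken from the paper): integrating the full sin θ pendulum next to its linearisation at a large release angle makes the anharmonic lengthening of the period visible.

```python
# Compare the linearised pendulum with the full sin(theta) equation for
# a 120-degree release, where the small-angle assumption clearly fails.
import numpy as np
from scipy.integrate import solve_ivp

g_over_L = 9.81 / 1.0          # g/L for a 1 m pendulum
theta0 = np.radians(120)       # large amplitude

def full(t, s):                # s = [theta, omega]
    return [s[1], -g_over_L * np.sin(s[0])]

def linear(t, s):
    return [s[1], -g_over_L * s[0]]

t = np.linspace(0, 10, 2000)
for rhs, name in [(full, "nonlinear"), (linear, "linear")]:
    sol = solve_ivp(rhs, (0, 10), [theta0, 0.0], t_eval=t, rtol=1e-8)
    # zero crossings of theta reveal the quarter-period
    crossings = t[np.where(np.diff(np.sign(sol.y[0])))[0]]
    print(name, "first zero crossing at t =", crossings[0])
```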

  9. Introducing MERGANSER: A Flexible Framework for Ecological Niche Modeling

    Science.gov (United States)

    Klawonn, M.; Dow, E. M.

    2015-12-01

    Ecological Niche Modeling (ENM) is a collection of techniques to find a "fundamental niche", the range of environmental conditions suitable for a species' survival in the absence of inter-species interactions, given a set of environmental parameters. Traditional approaches to ENM face a number of obstacles including limited data accessibility, data management problems, computational costs, interface usability, and model validation. The MERGANSER system, which stands for Modeling Ecological Residency Given A Normalized Set of Environmental Records, addresses these issues through powerful data persistence and flexible data access, coupled with a clear presentation of results and fine-tuned control over model parameters. MERGANSER leverages data measuring 72 weather related phenomena, land cover, soil type, population, species occurrence, general species information, and elevation, totaling over 1.5 TB of data. To the best of the authors' knowledge, MERGANSER uses higher-resolution spatial data sets than previously published models. Since MERGANSER stores data in an instance of Apache SOLR, layers generated in support of niche models are accessible to users via simplified Apache Lucene queries. This is made even simpler via an HTTP front end that generates Lucene queries automatically. Specifically, a user need only enter the name of a place and a species to run a model. Using this approach to synthesizing model layers, the MERGANSER system has successfully reproduced previously published niche model results with a simplified user experience. Input layers for the model are generated dynamically using OpenStreetMap and SOLR's spatial search functionality. Models are then run using either user-specified or automatically determined parameters after normalizing them into a common grid. Finally, results are visualized in the web interface, which allows for quick validation. Model results and all surrounding metadata are also accessible to the user for further study.
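    A hypothetical request illustrating the simplified-query idea; the endpoint, core name, and field names below are invented for illustration and are not documented parts of MERGANSER — only the general Solr select/query syntax is standard.

```python
# Hypothetical query against an assumed SOLR core named "merganser";
# the fields "species" and "place" are illustrative assumptions.
import requests

resp = requests.get(
    "http://localhost:8983/solr/merganser/select",
    params={"q": 'species:"Anas platyrhynchos" AND place:"Lake Erie"',
            "wt": "json"},
)
print(resp.json()["response"]["numFound"])
```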

  10. Introducing data-model assimilation to students of ecology.

    Science.gov (United States)

    Hobbs, N Thompson; Ogle, Kiona

    2011-07-01

    Quantitative training for students of ecology has traditionally emphasized two sets of topics: mathematical modeling and statistical analysis. Until recently, these topics were taught separately, modeling courses emphasizing mathematical techniques for symbolic analysis and statistics courses emphasizing procedures for analyzing data. We advocate the merger of these traditions in ecological education by outlining a curriculum for an introductory course in data-model assimilation. This course replaces the procedural emphasis of traditional introductory material in statistics with an emphasis on principles needed to develop hierarchical models of ecological systems, fusing models of data with models of ecological processes. We sketch nine elements of such a course: (1) models as routes to insight, (2) uncertainty, (3) basic probability theory, (4) hierarchical models, (5) data simulation, (6) likelihood and Bayes, (7) computational methods, (8) research design, and (9) problem solving. The outcome of teaching these combined elements can be the fundamental understanding and quantitative confidence needed by students to create revealing analyses for a broad array of research problems.

  11. Modeling self-assembly and phase behavior in complex mixtures.

    Science.gov (United States)

    Balazs, Anna C

    2007-01-01

    Using a variety of computational techniques, I investigate how the self-assembly of complex mixtures can be guided by surfaces or external stimuli to form spatially regular or temporally periodic patterns. Focusing on mixtures in confined geometries, I examine how thermodynamic and hydrodynamic effects can be exploited to create regular arrays of nanowires or monodisperse, particle-filled droplets. I also show that an applied light source and chemical reaction can be harnessed to create hierarchically ordered patterns in ternary, phase-separating mixtures. Finally, I consider the combined effects of confining walls and a chemical reaction to demonstrate that a swollen polymer gel can be driven to form dynamically periodic structures. In addition to illustrating the effectiveness of external factors in directing the self-organization of multicomponent mixtures, the selected examples illustrate how coarse-grained models can be used to capture both the equilibrium phase behavior and the dynamics of these complex systems.

  12. Introducing the Collaborative Learning Modeling Language (ColeML)

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe

    2014-01-01

    et al., 1998, p. 306; cf. Bundsgaard, 2005, p. 315ff.). At least five challenges can be identified (Barron et al., 1998; Bundsgaard, 2009, 2010; Gregersen & Mikkelsen, 2007; Krajcik et al., 1998): organizing collaboration, structuring workflows, integrating academic content, sharing products … of such a platform. The Collaborative Learning Modeling Language (ColeML) makes it possible to articulate complex designs for learning visually and to activate these design models as interactive learning materials. ColeML is based on research in workflow and business process modeling. The traditional approach in this area, represented by, for example, the Workflow Management Coalition (Hollingsworth, 1995) and the very widespread standard Business Process Modeling and Notation (BPMN), has been criticized on the basis of research in knowledge work processes. Inspiration for ColeML is found in this research area...

  13. A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS

    1995-01-01

    A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of

  14. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA-profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent actual mixing proportions can be rediscovered in the amp...
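    A hedged illustration of the distribution fit itself: a gamma model fitted to observed peak areas with SciPy. The project's actual model relates the gamma parameters to the known mixing proportions, which this single-distribution fit does not attempt; the numbers are made up.

```python
# Fit a gamma distribution to peak areas for one mixing proportion.
import numpy as np
from scipy import stats

peak_areas = np.array([1520., 1840., 1100., 1390., 1750.])  # made-up data
shape, loc, scale = stats.gamma.fit(peak_areas, floc=0.0)
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.1f}")
```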

  15. Introducing a moisture scheme to a nonhydrostatic sigma coordinate model

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2011-09-01

    and precipitation in mid-latitude cyclones. VII: A model for the 'seeder-feeder' process in warm-frontal rainbands. Journal of the Atmospheric Sciences, 40, 1185-1206. Stensrud DJ, 2007: Parameterization schemes. Keys to understanding numerical weather...

  16. An Equiratio Mixture Model for non-additive components: a case study for aspartame/acesulfame-K mixtures.

    Science.gov (United States)

    Schifferstein, H N

    1996-02-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for aspartame/sucrose mixtures. In this paper, the sweetness of aspartame/acesulfame-K mixtures in aqueous and acidic solutions is investigated. These two intensive sweeteners probably do not comply with the model's original assumption of sensory dependency among components. However, they reveal how the Equiratio Mixture Model could be modified to describe and predict mixture functions for non-additive substances. To predict equiratio functions for all similar tasting substances, a new Equiratio Mixture Model should yield accurate predictions for components eliciting similar intensities at widely differing concentration levels, and for substances exhibiting hypo- or hyperadditivity. In addition, it should be able to correct violations of Stevens's power law. These three problems are resolved in a model that uses equi-intense units as the measure of physical concentration. An interaction index in the formula for the constant accounts for the degree of interaction between mixture components. Deviations from the power law are corrected by a nonlinear response output transformation, assuming a two-stage model of psychophysical judgment.

  17. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter … occurring genes in the population. CONCLUSION: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  18. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
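    A sketch of the idea with synthetic stand-in data: a Gaussian mixture over (x, y, v_radial), fitted by EM, separates two clusters that overlap on the sky but differ in radial velocity.

```python
# Two synthetic clusters that overlap spatially but not in velocity;
# adding v_radial as a third feature lets EM pull them apart.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
cluster_a = np.column_stack([rng.normal(0, 1, 300),
                             rng.normal(0, 1, 300),
                             rng.normal(-10, 2, 300)])   # v_r ~ -10 km/s
cluster_b = np.column_stack([rng.normal(0.5, 1, 300),
                             rng.normal(0.5, 1, 300),
                             rng.normal(+15, 2, 300)])   # v_r ~ +15 km/s
data = np.vstack([cluster_a, cluster_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
membership = gmm.predict_proba(data)   # per-star cluster probabilities
print(gmm.means_)                      # recovered centres incl. velocity
```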

  19. Introducing serendipity in a social network model of knowledge diffusion

    International Nuclear Information System (INIS)

    Cremonini, Marco

    2016-01-01

    Highlights: • Serendipity as a control mechanism for knowledge diffusion in social network. • Local communication enhanced in the periphery of a network. • Prevalence of hub nodes in the network core mitigated. • Potential disruptive effect on network formation of uncontrolled serendipity. - Abstract: In this paper, we study serendipity as a possible strategy to control the behavior of an agent-based network model of knowledge diffusion. The idea of considering serendipity in a strategic way has been first explored in Network Learning and Information Seeking studies. After presenting the major contributions of serendipity studies to digital environments, we discuss the extension to our model: Agents are enriched with random topics for establishing new communication according to different strategies. The results show how important network properties could be influenced, like reducing the prevalence of hubs in the network’s core and increasing local communication in the periphery, similar to the effects of more traditional self-organization methods. Therefore, from this initial study, when serendipity is opportunistically directed, it appears to behave as an effective and applicable approach to social network control.

  1. Overfitting Bayesian Mixture Models with an Unknown Number of Components.

    Science.gov (United States)

    van Havre, Zoé; White, Nicole; Rousseau, Judith; Mengersen, Kerrie

    2015-01-01

    This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
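    Zmix itself is an R package built on prior parallel tempering MCMC; as a loose Python analogue of the overfitting idea only, a variational mixture with a sparse Dirichlet prior on the weights lets superfluous components shrink towards zero.

```python
# Deliberately overfitted mixture: a sparse weight prior encourages the
# extra components to take near-zero weights, analogous in spirit (not
# in machinery) to the overfitting approach described above.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-3, 1, 500),
                    rng.normal(+3, 1, 500)]).reshape(-1, 1)

fit = BayesianGaussianMixture(
    n_components=10,                   # far more than needed
    weight_concentration_prior=1e-3,   # pushes unused weights to ~0
    max_iter=1000, random_state=0).fit(y)
print(np.round(fit.weights_, 3))       # ~2 non-negligible components
```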

  2. Models for the computation of opacity of mixtures

    International Nuclear Information System (INIS)

    Klapisch, Marcel; Busquet, Michel

    2013-01-01

    We compare four models for the partial densities of the components of mixtures. These models yield different opacities as shown on polystyrene, acrylic and polyimide in local thermodynamical equilibrium (LTE). Two of these models, the ‘whole volume partial pressure’ model (M1) and its modification (M2) are not thermodynamically consistent (TC). The other two models are TC and minimize free energy. M3, the ‘partial volume equal pressure’ model, uses equality of chemical potential. M4 uses commonality of free electron density. The latter two give essentially identical results in LTE, but M4’s convergence is slower. M4 is easily generalized to non-LTE conditions. Non-LTE effects are shown by the variation of the Planck mean opacity of the mixtures with temperature and density. (paper)

  3. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  4. Firewaves: introducing a platform for modelling volcanic tsunamis

    Science.gov (United States)

    Paris, Raphaël; Ulvrova, Martina; Kelfoun, Karim; Giachetti, Thomas; Switzer, Adam

    2014-05-01

    When embracing all tsunamis generated by eruptive processes, rapid ground deformation and slope instability at volcanoes, "volcanic tsunamis" represent around 5% of all tsunamis listed for the last four centuries (>130 events since 1600 AD). About 20-25% of all fatalities directly attributable to volcanoes during the last 250 years have been caused by volcanic tsunamis (e.g. Krakatau 1883, Mayuyama 1792). Up to eight mechanisms are involved in the generation of volcanic tsunamis: underwater explosions, pyroclastic flows and lahars entering the water, earthquakes preceding or accompanying a volcanic eruption, flank failure, collapse of a coastal lava bench, caldera collapse, and the shock wave produced by a large explosion. It is unlikely that shock waves, lahars and collapses of lava benches can generate tsunamis with wave heights of more than 3 m. Pyroclastic flows, flank failures and caldera subsidence are the only source mechanisms likely to involve volumes larger than 1 km³. Volcanic tsunamis are characterised by short-period waves and greater dispersion compared to earthquake-generated tsunamis. With the exceptions of the 1888 Ritter Island and 1883 Krakatau tsunamis, 100% of the victims of volcanic tsunamis in Southeast Asia were less than 20 km from the volcano. Travel time of the waves from the volcano to a distance of 20 km is typically less than 15 minutes (Paris et al. 2014). In this setting, the priorities are (1) to improve the preparedness of populations around highlighted volcanoes, (2) to monitor the sea or lakes around volcanoes, and (3) to build a database of numerical simulations based on different eruptive scenarios. The Firewaves platform, hosted at the Magmas & Volcans laboratory in Clermont-Ferrand (France), is a numerical solution for modelling volcanic tsunamis of different sources. Tsunamis generated by volcanic mass flows (including pyroclastic flows, debris avalanches etc.) are simulated using the VolcFlow code (Kelfoun et al. 2010), and underwater explosions and caldera

  5. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek

    2012-04-01

    Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we have also equipped the package with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model fitting in all modeling variants. The package can also be applied to the selection of the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  6. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
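    A schematic, much-simplified version of the key constraint (1-D intensities, hand-rolled EM, no contiguity prior): the background classes keep the mean and variance estimated from a lesion-free volume of interest, and EM updates only the remaining classes.

```python
# Background-constrained GMM via EM: bg_mu/bg_sd come from a lesion-free
# VOI and stay frozen; only the n_free lesion classes are re-estimated.
import numpy as np
from scipy.stats import norm

def background_constrained_gmm(x, bg_mu, bg_sd, n_free=4, n_iter=100):
    mus = np.concatenate([bg_mu,
                          np.quantile(x, np.linspace(0.6, 0.95, n_free))])
    sds = np.concatenate([bg_sd, np.full(n_free, x.std())])
    K, n_bg = len(mus), len(bg_mu)
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        r = w * norm.pdf(x[:, None], mus, sds)          # E-step
        r /= r.sum(axis=1, keepdims=True) + 1e-300
        w = r.mean(axis=0)                              # M-step: weights
        for k in range(n_bg, K):                        # free classes only
            nk = r[:, k].sum()
            mus[k] = (r[:, k] * x).sum() / nk
            sds[k] = np.sqrt((r[:, k] * (x - mus[k])**2).sum() / nk) + 1e-6
    return w, mus, sds, r
```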

  7. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  8. Sparse Gaussian graphical mixture model | Lotsi | Afrika Statistika

    African Journals Online (AJOL)

    Abstract. This paper considers the problem of networks reconstruction from heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well known that parameter estimation in this context is challenging due to large numbers of variables coupled with the degenerate nature of the likelihood. We propose as ...

  9. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic...
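    For reference, the stock Stauffer & Grimson-style adaptive mixture model is available in OpenCV as MOG2; the sketch below shows standard usage of that baseline, without the two dynamic enhancements the paper adds.

```python
# Per-pixel adaptive Gaussian mixture background subtraction (MOG2).
import cv2

cap = cv2.VideoCapture("input.mp4")          # any video source
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)              # per-pixel foreground mask
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
cap.release()
```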

  10. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  11. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics; Volume 64; Issue 6. Polymer mixtures in confined geometries: Model systems to explore phase transitions. K Binder M Müller A Cavallo E V Albano. Invited Talks:- Topic 7. Soft condensed matter (colloids, polymers, liquid crystals, microemulsions, foams, membranes, etc.) ...

  12. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although...
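    The KL divergence between two GMMs has no closed form, so it is commonly approximated by Monte Carlo sampling; the sketch below shows that standard estimate (one plausible reading of how such distances are computed, not the authors' code).

```python
# Monte Carlo KL(p || q) between two fitted GaussianMixture models.
import numpy as np
from sklearn.mixture import GaussianMixture

def mc_kl(gmm_p, gmm_q, n_samples=10000):
    """Estimate KL(p || q) = E_p[log p - log q] by sampling from p."""
    x, _ = gmm_p.sample(n_samples)
    return np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x))

# symmetrised version, often used for song-to-song distances:
def sym_kl(gmm_p, gmm_q, **kw):
    return mc_kl(gmm_p, gmm_q, **kw) + mc_kl(gmm_q, gmm_p, **kw)
```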

  13. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  14. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...

  15. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.

  16. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  17. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for the general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for the turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.

  18. Mixture model for biomagnetic separation in microfluidic systems

    Science.gov (United States)

    Khashan, S. A.; Alazzam, A.; Mathew, B.; Hamdan, M.

    2017-11-01

    In this paper, we show that the mixture model, with an algebraic slip velocity accounting for the magnetophoresis, provides a continuum-based and cost-effective tool to simulate biomagnetic separations in microfluidics. The model is most effective for simulating magnetic separation protocols in which magnetic or magnetically labeled biological targets are within naturally dilute or diluted samples. The transport of these samples is characterized as mixtures in which the dispersed magnetic microparticles establish their magnetophoretic mobility quickly in response to the acting forces. Our simulations demonstrate the coupled particle-fluid transport and the High Gradient Magnetic Capture (HGMC) of magnetic beads flowing through a microchannel. Also, we show that the mixture model, and accordingly the modeling of the slip velocity, unlike the case with dense and/or macro-scale systems, can be further simplified by ignoring the gravitational and granular parameters. Furthermore, we show, by conducting comparative simulations, that the developed model provides an easier and viable alternative to the commonly used Lagrangian-Eulerian (particle-based) models.

  19. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    Science.gov (United States)

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1].
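    A toy version of the projection idea (not the authors' ProjPSO implementation): plain PSO in which every particle is projected back onto the probability simplex, so candidate mixture weights stay non-negative and sum to one. The objective below is a made-up stand-in for a real design criterion.

```python
# PSO over the simplex: velocities are standard PSO, positions are
# repaired with a Euclidean projection onto {x : x >= 0, sum x = 1}.
import numpy as np

def project_simplex(v):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def pso_mixture(objective, dim=3, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.dirichlet(np.ones(dim), n_particles)       # feasible start
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.array([project_simplex(p) for p in x + v])
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

# e.g. weights minimising a toy criterion (optimum: uniform weights):
best_w = pso_mixture(lambda w: -np.prod(w))
```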

  20. A hybrid sampler for Poisson-Kingman mixture models

    OpenAIRE

    Lomeli, M.; Favaro, S.; Teh, Y. W.

    2015-01-01

    This paper concerns the introduction of a new Markov Chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite dimensional component of the model such that while explicitly representing this infinite component it has less memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the e...

  1. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
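    For concreteness, the equal input model admits the same closed-form transition probabilities as the Felsenstein 1981 model it generalizes, P_ij(t) = e^{-rt} δ_ij + (1 − e^{-rt}) π_j; the sketch below implements that standard formula (textbook material rather than something taken from this paper).

```python
# Transition matrix of an equal input model on k states: off the
# diagonal, every state is replaced by state j at a rate proportional
# to pi_j (the F81 model is the k = 4 special case).
import numpy as np

def equal_input_transition(pi, t, rate=1.0):
    pi = np.asarray(pi, dtype=float)
    decay = np.exp(-rate * t)
    return decay * np.eye(len(pi)) + (1 - decay) * pi[None, :]

P = equal_input_transition([0.1, 0.2, 0.3, 0.4], t=0.5)
assert np.allclose(P.sum(axis=1), 1.0)    # rows are distributions
```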

  2. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  3. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict multicomponent adsorption equilibria on the basis of single-component adsorption data. We compare the MPTA and IAST models to a large set of experimental data, obtaining reasonably good agreement with the experimental data and a high degree of predictability. Some limitations of both models are also discussed.

  4. Hydrogenic ionization model for mixtures in non-LTE plasmas

    International Nuclear Information System (INIS)

    Djaoui, A.

    1999-01-01

    The Hydrogenic Ionization Model for Mixtures (HIMM) is a non-Local Thermodynamic Equilibrium (non-LTE), time-dependent ionization model for laser-produced plasmas containing mixtures of elements (species). In this version, both collisional and radiative rates are taken into account. An ionization distribution for each species which is consistent with the ambient electron density is obtained by use of an iterative procedure in a single calculation for all species. Energy levels for each shell having a given principal quantum number and for each ion stage of each species in the mixture are calculated using screening constants. Steady-state non-LTE as well as LTE solutions are also provided. The non-LTE rate equations converge to the LTE solution at sufficiently high densities or as the radiation temperature approaches the electron temperature. The model is particularly useful at low temperatures where convergence problems are usually encountered in our previous models. We apply our model to typical situations in x-ray laser research, laser-produced plasmas and inertial confinement fusion. Our results compare well with previously published results for a selenium plasma. (author)

  5. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    Science.gov (United States)

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  6. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.

    Science.gov (United States)

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P

    2017-04-01

    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
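    A sketch of the probabilistic-assignment idea on synthetic coordinates: plain scikit-learn EM on 3-D atom positions, whereas GEMA adds a dedicated treatment of the random matrix and model selection. Soft memberships make the uncertainty of interface atoms explicit.

```python
# GMM on 3-D atom positions; predict_proba gives the soft assignments
# that distinguish core atoms from uncertain interface atoms.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
matrix = rng.uniform(0, 50, size=(2000, 3))              # diffuse matrix
ppt1 = rng.normal([15, 15, 15], 1.5, size=(300, 3))      # precipitate 1
ppt2 = rng.normal([35, 35, 35], 2.0, size=(300, 3))      # precipitate 2
positions = np.vstack([matrix, ppt1, ppt2])

gmm = GaussianMixture(n_components=3, random_state=0).fit(positions)
probs = gmm.predict_proba(positions)     # soft cluster membership
# atoms near precipitate/matrix interfaces get intermediate
# probabilities instead of a hard in/out decision:
uncertain_atoms = positions[probs.max(axis=1) < 0.9]
```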

  7. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  8. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO) based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  9. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA [Estimator Convergence in Mixture Models Based on Missing Data]

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model is developed for cure rate analysis based on missing data. Several methods can be used for the analysis of missing data; one of them is the EM algorithm. This method is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The study shows that, in the EM algorithm, the log-likelihood for the missing data increases after each iteration. Hence, under the EM algorithm, the likelihood sequence converges if the likelihood is bounded below.

  11. Thresholding functional connectomes by means of mixture modeling.

    Science.gov (United States)

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false discovery rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture
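    A much-simplified sketch of the approach: a two-component Gaussian mixture on Fisher z-transformed partial correlations, thresholded by posterior probability. The paper's dedicated null-plus-signal model and pseudo-FDR derivation are not reproduced; the 0.9 cut-off is an arbitrary placeholder.

```python
# Threshold a connectome by mixture modeling: the component closer to
# zero plays the empirical null, the other the reliable connections.
# Expects partial correlations strictly inside (-1, 1).
import numpy as np
from sklearn.mixture import GaussianMixture

def threshold_connectome(partial_corrs, posterior_cut=0.9):
    z = np.arctanh(partial_corrs).reshape(-1, 1)    # Fisher z-transform
    gmm = GaussianMixture(n_components=2, random_state=0).fit(z)
    signal = np.argmax(np.abs(gmm.means_.ravel()))  # non-null component
    post = gmm.predict_proba(z)[:, signal]
    return post.reshape(partial_corrs.shape) >= posterior_cut
```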

  12. Modeling viscosity and diffusion of plasma mixtures across coupling regimes

    Science.gov (United States)

    Arnault, Philippe

    2014-10-01

    Viscosity and diffusion of plasma for pure elements and multicomponent mixtures are modeled from the high-temperature low-density weakly coupled regime to the low-temperature high-density strongly coupled regime. Thanks to atom-in-jellium modeling, the effect of electron screening on the ion-ion interaction is incorporated through a self-consistent definition of the ionization. This defines an effective One Component Plasma, or an effective Binary Ionic Mixture, that is representative of the strength of the interaction. For the viscosity and the interdiffusion of mixtures, approximate kinetic expressions are supplemented by mixing laws applied to the excess viscosity and self-diffusion of pure elements. Comparisons with classical and quantum molecular dynamics results reveal deviations of 20-40% on average, with almost no predictions off by more than a factor of 2 over many decades of variation. Applications in the inertial confinement fusion context could help in predicting the growth of hydrodynamic instabilities.

  13. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  14. Sand - rubber mixtures submitted to isotropic loading: a minimal model

    Science.gov (United States)

    Platzer, Auriane; Rouhanifar, Salman; Richard, Patrick; Cazacliu, Bogdan; Ibraim, Erdin

    2017-06-01

    The volume of scrap tyres, an undesired urban waste, is increasing rapidly in every country. Mixing sand and rubber particles as a lightweight backfill is one of the possible alternatives to avoid stockpiling them in the environment. This paper presents a minimal model aiming to capture the evolution of the void ratio of sand-rubber mixtures undergoing an isotropic compression loading. It is based on the idea that, submitted to a pressure, the rubber chips deform and partially fill the porous space of the system, leading to a decrease of the void ratio with increasing pressure. Our simple approach is capable of reproducing experimental data for two types of sand (a rounded one and a sub-angular one) and up to mixtures composed of 50% of rubber.

  15. Multicomponent gas mixture air bearing modeling via lattice Boltzmann method

    Science.gov (United States)

    Tae Kim, Woo; Kim, Dehee; Hari Vemuri, Sesha; Kang, Soo-Choon; Seung Chung, Pil; Jhon, Myung S.

    2011-04-01

    As the demand for ultrahigh recording density increases, development of an integrated head disk interface (HDI) modeling tool, which considers the air bearing and lubricant film morphology simultaneously, is of paramount importance. To overcome the shortcomings of the existing models based on the modified Reynolds equation (MRE), the lattice Boltzmann method (LBM) is a natural choice in modeling high Knudsen number (Kn) flows owing to its advantages over conventional methods. Its transient and parallel nature makes the LBM an attractive tool for next-generation air bearing design. Although the LBM has been successfully applied to single component systems, a multicomponent system analysis has been thwarted because of the complexity in coupling the terms for each component. Previous studies have shown good results in modeling immiscible component mixtures by use of an interparticle potential. In this paper, we extend our LBM model to predict the flow rate of high Kn pressure-driven flows in multicomponent gas mixture air bearings, such as the air-helium system. For accurate modeling of slip conditions near the wall, we adopt our LBM scheme with spatially dependent relaxation times for air bearings in HDIs. To verify the accuracy of our code, we tested our scheme via simple two-dimensional benchmark flows. In the pressure-driven flow of an air-helium mixture, we found that the simple linear combination of pure helium and pure air flow rates, based on helium and air mole fraction, gives considerable error when compared to our LBM calculation. Hybridization with the existing MRE database can be adopted with the procedure reported here to develop the state-of-the-art slider design software.

  16. Spatially adaptive mixture modeling for analysis of FMRI time series.

    Science.gov (United States)

    Vincent, Thomas; Risser, Laurent; Ciuciu, Philippe

    2010-04-01

    Within-subject analysis in fMRI essentially addresses two problems: the detection of brain regions eliciting evoked activity and the estimation of the underlying dynamics. In Makni et al. (2005) and Makni et al. (2008), a detection-estimation framework was proposed to tackle these problems jointly, since they are connected to one another. In the Bayesian formalism, detection is achieved by modeling activating and nonactivating voxels through independent mixture models (IMM) within each region, while hemodynamic response estimation is performed at a regional scale in a nonparametric way. Instead of IMMs, in this paper we take advantage of spatial mixture models (SMM) for their nonlinear spatial regularizing properties. The proposed method is unsupervised and spatially adaptive in the sense that the amount of spatial correlation is automatically tuned from the data, and this setting automatically varies across brain regions. In addition, the level of regularization is specific to each experimental condition, since both the signal-to-noise ratio and the activation pattern may vary across stimulus types in a given brain region. These aspects require the precise estimation of multiple partition functions of underlying Ising fields. This is addressed efficiently by first using path sampling for a small subset of fields and then a recently developed fast extrapolation technique for the large remaining set. Simulation results emphasize that detection relying on supervised SMM outperforms its IMM counterpart and that unsupervised spatial mixture models achieve similar results without any hand-tuning of the correlation parameter. On real datasets, the gain is illustrated in a localizer fMRI experiment: brain activations appear more spatially resolved using SMM in comparison with classical general linear model (GLM)-based approaches, while estimating a specific parcel-based HRF shape. Our approach therefore validates the treatment of unsmoothed fMRI data without fixed GLM

  17. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    Science.gov (United States)

    Kawabata, Takeshi

    2018-03-06

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters; it accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and thus can lead to a GMM with a smaller spread than the input. Second, the algorithm has a singularity problem, as it sometimes stops the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
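
    The DSG step can be pictured as a moment-matching merge. The sketch below is our guess at the flavor of such a merge, not the paper's algorithm: a block of weighted voxels is replaced by one anisotropic Gaussian that preserves the block's total weight, centroid and second moments, treating each voxel as a uniform box of the given width (variance width^2/12 per axis).

```python
# Hypothetical moment-preserving merge of a voxel block into one Gaussian.
import numpy as np

def merge_block(centers, weights, voxel_width):
    w = np.asarray(weights, dtype=float)
    c = np.asarray(centers, dtype=float)          # shape (n, 3)
    total = w.sum()
    mean = (w[:, None] * c).sum(axis=0) / total
    d = c - mean
    scatter = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / total
    # Add each voxel's own spread, modeled as a uniform box: var = width^2 / 12.
    cov = scatter + (voxel_width ** 2 / 12.0) * np.eye(3)
    return total, mean, cov
```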

  18. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    NARCIS (Netherlands)

    Martens, M.A.W.; Janssen, M.J.; Ruijssenaars, A.J.J.M.; Riksen-Walraven, J.M.A.

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deafblindness.

  19. Detecting Multiple Random Changepoints in Bayesian Piecewise Growth Mixture Models.

    Science.gov (United States)

    Lock, Eric F; Kohli, Nidhi; Bose, Maitreyee

    2017-11-17

    Piecewise growth mixture models (PGMMs) are a flexible and useful class of methods for analyzing segmented trends in individual growth trajectories over time, where the individuals come from a mixture of two or more latent classes. These models allow each segment of the overall developmental process within each class to have a different functional form; examples include two linear phases of growth, or a quadratic phase followed by a linear phase. The changepoint (knot) is the time of transition from one developmental phase (segment) to another. Inferring the location of the changepoint(s) is often of practical interest, along with inference for other model parameters. A random changepoint allows for individual differences in the transition time within each class. The primary objectives of our study are as follows: (1) to develop a PGMM using a Bayesian inference approach that allows the estimation of multiple random changepoints within each class; (2) to develop a procedure to empirically detect the number of random changepoints within each class; and (3) to empirically investigate the bias and precision of the estimation of the model parameters, including the random changepoints, via a simulation study. We have developed the user-friendly package BayesianPGMM for R to facilitate the adoption of this methodology in practice, which is available at https://github.com/lockEF/BayesianPGMM. We describe an application to mouse-tracking data for a visual recognition task.

  20. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
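
    The Chao lower bound mentioned above has a simple closed form: with f1 individuals seen exactly once and f2 seen exactly twice among the n observed, N_hat = n + f1^2 / (2 f2). A minimal sketch (the f2 = 0 fallback shown is one common bias-corrected convention, not necessarily the variant compared in this paper):

```python
# Chao lower-bound estimator of population size from capture counts.
from collections import Counter

def chao(counts):
    """counts: number of times each *observed* individual was captured."""
    n = len(counts)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 > 0:
        return n + f1 ** 2 / (2 * f2)
    return n + f1 * (f1 - 1) / 2          # common bias-corrected fallback

print(chao([1, 1, 2, 3, 1, 2, 1]))        # 7 observed -> estimate 11.0
```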

  1. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2009-01-01

    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended to adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend...... of the MPTA onto liquids has been tested on experimental binary and ternary adsorption data. We show that, for the set of experimental data considered in this work, the MPTA model is capable of correlating binary adsorption equilibria. Based on binary adsorption data, the theory can then predict ternary...

  2. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  3. An Ionization and Equation of State Model for Dense, Plasma Mixtures

    Science.gov (United States)

    Stanton, Liam; Argus, Robert; Dorabiala, Olga; Kelley, Zander; Sripimonwan, Brandon; Scullard, Christian; Graziani, Frank; Shen, Yannan; Murillo, Michael

    2017-10-01

    Almost all high energy-density physics experiments involve a multitude of species, which introduces nontrivial challenges to the models for both theoretical and practical reasons. To make matters worse, the ionic species will be composed of multiple ionization states themselves. The theoretical connection to the single-species properties, such as the transport coefficients or equations of state, is rarely as straightforward as a simple superposition. Additionally, our knowledge of such mixtures must span orders of magnitude in temperature and density, and impurities from higher-Z elements can fundamentally change the physical properties of the plasma as well. Here, we present a new model that can accurately and efficiently predict ionization, thermodynamic and correlational properties of dense plasma mixtures over a wide parameter range. This model is not only applicable to mixtures of an arbitrary number of ionic components, but it resolves properties of individual ionization states as well. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Full Text Available Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  5. Categorization of Digital Games in English Language Learning Studies: Introducing the SSI Model

    Science.gov (United States)

    Sundqvist, Pia

    2013-01-01

    The main aim of the present paper is to introduce a model for digital game categorization suitable for use in English language learning studies: the Scale of Social Interaction (SSI) Model (original idea published as Sundqvist, 2013). The SSI Model proposes a classification of commercial off-the-shelf (COTS) digital games into three categories:…

  6. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters, and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five methods presented, one, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the other methods. This method iteratively reduces the size of a GMM by successively merging pairs of component densities, with pairs selected for merger using a Kullback-Leibler-based metric. Using Runnalls' method of reduction, we
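
    The shortlist mechanism can be illustrated with a small, hypothetical sketch: score a test vector against the hash GMM first, then evaluate only those GMM-UBM components that the shortlist maps to the top-scoring hash components. The shortlist itself (built, e.g., via Runnalls' reduction) is assumed given, and all names below are illustrative.

```python
# Hash-GMM shortlist scoring (illustrative sketch).
import numpy as np
from scipy.stats import multivariate_normal

def shortlist_score(x, hash_gmm, shortlist, ubm, top=2):
    """hash_gmm/ubm: lists of (weight, mean, cov); shortlist[j]: UBM indices."""
    hash_ll = [np.log(w) + multivariate_normal.logpdf(x, m, c)
               for w, m, c in hash_gmm]
    best = np.argsort(hash_ll)[-top:]             # top-scoring hash components
    idx = sorted({i for j in best for i in shortlist[j]})
    comp_ll = np.array([np.log(ubm[i][0]) +
                        multivariate_normal.logpdf(x, ubm[i][1], ubm[i][2])
                        for i in idx])
    # Log-sum-exp over the shortlisted components approximates the full GMM.
    return idx, np.logaddexp.reduce(comp_ll)
```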

  7. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  8. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation

    Science.gov (United States)

    Shi, X.; Zhao, Q. H.

    2017-09-01

    For image segmentation methods based on the Gaussian Mixture Model (GMM), there are some problems: 1) the number of components is usually fixed, i.e., the number of classes is fixed, and 2) the GMM is sensitive to image noise. This paper proposes an RS image segmentation method that combines the GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, the GMM is designed to model the distribution of pixel intensities in the RS image, and the number of components is assumed to be a random variable. A prior distribution is built for each parameter. In order to improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and yields an ideal segmentation result.
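
    For contrast with the reversible-jump scheme, a fixed-K, EM-fitted GMM segmentation is easy to sketch and makes the record's point concrete: the number of components below is fixed by hand, which is exactly the limitation RJMCMC removes. A minimal sketch assuming scikit-learn:

```python
# Baseline GMM segmentation with a hand-fixed number of classes.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image, k=3):
    x = image.reshape(-1, 1).astype(float)        # pixel intensities
    labels = GaussianMixture(n_components=k, random_state=0).fit(x).predict(x)
    return labels.reshape(image.shape)            # per-pixel class map
```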

  9. GAUSSIAN MIXTURE MODEL AND RJMCMC BASED RS IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-09-01

    Full Text Available For image segmentation methods based on the Gaussian Mixture Model (GMM), there are some problems: 1) the number of components is usually fixed, i.e., the number of classes is fixed, and 2) the GMM is sensitive to image noise. This paper proposes an RS image segmentation method that combines the GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, the GMM is designed to model the distribution of pixel intensities in the RS image, and the number of components is assumed to be a random variable. A prior distribution is built for each parameter. In order to improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and yields an ideal segmentation result.

  10. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  11. An enhanced method for real-time modelling of cardiac related biosignals using Gaussian mixtures.

    Science.gov (United States)

    Alqudah, Ali Mohammad

    2017-11-01

    Modelling of cardiac-related biosignals is very important for the detection, classification, compression and transmission of such health-related signals. This paper introduces a new, fast and accurate method for modelling cardiac-related biosignals (ECG and PPG) based on a mixture of Gaussian waves. For any signal, the start and end of the ECG beat or PPG pulse is first detected; the baseline is then detected and subtracted from the original signal; after that, the signal is divided into two signals, positive and negative, each modelled separately and then combined to form the modelled signal. The proposed method is applied to the MIMIC and MIT-BIH Arrhythmia databases, available online at PhysioNet.
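
    A hedged sketch of the sum-of-Gaussian-waves idea: fit Gaussian components to one baseline-removed beat by nonlinear least squares. The separate modelling of positive and negative parts described above is omitted for brevity, and the toy beat, component count and initial guesses are assumptions.

```python
# Fit a sum of Gaussian waves to a (toy) beat with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(t, *p):                # p = (a1, mu1, s1, a2, mu2, s2, ...)
    y = np.zeros_like(t)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return y

t = np.linspace(0, 1, 200)
beat = 1.0 * np.exp(-((t - 0.5) ** 2) / (2 * 0.03 ** 2))   # stand-in "R wave"
p0 = [0.8, 0.5, 0.05]                    # one wave; add triples for more waves
params, _ = curve_fit(gaussian_sum, t, beat, p0=p0)
reconstruction = gaussian_sum(t, *params)
```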

  12. Sampling from Dirichlet process mixture models with unknown concentration parameter: mixing issues in large data implementations.

    Science.gov (United States)

    Hastie, David I; Liverani, Silvia; Richardson, Sylvia

    We consider the question of Markov chain Monte Carlo sampling from a general stick-breaking Dirichlet process mixture model, with concentration parameter α. This paper introduces a Gibbs sampling algorithm that combines the slice sampling approach of Walker (Communications in Statistics - Simulation and Computation 36:45-54, 2007) and the retrospective sampling approach of Papaspiliopoulos and Roberts (Biometrika 95(1):169-186, 2008). Our general algorithm is implemented as efficient open source C++ software, available as an R package, and is based on a blocking strategy similar to that suggested by Papaspiliopoulos (A note on posterior sampling from Dirichlet mixture models, 2008) and implemented by Yau et al. (Journal of the Royal Statistical Society, Series B (Statistical Methodology) 73:37-57, 2011). We discuss the difficulties of achieving good mixing in MCMC samplers of this nature in large data sets and investigate sensitivity to initialisation. We additionally consider the challenges when an additional layer of hierarchy is added such that joint inference is to be made on α. We introduce a new label-switching move and compute the marginal partition posterior to help to surmount these difficulties. Our work is illustrated using a profile regression (Molitor et al. Biostatistics 11(3):484-498, 2010) application, where we demonstrate good mixing behaviour for both synthetic and real examples.
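
    For intuition about the model being sampled, the stick-breaking construction of the Dirichlet process weights (with concentration α) can be written in a few lines; note that the slice and retrospective samplers discussed above exist precisely to avoid fixing a truncation level as this sketch does.

```python
# Truncated stick-breaking draw of Dirichlet process weights (for intuition).
import numpy as np

def stick_breaking(alpha, truncation, seed=0):
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)     # stick-break fractions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining                  # weights (sum < 1 under truncation)

print(stick_breaking(alpha=2.0, truncation=10).round(3))
```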

  13. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  15. Clustering disaggregated load profiles using a Dirichlet process mixture model

    International Nuclear Information System (INIS)

    Granell, Ramon; Axon, Colin J.; Wallom, David C.H.

    2015-01-01

    Highlights: • We show that the Dirichlet process mixture model is scalable. • Our model does not require the number of clusters as an input. • Our model creates clusters only by the features of the demand profiles. • We have used both residential and commercial data sets. - Abstract: The increasing availability of substantial quantities of power-use data in both the residential and commercial sectors raises the possibility of mining the data to the advantage of both consumers and network operations. We present a Bayesian non-parametric model to cluster load profiles from households and business premises. Evaluations show that our model performs as well as other popular clustering methods, but unlike most other methods it does not require the number of clusters to be predetermined by the user. We used the so-called ‘Chinese restaurant process’ method to solve the model, making use of the Dirichlet-multinomial distribution. The number of clusters grew logarithmically with the quantity of data, making the technique suitable for scaling to large data sets. We were able to show that the model could distinguish features such as the nationality, household size, and type of dwelling between the cluster memberships
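
    The ‘Chinese restaurant process’ mentioned above can be sketched as a sequential seating rule: each new profile joins an existing cluster with probability proportional to its current size, or opens a new cluster with probability proportional to the concentration parameter. The sketch below shows the prior only (no likelihood terms), with illustrative names, and exhibits the logarithmic growth in cluster count noted in the abstract.

```python
# Chinese restaurant process prior: sequential cluster assignments (sketch).
import numpy as np

def crp_assignments(n, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    sizes = []                                   # current cluster sizes
    labels = []
    for _ in range(n):
        probs = np.array(sizes + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(sizes):
            sizes.append(1)                      # open a new cluster
        else:
            sizes[k] += 1
        labels.append(k)
    return labels                                # cluster count grows ~ log n

print(crp_assignments(20, alpha=1.0))
```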

  16. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  17. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO inhibitor)-KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, so it was difficult to draw any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of
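
    The two reference models compared in this record have standard closed forms: concentration addition finds the effect level x at which sum_i c_i / EC_x,i = 1, while response addition (independent action) combines individual effects as E = 1 - prod_i (1 - E_i). A minimal sketch for a binary mixture, assuming a common unit Hill slope for the CA root-finding step (an illustrative simplification, not the authors' fitting procedure):

```python
# Concentration addition (CA) vs response addition (RA) for a binary mixture.
import numpy as np
from scipy.optimize import brentq

def ca_effect(conc, ec50s, hill=1.0):
    """CA: solve sum(c_i / EC_x,i) = 1 for effect x, common Hill slope."""
    def residual(x):
        ecx = [e * (x / (1.0 - x)) ** (1.0 / hill) for e in ec50s]
        return sum(c / e for c, e in zip(conc, ecx)) - 1.0
    return brentq(residual, 1e-9, 1 - 1e-9)

def ra_effect(effects):
    """RA (independent action): E = 1 - prod(1 - E_i)."""
    return 1.0 - np.prod([1.0 - e for e in effects])

print(ca_effect([1.0, 2.0], ec50s=[2.0, 4.0]))   # 0.5: each dosed at half EC50
print(ra_effect([0.2, 0.3]))                     # 0.44
```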

  18. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate code simulations need uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distributions is performed using finite mixture models. • Two methods to reconstruct the output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks' method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of models), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values, such as finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
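
    The reconstruction step can be sketched with scikit-learn: fit finite Gaussian mixtures by EM to a sample of the output variable and select the number of components by BIC. The bimodal stand-in sample below is illustrative; actual RELAP-5 outputs are not reproduced here, and the paper's k-means variant is omitted.

```python
# Reconstruct a (possibly multimodal) output distribution with a finite mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(550, 10, 60),      # stand-in bimodal "code output"
                    rng.normal(620, 15, 40)]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(y)
        for k in (1, 2, 3)}
best_k = min(fits, key=lambda k: fits[k].bic(y))  # pick components by BIC
print(best_k, fits[best_k].means_.ravel())
```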

  19. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  20. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    ability of the model to identify spatial heterogeneity is validated through a set of simulations. The model was applied to Los Angeles county housing prices data for the year 2002. The results suggest that the statistically identified number of submarkets, after taking into account the dwellings......The problem of modeling housing prices has attracted considerable attention due to its importance in terms of households' wealth and in terms of public revenues through taxation. One of the main concerns raised in both the theoretical and the empirical literature is the existence of spatial... association between prices that can be attributed, among others, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled in both a global and a local...

  1. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  2. UNSUPERVISED CHANGE DETECTION IN SAR IMAGES USING GAUSSIAN MIXTURE MODELS

    Directory of Open Access Journals (Sweden)

    E. Kiana

    2015-12-01

    Full Text Available In this paper, we propose a method for unsupervised change detection in Remote Sensing Synthetic Aperture Radar (SAR) images. The method is based on mixture modelling of the histogram of the difference image. In this process, the difference image is classified into three classes: negative change, positive change and no change. Although SAR images suffer from speckle noise, the proposed method is able to map the changes without speckle filtering. To evaluate the performance of this method, two dates of SAR data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar over an agricultural area are used. Change detection results show better efficiency when compared to the state-of-the-art methods.
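
    A hedged sketch of the three-class scheme: fit a three-component Gaussian mixture to the difference-image values and label the components with the lowest and highest means as negative and positive change, respectively. This uses a plain EM fit and is a simplification, not necessarily the estimation procedure of the paper.

```python
# Three-class mixture labelling of a difference image (sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_change(diff_image):
    x = diff_image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    order = np.argsort(gmm.means_.ravel())    # low -> negative, high -> positive
    relabel = np.empty(3, dtype=int)
    relabel[order] = [-1, 0, 1]               # -1: neg change, 0: none, 1: pos
    return relabel[gmm.predict(x)].reshape(diff_image.shape)
```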

  3. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue, where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level, and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  4. Introducing free-function camera calibration model for central-projection and omni-directional lenses

    Science.gov (United States)

    Nekouei Shahraki, M.; Haala, N.

    2015-09-01

    To ensure valid, high-accuracy decisions in machine vision systems such as driver-assistance systems, a primary factor is accurate measurement, which means that we need accurate camera calibration for various optical designs and a very fast approach to analyse the calibration data in real time. Conventional methods have specific limitations, such as limited accuracy, instability when using complex models, difficulty in modelling local lens distortions, and limits on real-time calculation, which together show the necessity of introducing new solutions. We introduce a new model for lens distortion modelling with accuracies beyond conventional models while still allowing real-time calculation. The concept is based on Free-Function modelling in a posterior calibration step, using the initial distortion estimation and the corresponding residuals on the observations as input information. The Free-Function model is the technique of numerically and locally modelling the lens distortion field by assuming unknown functions in the calibration model. This increases the model's flexibility to fit different optical designs and to model very local lens distortions. Using the Free-Function model, one can observe great enhancements in accuracy in comparison with classical models. Furthermore, by increasing the number of control points and improving their distribution, the quality of lens modelling is improved; a characteristic which is not present in the classical methods.

  5. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  6. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    Science.gov (United States)

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  7. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, M_k, k = 1, …, K, and help identify which model is most supported by the observed data, Ỹ = (ỹ_1, …, ỹ_n). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ | M_k), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ | M_k) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ | M_k) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.
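
    A simplified cousin of the estimator can convey the role of the mixture proposal: fit a Gaussian mixture to posterior draws, then average prior times likelihood over the proposal density across fresh draws from the mixture. This is plain importance sampling rather than the bridge sampling GAME actually uses, and the toy model and draw counts below are assumptions.

```python
# Evidence via importance sampling with a Gaussian-mixture proposal (sketch).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def log_post_unnorm(theta, data):            # toy model: N(theta, 1) likelihood,
    return (norm.logpdf(theta, 0, 10)        # N(0, 10^2) prior on theta
            + norm.logpdf(data, theta, 1).sum())

data = np.array([1.2, 0.7, 1.9, 1.1])
draws = np.random.default_rng(1).normal(1.2, 0.5, 2000).reshape(-1, 1)  # stand-in
                                              # for MCMC posterior samples
q = GaussianMixture(n_components=2, random_state=0).fit(draws)
theta_q, _ = q.sample(5000)                  # fresh draws from the proposal
log_w = (np.array([log_post_unnorm(t, data) for t in theta_q.ravel()])
         - q.score_samples(theta_q))         # log p(theta)L(theta) - log q(theta)
log_evidence = np.logaddexp.reduce(log_w) - np.log(len(log_w))
print(log_evidence)
```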

  8. Induced polarization of clay-sand mixtures: experiments and modeling

    International Nuclear Information System (INIS)

    Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.

    2012-01-01

    were performed with a cylindrical four-electrode sample-holder (cylinder made of PVC with 30 cm in length and 19 cm in diameter) associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. These electrodes were installed at 10 cm from the base of the sample holder and regularly spaced (every 90 degrees). The results illustrate the strong impact of the Cationic Exchange Capacity (CEC) of the clay minerals upon the complex conductivity. The amplitude of the in-phase conductivity of the kaolinite-clay samples is strongly dependent on the saturating fluid salinity for all volumetric clay fractions, whereas the in-phase conductivity of the smectite-clay samples is quite independent of the salinity, except at the low clay content (5% and 1% of clay in volume). This is due to the strong and constant surface conductivity of smectite associated with its very high CEC. The quadrature conductivity increases steadily with the CEC and the clay content. We observe that the dependence on frequency of the quadrature conductivity of sand-kaolinite mixtures is more pronounced than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity, except at very low clay contents (1% in volume of kaolinite-clay). This is due to the constant surface site density of Na counter-ions in the Stern layer of clay materials. At the lowest clay content (1%), the magnitude of the quadrature conductivity increases with the salinity, as expected for silica sands. In this case, the surface site density of Na counter-ions in the Stern layer increases with salinity. The experimental data show good agreement with predicted values given by our Spectral Induced Polarization (SIP) model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex

  9. Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations

    Science.gov (United States)

    Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad

    2016-01-01

    In a sequential OSCE, which has been suggested to reduce testing costs, candidates take a short screening test, and those who fail are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…

  10. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to the living species. In this work, the environmental risk was assessed addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation of the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  11. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    Science.gov (United States)

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
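
    The partition points used by this equalization are intersections of adjacent Gaussian components. For two weighted one-dimensional Gaussians, setting w1 N(x; m1, s1) = w2 N(x; m2, s2) and taking logs yields a quadratic in x, as the sketch below shows (parameters are illustrative):

```python
# Intersection points of two weighted 1-D Gaussian densities.
import numpy as np

def gaussian_intersections(w1, m1, s1, w2, m2, s2):
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + np.log((w1 * s2) / (w2 * s1)))
    if np.isclose(a, 0.0):                    # equal variances: single crossing
        return np.array([-c / b])
    return np.roots([a, b, c])                # up to two crossings

print(gaussian_intersections(0.5, 60, 10, 0.5, 140, 20))
```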

  12. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  13. Introducing Systems Biology to Bioscience students through mathematical modelling. A practical module

    Directory of Open Access Journals (Sweden)

    Néstor V. Torres Darias

    2013-11-01

    Full Text Available Systems Biology, one of the current approaches to the understanding of living things, aims to understand the behaviour of living systems through the creation of mathematical models that integrate the available knowledge of the system’s component parts and the relations among them. Accordingly, model building should play a central part in any biology degree program. One difficulty that we face when confronted with this task, however, is that the mathematical background of undergraduate students is very often deficient in essential concepts required for dynamic mathematical modelling. In this practical module, students are introduced to the basic techniques of mathematical modelling and computer simulation from a Systems Biology perspective.

  14. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  15. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applications.

  16. Resolving the cycle skip introduced by the multi-layer static model using a hybrid approach

    Science.gov (United States)

    Tawadros, Emad Ekladios Toma

    Cycle skips (breaks) in seismic data are occasionally irresolvable using conventional static correction programs. Such artificial cycle skips can be misleading for interpreters and introduce false subsurface images. After applying datum static corrections using either the single-layer or multi-layer models, artificial cycle skips might develop in the data. Although conventional residual static correction techniques are occasionally able to solve this problem, they fail in many other cases. A new approach is introduced in this study to resolve this problem by forming a static model that is free of these artificial cycle skips, which can be used as a pilot volume for residual statics calculation. The pilot volume is formed by combining the high-frequency static component of the single-layer model, which shows a better static solution at the problem locations, with the low-frequency static component of the two-layer model. This new approach is applied to a 3-D seismic data set from Haba Field of Eastern Saudi Arabia, where a major cycle skip was introduced by the multi-layer model. Results show a better image of the subsurface structure after application of the new approach.
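    A toy version of the frequency-split idea (assumed from the description above, not the author's code) might look like this: a moving average separates each statics profile into low- and high-frequency parts, and the pilot combines the two models' complementary parts.

```python
# Hybrid pilot statics: low-frequency part of the two-layer model plus
# high-frequency part of the single-layer model (illustrative sketch).
import numpy as np

def split_statics(statics, window=21):
    """Moving-average split of a statics profile into low/high frequency parts."""
    kernel = np.ones(window) / window
    low = np.convolve(statics, kernel, mode="same")
    return low, statics - low

single_layer = np.random.randn(500).cumsum() * 0.2    # stand-in statics profiles
two_layer = single_layer + np.random.randn(500) * 2.0
low_two, _ = split_statics(two_layer)
_, high_single = split_statics(single_layer)
pilot_statics = low_two + high_single  # cycle-skip-free pilot for residual statics
```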

  17. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, that the independent variables in the model are independent of the latent classes.

  18. A mixture model for water uptake, degradation, erosion and drug release from polydisperse polymeric networks.

    Science.gov (United States)

    Soares, João S; Zunino, Paolo

    2010-04-01

    We introduce a general class of mixture models suitable to describe water-dependent degradation and erosion of biodegradable polymers in conjunction with drug release. The ability to predict and quantify degradation and erosion has direct impact in a variety of biomedical applications and is a useful design tool for biodegradable implants and tissue engineering scaffolds. The model is based on a finite number of constituents describing the polydisperse polymeric system, each representing chains of an average size, and two additional constituents, water and drug. Hydrolytic degradation of individual chains occurs at the molecular level, and mixture constituents diffuse individually according to Fick's first law at the bulk level - such analysis confers a multi-scale aspect to the resulting reaction-diffusion system. A shift between two different types of behavior, each identified with surface or bulk erosion, is observed with the variation of a single non-dimensional parameter measuring the relative importance of the mechanisms of reaction and diffusion. Mass loss follows a sigmoid decrease in bulk-eroding polymers, whereas it decreases linearly in surface-eroding polymers. Polydispersity influences degradation and erosion of bulk-eroding polymers, and drug release from unstable surface-eroding matrices is dramatically enhanced in an erosion-controlled release. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
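    For readers unfamiliar with the model class, a minimal Poisson binomial N-mixture likelihood can be written and maximized as below (a standard construction, not Kéry's code); the latent abundance is summed out up to a cap N_max.

```python
# Poisson binomial N-mixture likelihood: counts y[i, j] at site i, visit j;
# abundance N_i ~ Poisson(lam), counts Binomial(N_i, p). Illustrative sketch.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmix_negloglik(params, y, n_max=100):
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)                      # P(N)
    ll = 0.0
    for counts in y:                                 # one site per row
        like_N = prior.copy()
        for c in counts:
            like_N *= binom.pmf(c, N, p)             # P(y_ij | N, p)
        ll += np.log(like_N.sum() + 1e-300)
    return -ll

y = np.array([[2, 3, 1], [0, 1, 0], [5, 4, 6]])     # toy data: 3 sites, 3 visits
fit = minimize(nmix_negloglik, x0=[1.0, 0.0], args=(y,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))  # lambda-hat, p-hat
```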

  20. Wetting kinetics of oil mixtures on fluorinated model cellulose surfaces.

    Science.gov (United States)

    Aulin, Christian; Shchukarev, Andrei; Lindqvist, Josefina; Malmström, Eva; Wågberg, Lars; Lindström, Tom

    2008-01-15

    The wetting of two different model cellulose surfaces has been studied: a regenerated cellulose (RG) surface prepared by spin-coating, and a novel multilayer film of poly(ethyleneimine) and a carboxymethylated microfibrillated cellulose (MFC). The cellulose films were characterized in detail using atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS). AFM indicates smooth and continuous films on a nanometer scale, and the RMS roughness of the RG cellulose and MFC surfaces was determined to be 3 and 6 nm, respectively. The cellulose films were modified by coating with various amounts of an anionic fluorosurfactant, perfluorooctadecanoic acid, or covalently modified with pentadecafluorooctanoyl chloride. The fluorinated cellulose films were used to follow the spreading mechanisms of three different oil mixtures. The viscosity and surface tension of the oils were found to be essential parameters governing the spreading kinetics on these surfaces. XPS and dispersive surface energy measurements were made on the cellulose films coated with perfluorooctadecanoic acid. A strong correlation was found between the surface concentration of fluorine, the dispersive surface energy and the contact angle of castor oil on the surface. A dispersive surface energy of less than 18 mN/m was required in order for the cellulose surface to be non-wetting (θe > 90°) by castor oil.

  1. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    Science.gov (United States)

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
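    The zero-truncation ingredient of such models is simple to write down; the sketch below gives the zero-truncated Poisson log-likelihood and a grid-search estimate on hypothetical trip counts (the paper's panel and mixture structure is layered on top of this basic building block).

```python
# Zero-truncated Poisson log-likelihood: P(Y=y | Y>0) for y >= 1.
import numpy as np
from scipy.special import gammaln

def zt_poisson_loglik(lam, y):
    """Log-likelihood of counts y >= 1 under a Poisson truncated at zero."""
    y = np.asarray(y, dtype=float)
    # log pmf: y*log(lam) - lam - log(y!) - log(1 - exp(-lam))
    return np.sum(y * np.log(lam) - lam - gammaln(y + 1)
                  - np.log1p(-np.exp(-lam)))

trips = [1, 2, 1, 4, 3, 1, 2]   # hypothetical visit counts from an on-site survey
grid = np.linspace(0.1, 5, 491)
lam_hat = grid[np.argmax([zt_poisson_loglik(l, trips) for l in grid])]
print(lam_hat)
```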

  2. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a statistical one.

  3. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    OpenAIRE

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2016-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit test.
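    As background, the D-criterion for a minimally supported Scheffé quadratic design can be computed directly; the sketch below does this for three components (an illustrative calculation, not the paper's code).

```python
# D-criterion |X'X| for a minimally supported Scheffe quadratic design
# in three mixture components (6 points for 6 parameters, 0 df for lack of fit).
import numpy as np
from itertools import combinations

def scheffe_quadratic(x):
    """Model expansion: x1, x2, x3, x1*x2, x1*x3, x2*x3 (6 parameters)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [x[i] * x[j] for i, j in combinations(range(3), 2)]])

# Classical minimal support: 3 vertices + 3 binary midpoints of the simplex.
points = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
X = np.array([scheffe_quadratic(p) for p in points])
print(np.linalg.det(X.T @ X))  # the D-criterion for this design
```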

  4. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
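    The basic construction of marginal versus conditional individual predictions can be sketched generically (our reading of the idea, not the authors' code): average class-specific predictions by posterior class probabilities, or take the prediction of the modal class.

```python
# Individual predicted values from a fitted mixture: mu[k] is the class-k
# predicted value for a person, r[k] their posterior class probability.
import numpy as np

def individual_predictions(mu, r):
    """Marginal prediction averages over classes; conditional uses the modal class."""
    mu, r = np.asarray(mu, dtype=float), np.asarray(r, dtype=float)
    marginal = float(r @ mu)               # sum_k P(class k | data) * mu_k
    conditional = float(mu[np.argmax(r)])  # prediction under the most likely class
    return marginal, conditional

print(individual_predictions(mu=[2.0, 8.0], r=[0.3, 0.7]))  # (6.2, 8.0)
```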

  5. Introducing a model for competitiveness of suppliers in supply chain through game theory approach

    Directory of Open Access Journals (Sweden)

    Hengameh Cighary Deljavan

    2012-10-01

    Full Text Available The purpose of the present study is to introduce a model for the competitiveness of suppliers in a supply chain through a game theory approach in one of the automobile companies of Iran. In this study, the game is based on price and non-price factors, and the company is going to estimate the real profit obtained from collaboration with each of the supply chain members. This happens by considering the governing competitive conditions based on game theory before inviting bids for the purchase of a spare part among the 8 companies supplying this piece as supply chain members. According to experts in this industry, quality is the main non-price competitiveness factor after price. Among current research models, the model introduced by Lu and Tsao (2011) [Lu, J.C., Tsao, Y.C., & Charoensiriwath, C. (2011). Competition under manufacturer service and retail price. Economic Modelling, 28, 1256-1264.] with two manufacturers and one distributor, being appropriate for the research data, has been taken as the basis, implemented for the case study, and then extended to n manufacturers and one common retailer. Given the price elasticity of demand, the potential size of the market (maximum product demand), retailer price, production price, wholesale price, demand amount, and manufacturer and retailer profit are estimated under three scenarios: manufacturer Stackelberg, retailer Stackelberg, and vertical Nash. By comparing them, price balance points and optimum service levels are specified and the better optimum scenario can be determined. A sensitivity analysis is performed for the new model, and manufacturers are ranked based on manufacturer profit, retailer profit and customer satisfaction. Finally, in addition to introducing the n-person game model, this research analyzes customer satisfaction, which was a missing link in previous models.

  6. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin Huai

    2013-01-01

    Full Text Available Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (when the Stokes number St is relatively large), the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of the water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, the particle dispersion along the radial direction is considerable, and the relative velocity is very low due to the low dynamic response time. When St = 3.08, the dispersion of particles along the radial direction is very limited, and most of the particles have high relative velocities along the streamwise direction.

  7. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
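    As background on the sampling machinery, a toy random-walk Metropolis sampler for a plain log-normal failure-time model is sketched below; the paper's hierarchical SMLNFT sampler and MaxEnt prior are far richer, and all tuning constants here are arbitrary.

```python
# Toy random-walk Metropolis for a log-normal failure-time model (generic sketch).
import numpy as np

rng = np.random.default_rng(0)
t = rng.lognormal(mean=1.0, sigma=0.5, size=50)   # simulated failure times
logt = np.log(t)

def log_post(mu, sigma):
    if sigma <= 0:
        return -np.inf
    # flat prior on mu; weak (unnormalized) half-normal prior on sigma
    return (-len(logt) * np.log(sigma)
            - 0.5 * np.sum((logt - mu) ** 2) / sigma**2
            - 0.5 * (sigma / 10.0) ** 2)

mu, sigma, chain = 0.0, 1.0, []
for _ in range(5000):
    mu_p = mu + 0.1 * rng.standard_normal()
    sigma_p = sigma + 0.05 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(mu_p, sigma_p) - log_post(mu, sigma):
        mu, sigma = mu_p, sigma_p
    chain.append((mu, sigma))
# Reliability at time t0: R(t0) = 1 - Phi((log t0 - mu)/sigma), averaged over the chain.
```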

  8. Introducing Mudbox

    CERN Document Server

    Kermanikian, Ara

    2010-01-01

    One of the first books on Autodesk's new Mudbox 3D modeling and sculpting tool! Autodesk's Mudbox was used to create photorealistic creatures for The Dark Knight, The Mist, and other films. Now you can join the crowd interested in learning this exciting new digital modeling and sculpting tool with this complete guide. Get up to speed on all of Mudbox's features and functions, learn how to sculpt and paint, and master the art of using effective workflows to make it all go easier. Introduces Autodesk's Mudbox, an exciting 3D modeling and sculpting tool that enables you to create photorealistic models.

  9. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying...... spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain...... Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians...
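    For reference, the vMF log-density used in such mixtures can be evaluated stably with an exponentially scaled Bessel function; the sketch below is a standard formula, not the letter's inference code.

```python
# Log-density of the von Mises-Fisher distribution on the unit hypersphere.
import numpy as np
from scipy.special import ive

def vmf_logpdf(x, mu, kappa):
    """x, mu: unit vectors in R^p; kappa > 0 concentration parameter."""
    p = len(mu)
    # log C_p(kappa), using the scaled Bessel function for numerical stability:
    # C_p(k) = k^{p/2-1} / ((2 pi)^{p/2} I_{p/2-1}(k)), with I(k) = ive(k) * e^k
    log_c = ((p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi)
             - (np.log(ive(p / 2 - 1, kappa)) + kappa))
    return log_c + kappa * np.dot(mu, x)

x = np.array([1.0, 0.0, 0.0])
mu = np.array([0.0, 0.0, 1.0])
print(vmf_logpdf(x, mu, kappa=5.0))
```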

  10. Pattern-mixture models for analyzing normal outcome data with proxy respondents.

    Science.gov (United States)

    Shardell, Michelle; Hicks, Gregory E; Miller, Ram R; Langenberg, Patricia; Magaziner, Jay

    2010-06-30

    Studies of older adults often involve interview questions regarding subjective constructs such as perceived disability. In some studies, when subjects are unable (e.g. due to cognitive impairment) or unwilling to respond to these questions, proxies (e.g. relatives or other care givers) are recruited to provide responses in place of the subject. Proxies are usually not approached to respond on behalf of subjects who respond for themselves; thus, for each subject, data from only one of the subject or proxy are available. Typically, proxy responses are simply substituted for missing subject responses, and standard complete-data analyses are performed. However, this approach may introduce measurement error and produce biased parameter estimates. In this paper, we propose using pattern-mixture models that relate non-identifiable parameters to identifiable parameters to analyze data with proxy respondents. We posit three interpretable pattern-mixture restrictions to be used with proxy data, and we propose estimation procedures using maximum likelihood and multiple imputation. The methods are applied to a cohort of elderly hip-fracture patients. (c) 2010 John Wiley & Sons, Ltd.

  11. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  12. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower and upper bounds on the α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order of increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
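    Bounds of this kind can be sanity-checked against a brute-force estimate; the sketch below computes a Monte Carlo estimate of one common α-divergence parameterization between two univariate Gaussian mixtures (our own illustrative estimator, not the paper's bounds).

```python
# Monte Carlo alpha-divergence between two univariate Gaussian mixtures:
# D_alpha(p:q) = (1 - int p^a q^(1-a) dx) / (a (1-a)), via uniform sampling.
import numpy as np
from scipy.stats import norm

def mixture_pdf(x, w, mu, sd):
    return sum(wi * norm.pdf(x, mi, si) for wi, mi, si in zip(w, mu, sd))

def alpha_divergence_mc(p, q, alpha=0.5, n=200_000, lo=-15, hi=15, seed=0):
    x = np.random.default_rng(seed).uniform(lo, hi, n)
    integrand = p(x) ** alpha * q(x) ** (1 - alpha) * (hi - lo)
    return (1 - integrand.mean()) / (alpha * (1 - alpha))

p = lambda x: mixture_pdf(x, [0.5, 0.5], [-2, 2], [1, 1])
q = lambda x: mixture_pdf(x, [0.3, 0.7], [0, 3], [1, 2])
print(alpha_divergence_mc(p, q, alpha=0.5))
```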

  13. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    Science.gov (United States)

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  14. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene...

  15. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

    The role of association schemes is investigated in connection with CPA, while for sPC-SAFT emphasis is given to the role of different types of data in the determination of pure compound parameters suitable for mixture calculations. Moreover, the performance of CPA and sPC-SAFT for MEA-containing systems is compared. The investigation showed that vapor pressures and liquid densities were not sufficient for obtaining reliable parameters with either CPA or sPC-SAFT, but that at least one other type of information is needed. LLE data for a binary mixture of the associating component with an inert compound is very useful in this respect.

  16. Gassmann Modeling of Acoustic Properties of Sand-clay Mixtures

    Science.gov (United States)

    Gurevich, B.; Carcione, J. M.

    The feasibility of modeling elastic properties of a fluid-saturated sand-clay mixture rock is analyzed by assuming that the rock is composed of macroscopic regions of sand and clay. The elastic properties of such a composite rock are computed using two alternative schemes. The first scheme, which we call the composite Gassmann (CG) scheme, uses Gassmann equations to compute elastic moduli of the saturated sand and clay from their respective dry moduli. The effective elastic moduli of the fluid-saturated composite rock are then computed by applying one of the mixing laws commonly used to estimate elastic properties of composite materials. In the second scheme, which we call the Berryman-Milton (BM) scheme, the elastic moduli of the dry composite rock matrix are computed from the moduli of dry sand and clay matrices using the same composite mixing law used in the first scheme. Next, the saturated composite rock moduli are computed using the equations of Brown and Korringa, which, together with the expressions for the coefficients derived by Berryman and Milton, provide an extension of Gassmann equations to rocks with a heterogeneous solid matrix. For both schemes, the moduli of the dry homogeneous sand and clay matrices are assumed to obey Krief's velocity-porosity relationship. As a mixing law we use the self-consistent coherent potential approximation proposed by Berryman. The calculated dependence of compressional and shear velocities on porosity and clay content for a given set of parameters using the two schemes depends on the distribution of total porosity between the sand and clay regions. If the distribution of total porosity between sand and clay is relatively uniform, the predictions of the two schemes in the porosity range up to 0.3 are very similar to each other. For higher porosities and medium-to-large clay content the elastic moduli predicted by the CG scheme are significantly higher than those predicted by the BM scheme. This difference is explained by the fact...
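    The Gassmann step used by the CG scheme is a standard formula and easy to reproduce; in the sketch below the constituent moduli and porosity are hypothetical placeholders.

```python
# Gassmann fluid substitution: saturated bulk modulus from dry-rock,
# mineral, and fluid bulk moduli (all in GPa) and porosity phi.
def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2
    return k_dry + num / den

# e.g. a sand end member: dry modulus 8 GPa, quartz 37 GPa, brine 2.25 GPa
print(gassmann_k_sat(k_dry=8.0, k_min=37.0, k_fl=2.25, phi=0.25))
```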

  17. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    to mean field behavior for very long chains, the critical behavior of mixtures confined into thin film geometry falls in the 2d Ising class irrespective of chain length. The critical temperature always scales ... each effective monomer blocks all eight sites of an elementary cube, and these monomers are connected by bond vectors b ...

  18. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    Science.gov (United States)

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  19. Introducing renewable energy and industrial restructuring to reduce GHG emission: Application of a dynamic simulation model

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

    Highlights: • Renewable energy development is expanded and introduced into socioeconomic activities. • A dynamic optimization simulation model is developed based on an input–output approach. • Regional economic, energy and environmental impacts are assessed dynamically. • Industrial and energy structure is adjusted optimally for GHG emission reduction. - Abstract: Specifying renewable energy development as new energy industries to be introduced into current socioeconomic activities, this study develops a dynamic simulation model with an input–output approach to make a comprehensive assessment of the impacts on economic development, energy consumption and GHG emission under distinct levels of GHG emission constraints, involving targeted GHG emission reduction policies (ERPs) and industrial restructuring. The model is applied to Jilin City to conduct 16 terms of dynamic simulation with GRP as the objective function, subject to mass, value and energy balances, aided by the extended input–output table with renewable energy industries introduced. Simulation results indicate that achievement of the GHG emission reduction target is contributed to by renewable energy industries, ERPs and industrial restructuring collectively, which reshape the terminal energy consumption structure with a larger proportion of renewable energy. Wind power, hydropower and biomass combustion power industries account for more of the power generation structure, implying better industrial prospects. Mining, chemical, petroleum processing, non-metal, metal and thermal power industries are the major targets for industrial restructuring. This method is crucial for understanding the role of renewable energy development in GHG mitigation efforts and other energy-related planning settings, allowing exploration of the optimal relationships among socioeconomic activities and facilitating the simultaneous pursuit of economic development, energy utilization and environmental preservation.

  20. Introducing Modeling Transition Diagrams as a Tool to Connect Mathematical Modeling to Mathematical Thinking

    Science.gov (United States)

    Czocher, Jennifer A.

    2016-01-01

    This study contributes a methodological tool to reconstruct the cognitive processes and mathematical activities carried out by mathematical modelers. Represented as Modeling Transition Diagrams (MTDs), individual modeling routes were constructed for four engineering undergraduate students. Findings stress the importance and limitations of using…

  1. Growth Mixture Modeling of Depression Symptoms Following Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    Rapson Gomez

    2017-08-01

    Full Text Available Growth Mixture Modeling (GMM) was used to investigate the longitudinal trajectory of groups (classes) of depression symptoms, and how these groups were predicted by the covariates of age, sex, severity, and length of hospitalization following Traumatic Brain Injury (TBI), in a group of 1074 individuals (696 males and 378 females) from the Royal Hobart Hospital who sustained a TBI. The study began in late December 2003 and recruitment continued until early 2007. Ages ranged from 14 to 90 years, with a mean of 35.96 years (SD = 16.61). The study also examined the associations between the groups and causes of TBI. Symptoms of depression were assessed using the Hospital Anxiety and Depression Scale within 3 weeks of injury, and at 1, 3, 6, 12, and 24 months post-injury. The results revealed three groups: low, high, and delayed depression. In the low group depression scores remained below the clinical cut-off at all assessment points during the 24 months post-TBI, and in the high group, depression scores were above the clinical cut-off at all assessment points. The delayed group showed an increase in depression symptoms to 12 months after injury, followed by a return to initial assessment level during the following 12 months. Covariates were found to be differentially associated with the three groups. For example, relative to the low group, the high depression group was associated with more severe TBI, being female, and a shorter period of hospitalization. The delayed group also had a shorter period of hospitalization, were younger, and sustained less severe TBI. Our findings show considerable fluctuation of depression over time, and that a non-clinical level of depression at any one point in time does not necessarily mean that the person will continue to have non-clinical levels in the future. As we used GMM, we were able to show new findings and also bring clarity to contradictory past findings on depression and TBI. Consequently, we recommend the use of GMM in future research.

  2. Sound speed models for a noncondensible gas-steam-water mixture

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1984-01-01

    An analytical expression is derived for the homogeneous equilibrium speed of sound in a mixture of noncondensible gas, steam, and water. The expression is based on the Gibbs free energy interphase equilibrium condition for a Gibbs-Dalton mixture in contact with a pure liquid phase. Several simplified models are discussed including the homogeneous frozen model. These idealized models can be used as a reference for data comparison and also serve as a basis for empirically corrected nonhomogeneous and nonequilibrium models
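    One common idealized reference of this kind is Wood's relation for a homogeneous mixture; the sketch below evaluates it for illustrative air/steam/water properties (placeholder values, not numbers from the report).

```python
# Homogeneous mixture sound speed via Wood's relation:
# 1/(rho_m c_m^2) = sum_k alpha_k / (rho_k c_k^2), with rho_m = sum_k alpha_k rho_k.
import numpy as np

def wood_sound_speed(alphas, rhos, cs):
    alphas, rhos, cs = map(np.asarray, (alphas, rhos, cs))
    rho_m = np.sum(alphas * rhos)
    compressibility = np.sum(alphas / (rhos * cs**2))
    return 1.0 / np.sqrt(rho_m * compressibility)

# air + steam + water volume fractions near 1 bar (rough illustrative magnitudes)
print(wood_sound_speed([0.05, 0.05, 0.90], [1.2, 0.6, 958.0], [340.0, 480.0, 1500.0]))
```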

  3. Photometry and models of selected main belt asteroids: IX. Introducing interactive service for asteroid models (ISAM)

    DEFF Research Database (Denmark)

    Marciniak, A.; Bartczak, P.; Santana-Ros, T.

    2012-01-01

    occultations, or space probe imaging. Aims. During our ongoing work to increase the set of asteroids with known spin and shape parameters, there appeared a need for displaying the model plane-of-sky orientations for specific epochs to compare models from different techniques. It would also be instructive...... to be able to track how the complex lightcurves are produced by various asteroid shapes. Methods. Basing our analysis on an extensive photometric observational dataset, we obtained eight asteroid models with the convex lightcurve inversion method. To enable comparison of the photometric models with those......, we increase the sample of asteroid spin and shape models based on disk-integrated photometry to over 200. Three of the shape models obtained here are confirmed by the stellar occultation data; this also allowed independent determinations of their sizes to be made. Conclusions. The ISAM service can...

  4. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.nl [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.pl [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.pl [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.pl [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.nl [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture with a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or, in a wider perspective, to other organisms. - We show a mechanistic approach to assess effects of mixtures at low concentrations.

  5. Mixture Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2007-12-01

    A mixture experiment involves combining two or more components in various proportions or amounts and then measuring one or more responses for the resulting end products. Other factors that affect the response(s), such as process variables and/or the total amount of the mixture, may also be studied in the experiment. A mixture experiment design specifies the combinations of mixture components and other experimental factors (if any) to be studied and the response variable(s) to be measured. Mixture experiment data analyses are then used to achieve the desired goals, which may include (i) understanding the effects of components and other factors on the response(s), (ii) identifying components and other factors with significant and nonsignificant effects on the response(s), (iii) developing models for predicting the response(s) as functions of the mixture components and any other factors, and (iv) developing end-products with desired values and uncertainties of the response(s). Given a mixture experiment problem, a practitioner must consider the possible approaches for designing the experiment and analyzing the data, and then select the approach best suited to the problem. Eight possible approaches include 1) component proportions, 2) mathematically independent variables, 3) slack variable, 4) mixture amount, 5) component amounts, 6) mixture process variable, 7) mixture of mixtures, and 8) multi-factor mixture. The article provides an overview of the mixture experiment designs, models, and data analyses for these approaches.
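    As a concrete instance of approach 1 (component proportions), the sketch below fits a Scheffé quadratic model to made-up three-component mixture data by least squares.

```python
# Least-squares fit of a Scheffe quadratic mixture model (illustrative data).
import numpy as np

def design_matrix(props):
    x1, x2, x3 = props.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

props = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
                  [1/3, 1/3, 1/3]])
y = np.array([12.0, 9.0, 7.0, 14.5, 11.0, 10.2, 12.8])   # made-up responses
beta, *_ = np.linalg.lstsq(design_matrix(props), y, rcond=None)
# beta[:3] are the pure-component responses; beta[3:] the binary blending terms.
print(beta)
```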

  6. Introducing a price variation limiter mechanism into a behavioral financial market model.

    Science.gov (United States)

    Naimzada, Ahmad; Pireddu, Marina

    2015-08-01

    In the present paper, we consider a nonlinear financial market model in which, in order to decrease the complexity of the dynamics and to achieve price stabilization, we introduce a price variation limiter mechanism, which in each period bounds the price variation so that the current price is forced to belong to a certain interval determined by the price realization in the previous period. More precisely, we introduce such mechanism into a financial market model in which the price dynamics are described by a sigmoidal price adjustment mechanism characterized by the presence of two asymptotes that bound the price variation and thus the dynamics. We show that the presence of our asymptotes prevents divergence and negativity issues. Moreover, we prove that the basins of attraction are complicated only under suitable conditions on the parameters and that chaos arises just when the price limiters are loose enough. On the other hand, for some suitable parameter configurations, we detect multistability phenomena characterized by the presence of up to three coexisting attractors.
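    A toy version of the mechanism (our reading of the description, with arbitrary parameter values) can be written as a one-dimensional map: a bounded sigmoidal price adjustment whose step is additionally clipped to a band around the previous price.

```python
# Sigmoidal price adjustment with a price variation limiter (toy sketch).
import numpy as np

def next_price(p, fundamental=10.0, a=2.0, gain=3.0, limit=0.5):
    excess = fundamental - p                  # stand-in for excess demand
    step = gain * np.tanh(a * excess)         # sigmoidal adjustment, bounded
    step = np.clip(step, -limit, limit)       # price variation limiter
    return p + step

p = 14.0
trajectory = []
for _ in range(100):
    p = next_price(p)
    trajectory.append(p)
print(trajectory[-3:])   # settles near the fundamental price with the limiter on
```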

  7. Using mixture models to characterize disease-related traits

    OpenAIRE

    Ye Kenny Q; Chase Gary A; Finch Stephen J; Duan Tao; Mendell Nancy R

    2005-01-01

    Abstract We consider 12 event-related potentials and one electroencephalogram measure as disease-related traits to compare alcohol-dependent individuals (cases) to unaffected individuals (controls). We use two approaches: 1) two-way analysis of variance (with sex and alcohol dependency as the factors), and 2) likelihood ratio tests comparing sex adjusted values of cases to controls assuming that within each group the trait has a 2 (or 3) component normal mixture distribution. In the second ap...

  8. Introducing a boreal wetland model within the Earth System model framework

    Science.gov (United States)

    Getzieh, R. J.; Brovkin, V.; Reick, C.; Kleinen, T.; Raddatz, T.; Raivonen, M.; Sevanto, S.

    2009-04-01

    Wetlands of the northern high latitudes with their low temperatures and waterlogged conditions are prerequisite for peat accumulation. They store at least 25% of the global soil organic carbon and constitute currently the largest natural source of methane. These boreal and subarctic peat carbon pools are sensitive to climate change since the ratio of carbon sequestration and emission is closely dependent on hydrology and temperature. Global biogeochemistry models used for simulations of CO2 dynamics in the past and future climates usually ignore changes in the peat storages. Our approach aims at the evaluation of the boreal wetland feedback to climate through the CO2 and CH4 fluxes on decadal to millennial time scales. A generic model of organic matter accumulation and decay in boreal wetlands is under development in the MPI for Meteorology in cooperation with the University of Helsinki. Our approach is to develop a wetland model which is consistent with the physical and biogeochemical components of the land surface module JSBACH as a part of the Earth System model framework ECHAM5-MPIOM-JSBACH. As prototypes, we use modelling approach by Frolking et al. (2001) for the peat dynamics and the wetland model by Wania (2007) for vegetation cover and plant productivity. An initial distribution of wetlands follows the GLWD-3 map by Lehner and Döll (2004). First results of the modelling approach will be presented. References: Frolking, S. E., N. T. Roulet, T. R. Moore, P. J. H. Richard, M. Lavoie and S. D. Muller (2001): Modeling Northern Peatland Decomposition and Peat Accumulation, Ecosystems, 4, 479-498. Lehner, B., Döll P. (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296 (1-4), 1-22. Wania, R. (2007): Modelling northern peatland land surface processes, vegetation dynamics and methane emissions. PhD thesis, University of Bristol, 122 pp.

  9. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  10. Introducing vaccination against serogroup B meningococcal disease: an economic and mathematical modelling study of potential impact.

    Science.gov (United States)

    Christensen, Hannah; Hickman, Matthew; Edmunds, W John; Trotter, Caroline L

    2013-05-28

    Meningococcal disease remains an important cause of morbidity and mortality worldwide. The first broadly effective vaccine against group B disease (which causes considerable meningococcal disease in Europe, the Americas and Australasia) was licensed in the EU in January 2013; our objective was to estimate the potential impact of introducing such a vaccine in England. We developed two models to estimate the impact of introducing a new 'MenB' vaccine. The cohort model assumes the vaccine protects against disease only; the transmission dynamic model also allows the vaccine to protect against carriage (accounting for herd effects). We used these, and economic models, to estimate the case reduction and cost-effectiveness of a number of different vaccine strategies. We estimate 27% of meningococcal disease cases could be prevented over the lifetime of an English birth cohort by vaccinating infants at 2,3,4 and 12 months of age with a vaccine that prevents disease only; this strategy could be cost-effective at £9 per vaccine dose. Substantial reductions in disease (71%) can be produced after 10 years by routinely vaccinating infants in combination with a large-scale catch-up campaign, using a vaccine which protects against carriage as well as disease; this could be cost-effective at £17 per vaccine dose. New 'MenB' vaccines could substantially reduce disease in England and be cost-effective if competitively priced, particularly if the vaccines can prevent carriage as well as disease. These results are relevant to other countries, with a similar epidemiology to England, considering the introduction of a new 'MenB' vaccine. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo Park

    2016-02-01

    Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effect on item parameters and examinee ability.

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  13. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael

    2013-01-01

    and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast... was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects...

  14. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    International Nuclear Information System (INIS)

    Snodin, Benedict E. K.; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K.; Randisi, Ferdinando; Šulc, Petr; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.

    2015-01-01

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5 M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  15. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    Energy Technology Data Exchange (ETDEWEB)

    Snodin, Benedict E. K., E-mail: benedict.snodin@chem.ox.ac.uk; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K., E-mail: jonathan.doye@chem.ox.ac.uk [Physical and Theoretical Chemistry Laboratory, Department of Chemistry, University of Oxford, South Parks Road, Oxford OX1 3QZ (United Kingdom); Randisi, Ferdinando [Life Sciences Interface Doctoral Training Center, South Parks Road, Oxford OX1 3QU (United Kingdom); Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Šulc, Petr [Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10065 (United States); Ouldridge, Thomas E. [Department of Mathematics, Imperial College, 180 Queen’s Gate, London SW7 2AZ (United Kingdom); Tsukanov, Roman; Nir, Eyal [Department of Chemistry and the Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University of the Negev, Beer Sheva (Israel); Louis, Ard A. [Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom)

    2015-06-21

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5 M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  16. Reverse engineering Boolean networks: from Bernoulli mixture models to rule based systems.

    Directory of Open Access Journals (Sweden)

    Mehreen Saeed

    Full Text Available A Boolean network is a graphical model for representing and analyzing the behavior of gene regulatory networks (GRNs). In this context, the accurate and efficient reconstruction of a Boolean network is essential for understanding the gene regulation mechanism and the complex relations that exist therein. In this paper we introduce an elegant and efficient algorithm for the reverse engineering of Boolean networks from a time series of multivariate binary data corresponding to gene expression data. We call our method ReBMM, i.e., reverse engineering based on Bernoulli mixture models. The time complexity of most of the existing reverse engineering techniques is quite high and depends upon the indegree of a node in the network. Due to their high complexity, these methods can only be applied to sparsely connected networks of small sizes. ReBMM has a time complexity that is independent of the indegree of a node and quadratic in the number of nodes in the network, a big improvement over other techniques, and yet there is little or no compromise in accuracy. We have tested ReBMM on a number of artificial datasets, along with simulated data derived from a plant signaling network. We also used this method to reconstruct a network from real experimental observations of microarray data of the yeast cell cycle. Our method provides a natural framework for generating rules from a probabilistic model. It is simple, intuitive and illustrates excellent empirical results.
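    A compact EM routine for a plain Bernoulli mixture, the building block named in the method's title, is sketched below (generic code, not the ReBMM algorithm itself).

```python
# EM for a Bernoulli mixture on binary states (generic illustrative sketch).
import numpy as np

def bernoulli_mixture_em(X, K=2, n_iter=100, seed=0):
    """X: (n, d) binary array; returns mixing weights and success probabilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = rng.uniform(0.25, 0.75, size=(K, d))   # per-class success probabilities
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: log responsibilities
        logp = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta
```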

  17. Photometry and models of selected main belt asteroids. IX. Introducing interactive service for asteroid models (ISAM)

    NARCIS (Netherlands)

    Marciniak, A.; Bartczak, P.; Santana-Ros, T.; Michalowski, T.; Antonini, P.; Behrend, R.; Bembrick, C.; Bernasconi, L.; Borczyk, W.; Colas, F.; Coloma, J.; Crippa, R.; Esseiva, N.; Fagas, M.; Fauvaud, M.; Fauvaud, S.; Ferreira, D. D. M.; Hein - Bertelsen, R.P.; Higgins, D.; Hirsch, R.; Kajava, J. J. E.; Kaminski, K.; Kryszczynska, A.; Kwiatkowski, T.; Manzini, F.; Michalowski, J.; Michalowski, M. J.; Paschke, A.; Polinska, M.; Poncy, R.; Roy, R.; Santacana, G.; Sobkowiak, K.; Stasik, M.; Starczewski, S.; Velichko, F.; Wucher, H.; Zafar, T.

    Context. The shapes and spin states of asteroids observed with photometric techniques can be reconstructed using the lightcurve inversion method. The resultant models can then be confirmed or exploited further by other techniques, such as adaptive optics, radar, thermal infrared, stellar

  18. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. The model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe with a tip diameter of 12.7 mm was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient, and apparent viscosity. From the results it can be concluded that all model mixtures show non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower viscosities for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, the foam stability index, and the minimal viscosity. Ice cream model mixtures treated only with ultrasound showed the smallest increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Also, ice cream mixtures with a higher protein content showed higher foam stability. The optimal treatment time was determined to be 10 minutes.
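
    For readers unfamiliar with the rheological parameters named above, the following sketch estimates the consistency coefficient K and flow index n of the power-law (Ostwald-de Waele) model tau = K * gamma_dot**n from shear data; n > 1 corresponds to the dilatant behavior reported here. The shear data are made up for illustration.

```python
import numpy as np

gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # shear rate, 1/s
tau = 0.8 * gamma_dot ** 1.25                          # shear stress, Pa (synthetic)

# Linear regression in log-log space: log tau = log K + n * log gamma_dot
n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
K = np.exp(logK)
print(f"flow index n = {n:.2f} (>1 means dilatant), K = {K:.2f} Pa.s^n")

# Apparent viscosity at a given shear rate follows as eta = K * gamma_dot**(n-1)
eta_50 = K * 50.0 ** (n - 1)
```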

  19. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    ... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...
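
    A minimal EM fit of a scale mixture of two zero-mean normals, the effect-size model class named above, might look as follows. The z-scores are simulated, not the CD or SZ statistics, and the starting values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1.0, 9000),      # "null-like" small effects
                    rng.normal(0, 3.0, 1000)])     # rarer large effects

pi, s1, s2 = 0.5, 0.8, 2.0                         # weight and the two scales
for _ in range(300):
    f1 = pi * norm.pdf(z, 0, s1)
    f2 = (1 - pi) * norm.pdf(z, 0, s2)
    r = f1 / (f1 + f2)                             # responsibility of component 1
    pi = r.mean()
    s1 = np.sqrt((r * z**2).sum() / r.sum())
    s2 = np.sqrt(((1 - r) * z**2).sum() / (1 - r).sum())
print(pi, s1, s2)
```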

  20. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres of a specified radius, and was considered as a whole mixture with uniform contact properties for all the distinct elements. The dynamic modulus and phase angle from uniaxial complex modulus tests of the asphalt mixtures in the laboratory have been collected. A macro-scale Burger's model was first established and the input parameters of Burger's contact model were calibrated by fitting the lab test data of the complex modulus of the asphalt mixture. The Burger's contact model parameters are usually calibrated for each frequency, while in this research a constant set of Burger's parameters has been calibrated and used for all the test frequencies; the calibration procedure...

  1. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model, a calculation method was obtained for analyzing the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline as temperature, pressure, and hydraulic conditions change.

  2. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures containing supercritical methane, argon, nitrogen, and carbon dioxide gases at high pressure. With this model it is unnecessary to solve the time-consuming density profile equations of the gradient theory model. The model has been tested on a number of mixtures at low and high pressures. The results show... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  3. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

    Full Text Available This study presents the development of a predictive combustion model for natural gas mixtures in conventional combustion (ignition) engines. The model is based on resolving two zones: one containing the unburned mixture and the other the combustion products. Energy and matter conservation equations were solved for each zone at each crankshaft angle. The nonlinear differential equations for each zone's energy (covering compression, combustion, and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enables studying different natural gas compositions and evaluating combustion in the presence of dry and humid air. Validation against experimental data demonstrates the precision and accuracy of the results produced. The results include cylinder pressure, unburned and burned mixture temperatures, burned mass fraction, and combustion reaction heat for the engine modelled with a natural gas mixture.
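
    The fourth-order Runge-Kutta scheme mentioned above advances the per-zone energy equations one crank angle at a time. A generic sketch follows, with a toy two-state right-hand side standing in for the paper's actual ODEs.

```python
import numpy as np

def rk4_step(f, theta, y, h):
    """One classical RK4 step of size h for dy/dtheta = f(theta, y)."""
    k1 = f(theta, y)
    k2 = f(theta + h / 2, y + h / 2 * k1)
    k3 = f(theta + h / 2, y + h / 2 * k2)
    k4 = f(theta + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def dydtheta(theta, y):
    # Placeholder right-hand side, one equation per zone (not the paper's ODEs)
    T_unburned, T_burned = y
    return np.array([-0.01 * T_unburned, 0.02 * (2500.0 - T_burned)])

y = np.array([600.0, 2000.0])     # initial zone temperatures, K
for step in range(360):           # one crank-angle degree per step
    y = rk4_step(dydtheta, float(step), y, 1.0)
```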

  4. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

    Vol. 37, No. 24 (2013), p. 9970-9984. ISSN 0307-904X. R&D Projects: GA TA ČR TA01030123. Institutional support: RVO:67985556. Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy. Subject RIV: BC - Control Systems Theory. Impact factor: 2.158, year: 2013. http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  5. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    Science.gov (United States)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2015-01-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model, applicable to aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, intended to be applicable in the temperature range from ~275 to ~400 K. With the goal of improving the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~190 to ~440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in

  6. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  9. Latent variable mixture modeling in psychiatric research--a review and application.

    Science.gov (United States)

    Miettunen, J; Nordström, T; Kaakinen, M; Ahmed, A O

    2016-02-01

    Latent variable mixture modeling represents a flexible approach to investigating population heterogeneity by sorting cases into latent but non-arbitrary subgroups that are more homogeneous. The purpose of this selective review is to provide a non-technical introduction to mixture modeling in a cross-sectional context. Latent class analysis is used to classify individuals into homogeneous subgroups (latent classes). Factor mixture modeling is a newer approach that fuses latent class analysis and factor analysis, and is adaptable to representing both categorical and dimensional states of affairs. This article provides an overview of latent variable mixture models and illustrates their application to the study of the latent structure of psychotic experiences. The flexibility of latent variable mixture models makes them adaptable to the study of heterogeneity in complex psychiatric and psychological phenomena. They also allow researchers to address research questions that directly compare the viability of dimensional, categorical, and hybrid conceptions of constructs.
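
    A common first step in such analyses is choosing the number of latent classes by an information criterion. The sketch below does this with scikit-learn's GaussianMixture and BIC on simulated continuous indicators, as an accessible stand-in for dedicated latent class software; the data and class counts are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two simulated latent classes with different means on four indicators
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(2.5, 1.0, (200, 4))])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)   # the class count with the lowest BIC
print(bics, best_k)
```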

  10. Quantifying the interactions among metal mixtures in toxicodynamic process with generalized linear model.

    Science.gov (United States)

    Feng, Jianfeng; Gao, Yongfei; Ji, Yijun; Zhu, Lin

    2018-03-05

    Predicting the toxicity of chemical mixtures is difficult because of the additive, antagonistic, or synergistic interactions among the mixture components. Antagonistic and synergistic interactions are dominant in metal mixtures, and their distributions may correlate with exposure concentrations. However, whether the interaction types of metal mixtures change at different time points during toxicodynamic (TD) processes is undetermined because of insufficient appropriate models and metal bioaccumulation data at different time points. In the present study, the generalized linear model (GLM) was used to describe the combined toxicities of binary metal mixtures, such as Cu-Zn, Cu-Cd, and Cd-Pb, to zebrafish larvae (Danio rerio), and to identify possible interaction types among these mixtures as a complement to the traditional concentration addition (CA) and independent action (IA) models. The GLM was then applied to quantify the different possible interaction types for metal mixture toxicity (Cu-Zn, Cu-Cd, and Cd-Pb to D. rerio and Ni-Co to the oligochaete Enchytraeus crypticus) during the TD process at different exposure times. We found different metal interaction responses in the TD process, and the interaction coefficients changed significantly with exposure time (p < 0.05), which has implications for understanding mixture toxicology on organisms. Moreover, care should be taken when evaluating interactions in toxicity prediction because results may vary at different time points. The GLM could be an alternative or complementary approach to the biotic ligand model (BLM) for analyzing and predicting metal mixture toxicity.
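
    As a hedged illustration of the modeling strategy (not the authors' exact specification), a GLM with a metal-by-metal interaction term can be fit as follows; the data frame, dose ranges, and binomial mortality outcome are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"Cu": rng.uniform(0, 1, 200), "Zn": rng.uniform(0, 1, 200)})
# Synthetic "truth" with a negative (antagonistic) Cu:Zn interaction
logit = -1.0 + 2.0 * df.Cu + 1.5 * df.Zn - 1.2 * df.Cu * df.Zn
df["dead"] = (rng.random(200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# 'Cu * Zn' expands to Cu + Zn + Cu:Zn; the sign and size of the Cu:Zn
# coefficient indicate antagonism or synergy at a given exposure time.
model = smf.glm("dead ~ Cu * Zn", data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```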

  11. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  12. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    Science.gov (United States)

    Portner, H.; Wolf, A.; Bugmann, H.

    2009-04-01

    Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually gained more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors, a prominent one being soil temperature [Berg and Laskowski (2005)]. One uncertainty concerns the response function used to describe the sensitivity of soil organic matter decomposition to temperature. This relationship is often described by one out of a set of similar exponential functions, but it has not been investigated how uncertainties in the choice of the response function influence the long-term predictions of biogeochemical models. We built upon the well-established LPJ-GUESS model [Smith et al. (2001)]. We tested five candidate functions and calibrated them against eight datasets from different Ameriflux and CarboEuropeIP sites [Hibbard et al. (2006)]. We used a simple Exponential function with a constant Q10, the Arrhenius function, the Gaussian function [Tuomi et al. (2008), O'Connell (1990)], the Van't Hoff function [Van't Hoff (1901)] and the Lloyd & Taylor function [Lloyd and Taylor (1994)]. We assessed the impact of uncertainty in model formulation of the temperature response on estimates of present and future long-term carbon storage in ecosystems and hence on the CO2 feedback potential to the atmosphere. We specifically investigated the relative importance of model formulation and the error introduced by using different data sets for the parameterization. Our results suggested that the Exponential and Arrhenius functions are inappropriate, as they overestimated the respiration rates at lower temperatures. The Gaussian, Van't Hoff and Lloyd & Taylor functions all fit the observed data better, whereby the Gaussian and Van't Hoff functions underestimated the response at higher temperatures. We suggest that the
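
    The candidate response functions can be written as interchangeable callables of the kind a model like LPJ-GUESS would call. The sketch below implements three of them (constant-Q10 exponential, Arrhenius, and Lloyd & Taylor) with illustrative default parameters, not the calibrated site fits.

```python
import numpy as np

def exponential_q10(T, R10=2.0, Q10=2.0):
    """Constant-Q10 exponential; T in kelvin, R10 is the rate at 10 degC."""
    return R10 * Q10 ** ((T - 283.15) / 10.0)

def arrhenius(T, A=1e10, Ea=60e3):
    """Arrhenius form; Ea in J/mol, 8.314 is the gas constant."""
    return A * np.exp(-Ea / (8.314 * T))

def lloyd_taylor(T, R10=2.0, E0=308.56, T0=227.13):
    """Lloyd & Taylor (1994) form with its standard constants (in kelvin)."""
    return R10 * np.exp(E0 * (1.0 / (283.15 - T0) - 1.0 / (T - T0)))

T = np.linspace(263.15, 303.15, 9)   # -10 to 30 degC
for f in (exponential_q10, arrhenius, lloyd_taylor):
    print(f.__name__, np.round(f(T), 2))
```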

  13. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through some applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
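
    The practical appeal of such infinite mixtures, inferring the effective number of components rather than fixing it, can be illustrated with scikit-learn's truncated Dirichlet process Gaussian mixture. Note that this variational Gaussian model is only an accessible stand-in for the paper's generalized Dirichlet components and Gibbs sampler.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three well-separated 2-D clusters; the model is told nothing about K=3
X = np.vstack([rng.normal(m, 0.3, (150, 2)) for m in (-2.0, 0.0, 2.5)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level, not K
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
print(np.round(dpgmm.weights_, 3))   # most of the 10 weights collapse near 0
```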

  14. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    Science.gov (United States)

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components that will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as parameters, and thus lack the degrees of freedom needed to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary of the design simplex: its vertices, edges, or faces. A new strategy for adding multiple interior points for symmetric mixture models is therefore proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
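
    To make the design terminology concrete, the sketch below builds the six-term Scheffé quadratic model matrix over the classic {3,2} simplex-lattice (vertices plus edge midpoints, a minimally supported design) and scores it with the D-criterion det(X'X). Adding interior points, as the paper proposes, buys Lack of Fit degrees of freedom at some cost in this determinant; the candidate set here is illustrative, not the paper's designs.

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(points):
    """Model matrix with columns x1, x2, x3, x1x2, x1x3, x2x3."""
    points = np.asarray(points, dtype=float)
    cross = [points[:, i] * points[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack([points] + cross)

lattice_32 = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
              (.5, .5, 0), (.5, 0, .5), (0, .5, .5)]
X = scheffe_quadratic(lattice_32)
print(np.linalg.det(X.T @ X))   # D-criterion for this six-point design
```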

  15. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling of shock wave loading of mixtures with an SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the medium. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with experimental data obtained by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is also described.

  16. Introducing the 3-A grief intervention model for dementia caregivers: acknowledge, assess and assist.

    Science.gov (United States)

    Silverberg, Eleanor

    With our aging population, it is estimated that in the near future there will be an overwhelming increase in the number of individuals dealing with Alzheimer's disease or a related dementia (ADRD). From the time that symptoms begin to insidiously emerge, it can take well over ten years for the disease to run its course. In addition to the crippling effect on those afflicted, this lengthy duration can have an ongoing debilitating effect on the family members who are grieving while providing care. Researchers have claimed that the manner in which family members experience and manage their grief reactions to the pre-death losses can influence both caregiving outcomes and adjustment to bereavement once those with the disease have died. Given the relevance of grief management, this article provides answers to such questions as: How do family caregivers of individuals with ADRD manifest their grief? How can healthcare professionals intervene to assist with grief management? The answers are provided by introducing the 3-A grief intervention model for family caregivers of individuals with ADRD. The 3-A model enfranchises the caregiver grief experience through Acknowledging, Assessing, and Assisting in grief management. In doing so, different grieving styles are identified, and the roles that denial and respite play in adapting to the family caregiver's grief experience are recognized. Clinical strategies to assist in grief management are also provided.

  17. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    Science.gov (United States)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one can find two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, for example Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].

  18. A Generic Model for Prediction of Separation Performance of Olefin/Paraffin Mixture by Glassy Polymer Membranes

    Directory of Open Access Journals (Sweden)

    A.A. Ghoreyshi

    2008-02-01

    Full Text Available The separation of olefin/paraffin mixtures is an important process in the petrochemical industry, traditionally performed by low-temperature distillation, with its high energy consumption, or by complex extractive distillation and adsorption techniques. Membrane separation is emerging as an alternative to these traditional processes on account of its low energy demand and simple operation. Investigations by various researchers on polymeric membranes have found that certain glassy polymers are suitable materials for olefin/paraffin separation. In this regard, knowledge of the possible transport mechanisms of these processes plays a significant role in their design and application. In this study, the separation behavior of olefin/paraffin mixtures through glassy polymers was modeled by three different approaches: the so-called dual transport model, the basic adsorption-diffusion theory, and the general Maxwell-Stefan formulation. The systems chosen to validate the developed transport models are the separation of an ethane-ethylene mixture by a 6FDA-6FpDA polyimide membrane and of a propane-propylene mixture by a 6FDA-TrMPD polyimide membrane, for which individual sorption and permeation data are available in the literature. A critical examination of the dual transport model shows that it clearly fails to predict even the proper trend for selectivities. Adjusting the permeabilities to account for the contribution of non-selective bulk flow introduced no improvement in the predictive ability of the model. The modeling results based on the basic adsorption-diffusion theory revealed that an acceptable result is attainable in this approach only by using mixed permeability data, which negates the advantage of predicting multicomponent separation performance from pure component data. Finally, the results obtained from the model developed on the basis of the Maxwell-Stefan formulation show a
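
    The dual-mode sorption picture behind such "dual transport" models of glassy polymers combines Henry's-law dissolution with Langmuir hole filling. A small sketch follows; all parameter values are invented for illustration.

```python
import numpy as np

def dual_mode_concentration(p, kD=0.5, CH=8.0, b=0.2):
    """C(p) = kD*p + CH*b*p/(1 + b*p): Henry term plus Langmuir hole filling."""
    return kD * p + CH * b * p / (1.0 + b * p)

p = np.linspace(0.0, 10.0, 6)          # penetrant pressure, arbitrary units
print(np.round(dual_mode_concentration(p), 2))
```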

  19. Myocardium Segmentation From DE MRI Using Multicomponent Gaussian Mixture Model and Coupled Level Set.

    Science.gov (United States)

    Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu

    2017-11-01

    Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.

  20. Evaluation of fecal mRNA reproducibility via a marginal transformed mixture modeling approach

    Directory of Open Access Journals (Sweden)

    Davidson Laurie A

    2010-01-01

    Full Text Available Abstract Background Developing and evaluating new technology that enables researchers to recover gene-expression levels of colonic cells from fecal samples could be key to a non-invasive screening tool for early detection of colon cancer. The current study, to the best of our knowledge, is the first to investigate and report the reproducibility of fecal microarray data. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility and the preliminary analysis of fecal and mucosal data, we assessed the reliability of mixture density estimation and the reproducibility of fecal microarray data. Using Monte Carlo-based methods, we explored whether ICC values should be modeled as a beta-mixture or transformed first and fitted with a normal-mixture. We used outcomes from bootstrapped goodness-of-fit tests to determine which approach is less sensitive toward potential violation of distributional assumptions. Results The graphical examination of both the distributions of ICC and probit-transformed ICC (PT-ICC) clearly shows that there are two components in the distributions. For ICC measurements, which are between 0 and 1, the practice in the literature has been to assume that the data points are from a beta-mixture distribution. Nevertheless, in our study we show that the use of a normal-mixture modeling approach on PT-ICC could provide superior performance. Conclusions When modeling ICC values of gene expression levels, using a mixture of normals in the probit-transformed (PT) scale is less sensitive toward model mis-specification than using a mixture of betas. We show that a biased conclusion could be made if we follow the traditional approach and model the two sets of ICC values using the mixture of betas directly. The problematic estimation arises from the sensitivity of beta-mixtures toward model mis-specification, particularly when there are observations in the neighborhood of the boundary points, 0 or 1. Since beta-mixture modeling
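
    The paper's preferred recipe, probit-transforming the ICC values and fitting a normal mixture in the transformed scale, reduces to a few lines. The ICC sample below is simulated from two beta components rather than taken from the fecal/mucosal array data.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
icc = np.clip(np.concatenate([rng.beta(2, 8, 700),    # low-reproducibility genes
                              rng.beta(8, 2, 300)]),  # high-reproducibility genes
              1e-6, 1 - 1e-6)                         # stay off the 0/1 boundary

pt_icc = norm.ppf(icc)                                # probit transform
gm = GaussianMixture(n_components=2, random_state=0).fit(pt_icc.reshape(-1, 1))
print(gm.weights_, gm.means_.ravel())                 # two-component fit
```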

  1. Does introduced fauna influence soil erosion? A field and modelling assessment.

    Science.gov (United States)

    Hancock, G R; Lowry, J B C; Dever, C; Braggins, M

    2015-06-15

    Pigs (Sus scrofa) are recognised as having significant ecological impacts in many areas of the world, including northern Australia. The full consequences of the introduction of pigs are difficult to quantify, as the impacts may only be detected over the long term, and there is a lack of quantitative information on the impacts of feral pigs globally. In this study the effect of feral pigs is quantified in an undisturbed catchment in the monsoonal tropics of northern Australia. Over a three-year period, field data showed that the areal extent of pig disturbance ranged from 0.3-3.3% of the survey area. The mass of material exhumed through these activities ranged from 4.3 to 36.0 t ha^-1 yr^-1. The findings demonstrate that large introduced species such as feral pigs are disturbing large areas as well as exhuming considerable volumes of soil. A numerical landscape evolution and soil erosion model was used to assess the effect of this disturbance on catchment-scale erosion rates. The modelling demonstrated that simulated pig disturbance in previously undisturbed areas produced lower erosion rates compared to those areas which had not been impacted by pigs. This is attributed to the pig disturbance increasing surface roughness and trapping sediment, and suggests that in this specific environment, disturbance by pigs does not enhance erosion. However, this conclusion is prefaced by two important caveats. First, the long-term impact of soil disturbance is still very uncertain. Secondly, modelling results show a clear differentiation between those from an undisturbed environment and those from a post-mining landscape, in which pig disturbance may enhance erosion.

  2. Introducing a model of pairing based on base pair specific interactions between identical DNA sequences

    Science.gov (United States)

    Lee, Dominic J. (O’)

    2018-02-01

    At present, two types of physical mechanism have been suggested that may facilitate preferential pairing between DNA molecules with identical or similar base pair texts, without separation of base pairs. One mechanism relies solely on base pair specific patterns of helix distortion being the same on the two molecules, discussed extensively in the past. The other mechanism proposes that there are preferential interactions between base pairs of the same composition. We introduce a model, built on this second mechanism, where both thermal stretching and twisting fluctuations are included, as well as the base pair specific helix distortions. Firstly, we consider an approximation for weak pairing interactions, or short molecules. This yields a dependence of the energy on the square root of the molecular length, which could explain recent experimental data. However, analysis suggests that this approximation is no longer valid at large DNA lengths. In a second approximation, for long molecules, we define two adaptation lengths for twisting and stretching, over which the pairing interaction can limit the accumulation of helix disorder. When the pairing interaction is sufficiently strong, both adaptation lengths are finite; however, as we reduce the pairing strength, the stretching adaptation length remains finite but the torsional one becomes infinite. This second state persists to arbitrarily weak values of the pairing strength, suggesting that, if the molecules are long enough, the pairing energy scales with length. To probe differences between the two pairing mechanisms, we also construct a model of similar form. However, in this model, pairing between identical sequences relies solely on the intrinsic helix distortion patterns. Between the two models, we see interesting qualitative differences. We discuss our findings, and suggest new work to distinguish between the two mechanisms.

  3. Consequences of introducing bryophytes and Arctic-shrubs in a land surface model with dynamical vegetation.

    Science.gov (United States)

    Druel, A.; Peylin, P.; Krinner, G.; Ciais, P.; Viovy, N.

    2016-12-01

    Recent developments of boreal vegetation in land surface models show the importance of new plant functional types for a better representation of physical and carbon cycle related processes in northern latitudes. In past climate transitions, shifts in northern vegetation played a crucial role, for example in the inception of the Last Glacial Maximum. With the current high-latitude warming, a greening of vegetation is observed, associated with increased shrub cover. It has thus become essential to include shifts in vegetation in models. In the ORCHIDEE land surface model with a dynamic vegetation, we introduced new parameterizations and processes associated with Arctic shrubs, bryophytes (mosses and lichens) and boreal C3 grasses to simulate their effect on biomass, albedo, snow cover and soil thermal dynamics (including frozen soils). Specific competition and survival conditions are defined for these three plant functional types. Competition between herbaceous vegetation, shrubs and trees is based on available light. Survival conditions of shrubs include their protection from cold temperatures by snow, and the competition between C3 grasses and bryophytes depends especially on soil water-saturation conditions. The equilibrium fractional coverage of the three competing plant functional types is based on the net primary production. We compare the results from simulations with different configurations: 1) vegetation being either prescribed from a satellite land cover map or dynamic, and 2) plant functional types being either the default settings of ORCHIDEE, which include three different boreal tree types and one grassland type, or the latter plus the new boreal vegetation types. The simulations are run for the historical period, followed by an additional 100 years under the RCP 4.5 and 8.5 climate scenarios. We evaluate the effect of the new plant functional types on the vegetation distribution, and their consequences for energy, water and carbon fluxes.

  4. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation, based on a block matching criterion, through the modeling of image blocks by a mixture of two and three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the mixture clusters. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.

  5. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol mixtures, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or the principles of artificial neural networks. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  6. Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves

    Science.gov (United States)

    Wang, Qiang; Ma, Shuxian; Yue, Dong

    2018-04-01

    Composite materials have comprehensively better properties than traditional materials, particularly a higher strength-to-weight ratio, and have therefore been used more and more widely. However, damage in composite structures is usually varied and complicated. In order to ensure the safety of these structures, it is necessary to monitor and identify structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies a damage identification approach for composite structures using Lamb waves and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and its parameter estimation, are introduced. Multiple statistical characteristic parameters of the excited Lamb waves are extracted, and the dimensionality of the parameter space is reduced by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.

  7. Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering

    Directory of Open Access Journals (Sweden)

    M. H. Savoji

    2014-09-01

    Full Text Available Gaussian mixture models (GMMs) of the power spectral densities of speech and noise are used with explicit Bayesian estimation in Wiener filtering of noisy speech. No assumption is made on the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined systems of equations whose solutions lead to the first estimates of the speech and noise power spectra. The noise source is also identified and the input SNR estimated in this first step. These first estimates are then refined using approximate but explicit MMSE and MAP estimation formulations. The refined estimates are then used in a Wiener filter to reduce noise and enhance the noisy speech. The proposed schemes show good results. Notably, the explicit MAP solution, introduced here for the first time, reduces the computation time to less than one third, with a slightly higher improvement in SNR and PESQ score and less distortion, in comparison to the MMSE solution.
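
    Once the speech and noise power spectra are estimated, the Wiener gain applied per frequency bin is H = S / (S + N). A bare-bones sketch follows, with placeholder spectra standing in for the paper's GMM-based Bayesian estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
freq_bins = 257
S_hat = rng.uniform(0.5, 2.0, freq_bins)      # estimated speech PSD (placeholder)
N_hat = rng.uniform(0.1, 0.5, freq_bins)      # estimated noise PSD (placeholder)

H = S_hat / (S_hat + N_hat)                   # Wiener gain, in (0, 1) per bin
noisy_spectrum = rng.normal(size=freq_bins) + 1j * rng.normal(size=freq_bins)
enhanced = H * noisy_spectrum                 # applied frame-by-frame in an STFT
```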

  8. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scales using a spatially based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
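
    For reference, the Equitable Threat Score used above follows directly from a 2x2 contingency table of forecast versus observed exceedances; the counts below are illustrative only.

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    where hits_random is the hit count expected by chance."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

print(equitable_threat_score(hits=120, misses=40, false_alarms=60,
                             correct_negatives=780))
```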

  9. Modeling of the effect of intentionally introduced traps on hole transport in single-crystal rubrene

    KAUST Repository

    Dacuña, Javier

    2014-06-05

    Defects have been intentionally introduced in a rubrene single crystal by means of two different mechanisms: ultraviolet ozone (UVO) exposure and x-ray irradiation. A complete drift-diffusion model based on the mobility edge (ME) concept, which takes into account asymmetries and nonuniformities in the semiconductor, is used to estimate the energetic and spatial distribution of trap states. The trap distribution for pristine devices can be decomposed into two well defined regions: a shallow region ascribed to structural disorder and a deeper region ascribed to defects. UVO and x-ray treatments increase the hole trap concentration in the semiconductor with different energetic and spatial signatures. The former creates traps near the top surface in the 0.3-0.4 eV region, while the latter induces a wider distribution of traps extending from the band edge, with a spatial distribution that peaks near the top and bottom interfaces. In addition to inducing hole trap states in the transport gap, both processes are shown to reduce the mobility with respect to a pristine crystal.

  10. On mixture model complexity estimation for music recommender systems

    NARCIS (Netherlands)

    Balkema, W.; van der Heijden, Ferdinand; Meijerink, B.

    2006-01-01

    Content-based music navigation systems are in need of robust music similarity measures. Current similarity measures model each song with the same model parameters. We propose methods to efficiently estimate the required number of model parameters of each individual song. First results of a study on

  11. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of the moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moment time data.
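
    The moment step is easy to make concrete: for a gamma-distributed DCDF, the shape and scale follow from the first two raw moments, as in this sketch. The sample here stands in for the moments produced by the kinetic model at one time point.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=0.5, size=10000)   # stand-in "distribution"

m1 = sample.mean()                  # first raw moment
m2 = (sample ** 2).mean()           # second raw moment
var = m2 - m1 ** 2
k_hat = m1 ** 2 / var               # gamma shape from moment matching
theta_hat = var / m1                # gamma scale from moment matching
print(k_hat, theta_hat)
```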

  12. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means of combining the available computer models, in a flexible and principled manner, and performing reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  13. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
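
    For a single trajectory, the linear-linear spline with an unknown knot can be estimated by profiling the knot over a grid, as sketched below on simulated data; the paper's mixture model generalizes this kind of fit to latent classes.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 60)
# Linear-linear trajectory: slope changes at the (unknown) knot t = 4
y = 1.0 + 0.8 * t + 1.5 * np.maximum(t - 4.0, 0.0) + rng.normal(0, 0.3, t.size)

def sse_for_knot(knot):
    """Least-squares fit of intercept, slope, and slope change for a fixed knot."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    beta, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
    return residuals[0] if residuals.size else np.sum((y - X @ beta) ** 2)

knots = np.linspace(1.0, 9.0, 81)
best = knots[np.argmin([sse_for_knot(k) for k in knots])]
print(f"estimated knot: {best:.2f}")
```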

  14. Modeling and Computation of Thermodynamic Equilibrium for Mixtures of Inorganic and Organic Species

    Science.gov (United States)

    Caboussat, A.; Amundson, N. R.; He, J.; Martynenko, A. V.; Seinfeld, J. H.

    2007-05-01

    A series of modules has been developed in the atmospheric modeling community to predict the phase transition, crystallization and evaporation of inorganic aerosols. Modules for the computation of the thermodynamics of pure organic-containing aerosols have been developed more recently; however, the modeling of aerosols containing mixtures of inorganic and organic compounds has received less attention. We present here a model (UHAERO) that is flexible and efficient and rigorously computes the thermodynamic equilibrium of atmospheric particles containing inorganic and organic compounds. It is applied first to mixtures of inorganic electrolytes and dicarboxylic acids, and then to thermodynamic equilibria including crystallization and liquid-liquid phase separation. The model does not rely on any a priori specification of the phases present in certain atmospheric conditions. The multicomponent phase equilibrium for a closed organic aerosol system at constant temperature and pressure and for specified feeds is the solution to the equilibrium problem arising from the constrained minimization of the Gibbs free energy. For mixtures of inorganic electrolytes and dissociated organics, organic salts appear at equilibrium in the aqueous phase. In the general case, liquid-liquid phase separations happen and electrolytes dissociate in both aqueous and organic liquid phases. The Gibbs free energy is modeled by the UNIFAC model for the organic compounds, the PSC model for the inorganic constituents and a Pitzer model for interactions. The difficulty comes from the accurate estimation of interactions in the modeling of the activity coefficients. An accurate and efficient method for the computation of the minimum of energy is used to compute phase diagrams for mixtures of inorganic and organic species. Numerical results show the efficiency of the model for mixtures of inorganic electrolytes and organic acids, which makes it suitable for insertion in global three-dimensional air quality

  15. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using a previously reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating the product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not need explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model the rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.

  16. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-02-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  17. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...

  18. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  19. Modelling time course gene expression data with finite mixtures of linear additive models.

    Science.gov (United States)

    Grün, Bettina; Scharl, Theresa; Leisch, Friedrich

    2012-01-15

    A model class of finite mixtures of linear additive models is presented. The component-specific parameters in the regression models are estimated using regularized likelihood methods. The advantages of the regularization are that (i) the pre-specified maximum degrees of freedom for the splines is less crucial than for unregularized estimation and that (ii) for each component individually a suitable degree of freedom is selected in an automatic way. The performance is evaluated in a simulation study with artificial data as well as on a yeast cell cycle dataset of gene expression levels over time. The latest release version of the R package flexmix is available from CRAN (http://cran.r-project.org/).
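
    A minimal sketch of the EM scheme underlying such mixtures of regression models, written in Python for consistency with the other examples in this collection (the paper's own implementation is the R package flexmix); regularization and spline bases are omitted, so this is only the unpenalized core of the method:

```python
import numpy as np
from scipy.stats import norm

def em_mix_reg(X, y, k=2, iters=200, seed=0):
    """EM for a k-component mixture of linear regressions with Gaussian
    noise. X should include an intercept column."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    resp = rng.dirichlet(np.ones(k), size=n)          # soft assignments
    beta = np.zeros((k, d)); sigma = np.ones(k); pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        for j in range(k):                            # M-step: weighted LS
            w = resp[:, j]
            A = X.T @ (X * w[:, None])
            beta[j] = np.linalg.solve(A, X.T @ (w * y))
            res = y - X @ beta[j]
            sigma[j] = np.sqrt((w * res**2).sum() / w.sum())
            pi[j] = w.mean()
        dens = np.column_stack([pi[j] * norm.pdf(y, X @ beta[j], sigma[j])
                                for j in range(k)])   # E-step
        resp = dens / dens.sum(axis=1, keepdims=True)
    return beta, sigma, pi
```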

  20. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    This paper introduces the 'Interactive Model' of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases – 'Multimodal Critical Analysis of Scripts' (MuCrAS) and 'Multimodal Re-Textualization of Scripts' (MuReTS). Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004). By resorting to Critical Discourse Analysis (Fairclough 1995, 2001), to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006), the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application is discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  1. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavily tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared with the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  2. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    Science.gov (United States)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  3. Isothermal (vapour + liquid) equilibrium of (cyclic ethers + chlorohexane) mixtures: Experimental results and SAFT modelling

    International Nuclear Information System (INIS)

    Bandres, I.; Giner, B.; Lopez, M.C.; Artigas, H.; Lafuente, C.

    2008-01-01

    Experimental data for the isothermal (vapour + liquid) equilibrium of mixtures formed by several cyclic ethers (tetrahydrofuran, tetrahydropyran, 1,3-dioxolane, and 1,4-dioxane) and chlorohexane at temperatures of (298.15 and 328.15) K are presented. The experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. Furthermore, the influence of temperature on the (vapour + liquid) equilibrium of these mixtures has been explored and discussed. Transferable parameters of the SAFT-VR approach together with standard combining rules have been used to model the phase equilibrium of the mixtures, providing a description of their (vapour + liquid) equilibrium that is in excellent agreement with the experimental data.

  4. The phase behavior of a hard sphere chain model of a binary n-alkane mixture

    International Nuclear Information System (INIS)

    Malanoski, A. P.; Monson, P. A.

    2000-01-01

    Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united atom, hard sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials for the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. (c) 2000 American Institute of Physics

  5. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and only for a limited number of basic mixture components.
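
    A minimal sketch of the PCA step on sensor-array data, using randomly generated stand-in signals (the real inputs would be the six FIGARO sensor responses described above):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical responses of a 6-sensor array: rows are samples of the model
# mixtures, columns are individual sensor signals.
rng = np.random.default_rng(1)
signals = rng.normal(size=(30, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(signals)      # sample coordinates in PC space
print(pca.explained_variance_ratio_)     # variance captured by PC1 and PC2
# Clusters of scores that separate by marker concentration would indicate
# that the array can discriminate COPD-marker-rich mixtures.
```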

  6. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition at liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the temperature glide (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles, where zeotropic working fluids improve performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically, and the temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied to fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with experimental data published by Barraza et al. (2016) and show good agreement.

  7. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  8. Implications of introducing realistic fire response traits in a Dynamic Global Vegetation Model

    Science.gov (United States)

    Kelley, D.; Harrison, S. P.; Prentice, I. C.

    2013-12-01

    Bark thickness is a key trait protecting woody plants against fire damage, while the ability to resprout is a trait that confers competitive advantage over non-resprouting individuals in fire-prone landscapes. Neither trait is well represented in fire-enabled dynamic global vegetation models (DGVMs). Here we describe a version of the Land Processes and eXchanges (LPX-Mv1) DGVM that incorporates both of these traits in a realistic way. From a synthesis of a large number of field studies, we show there is considerable innate variability in bark thickness between species within a plant-functional type (PFT). Furthermore, bark thickness is an adaptive trait at ecosystem level, increasing with fire frequency. We use the data to specify the range of bark thicknesses characteristic of each model PFT. We allow this distribution to change dynamically: thinner-barked trees are killed preferentially by fire, shifting the distribution of bark thicknesses represented in a model grid cell. We use the PFT-specific bark-thickness probability range for saplings during re-establishment. Since it is rare to destroy all trees in a grid cell, this treatment results in average bark thickness increasing with fire frequency and intensity. Resprouting is a prominent adaptation of temperate and tropical trees in fire-prone areas. The ability to resprout from above-ground tissue (apical or epicormic resprouting) results in the fastest recovery of total biomass after disturbance; resprouting from basal or below-ground meristems results in slower recovery, while non-resprouting species must regenerate from seed and therefore take the longest time to recover. Our analyses show that resprouting species have thicker bark than non-resprouting species. Investment in resprouting is accompanied by reduced efficacy of regeneration from seed. We introduce resprouting PFTs in LPX-Mv1 by specifying an appropriate range of bark thickness, allowing resprouters to survive fire and regenerate vegetatively in

  9. Modelling storm development and the impact when introducing waves, sea spray and heat fluxes

    Science.gov (United States)

    Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik

    2015-04-01

    Under high wind speed conditions, sea spray generated by intense wave breaking has a large influence on the wind stress and heat fluxes; measurements show that the drag coefficient decreases at high wind speeds. The sea spray generation function (SSGF), an important term in wind stress parameterizations at high wind speed, is usually treated as a function of wind speed or friction velocity. In this study, we introduce a wave-state-dependent SSGF and a wave-age-dependent Charnock parameter into a high-wind-speed wind stress parameterization (Kudryavtsev et al., 2011, 2012). The proposed wind stress parameterization and the sea spray heat flux parameterization of Andreas et al. (2014) were applied to an atmosphere-wave coupled model and tested on four storm cases. Compared with measurements from the FINO1 platform in the North Sea, the new wind stress parameterization reduces wind forecast errors in the high wind speed range, but not at low wind speeds. When sea spray affects only the wind stress, it intensifies the storms (deeper minimum sea level pressure and higher maximum wind speed) and lowers the air temperature (increasing the errors). When sea spray affects only the heat fluxes, it improves the model performance for storm tracks and air temperature, but changes little in storm intensity. When both spray effects, on wind stress and on heat fluxes, are taken into account, the model performs best across all experiments for minimum sea level pressure, maximum wind speed and air temperature. Andreas, E. L., Mahrt, L., and Vickers, D. (2014). An improved bulk air-sea surface flux algorithm, including spray-mediated transfer. Quarterly Journal of the Royal Meteorological Society. Kudryavtsev, V. and Makin, V. (2011). Impact of ocean spray on the dynamics of the marine atmospheric boundary layer. Boundary-Layer Meteorology, 140(3):383-410. Kudryavtsev, V., Makin, V., and S, Z. (2012). On the sea-surface drag and heat/mass transfer at strong winds. Technical report, Royal

  10. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    Science.gov (United States)

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly on the additive and potential synergistic actions of environmental mixtures. We investigated the 24h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and 96h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models with different assumptions in the toxic action mode in toxicodynamic processes through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
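
    The two reference models have simple closed forms: CA predicts the mixture EC50 as a harmonic combination of the component EC50s weighted by their mixture fractions, while IA multiplies the probabilities of escaping each toxicant's effect. A sketch with hypothetical EC50 values, not the fitted values from the study:

```python
import numpy as np

def ca_ec50_mix(fractions, ec50s):
    """Concentration addition: mixture EC50 from component EC50s and their
    relative concentration fractions p_i (which must sum to 1)."""
    fractions, ec50s = np.asarray(fractions), np.asarray(ec50s)
    return 1.0 / np.sum(fractions / ec50s)

def ia_effect(effects):
    """Independent action: joint effect from individual effects E_i (as
    fractions, e.g. mortality), assuming dissimilar modes of action."""
    return 1.0 - np.prod([1.0 - e for e in effects])

# e.g. a 50:50 Cu-Cd mixture with hypothetical EC50s of 40 and 120 ug/L:
print(ca_ec50_mix([0.5, 0.5], [40.0, 120.0]))  # CA-predicted mixture EC50
print(ia_effect([0.2, 0.3]))                   # IA joint effect: 0.44
```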

  11. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cases. … IMI- udders relative to SCS from IMI+ udders. Further, the genetic correlation between SCS of IMI- and SCS of IMI+ was 0.61, and heritability for liability to putative mastitis was 0.07. Models B2 and C allocated approximately 30% of SCS records to IMI+, but for model B1 this fraction was only 10%. The correlation between estimated breeding values for liability to putative mastitis based on the model (SCS for model A) and estimated breeding values for liability to clinical mastitis from the national evaluation was greatest for model B1, followed by models A, C, and B2. This may be explained by model B1

  12. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2015-03-01

    Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between the OI and the logarithm of the odor activity value (OAV), a ratio between the chemical concentration and the odor threshold of an individual odorant, the OI of each unmixed component was replaced with the corresponding logarithm of its OAV. The interaction coefficient (cosα), which represents the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture can be directly predicted by the modified Vector Model after routine quantitative analysis. The modified Vector Model was considered applicable to odor mixtures consisting of odorants with the same chemical functional groups and similar molecular structures.
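
    The modified Vector Model reduces to a short calculation: component intensities estimated from the OI-lnOAV linear relation are combined as a vector sum with the interaction coefficient cosα. The coefficients below are placeholders, not the values fitted in the study:

```python
import math

def oi_from_oav(oav, a=1.0, b=0.0):
    """Linear relation OI = a*ln(OAV) + b reported in the paper; the
    coefficients a and b here are placeholders for fitted sensory values."""
    return a * math.log(oav) + b

def vector_model_oi(oi1, oi2, cos_alpha):
    """Modified Vector Model: OI of a binary mixture as the vector sum of
    component intensities with interaction coefficient cos(alpha)."""
    return math.sqrt(oi1**2 + oi2**2 + 2.0 * oi1 * oi2 * cos_alpha)

oi_mix = vector_model_oi(oi_from_oav(50.0), oi_from_oav(20.0), cos_alpha=0.3)
print(oi_mix)
```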

  13. Solvatochromic and Kinetic Response Models in (Ethyl Acetate + Chloroform or Methanol) Solvent Mixtures

    Directory of Open Access Journals (Sweden)

    L. R. Vottero

    2000-03-01

    The present work analyzes solvent effects upon the solvatochromic response models for a set of chemical probes and the kinetic response models for an aromatic nucleophilic substitution reaction, in binary mixtures in which both pure components are able to form intersolvent complexes by hydrogen bonding.

  14. Estimation of Item Response Models Using the EM Algorithm for Finite Mixtures.

    Science.gov (United States)

    Woodruff, David J.; Hanson, Bradley A.

    This paper presents a detailed description of maximum likelihood parameter estimation for item response models using the general EM algorithm. In this paper the models are specified using a univariate discrete latent ability variable. When the latent ability variable is discrete, the distribution of the observed item responses is a finite mixture, and the EM…

  15. Growth Kinetics and Modeling of Direct Oxynitride Growth with NO-O2 Gas Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Everist, Sarah; Nelson, Jerry; Sharangpani, Rahul; Smith, Paul Martin; Tay, Sing-Pin; Thakur, Randhir

    1999-05-03

    We have modeled the growth kinetics of oxynitrides grown in NO-O2 gas mixtures from first principles using modified Deal-Grove equations. Retardation of oxygen diffusion through the nitrided dielectric was assumed to be the dominant growth-limiting step. The model was validated against experimentally obtained curves with good agreement. Excellent uniformity, which exceeded expected values, was observed.
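
    For reference, the classical Deal-Grove relation x² + Ax = B(t + τ) has a closed-form thickness solution, sketched below. The record's modified equations additionally retard the diffusion term (B) through the nitrided layer, which is not reproduced here, and the coefficients shown are illustrative only:

```python
import math

def deal_grove_thickness(t, A, B, tau=0.0):
    """Oxide thickness x(t) solving the Deal-Grove relation
    x^2 + A*x = B*(t + tau)."""
    return (A / 2.0) * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)

# Hypothetical coefficients (A in um, B in um^2/h): thickness after 0.5 h.
print(deal_grove_thickness(0.5, A=0.165, B=0.0117))
```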

  16. Differential expression among tissues in morbidly obese individuals using a finite mixture model under BLUP approach

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Trabzuni, Daniah; Bonder, Marc Jan

    effects of the interactions between tissues and probes using BLUP (Best Linear Unbiased Prediction) linear models correcting for gender, which were subsequently used in a finite mixture model to detect DE genes in each tissue. This approach evades the multiple-testing problem and is able to detect...

  17. Rheology of petrolatum-paraffin oil mixtures : Applications to analogue modelling of geological processes

    NARCIS (Netherlands)

    Duarte, João C.; Schellart, Wouter P.; Cruden, Alexander R.

    2014-01-01

    Paraffins have been widely used in analogue modelling of geological processes. Petrolatum and paraffin oil are commonly used to lubricate model boundaries and to simulate weak layers. In this paper, we present rheological tests of petrolatum, paraffin oil and several homogeneous mixtures of the two.

  18. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only...

  19. Computational microstructure modeling of asphalt mixtures subjected to rate-dependent fracture

    Science.gov (United States)

    Aragao, Francisco Thiago Sacramento

    2011-12-01

    Computational microstructure models have been actively pursued by the pavement mechanics community as a promising and advantageous alternative to limited analytical and semi-empirical modeling approaches. The primary goal of this research is to develop a computational microstructure modeling framework that will eventually allow researchers and practitioners of the pavement mechanics community to evaluate the effects of constituents and mix design characteristics (some of the key factors directly affecting the quality of the pavement structures) on the mechanical responses of asphalt mixtures. To that end, the mixtures are modeled as heterogeneous materials with inelastic mechanical behavior. To account for the complex geometric characteristics of the heterogeneous mixtures, an image treatment process is used to generate finite element meshes that closely reproduce the geometric characteristics of aggregate particles (size, shape, and volume fraction) that are distributed within a fine aggregate asphaltic matrix (FAM). These two mixture components, i.e., aggregate particles and FAM, are modeled, respectively, as isotropic linear elastic and isotropic linear viscoelastic materials and the material properties required as inputs for the computational model are obtained from simple and expedited laboratory tests. In addition to the consideration of the complex geometric characteristics and inelastic behavior of the mixtures, this study uses the cohesive zone model to simulate fracture as a gradual and rate-dependent phenomenon in which the initiation and propagation of discrete cracks take place in different locations of the mixture microstructure. Rate-dependent cohesive zone fracture properties are obtained using a procedure that combines laboratory tests of semi-circular bending specimens of the FAM and their numerical simulations. To address the rate-dependent fracture characteristics of the FAM phase, a rate-dependent cohesive zone model is developed and

  20. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
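
    A minimal sketch of such an EM scheme for pixel labeling, in which each pixel carries its own mixing weights and the final labels are read directly from their maxima; spatial smoothness priors used in full spatially-variant formulations are omitted:

```python
import numpy as np

def svfmm_labels(img, k=3, iters=50, seed=0):
    """EM sketch of a spatially variant finite mixture: pixel i has its own
    mixing weights w[i], and the ML label is argmax over those weights."""
    y = img.ravel().astype(float)
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, k)                      # initial class means
    var = np.full(k, y.var())
    w = np.full((y.size, k), 1.0 / k)          # per-pixel mixing weights
    for _ in range(iters):
        dens = (np.exp(-(y[:, None] - mu) ** 2 / (2.0 * var))
                / np.sqrt(2.0 * np.pi * var))
        post = w * dens + 1e-12                # E-step responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post                               # pixel-specific weights
        mu = (post * y[:, None]).sum(0) / post.sum(0)             # M-step
        var = (post * (y[:, None] - mu) ** 2).sum(0) / post.sum(0)
    return w.argmax(axis=1).reshape(img.shape)  # unambiguous pixel labels
```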

  1. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    Keywords. Polymers; phase separation; wetting; Monte Carlo simulation; finite size scaling. PACS Nos 61.41.+e; 64.75.+g; 68.45.Gd; 83.80.Es. 1. Introduction. Symmetric binary (A,B) polymer blends are model systems for the theoretical and experimental study of phase separation, since the chain length N_A = N_B = N can.

  2. Correlation for the Carbon Dioxide and Water Mixture Based on The Lemmon-Jacobsen Mixture Model and the Peng-Robinson Equation of State

    Science.gov (United States)

    Paulus, M. E.; Penoncello, S. G.

    2006-09-01

    Models representing the thermodynamic behavior of the CO2-H2O mixture have been developed. The single-phase model is based upon the thermodynamic property mixture model proposed by Lemmon and Jacobsen. The model represents the single-phase vapor states over the temperature range of 323-1074 K, up to a pressure of 100 MPa, over the entire composition range. The experimental data used to develop these formulations include pressure-density-temperature-composition data, second virial coefficients, and excess enthalpy. A nonlinear regression algorithm was used to determine the various adjustable parameters of the model, which can be used to compute density values of the mixture to within ±0.1%. Due to a lack of single-phase liquid data for the mixture, the Peng-Robinson equation of state (PREOS) was used to predict the vapor-liquid equilibrium (VLE) properties of the mixture. Comparisons of values computed from the Peng-Robinson VLE predictions using standard binary interaction parameters to experimental data are presented to verify the accuracy of this calculation. The VLE calculation is shown to be accurate to within ±3 K in temperature over the range 323-624 K up to 20 MPa; from 20 to 100 MPa the accuracy degrades from ±3 K to ±30 K, worsening at higher pressures. Bubble-point mole fractions can be determined within ±0.05 for CO2.
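
    For the VLE side, the Peng-Robinson equation for a pure component reduces to a cubic in the compressibility factor Z; a sketch is below. Mixture calculations such as those in the record additionally require mixing rules and the binary interaction parameter, which are not shown:

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_z_factors(T, P, Tc, Pc, omega):
    """Peng-Robinson compressibility factors for a pure component; the
    largest and smallest real roots correspond to vapor and liquid."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T) ** 2, b * P / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.isreal(roots)].real)

# CO2 near 350 K and 5 MPa (Tc = 304.13 K, Pc = 7.377 MPa, omega = 0.224):
print(pr_z_factors(350.0, 5.0e6, 304.13, 7.377e6, 0.224))
```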

  3. SMILES--toward a better laughter life: a model for introducing humor in the palliative care setting.

    Science.gov (United States)

    Borod, Manuel

    2006-01-01

    Humor and laughter have been thought to be beneficial for thousands of years. Although much has been written on this subject, there is very little written about the actual use of humor in practice. This article reviews the role that humor and laughter may play in medicine in general and palliative care in particular. In addition, it introduces a model that clinicians can follow when trying to introduce humor into their daily encounters with patients.

  4. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    error models. This is obtained by constructing an estimating function through projections of some chosen function of Y_{ti+1} onto functions of previous observations Y_{ti}, …, Y_{t0}. The process of interest X_{ti+1} is partially observed through a measurement equation Y_{ti+1} = h(X_{ti+1}) + noise, where h(·) is restricted to be a polynomial. Through a simulation study we compare, for the CIR process, the obtained estimator with an estimator derived from utilizing the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well. The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples

  5. A mathematical framework for estimating pathogen transmission fitness and inoculum size using data from a competitive mixtures animal model.

    Directory of Open Access Journals (Sweden)

    James M McCaw

    2011-04-01

    We present a method to measure the relative transmissibility ("transmission fitness") of one strain of a pathogen compared to another. The model is applied to data from "competitive mixtures" experiments in which animals are co-infected with a mixture of two strains. We observe the mixture in each animal over time and over multiple generations of transmission, and we use data from influenza experiments in ferrets to demonstrate the approach. Assessment of the relative transmissibility between two strains of influenza is important in at least three contexts: (1) within the human population, antigenically novel strains of influenza arise and compete for susceptible hosts; (2) during a pandemic event, a novel subtype of influenza competes with the existing seasonal strain(s), and the unfolding epidemiological dynamics depend upon both the population's susceptibility profile and the inherent transmissibility of the novel strain compared to the existing strain(s); (3) neuraminidase inhibitors (NAIs), while providing significant potential to reduce transmission of influenza, exert selective pressure on the virus and so promote the emergence of drug-resistant strains. Any adverse outcome due to selection and subsequent spread of an NAI-resistant strain is exquisitely dependent upon the transmission fitness of that strain. Measurement of the transmission fitness of two competing strains of influenza is thus of critical importance in determining the likely time-course and epidemiology of an influenza outbreak, or the potential impact of an intervention measure such as NAI distribution. The mathematical framework introduced here also provides an estimate of the size of the transmitted inoculum. We demonstrate the framework's behaviour using data from ferret transmission studies, and through simulation suggest how to optimise experimental design for assessment of transmissibility. The method introduced here for assessment of mixed transmission events has

  6. A generalized physiologically-based toxicokinetic modeling system for chemical mixtures containing metals

    Directory of Open Access Journals (Sweden)

    Isukapalli Sastry S

    2010-06-01

    Background: Humans are routinely and concurrently exposed to multiple toxic chemicals, including various metals and organics, often at levels that can cause adverse and potentially synergistic effects. However, toxicokinetic modeling studies of exposures to these chemicals are typically performed on a single-chemical basis, and the attributes of available models for individual chemicals are commonly estimated specifically for the compound studied. As a result, the available models usually have parameters and even structures that are not consistent or compatible across the range of chemicals of concern. This precludes the systematic consideration of synergistic effects, and may also lead to inconsistencies in calculations of co-occurring exposures and corresponding risks. There is a need, therefore, for a consistent modeling framework that would allow the systematic study of cumulative risks from complex mixtures of contaminants. Methods: A Generalized Toxicokinetic Modeling system for Mixtures (GTMM) was developed and evaluated with case studies. The GTMM is physiologically based and uses a consistent, chemical-independent physiological description for integrating widely varying toxicokinetic models. It is modular and can be directly "mapped" to individual toxicokinetic models, while maintaining physiological consistency across different chemicals. Interaction effects of complex mixtures can be directly incorporated into the GTMM. Conclusions: The application of the GTMM to different individual metals and metal compounds showed that it explains available observational data as well as replicating the results from models that have been optimized for individual chemicals. The GTMM also made it feasible to model the toxicokinetics of complex, interacting mixtures of multiple metals and nonmetals in humans, based on available literature information. The GTMM provides a central component in the development of a "source

  7. Application of fuzzy logic to determine the odour intensity of model gas mixtures using electronic nose

    Science.gov (United States)

    Szulczyński, Bartosz; Gębicki, Jacek; Namieśnik, Jacek

    2018-01-01

    The paper presents the possibility of applying fuzzy logic to determine the odour intensity of model ternary gas mixtures (α-pinene, toluene and triethylamine) using an electronic nose prototype. The results obtained using fuzzy logic algorithms were compared with the values obtained using a multiple linear regression (MLR) model and sensory analysis. It was found that the electronic nose prototype, together with the fuzzy logic pattern recognition system, can be successfully used to estimate the odour intensity of the tested gas mixtures. The correctness of the results obtained using fuzzy logic was equal to 68%.

  8. FlexMix: A General Framework for Finite Mixture Models and Latent Class Regression in R

    Directory of Open Access Journals (Sweden)

    Friedrich Leisch

    2004-10-01

    FlexMix implements a general framework for fitting discrete mixtures of regression models in the R statistical computing environment: three variants of the EM algorithm can be used for parameter estimation, regressors and responses may be multivariate with arbitrary dimension, data may be grouped, e.g., to account for multiple observations per individual, the usual formula interface of the S language is used for convenient model specification, and a modular concept of driver functions allows many different types of regression models to be interfaced. Existing drivers implement mixtures of standard linear models, generalized linear models and model-based clustering. FlexMix provides the E-step and all data handling, while the M-step can be supplied by the user to easily define new models.

  9. The Impact of Organic Aerosol Mixtures on Hygroscopicity: Comparison between Measurements and UNIFAC Model

    Science.gov (United States)

    Lee, J.; Hildemann, L.

    2011-12-01

    The presence of anthropogenic organic compounds in aerosols has the potential to contribute to global climate change by altering the hygroscopic behavior of cloud condensation nuclei. Dicarboxylic acids, including malonic, glutaric, and succinic acids, are among the more frequently measured water-soluble organic compounds in atmospheric aerosols. For solutions containing single or mixed inorganic species, aerosol water uptake has been most commonly modeled using the ZSR method. This approach has also been utilized for solutions containing mixtures of inorganics and organics. For solutions containing a single organic species, the UNIFAC or a modified UNIFAC model has been used, and the features it includes also allow it potentially to be utilized for mixtures. However, there is a dearth of experimental data involving the hygroscopic behavior of organic solution mixtures. In this study, water vapor pressure was measured at 12 C over aqueous bulk solutions containing dicarboxylic acids, using both a quadrupole mass spectrometer and a Baratron pressure transducer. The water uptake of malonic and glutaric acids showed good agreement with limited previous measurements reported in the literature that used an electrodynamic balance (EDB) or bulk solution method. Our experimental measurements of water uptake for malonic and glutaric acids also agreed to within 1% of the predictions using Peng's modified UNIFAC model (Environ. Sci. Technol, 35, 4495-4501, 2001). However, water vapor pressure measurements for solutions containing 50:50 molar mixtures of malonic and glutaric acids were not consistent with predictions using Peng's modified UNIFAC model for mixtures. In the modified UNIFAC model, this mixture of malonic/glutaric acids was predicted to fall roughly midway between the hygroscopicity of the two individual organics. In our measurements, malonic acid exerted the dominant influence in determining the overall water vapor pressure, so that the water uptake of the mixed

  10. Investigations in gasification of biomass mixtures using thermodynamic equilibrium and semi-equilibrium models

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit; Mahanta, Pinakeswar; Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati - 781 039, Assam (India)

    2011-07-01

    Biomass gasifiers with power generation capacities exceeding 1 MW have large biomass consumption. Availability of a single biomass in such large quantities is rather difficult, and hence mixtures of biomasses need to be used as feedstock for these gasifiers. This study has assessed the feasibility of biomass mixtures as fuel in biomass gasifiers for decentralized power generation using thermodynamic equilibrium and semi-equilibrium (with limited carbon conversion) models employing Gibbs energy minimization. Binary mixtures of common biomasses found in the northeastern states of India, such as rice husk, bamboo dust and saw dust, have been taken for analysis. The potential for power generation from the gasifier has been evaluated on the basis of the net yield (in Nm³) and LHV (in MJ/Nm³) of the producer gas obtained from gasification of 100 g of biomass mixture. The results of the simulations have revealed interesting trends in the performance of gasifiers with operating parameters such as air ratio, gasification temperature and composition of the biomass mixture. For all biomass mixtures, the optimum air ratio is ≈0.3 with a gasification temperature of 800 °C. Under total equilibrium conditions, and for an engine-generator efficiency of 30%, the least possible fuel consumption is found to be 0.8 kg/kW-h. As revealed in the simulations with the semi-equilibrium model, this parameter varies inversely with the extent of carbon conversion; for low carbon conversions (≈60% or so), the specific fuel consumption could be as high as 1.5 kg/kW-h. The results of this study have also been compared with previous literature (theoretical as well as experimental) and good agreement has been found. This study has thus demonstrated the potential of replacing a single biomass fuel in the gasifier with mixtures of different biomasses.

  11. Ion swarm data for electrical discharge modeling in air and flue gas mixtures

    International Nuclear Information System (INIS)

    Nelson, D.; Benhenni, M.; Eichwald, O.; Yousfi, M.

    2003-01-01

    The first step of this work is the determination of the elastic and inelastic ion-molecule collision cross sections for the main ions (N2+, O2+, CO2+, H2O+ and O-) usually present in either air or flue gas discharges. The obtained cross section sets, given for ion kinetic energies not exceeding 100 eV, correspond to the interactions of each ion with its parent molecule (symmetric case) or a non-parent molecule (asymmetric case). Then, using these different cross section sets, it is possible to obtain the ion swarm data for different gas mixtures involving N2, CO2, H2O and O2 molecules, whatever their relative proportions. These ion swarm data are obtained from an optimized Monte Carlo method well adapted to ion transport in gas mixtures. This also allows us to show clearly that the classical linear approximations usually applied to ion swarm data in mixtures, such as Blanc's law, are far from valid. The ion swarm data are then given for three gas mixtures: dry air (80% N2, 20% O2), a ternary gas mixture (82% N2, 12% CO2, 6% O2) and a typical flue gas (76% N2, 12% CO2, 6% O2, 6% H2O). From these reliable ion swarm data, electrical discharge modeling for a wire-to-plane electrode configuration has been carried out in these three mixtures at atmospheric pressure for different applied voltages. Under the same discharge conditions, large discrepancies in streamer formation and propagation have been observed between the three mixtures. They are due to deviations existing not only between the different effective electron-molecule ionization rates but also between the ion transport properties, mainly because of the presence of a highly polar molecule such as H2O. This emphasizes the necessity of properly considering ion transport in discharge modeling.
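
    Blanc's law, the linear approximation that the record shows to be inadequate here, is a one-line computation; it is given below only as the baseline against which the Monte Carlo swarm data are compared, and the mobility values are placeholders:

```python
def blanc_mobility(fractions, mobilities):
    """Blanc's law: the inverse mixture mobility is the mole-fraction-
    weighted sum of the inverse pure-gas mobilities."""
    return 1.0 / sum(x / k for x, k in zip(fractions, mobilities))

# Hypothetical reduced mobilities of one ion in N2, CO2 and O2 (cm^2/V/s),
# combined for the 82/12/6 ternary mixture from the record:
k_mix = blanc_mobility([0.82, 0.12, 0.06], [2.1, 1.2, 2.2])
print(k_mix)
```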

  12. Modeling of drug delivery into tissues with a microneedle array using mixture theory.

    Science.gov (United States)

    Zhang, Rumi; Zhang, Peiyu; Dalton, Colin; Jullien, Graham A

    2010-02-01

    In this paper, we apply mixture theory to quantitatively predict the transient behavior of drug delivery by using a microneedle array inserted into tissue. In the framework of mixture theory, biological tissue is treated as a multi-phase fluid saturated porous medium, where the mathematical behavior of the tissue is characterized by the conservation equations of multi-phase models. Drug delivery by microneedle array imposes additional requirements on the simulation procedures, including drug absorption by the blood capillaries and tissue cells, as well as a moving interface along its flowing pathway. The contribution of this paper is to combine mixture theory with the moving mesh methods in modeling the transient behavior of drug delivery into tissue. Numerical simulations are provided to obtain drug concentration distributions into tissues and capillaries.

  13. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
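
    The core censored geometric likelihood is compact; a maximum likelihood sketch is given below, while the paper itself works with Bayesian hierarchical and latent-mixture extensions of this base model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def censored_geom_nll(phi, counts, n_max):
    """Negative log-likelihood of a censored geometric model: after each
    item a person continues with probability phi; anyone reaching the
    environment's n_max items is censored (forced to quit)."""
    counts = np.asarray(counts)
    ll = np.where(counts < n_max,
                  counts * np.log(phi) + np.log1p(-phi),  # quit voluntarily
                  counts * np.log(phi))                   # censored at n_max
    return -ll.sum()

counts = [3, 5, 10, 10, 2, 7, 10, 1]   # items collected; 10 are available
fit = minimize_scalar(censored_geom_nll, bounds=(1e-6, 1 - 1e-6),
                      args=(counts, 10), method="bounded")
print(fit.x)                            # ML continuation probability
```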

  14. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model adapted on cause-specific hazard or subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
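
    A sketch of the pattern mixture idea using Python's statsmodels rather than SPSS (the article's software): the missingness pattern enters the linear mixed model as a fixed effect and interacts with time, with simulated stand-in data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: repeated 'anxiety' scores; 'dropout' flags
# subjects whose later measurements are missing (the missingness pattern).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "id": np.repeat(np.arange(60), 3),
    "time": np.tile([0, 1, 2], 60),
    "dropout": np.repeat(rng.integers(0, 2, 60), 3),
})
df["anxiety"] = (5 - 0.3 * df["time"] + 0.5 * df["dropout"]
                 + rng.normal(0, 1, len(df)))

# Pattern mixture LMM: a significant dropout (pattern) effect or
# time-by-pattern interaction signals informative missingness that a
# plain LMM would ignore.
model = smf.mixedlm("anxiety ~ time * dropout", df, groups=df["id"]).fit()
print(model.summary())
```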

  16. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  17. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part IV. Applications to mixtures of CO2 with alkanes

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios

    2015-01-01

    models, capable of predicting the complex phase behavior of multicomponent mixtures as well as their volumetric properties. In this direction, over the last several years, the cubic-plus-association (CPA) thermodynamic model has been successfully used for describing volumetric properties and phase...

  18. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than 1; we used binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time since rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase the reliability of population parameter estimates.
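
    The basic binomial (N-)mixture likelihood marginalizes the latent abundance at each site; a single-site sketch with one detection probability is below (the paper's model further splits detection into availability and conditional capture, which is not reproduced here):

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_site_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts at one site under a basic binomial
    mixture model: latent abundance N ~ Poisson(lam), and each survey
    detects Binomial(N, p) animals; the sum over N is truncated at n_max."""
    Ns = np.arange(max(counts), n_max + 1)
    like = poisson.pmf(Ns, lam)
    for y in counts:
        like = like * binom.pmf(y, Ns, p)
    return np.log(like.sum())

# Three repeat surveys at one site, hypothetical abundance and detection:
print(nmix_site_loglik([12, 9, 15], lam=30.0, p=0.4))
```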

  19. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    Science.gov (United States)

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing mixture components to differ in efficacy (i.e., dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and a nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with three alternative models: toxic equivalency factor, effect summation and independent action; these alternatives provided reasonable fits to the data only at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved.
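
    For Hill dose-response curves with unit slope, GCA has a closed form in which each component contributes its own maximal effect, so partial agonists dilute the response of full agonists rather than simply adding to it. A sketch with hypothetical potencies and efficacies:

```python
def gca_effect(conc, ec50, emax):
    """Generalized concentration addition for Hill curves with slope 1:
    unlike plain CA, components may have different maximal effects emax_i."""
    num = sum(a * c / k for c, k, a in zip(conc, ec50, emax))
    den = 1.0 + sum(c / k for c, k in zip(conc, ec50))
    return num / den

# A full agonist (emax = 1.0) mixed with a weaker partial agonist
# (emax = 0.4); all concentrations and EC50s here are hypothetical.
print(gca_effect(conc=[1e-7, 1e-5], ec50=[3e-8, 2e-6], emax=[1.0, 0.4]))
```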

  20. Multidimensional Classification of Examinees Using the Mixture Random Weights Linear Logistic Test Model

    Science.gov (United States)

    Choi, In-Hee; Wilson, Mark

    2015-01-01

    An essential feature of the linear logistic test model (LLTM) is that item difficulties are explained using item design properties. By taking advantage of this explanatory aspect of the LLTM, in a mixture extension of the LLTM, the meaning of latent classes is specified by how item properties affect item difficulties within each class. To improve…

  1. A Longitudinal Investigation of Motivation and Secondary School Achievement Using Growth Mixture Modeling

    Science.gov (United States)

    Hodis, Flaviu A.; Meyer, Luanna H.; McClure, John; Weir, Kirsty F.; Walkey, Frank H.

    2011-01-01

    Early identification of risk can support interventions to prevent academic failure. This study investigated patterns of evolution in achievement trajectories for 1,522 high school students in relation to initial achievement, student motivation, and key demographic characteristics. Growth mixture modeling identified 2 classes of longitudinal…

  2. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  3. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    Science.gov (United States)

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  4. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  5. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    The Marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite

  6. Modelling and simulation of an energy transport phenomenon in a solid-fluid mixture

    International Nuclear Information System (INIS)

    Costa, M.L.M.; Sampaio, R.; Gama, R.M.S. da.

    1989-08-01

    In the present work a model for a local description of the energy transfer phenomenon in a binary (solid-fluid) saturated mixture is proposed. The heat transfer in a saturated flow (through a porous medium) between two parallel plates is simulated by using the Finite Volumes Method. (author)
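
    For readers unfamiliar with the finite volume method mentioned above, the fragment below sets up the simplest possible version: steady one-dimensional heat conduction between two parallel plates. It is a toy sketch with made-up parameters, far simpler than the paper's saturated solid-fluid mixture model.

```python
import numpy as np

# Steady 1-D conduction between two plates at fixed temperatures,
# discretized into N finite volumes of width dx.
N, L, k = 50, 1.0, 1.0            # volumes, gap width [m], conductivity [W/m/K]
T_left, T_right = 400.0, 300.0    # plate temperatures [K]
dx = L / N

A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N):
    if i > 0:                     # conductive flux to the left neighbour
        A[i, i-1] -= k/dx; A[i, i] += k/dx
    else:                         # half-cell conductance to the left wall
        A[i, i] += 2*k/dx; b[i] += 2*k/dx * T_left
    if i < N-1:                   # flux to the right neighbour
        A[i, i+1] -= k/dx; A[i, i] += k/dx
    else:                         # half-cell conductance to the right wall
        A[i, i] += 2*k/dx; b[i] += 2*k/dx * T_right

T = np.linalg.solve(A, b)         # linear profile for constant conductivity
```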

  7. Detecting Intervention Effects Using a Multilevel Latent Transition Analysis with a Mixture IRT Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Bottge, Brian

    2013-01-01

    A multilevel latent transition analysis (LTA) with a mixture IRT measurement model (MixIRTM) is described for investigating the effectiveness of an intervention. The addition of a MixIRTM to the multilevel LTA permits consideration of both potential heterogeneity in students' response to instructional intervention as well as a methodology for…

  8. Sufficient Sample Sizes for Discrete-Time Survival Analysis Mixture Models

    NARCIS (Netherlands)

    Moerbeek, Mirjam

    2014-01-01

    Long-term survivors in trials with survival endpoints are subjects who will not experience the event of interest. Membership in the class of long-term survivors is unobserved and should be inferred from the data by means of a mixture model. An important question is how large the sample size should

  9. A Random Pattern Mixture Model for Ordinal Outcomes with Informative Dropouts

    Science.gov (United States)

    Liu, Chengcheng; Ratcliffe, Sarah J.; Guo, Wensheng

    2016-01-01

    We extend a random pattern mixture joint model to longitudinal ordinal outcomes with informative dropouts. The patients are grouped into "pattern" groups based on known covariates that are potential surrogates for the severity of the underlying condition. The random pattern effects are defined as the latent effects linking the dropout process and the ordinal longitudinal outcome. Conditional on the random pattern effects, the longitudinal outcome and the dropout times are assumed independent. Estimates are obtained via the EM algorithm. We applied the model to end-stage renal disease (ESRD) data. Anemia was found to be significantly affected by baseline iron treatment when the dropout information was adjusted for via the study model, as opposed to an independent or shared-parameter model. Simulations were performed to evaluate the performance of the random pattern mixture model under various assumptions. PMID:25894456

  10. Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling

    Science.gov (United States)

    Yeom, Seokwon

    2017-05-01

    Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with the expectation-maximization (EM) method, assuming the image histogram follows a Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to an IR image capturing several humans at night.
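
    The pipeline sketched in the abstract (k-means initialization, then EM on the intensity histogram) can be reproduced in miniature with scikit-learn; the snippet below uses synthetic intensities in place of a real IR frame and illustrates only the histogram-level idea, not the paper's multi-level method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for flattened IR pixel intensities
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 8, 9000),     # cool background
                      rng.normal(140, 12, 1000)])  # warm foreground
x = img.reshape(-1, 1)

# k-means provides initial means; EM then fits the Gaussian mixture
init = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
gmm = GaussianMixture(n_components=2, means_init=init.cluster_centers_,
                      random_state=0).fit(x)

labels = gmm.predict(x)                  # per-pixel component assignment
roi = labels == np.argmax(gmm.means_)    # hottest component as candidate ROI
```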

  11. Decoding complex chemical mixtures with a physical model of a sensor array.

    Directory of Open Access Journals (Sweden)

    Julia Tsitron

    2011-10-01

    Combinatorial sensor arrays, such as the olfactory system, can detect a large number of analytes using a relatively small number of receptors. However, the complex pattern of receptor responses to even a single analyte, coupled with the non-linearity of responses to mixtures of analytes, makes quantitative prediction of compound concentrations in a mixture a challenging task. Here we develop a physical model that explicitly takes receptor-ligand interactions into account, and apply it to infer concentrations of highly related sugar nucleotides from the output of four engineered G-protein-coupled receptors. We also derive design principles that enable accurate mixture discrimination with cross-specific sensor arrays. The optimal sensor parameters exhibit relatively weak dependence on component concentrations, making a single designed array useful for analyzing a sizable range of mixtures. The maximum number of mixture components that can be successfully discriminated is twice the number of sensors in the array. Finally, antagonistic receptor responses, well-known to play an important role in natural olfactory systems, prove to be essential for the accurate prediction of component concentrations.

  12. Decoding Complex Chemical Mixtures with a Physical Model of a Sensor Array

    Science.gov (United States)

    Broach, James R.; Morozov, Alexandre V.

    2011-01-01

    Combinatorial sensor arrays, such as the olfactory system, can detect a large number of analytes using a relatively small number of receptors. However, the complex pattern of receptor responses to even a single analyte, coupled with the non-linearity of responses to mixtures of analytes, makes quantitative prediction of compound concentrations in a mixture a challenging task. Here we develop a physical model that explicitly takes receptor-ligand interactions into account, and apply it to infer concentrations of highly related sugar nucleotides from the output of four engineered G-protein-coupled receptors. We also derive design principles that enable accurate mixture discrimination with cross-specific sensor arrays. The optimal sensor parameters exhibit relatively weak dependence on component concentrations, making a single designed array useful for analyzing a sizable range of mixtures. The maximum number of mixture components that can be successfully discriminated is twice the number of sensors in the array. Finally, antagonistic receptor responses, well-known to play an important role in natural olfactory systems, prove to be essential for the accurate prediction of component concentrations. PMID:22046111
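
    The inference problem described in these two records (recovering component concentrations from a handful of cross-reactive sensors) can be imitated with a toy competitive-binding model and nonlinear least squares. The response function, parameter values and noise level below are all invented for illustration; the paper's receptor model and fitting procedure are more elaborate.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_sensors, n_ligands = 4, 2
K = rng.uniform(0.5, 5.0, (n_sensors, n_ligands))  # dissociation constants
e = rng.uniform(0.2, 1.0, (n_sensors, n_ligands))  # per-ligand efficacies

def response(c):
    """Competitive-binding readout of each sensor for concentrations c."""
    occ = c[None, :] / K
    return (e * occ).sum(axis=1) / (1.0 + occ.sum(axis=1))

c_true = np.array([1.2, 0.3])
R_obs = response(c_true) + rng.normal(0, 0.002, n_sensors)  # noisy readout

fit = least_squares(lambda c: response(c) - R_obs,
                    x0=np.ones(n_ligands), bounds=(0, np.inf))
print(c_true, fit.x)   # inferred concentrations track the true ones
```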

  13. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  14. Physiologically based pharmacokinetic modeling of tea catechin mixture in rats and humans.

    Science.gov (United States)

    Law, Francis C P; Yao, Meicun; Bi, Hui-Chang; Lam, Stephen

    2017-06-01

    Although green tea (Camellia sinensis) (GT) contains a large number of polyphenolic compounds with anti-oxidative and anti-proliferative activities, little is known of the pharmacokinetics and tissue dose of tea catechins (TCs) as a chemical mixture in humans. The objectives of this study were to develop and validate a physiologically based pharmacokinetic (PBPK) model of tea catechin mixture (TCM) in rats and humans, and to predict an integrated or total concentration of TCM in the plasma of humans after consuming GT or Polyphenon E (PE). To this end, a PBPK model of epigallocatechin gallate (EGCg) consisting of 13 first-order, blood flow-limited tissue compartments was first developed in rats. The rat model was scaled up to humans by replacing its physiological parameters, pharmacokinetic parameters and tissue/blood partition coefficients (PCs) with human-specific values. Both rat and human EGCg models were then extrapolated to other TCs by substituting its physicochemical parameters, pharmacokinetic parameters, and PCs with catechin-specific values. Finally, a PBPK model of TCM was constructed by linking three rat (or human) tea catechin models together without including a description for pharmacokinetic interaction between the TCs. The mixture PBPK model accurately predicted the pharmacokinetic behaviors of three individual TCs in the plasma of rats and humans after GT or PE consumption. Model-predicted total TCM concentration in the plasma was linearly related to the dose consumed by humans. The mixture PBPK model is able to translate an external dose of TCM into internal target tissue doses for future safety assessment and dose-response analysis studies in humans. The modeling framework as described in this paper is also applicable to the bioactive chemical in other plant-based health products.
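
    A flow-limited PBPK compartment is just a mass balance driven by blood flow and a tissue/blood partition coefficient. The sketch below integrates a two-tissue toy model after an intravenous bolus; every parameter value is invented, and the structure is a drastic reduction of the paper's 13-compartment EGCg model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical volumes (L), flows (L/h) and partition coefficients
Q = {'liver': 90.0, 'rest': 260.0}
V = {'blood': 5.0, 'liver': 1.8, 'rest': 60.0}
P = {'liver': 4.0, 'rest': 1.5}
CL_int = 30.0                     # hepatic intrinsic clearance (L/h)

def rhs(t, y):
    Cb, Cl, Cr = y                # blood / liver / rest concentrations
    Cl_out = Cl / P['liver']      # venous concentration leaving the liver
    Cr_out = Cr / P['rest']
    dCb = (Q['liver']*(Cl_out - Cb) + Q['rest']*(Cr_out - Cb)) / V['blood']
    dCl = (Q['liver']*(Cb - Cl_out) - CL_int*Cl_out) / V['liver']
    dCr = Q['rest']*(Cb - Cr_out) / V['rest']
    return [dCb, dCl, dCr]

# IV bolus into blood (arbitrary units), simulated over 24 h
sol = solve_ivp(rhs, (0, 24), [10.0, 0.0, 0.0], dense_output=True)
```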

  15. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma.

  16. Sleep-promoting effects of the GABA/5-HTP mixture in vertebrate models.

    Science.gov (United States)

    Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo

    2016-09-01

    The aim of this study was to investigate the sleep-promoting effect of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) on sleep quality and quantity in vertebrate models. A pentobarbital-induced sleep test and electroencephalogram (EEG) analysis were applied to investigate the sleep latency, duration, total sleeping time and sleep quality of the two amino acids and the GABA/5-HTP mixture. In addition, real-time PCR and HPLC analyses were applied to analyze the signaling pathway. The GABA/5-HTP mixture significantly regulated sleep latency and duration, and the results indicated that the mixture modulates both GABAergic and serotonergic signaling. Moreover, the sleep architecture can be controlled by the regulation of the GABAA receptor and GABA content with 5-HTP.

  17. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even though pure-component liquid densities and vapour pressures are predicted equally accurately for the associating compound. As was the case in the study of pure components, there exists some confusion in the literature about the correct interpretation and comparison of experimental data and theoretical studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes.

  18. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    Mixtures of endocrine disrupting chemicals can produce adverse effects even though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats for which data about in vivo endocrine disrupting effects and information about human exposures were available, including phthalates, pesticides, UV-filters, bisphenol A, parabens and the drug paracetamol. The mixture ratio was chosen to reflect high end human intakes. To make decisions about the dose levels for studies in the rat, we employed the point of departure index (PODI) approach, which sums up ratios between estimated exposure levels and no-observed-adverse-effect-level (NOAEL) values of individual substances. For high end human exposures to the 13 selected chemicals, we calculated a PODI of 0.016. As only a PODI…

  19. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-01-01

    Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  20. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-06-01

    Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  1. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
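
    The core of the evaluation method above, fitting several regression lines through subgroups of points and reading isotope ratios off the slopes, fits in a few lines of EM. The sketch below uses synthetic two-ratio data (ratio values chosen arbitrarily) and a mixture of no-intercept regressions; it is a schematic of the approach, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic transient signals from two subgroups whose slopes through
# the origin play the role of isotope ratios (values are made up)
x = rng.uniform(1, 10, 300)
z = rng.random(300) < 0.5
y = np.where(z, 0.0072, 0.030) * x + rng.normal(0, 0.01, 300)

G = 2
beta  = np.array([0.001, 0.05])    # initial slope guesses
w     = np.full(G, 1.0 / G)        # mixing weights
sigma = np.full(G, 0.05)           # residual scales

for _ in range(200):               # EM for a mixture of regressions
    # E-step: responsibilities under Gaussian residuals
    d   = y[:, None] - x[:, None] * beta[None, :]
    pdf = w * np.exp(-0.5 * (d / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
    r   = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: weighted no-intercept least squares per group
    beta  = (r * x[:, None] * y[:, None]).sum(0) / (r * x[:, None]**2).sum(0)
    d     = y[:, None] - x[:, None] * beta[None, :]
    sigma = np.sqrt((r * d**2).sum(0) / r.sum(0))
    w     = r.mean(0)

print(beta)   # estimated slopes, i.e. the two isotope ratios
```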

  2. GAUSSIAN MIXTURE MODELS FOR ADAPTATION OF DEEP NEURAL NETWORK ACOUSTIC MODELS IN AUTOMATIC SPEECH RECOGNITION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Natalia A. Tomashenko

    2016-11-01

    Subject of Research. We study speaker adaptation of deep neural network (DNN) acoustic models in automatic speech recognition systems. The aim of speaker adaptation techniques is to improve the accuracy of the speech recognition system for a particular speaker. Method. A novel method for training and adaptation of deep neural network acoustic models has been developed. It is based on using an auxiliary GMM (Gaussian Mixture Model) and GMM-derived (GMMD) features. The principal advantage of the proposed GMMD features is the possibility of performing the adaptation of a DNN through the adaptation of the auxiliary GMM. In the proposed approach any method for the adaptation of the auxiliary GMM can be used; hence, it provides a universal method for transferring adaptation algorithms developed for GMMs to DNN adaptation. Main Results. The effectiveness of the proposed approach was shown by means of one of the most common adaptation algorithms for GMM models, MAP (Maximum A Posteriori) adaptation. Different ways of integrating the proposed approach into a state-of-the-art DNN architecture have been proposed and explored. An analysis of the choice of the type of the auxiliary GMM model is given. Experimental results on the TED-LIUM corpus demonstrate that, in an unsupervised adaptation mode, the proposed adaptation technique can provide approximately an 11–18% relative word error rate (WER) reduction on different adaptation sets, compared to the speaker-independent DNN system built on conventional features, and a 3–6% relative WER reduction compared to the SAT-DNN trained on fMLLR-adapted features.

  3. Intralymphatic treatment of flagellin-ovalbumin mixture reduced allergic inflammation in murine model of allergic rhinitis.

    Science.gov (United States)

    Kim, E H; Kim, J H; Samivel, R; Bae, J-S; Chung, Y-J; Chung, P-S; Lee, S E; Mo, J-H

    2016-05-01

    Bacterial flagellin, a Toll-like receptor 5 agonist, is used as an adjuvant for immunomodulation. In this study, we aimed to evaluate the effect and its mechanism following intralymphatic administration of OVA-flagellin (FlaB) mixture in the mouse model of allergic rhinitis. BALB/c mice were sensitized with OVA and treated with an OVA-FlaB mixture via intranasal, sublingual, and intralymphatic routes to evaluate the effect of each treatment. Several parameters for allergic inflammation and its underlying mechanisms were then evaluated. Intralymphatic injection of the OVA-FlaB mixture reduced symptom scores, eosinophil infiltration in the nasal mucosa, and total and OVA-specific IgE levels more significantly than intranasal and sublingual administration. Systemic cytokine (IL-4, IL-5, IL-6, IL-17, and IFN-γ) production and local cytokine (IL-4 and IL-5) production were also reduced significantly after intralymphatic injection with OVA-FlaB. Double intralymphatic injection of the mixture was more effective than single injection. Moreover, the expression of innate cytokines such as IL-25 and IL-33 in nasal epithelial cells was reduced, and the expression of chemokines such as CCL24 (eotaxin-2), CXCL1, and CXCL2 was decreased in the nasal mucosa, suggesting the underlying mechanism for intralymphatic administration of the OVA-FlaB mixture. Intralymphatic administration of an OVA-FlaB mixture was more effective in alleviating allergic inflammation than intranasal and sublingual administration in a mouse model of allergic rhinitis. This effect may be attributed to the reduced expression of innate cytokines and chemokines. This treatment modality can be considered as a new therapeutic method and agent.

  4. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    Thermodynamic properties and phase equilibria of associating mixtures cannot often be satisfactorily modelled using conventional models such as cubic equations of state. CPA (cubic-plus-association) is an equation of state (EoS), which combines the SRK EoS with the association term of SAFT. For non-polar (non-self-associating) compounds it reduces to SRK. The model was first published in 1996 and since then it has been developed and applied with success to binary systems containing water-alkanes and alcohol/glycol/acid-alkanes (both VLE and LLE) as well as ternary and multicomponent (V)LLE for water-alcohol (glycol)-alkanes and certain acid- and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications

  5. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    In cities' policy-making, it is a pressing issue to grasp the determinants of carbon dioxide emissions in Chinese cities. The common method is to use the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emissions. However, less work discusses estimation accuracy, especially in the framework of non-normal distributions and heterogeneity among cities' emissions. To improve estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of Asymmetric Laplace distributions (ALDs) to approximate the true distribution of the error term. Meanwhile, a designed two-layer EM algorithm is used to obtain the estimators. We test robustness via a comparison of five different models and find that the ALD mixture model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.

  6. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    Science.gov (United States)

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  7. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM) (Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  8. One-Stage and Bayesian Two-Stage Optimal Designs for Mixture Models

    OpenAIRE

    Lin, Hefang

    1999-01-01

    In this research, Bayesian two-stage D-D optimal designs for mixture experiments with or without process variables under model uncertainty are developed. A Bayesian optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second stage design is then generated according to an optimality procedure that collaborates with the improved model from first stage data. Our results show that the Bayesian two-stage D-D optimal design...

  9. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    International Nuclear Information System (INIS)

    Belmonte-Beitia, Juan; Perez-Garcia, Victor M.; Vekslerchik, Vadym

    2007-01-01

    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves

  10. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    Science.gov (United States)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
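
    One standard closed-form realization of such an update, for a constraint on the stationary average of a state observable, is an exponential tilting of the prior chain followed by a Doob transform that restores stochasticity. The sketch below implements that recipe with an invented observable and target; it is one common construction in the maximum-caliber literature and may differ in details from the paper's exact framework.

```python
import numpy as np

def tilted_chain(P, r, gamma):
    """Tilt P toward high-r states, then rescale with the Perron
    eigenvector so the rows sum to one again (Doob transform)."""
    W = P * np.exp(gamma * r)[None, :]
    w, V = np.linalg.eig(W)
    k = np.argmax(w.real)
    lam, v = w.real[k], np.abs(V[:, k].real)
    return W * v[None, :] / (lam * v[:, None])

def stationary(Q):
    w, V = np.linalg.eig(Q.T)
    pi = np.abs(V[:, np.argmin(np.abs(w - 1))].real)
    return pi / pi.sum()

def update_markov(P, r, target, lo=-20.0, hi=20.0):
    """Bisect the tilting strength so the stationary mean of r
    matches the constraint (the mean is monotone in gamma)."""
    f = lambda g: stationary(tilted_chain(P, r, g)) @ r - target
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return tilted_chain(P, r, 0.5 * (lo + hi))

P = np.array([[0.80, 0.15, 0.05],      # prior Markov model (made up)
              [0.20, 0.70, 0.10],
              [0.10, 0.30, 0.60]])
r = np.array([0.0, 1.0, 2.0])          # observable on states (made up)
Q = update_markov(P, r, target=1.2)    # prior mean of r is ~0.69
```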

  11. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    Introduction: Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  12. Introducing Aviary

    CERN Document Server

    Peutz, Mike

    2010-01-01

    The world is changing. Where before you needed to purchase and install big and expensive programs on your computer in order to create stunning images, you can now do it all online for free using Aviary. Aviary is an online collection of applications that enable you to upload and modify your own photographs and images, and create new imagery from scratch. It includes a powerful photo-manipulation tool called Phoenix, a vector-drawing application called Raven, an effects suite for creating eye-watering image effects called Peacock, and much more. Introducing Aviary takes you through all of these

  13. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

    A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, the original parameters of the corresponding formulas were replaced by perceptual measures. By these substitutions, it was possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. Moreover, both the predictive performance and the feasibility of the PDE model were confirmed through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, avoiding many disadvantages (e.g., the expense of maintaining a fixed panel of odor assessors). Thus, the PDE model is expected to be helpful in the monitoring and management of odor pollution.

  14. Comparison of activity coefficient models for atmospheric aerosols containing mixtures of electrolytes, organics, and water

    Science.gov (United States)

    Tong, Chinghang; Clegg, Simon L.; Seinfeld, John H.

    Atmospheric aerosols generally comprise a mixture of electrolytes, organic compounds, and water. Determining the gas-particle distribution of volatile compounds, including water, requires equilibrium or mass transfer calculations, at the heart of which are models for the activity coefficients of the particle-phase components. We evaluate here the performance of four recent activity coefficient models developed for electrolyte/organic/water mixtures typical of atmospheric aerosols. Two of the models, the CSB model [Clegg, S.L., Seinfeld, J.H., Brimblecombe, P., 2001. Thermodynamic modelling of aqueous aerosols containing electrolytes and dissolved organic compounds. Journal of Aerosol Science 32, 713-738] and the aerosol diameter dependent equilibrium model (ADDEM) [Topping, D.O., McFiggans, G.B., Coe, H., 2005. A curved multi-component aerosol hygroscopicity model framework: part 2—including organic compounds. Atmospheric Chemistry and Physics 5, 1223-1242] treat ion-water and organic-water interactions but do not include ion-organic interactions; these can be referred to as "decoupled" models. The other two models, the reparameterized Ming and Russell model 2005 [Raatikainen, T., Laaksonen, A., 2005. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest. Atmospheric Chemistry and Physics 5, 2475-2495] and X-UNIFAC.3 [Erdakos, G.B., Chang, E.I., Pankow, J.F., Seinfeld, J.H., 2006. Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water—Part 3: Organic compounds, water, and ionic constituents by consideration of short-, mid-, and long-range effects using X-UNIFAC.3. Atmospheric Environment 40, 6437-6452], include ion-organic interactions; these are referred to as "coupled" models. We address the question—Does the inclusion of a treatment of ion-organic interactions substantially improve the performance of the coupled models over

  15. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    Science.gov (United States)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. Constants of the models have a clear physical meaning. The models can be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  16. Introducing formalism in economics: The growth model of John von Neumann

    Directory of Open Access Journals (Sweden)

    Gloria-Palermo Sandye

    2010-01-01

    The objective is to interpret John von Neumann's growth model as a decisive step of the forthcoming formalist revolution of the 1950s in economics. This model gave rise to an impressive variety of comments about its classical or neoclassical underpinnings. We go beyond this traditional criterion and rather interpret this model as the manifestation of von Neumann's involvement in the formalist programme of mathematician David Hilbert. We discuss the impact of Kurt Gödel's discoveries on this programme. We show that the growth model reflects the pragmatic turn of the formalist programme after Gödel and proposes the extension of modern axiomatisation to economics.

  17. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
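
    For readers who have not fitted an N-mixture model directly, the likelihood is simply a Poisson prior on latent abundance summed against binomial detection. The sketch below simulates counts under the model's assumptions and recovers lambda and p by maximum likelihood; it is a bare-bones illustration (constant parameters, truncated N sum), not the simulation design of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(3)
M, J = 40, 4                           # sites, repeat surveys
lam_true, p_true = 6.0, 0.4            # abundance mean, detection prob.
N = rng.poisson(lam_true, M)           # latent abundances
y = rng.binomial(N[:, None], p_true, (M, J))   # observed counts

K = 100                                # truncation of the infinite N sum
Ns = np.arange(K + 1)

def nll(theta):
    lam = np.exp(theta[0])               # log link for abundance
    p = 1.0 / (1.0 + np.exp(-theta[1]))  # logit link for detection
    prior = poisson.pmf(Ns, lam)
    lik = np.ones((M, K + 1))            # P(y_i | N) for each candidate N
    for j in range(J):
        lik *= binom.pmf(y[:, [j]], Ns[None, :], p)
    return -np.sum(np.log(lik @ prior + 1e-300))

fit = minimize(nll, x0=[np.log(y.mean() + 1.0), 0.0], method='Nelder-Mead')
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))  # lambda-hat, p-hat
```

    Introducing site-level heterogeneity into N or p in the simulation step, while still fitting the constant-parameter model above, reproduces the kind of bias the article quantifies.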

  18. Introducing MOZLEAP: an integrated long-run scenario model of the emerging energy sector of Mozambique

    NARCIS (Netherlands)

    Mahumane, G; Mulder, P.

    2016-01-01

    Mozambique has recently begun to actively develop its large reserves of coal, natural gas and hydropower. Against this background, we present the first integrated long-run scenario model of the Mozambican energy sector. Our model, which we name MOZLEAP, is calibrated on the basis of recently

  19. Mathematical Modelling in Engineering: A Proposal to Introduce Linear Algebra Concepts

    Science.gov (United States)

    Cárcamo Bahamonde, Andrea; Gómez Urgelles, Joan; Fortuny Aymemí, Josep

    2016-01-01

    The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasise the development of mathematical abilities primarily associated with modelling and interpreting, which are not exclusively calculus abilities. Considering this, an instructional design was created based on mathematical modelling and…

  20. Introducing the Clean-Tech Adoption Model: A California Case Study

    NARCIS (Netherlands)

    Bijlveld, P.C. (Paul); Riezebos, P. (Peter); Wierstra, E. (Erik)

    2012-01-01

    Abstract. The Clean-Tech Adoption Model (C-TAM) explains the adoption process of clean technology. Based on the Unified Theory of Acceptance and Usage of Technology (UTAUT) combined with qualitative research and empirical data gathering, the model predicts adoption based on the perceived quality,

  1. Teacher Emotion Research: Introducing a Conceptual Model to Guide Future Research

    Science.gov (United States)

    Fried, Leanne; Mansfield, Caroline; Dobozy, Eva

    2015-01-01

    This article reports on the development of a conceptual model of teacher emotion through a review of teacher emotion research published between 2003 and 2013. By examining 82 publications regarding teacher emotion, the main aim of the review was to identify how teacher emotion was conceptualised in the literature and develop a conceptual model to…

  2. Introducing labour productivity changes into models used for economic impact analysis in tourism

    NARCIS (Netherlands)

    Klijs, Jeroen; Peerlings, Jack; Heijman, Wim

    2017-01-01

    In tourism management, traditional input-output models are often applied to calculate economic impacts, including employment impacts. These models imply that increases in output are translated into proportional increases in labour, indicating constant labour productivity. In non-linear input-

  3. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  4. A SAS Program Combining R Functionalities to Implement Pattern-Mixture Models

    Directory of Open Access Journals (Sweden)

    Pierre Bunouf

    2015-12-01

    Pattern-mixture models have gained considerable interest in recent years. Pattern-mixture modeling allows the analysis of incomplete longitudinal outcomes under a variety of missingness mechanisms. In this manuscript, we describe a SAS program which combines R functionalities to fit pattern-mixture models, considering the cases where the missingness mechanism is at random or not at random. Patterns are defined based on missingness at every time point, and parameter estimation is based on a full group-by-time interaction. The program implements a multiple imputation method under so-called identifying restrictions. The code is illustrated using data from a placebo-controlled clinical trial. This manuscript and the program are directed to SAS users with minimal knowledge of the R language.

  5. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.

    2005-01-01

    Viscosity measurements of well-defined mixtures are useful in order to evaluate existing viscosity models. Recently, an extensive experimental study of the viscosity at pressures up to 140 MPa has been carried out for the binary systems methane + n-decane and methane + toluene, between 293.15 K and 373.15 K and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting in order to study the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbon). These data (575 data points) have been

  6. Data Requirements and Modeling for Gas Hydrate-Related Mixtures and a Comparison of Two Association Models

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Aloupis, Georgios; Kontogeorgis, Georgios M.

    2017-01-01

    This work evaluates the performance of the CPA and sPC-SAFT EOS for modeling the fluid-phase equilibria of gas hydrate-related systems and explores how the models can help in suggesting experimental measurements. These systems contain water, hydrocarbon (alkane or aromatic), and either methanol or monoethylene glycol … parameter sets have been chosen for the sPC-SAFT EOS for a fair comparison. The comparisons are made for pure fluid properties, vapor-liquid equilibria, and liquid-liquid equilibria of binary and ternary mixtures as well as vapor-liquid-liquid equilibria of quaternary mixtures. The results show, from

  7. Introducing Sectoral Models into Regional Management: An Assessment of Regulatory Impacts on the Economy

    Directory of Open Access Journals (Sweden)

    Voloshenko K.Yu.

    2017-12-01

    Regardless of the geography of regions, management at the regional level, both in Russia and in the Baltic Sea countries, faces many challenges. Hence, it is necessary to search for new, effective economic management tools, since traditional approaches and modeling practices at the regional level are suitable neither for analysing various types of impact on the regional economy (production, market/product, sector, region) nor for assessing their consequences and identifying the necessary measures in any given economic conditions. The authors construct sectoral models to assess regulatory impacts on regional economic performance. Assessments of regulatory impacts on product value chains, economic sectors, and regions as a whole show good repeatability, which makes it possible to provide a rationale for economic decision-making. The authors propose new sectoral models using the Kaliningrad region as an example. The models are used in a comprehensive analysis of conditions for GRP growth resulting from an increase in sectoral contributions. To this end, the study uses well-known approaches of simulation modelling, as well as qualitative and quantitative methods in combination with economic-mathematical optimisation models. The article presents a pilot model of regulatory impacts for selected sectors of the Kaliningrad economy. The developed and tested models suggest that a rationale for economic decision-making and consequent actions should be based on the assessment of the impact of different groups of external, internal, and independent factors on value chains, based on the criterion of optimal factor income. In conclusion, the authors offer recommendations for using the proposed models in business, public administration and regional economic modeling.

  8. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  9. Mixture Statistical Distribution Based Multiple Component Model for Target Detection in High Resolution SAR Imagery

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-11-01

    This paper proposes an innovative Mixture Statistical Distribution Based Multiple Component (MSDMC) model for target detection in high spatial resolution Synthetic Aperture Radar (SAR) images. Traditional detection algorithms usually ignore the spatial relationship among the target's components. In the presented method, however, both the structural information and the statistical distribution are considered to better recognize the target. Firstly, a method based on compressed sensing reconstruction is used to recover the SAR image. Then, the multiple component model, composed of a root filter and some corresponding part filters, is applied to describe the structural information of the target. In the following step, mixture statistical distributions are utilised to discriminate the target from the background, and the Method of Logarithmic Cumulants (MoLC)-based Expectation Maximization (EM) approach is adopted to estimate the parameters of the mixture statistical distribution model, which is finally merged into the proposed MSDMC framework together with the multiple component model. In the experiment, aeroplanes and electrical power towers in TerraSAR-X SAR images are detected at three spatial resolutions. The results indicate that the presented MSDMC model has potential for improving detection performance compared with state-of-the-art SAR target detection methods.

  10. Theory of synergistic effects: Hill-type response surfaces as 'null-interaction' models for mixtures.

    Science.gov (United States)

    Schindler, Michael

    2017-08-02

    The classification of effects caused by mixtures of agents as synergistic, antagonistic or additive depends critically on the reference model of 'null interaction'. Two main approaches are currently in use, the Additive Dose (ADM) or concentration addition (CA) and the Multiplicative Survival (MSM) or independent action (IA) models. We compare several response surface models to a newly developed Hill response surface, obtained by solving a logistic partial differential equation (PDE). Assuming that a mixture of chemicals with individual Hill-type dose-response curves can be described by an n-dimensional logistic function, Hill's differential equation for pure agents is replaced by a PDE for mixtures whose solution provides Hill surfaces as 'null-interaction' models and relies neither on Bliss independence nor Loewe additivity, nor does it use Chou's unified general theory. An n-dimensional logistic PDE describing the Hill-type response of n-component mixtures is solved. Appropriate boundary conditions ensure the correct asymptotic behaviour. Mathematica 11 (Wolfram, Mathematica Version 11.0, 2016) is used for the mathematics and graphics presented in this article. The Hill response surface ansatz can be applied to mixtures of compounds with arbitrary Hill parameters. Restrictions which are required when deriving analytical expressions for response surfaces from other principles are unnecessary. Many approaches based on Loewe additivity turn out to be special cases of the Hill approach, whose increased flexibility permits a better description of 'null-effect' responses. Missing sham-compliance of Bliss IA, known as Colby's model in agrochemistry, leads to incompatibility with the Hill surface ansatz. Examples of binary and ternary mixtures illustrate the differences between the approaches. For Hill slopes close to one and doses below the half-maximum effect doses, MSM (Colby, Bliss, Finney, Abbott) predicts synergistic effects where the Hill model indicates 'null
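
    The article's PDE-derived Hill surfaces are not reproduced here, but the classical concentration-addition reference they generalize is easy to compute: for a dose pair (d1, d2), find the effect E at which the scaled doses satisfy d1/D1(E) + d2/D2(E) = 1, with Di the inverse Hill curve of each agent. The sketch below does this for two full agonists with hypothetical parameters.

```python
import numpy as np
from scipy.optimize import brentq

EC50 = np.array([1.0, 3.0])     # hypothetical potencies
n    = np.array([1.5, 0.8])     # hypothetical Hill slopes
Emax = 1.0                      # common maximum effect (full agonists)

def inv_hill(E, i):
    """Dose of agent i alone that produces effect E."""
    return EC50[i] * (E / (Emax - E)) ** (1.0 / n[i])

def loewe_effect(d1, d2):
    """Effect of the dose pair under the concentration-addition
    ('null interaction') reference: d1/D1(E) + d2/D2(E) = 1."""
    g = lambda E: d1 / inv_hill(E, 0) + d2 / inv_hill(E, 1) - 1.0
    return brentq(g, 1e-9, Emax - 1e-9)

print(loewe_effect(0.5, 1.5))   # reference response for this dose pair
```

    Observed responses above this reference surface would be classed as synergistic and responses below it as antagonistic; the Hill-surface ansatz of the article plays the same role while also accommodating components with unequal maxima.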

  11. Separation of a multicomponent mixture by gaseous diffusion: modelization of the enrichment in a capillary - application to a pilot cascade

    International Nuclear Information System (INIS)

    Doneddu, F.

    1982-01-01

    Starting from a model of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for an isotopic linear mixture, to a multicomponent mixture. The notions of separation yield and characteristic pressure, classically used for separations of isotopic linear mixtures, are also generalized. We present formulas for diagonalizing the diffusion operator, a model of a multistage gaseous diffusion cascade, and a comparison with the experimental results of a drain cascade (N2-SF6-UF6 mixture). [fr]

  12. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades, Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomass. However, few researchers consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and the simultaneous formation and binding of the reduced Cr(III)) when formulating a model that describes the overall sorption process. Moreover, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures, and the interactive mechanism governing Cr(VI) elimination, have gained increasing attention. In the present work, the kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) on these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics, especially when chromium sorption partly relies on the sorbent's capacity to reduce hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  13. Introducing DeBRa: a detailed breast model for radiological studies

    Science.gov (United States)

    Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.

    2009-07-01

    Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.

  14. Introducing DeBRa: a detailed breast model for radiological studies

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Andy K W; Darambara, Dimitra G [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ (United Kingdom); Gunn, Spencer [Dexela Ltd, Wenlock Business Centre, 50/52 Wharf Road, London N1 7SF (United Kingdom)], E-mail: andyma@physics.org, E-mail: dimitra.darambara@icr.ac.uk, E-mail: spencer@dexelaimaging.com

    2009-07-21

    Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.

  15. Introducing an integrated climate change perspective in POPs modelling, monitoring and regulation

    International Nuclear Information System (INIS)

    Lamon, L.; Dalla Valle, M.; Critto, A.; Marcomini, A.

    2009-01-01

    This paper presents a review of the implications of climate change for the monitoring, modelling and regulation of persistent organic pollutants (POPs). Current research gaps are also identified and discussed. Long-term data sets are essential to identify relationships between climate fluctuations and changes in chemical species distribution. Reconstructing the influence of climatic changes on the environmental behaviour of POPs is very challenging in some local studies, and some insights can be obtained from the few available dated sediment cores or by studying POPs' response to inter-annual climate fluctuations. Knowledge gaps and future projections can be studied by developing and applying various modelling tools, identifying compounds' susceptibility to climate change and its local and global effects, and orienting international policies. Long-term monitoring strategies and modelling exercises taking climate change into account should be considered when devising new regulatory plans in chemicals management. - Climate change implications for POPs are addressed here with special attention to monitoring, modelling and regulation issues.

  16. Introducing WISDEM: An Integrated System Modeling for Wind Turbines and Plant (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Dykes, K.; Graf, P.; Scott, G.; Ning, A.; King, R.; Guo, Y.; Parsons, T.; Damiani, R.; Felker, F.; Veers, P.

    2015-01-01

    The National Wind Technology Center wind energy systems engineering initiative has developed an analysis platform to leverage its research capabilities toward integrating wind energy engineering and cost models across wind plants. This Wind-Plant Integrated System Design & Engineering Model (WISDEM) platform captures the important interactions between various subsystems to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. This work illustrates a few case studies with WISDEM that focus on the design and analysis of wind turbines and plants at different system levels.

  17. Introducing the Tripartite Digitization Model for Engaging with the Intangible Cultural Heritage of the City

    DEFF Research Database (Denmark)

    Rehm, Matthias; Rodil, Kasper

    2016-01-01

    In this paper we investigate the notion of intangible cultural heritage as a driver for smart city learning applications. To this end, we briefly explore the notion of intangible heritage before presenting the tripartite digitization model, which was originally developed for indigenous cultural heritage but can equally be applied to the smart city context. We then discuss parts of the model making use of a specific case study aiming at re-creating places in the city.

  18. Introducing an osteopathic approach into neonatology ward: the NE-O model

    Science.gov (United States)

    2014-01-01

    Background Several studies have shown the effect of osteopathic manipulative treatment (OMT) in neonatal care, reducing length of stay in hospital, gastrointestinal problems and clubfoot complications, and improving the cranial asymmetry of infants affected by plagiocephaly. Despite these results, there is still a lack of standardized osteopathic evaluation and treatment procedures for newborns admitted to a neonatal intensive care unit (NICU). The aim of this paper is to propose a protocol for an osteopathic approach (the NE-O model) to treating hospitalized newborns. Methods The NE-O model comprises specific evaluation tests and treatments that tailor the osteopathic method to the needs of preterm and term infants, the NICU environment, and medical and paramedical assistance. The model was developed to maximize the effectiveness and the clinical use of osteopathy in the NICU. Results The NE-O model was adopted in 2006 to evaluate the efficacy of OMT in neonatology. Research results showed the effectiveness of this osteopathic model in reducing preterm infants' length of stay and hospital costs; additionally, the model was demonstrated to be safe. Conclusion The present paper defines the key steps for a rigorous and effective osteopathic approach in the NICU setting, providing a scientific and methodological example of integrated medicine and complex intervention. PMID:24904746

  19. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models of various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining excess molar volume, in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary, and possibly even more complex, mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
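    The mixing-rule idea can be made concrete with a toy function; the two rules below (mole-fraction average plus a crude unlike-pair cross-term) are generic placeholders, not the descriptors actually designed by the authors.

```python
import numpy as np

def mixture_descriptors(d1, d2, x1):
    """Illustrative mixing rules that turn single-component descriptor
    vectors d1, d2 into binary-mixture descriptors at mole fraction x1."""
    x2 = 1.0 - x1
    linear = x1 * d1 + x2 * d2                # mole-fraction weighted average
    interaction = x1 * x2 * np.abs(d1 - d2)   # crude cross-term for unlike pairs
    return np.concatenate([linear, interaction])
```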

  20. Using a Genetic mixture model to study Phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    Science.gov (United States)

    Bromaghin, J.F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length-adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large-mesh gill-net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size-selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock-specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length-adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted
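    At the core of such a model is a likelihood in which each fish's population of origin is marginalized out rather than assigned; a minimal sketch, assuming a linear length-fecundity relation per population and illustrative names (the paper's parameterization may differ).

```python
import numpy as np
from scipy.stats import norm

def mixture_loglik(fec, length, geno_lik, pi, a, b, sigma):
    """Log-likelihood of a fecundity mixture model in which origin is
    marginalized out.  geno_lik[i, k] = P(genotype_i | population k) from
    the genetic baseline; fecundity given origin k is modeled as
    Normal(a[k] + b[k] * length, sigma); pi[k] are mixing proportions."""
    mu = a + b * length[:, None]                   # (n_fish, n_pops) means
    dens = norm.pdf(fec[:, None], loc=mu, scale=sigma)
    return np.sum(np.log((pi * geno_lik * dens).sum(axis=1)))
```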

  1. Introducing tropical lianas in a vegetation model, methods and first results

    Science.gov (United States)

    Verbeeck, Hans; di Porcia, Manfredo; Kearsley, Elizabeth; Longo, Marcos

    2017-04-01

    Lianas are an important component of tropical forests, commonly constituting up to 40% of the woody stems and about 35% of the woody species, and contributing substantially to forest leaf biomass. Lianas compete strongly with trees for both above- and below-ground resources. Their indirect impact on the carbon balance, due to their influence on tree community dynamics (by increasing mortality and suppressing tree growth), is far larger than their direct contribution to biomass. Tropical forests are currently experiencing large-scale structural changes, including an increase in liana abundance and biomass, which may eventually reduce the projected carbon sink of tropical forests. Despite their crucial role, no terrestrial ecosystem model has so far included lianas. The goal of this work is to include lianas in a vegetation model and to test it against experimental data. For this purpose we chose ED2 (Ecosystem Demography model version 2), a model that occupies the midpoint on the continuum from gap models that contain individual trees to area-based global models. ED2 explicitly tracks horizontal and vertical heterogeneity in canopy structure, making it very suitable for studying liana impacts at a large scale. At the same time, the inner structure of the model, that is, its spatial implicitness, constrains the programming design of this new liana PFT. The first part of the presentation will focus on the current representation of lianas in ED2 and the parameterization that has been used. We will provide references to the available literature to justify the choices made for parameters and allometries. In the second part, first results will be shown in which we compare the output of the model with data collected at the Paracou site (French Guiana). The data come from both inventories and flux towers. We will focus mainly on plant density, diameter distributions (demography) and carbon/water fluxes. By comparing runs starting from bare ground, runs starting from observed

  2. A Beta-mixture model for dimensionality reduction, sample classification and analysis

    Directory of Open Access Journals (Sweden)

    Orntoft Torben

    2011-05-01

    Full Text Available Abstract Background Patterns of genome-wide methylation vary between tissue types. For example, cancer tissue shows markedly different patterns from those of normal tissue. In this paper we propose a beta-mixture model to describe genome-wide methylation patterns based on probe data from methylation microarrays. The model takes dependencies between neighbour probe pairs into account and assumes three broad categories of methylation, low, medium and high. The model is described by 37 parameters, which reduces the dimensionality of a typical methylation microarray significantly. We used methylation microarray data from 42 colon cancer samples to assess the model. Results Based on data from colon cancer samples we show that our model captures genome-wide characteristics of methylation patterns. We estimate the parameters of the model and show that they vary between different tissue types. Further, for each methylation probe the posterior probability of a methylation state (low, medium or high) is calculated and the probability that the state is correctly predicted is assessed. We demonstrate that the model can be applied to classify cancer tissue types accurately and that the model provides accessible and easily interpretable data summaries. Conclusions We have developed a beta-mixture model for methylation microarray data. The model substantially reduces the dimensionality of the data. It can be used for further analysis, such as sample classification or to detect changes in methylation status between different samples and tissues.

  3. A beta-mixture model for dimensionality reduction, sample classification and analysis.

    Science.gov (United States)

    Laurila, Kirsti; Oster, Bodil; Andersen, Claus L; Lamy, Philippe; Orntoft, Torben; Yli-Harja, Olli; Wiuf, Carsten

    2011-05-27

    Patterns of genome-wide methylation vary between tissue types. For example, cancer tissue shows markedly different patterns from those of normal tissue. In this paper we propose a beta-mixture model to describe genome-wide methylation patterns based on probe data from methylation microarrays. The model takes dependencies between neighbour probe pairs into account and assumes three broad categories of methylation, low, medium and high. The model is described by 37 parameters, which reduces the dimensionality of a typical methylation microarray significantly. We used methylation microarray data from 42 colon cancer samples to assess the model. Based on data from colon cancer samples we show that our model captures genome-wide characteristics of methylation patterns. We estimate the parameters of the model and show that they vary between different tissue types. Further, for each methylation probe the posterior probability of a methylation state (low, medium or high) is calculated and the probability that the state is correctly predicted is assessed. We demonstrate that the model can be applied to classify cancer tissue types accurately and that the model provides accessible and easily interpretable data summaries. We have developed a beta-mixture model for methylation microarray data. The model substantially reduces the dimensionality of the data. It can be used for further analysis, such as sample classification or to detect changes in methylation status between different samples and tissues. © 2011 Laurila et al; licensee BioMed Central Ltd.
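    A minimal EM loop for a three-component beta mixture conveys the flavour of the approach; this sketch uses moment-matched M-step updates instead of full beta maximum likelihood and ignores the neighbour-pair dependencies of the authors' 37-parameter model.

```python
import numpy as np
from scipy.stats import beta

def mom_beta(m, v):
    """Method-of-moments estimates of Beta(a, b) from mean m and variance v."""
    c = m * (1 - m) / v - 1.0
    return m * c, (1 - m) * c

def em_beta_mixture(x, n_iter=100):
    """EM for a 3-component beta mixture (low/medium/high methylation).
    x: methylation fractions in (0, 1).  A sketch only."""
    params = [mom_beta(m, 0.01) for m in (0.1, 0.5, 0.9)]  # crude init
    weights = np.ones(3) / 3
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each probe
        dens = np.array([w * beta.pdf(x, a, b)
                         for w, (a, b) in zip(weights, params)])
        resp = dens / dens.sum(axis=0)
        # M-step: mixing weights and moment-matched beta parameters
        weights = resp.mean(axis=1)
        params = []
        for r in resp:
            m = np.average(x, weights=r)
            v = np.average((x - m) ** 2, weights=r)
            params.append(mom_beta(m, v))
    return weights, params
```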

  4. Gaussian mixture model classification of odontocetes in the Southern California Bight and the Gulf of California.

    Science.gov (United States)

    Roch, Marie A; Soldevilla, Melissa S; Burtenshaw, Jessica C; Henderson, E Elizabeth; Hildebrand, John A

    2007-03-01

    A method for the automatic classification of free-ranging delphinid vocalizations is presented. The vocalizations of short-beaked and long-beaked common (Delphinus delphis and Delphinus capensis), Pacific white-sided (Lagenorhynchus obliquidens), and bottlenose (Tursiops truncatus) dolphins were recorded in a pelagic environment of the Southern California Bight and the Gulf of California over a period of 4 years. Cepstral feature vectors are extracted from call data which contain simultaneous overlapping whistles, burst-pulses, and clicks from a single species. These features are grouped into multisecond segments. A portion of the data is used to train Gaussian mixture models of varying orders for each species. The remaining call data are used to test the performance of the models. Species are predicted based upon probabilistic measures of model similarity with test segment groups having durations between 1 and 25 s. For this data set, Gaussian mixture models with 256 components and segments of at least 10 s of call data produced the best classification results. The classifier predicts the species of groups with 67%-75% accuracy depending upon the partitioning of the training and test data.
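    The train-and-score pattern described above maps directly onto current libraries; a sketch using scikit-learn (an assumption, as the study predates it), with diagonal covariances chosen for speed.

```python
from sklearn.mixture import GaussianMixture

def train_species_models(features_by_species, n_components=256):
    """Fit one GMM per species on its cepstral feature vectors
    (rows = frames, columns = cepstral coefficients)."""
    return {sp: GaussianMixture(n_components, covariance_type="diag").fit(X)
            for sp, X in features_by_species.items()}

def classify_segment(models, segment):
    """Predict the species whose model gives the highest average
    log-likelihood over a multi-second segment of feature frames."""
    return max(models,
               key=lambda sp: models[sp].score_samples(segment).mean())
```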

  5. Symmetrization of excess Gibbs free energy: A simple model for binary liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos-Suarez, Aly J., E-mail: acastell@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of); Garcia-Sucre, Maximo, E-mail: mgs@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of)

    2011-03-15

    A symmetric expression for the excess Gibbs free energy of liquid binary mixtures is obtained using an appropriate definition of the effective contact fraction. We have identified a mechanism of local segregation as the main cause of the variation of the contact fraction with concentration. Starting from this mechanism we develop a simple model for describing binary liquid mixtures. Two parameters appear in this model: one adjustable, the other depending on the first. Following this procedure we reproduce the experimental (liquid + vapor) equilibrium data with a degree of accuracy comparable to well-known, more elaborate models. The way in which we take the effective contacts between molecules into account allows us to identify the compound that may be considered to induce one of the following processes: segregation, anti-segregation or dispersion of the components in the liquid mixture. Finally, the simplicity of the model yields a single resulting interaction energy parameter, which eases the physical interpretation of the results.

  6. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.
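    The likelihood all three packages work with can be written compactly by truncating the sum over the latent abundance N; a minimal Python sketch of the basic model (constant Poisson mean and detection probability, names illustrative):

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_neg_loglik(params, counts, n_max=200):
    """Negative log-likelihood of a basic N-mixture model.
    counts: (n_sites, n_visits) array of repeated counts per site.
    params: (log_lambda, logit_p).  A sketch of the model that
    unmarked/JAGS/R-INLA fit, not a replacement for those packages."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    ns = np.arange(n_max + 1)
    prior = poisson.pmf(ns, lam)                  # P(N = n) at each site
    ll = 0.0
    for y in counts:                              # marginalize N per site
        lik_n = prior * np.prod(binom.pmf(y[:, None], ns, p), axis=0)
        ll += np.log(lik_n.sum())
    return -ll
```

    Passing this function to a general-purpose optimizer such as scipy.optimize.minimize should recover maximum-likelihood estimates comparable to those of the corresponding null model in unmarked.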

  7. A New Simplified Local Density Model for Adsorption of Pure Gases and Binary Mixtures

    Science.gov (United States)

    Hasanzadeh, M.; Dehghani, M. R.; Feyzi, F.; Behzadi, B.

    2010-12-01

    Adsorption modeling is an important tool for process simulation and design. Many theoretical models have been developed to describe adsorption data for pure and multicomponent gases. The simplified local density (SLD) approach is a thermodynamic model that can be used with any equation of state and offers some predictive capability, with adjustable parameters, for modeling slit-shaped pores. In previous studies, the SLD model has been used with the Lennard-Jones potential function to model fluid-solid interactions. In this article, we focus on the application of the Sutherland potential function in an SLD-Peng-Robinson model. The advantages and disadvantages of using the new potential function for the adsorption of methane, ethane, carbon dioxide, nitrogen, and three binary mixtures on two types of activated carbon are illustrated. The results have been compared with previous models. It is shown that the new SLD model can correlate adsorption data over a range of pressures and temperatures with minimum error.
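    The Sutherland potential itself is a hard core with an attractive inverse-power tail; a sketch, assuming the common exponent of 6 (the exponent actually fitted in the article is not stated in this record):

```python
import numpy as np

def sutherland(r, sigma, eps, gamma=6.0):
    """Sutherland potential: a hard core of diameter sigma plus an
    attractive tail, phi(r) = -eps * (sigma / r)**gamma for r > sigma."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= sigma, np.inf, -eps * (sigma / r) ** gamma)
```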

  8. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    Directory of Open Access Journals (Sweden)

    Andrea Dorila Cárcamo

    2016-03-01

    Full Text Available The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which are not limited to calculus abilities. With this in mind, an instructional design based on mathematical modelling and emergent heuristic models was elaborated for the construction of specific linear algebra concepts: span and spanning set. It was applied to first-year engineering students. Results suggest that this type of instructional design contributes to the construction of these mathematical concepts, can favour first-year engineering students' understanding of key linear algebra concepts, and can potentiate the development of higher-order skills.

  9. Introducing Elitist Black-Box Models: When Does Elitist Selection Weaken the Performance of Evolutionary Algorithms?

    OpenAIRE

    Doerr, Carola; Lengler, Johannes

    2015-01-01

    Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new 'elitist black-box model', in which algorithms are required to base all decisions solely on ...

  10. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  11. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
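    Once the two-component scale mixture is fitted, quantities such as the local false discovery rate follow in closed form; a sketch with illustrative parameter values (the paper's discrepancy-minimization fitting step is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def local_fdr(z, pi0, s0, s1):
    """Local false discovery rate under a scale mixture of two zero-mean
    normals: null N(0, s0^2) with weight pi0 and non-null N(0, s1^2)
    with weight 1 - pi0, where s1 > s0."""
    f0 = pi0 * norm.pdf(z, scale=s0)
    f1 = (1 - pi0) * norm.pdf(z, scale=s1)
    return f0 / (f0 + f1)

# Example with illustrative values: pi0 = 0.95, s0 = 1 (theoretical null),
# s1 = 2:  local_fdr(np.array([0.5, 2.0, 4.0]), 0.95, 1.0, 2.0)
```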

  12. Modeling of columnar and equiaxed solidification of binary mixtures; Modelisation de la solidification colonnaire et equiaxe de melanges binaires

    Energy Technology Data Exchange (ETDEWEB)

    Roux, P

    2005-12-15

    This work deals with the modelling of dendritic solidification in binary mixtures. Large-scale phenomena are represented by volume averaging of the local conservation equations. This method allows the partial differential equations for the averaged fields, and the closure problems associated with the deviations, to be derived rigorously. Such problems can be solved numerically on periodic cells representative of dendritic structures in order to give a precise evaluation of macroscopic transfer coefficients (drag coefficients, exchange coefficients, diffusion-dispersion tensors...). The method had already been applied to a model of a columnar dendritic mushy zone, and it is extended here to the case of equiaxed dendritic solidification, where solid grains can move. The two-phase flow is modelled with an Eulerian-Eulerian approach, and the novelty is to account for the dispersion of solid velocity through the kinetic agitation of the particles. A coupling of the two models is proposed thanks to an original adaptation of the columnar model allowing for undercooling calculation: a solid-liquid interfacial area density is introduced and calculated. Finally, direct numerical simulations of crystal growth with a diffuse-interface method are proposed for a representation of local phenomena. (author)

  13. Exploring Use of New Media in Environmental Education Contexts: Introducing Visitors' Technology Use in Zoos Model

    Science.gov (United States)

    Yocco, Victor; Danter, Elizabeth H.; Heimlich, Joseph E.; Dunckel, Betty A.; Myers, Chris

    2011-01-01

    Modern zoological gardens have invested substantial resources in technology to deliver environmental education concepts to visitors. Investment in these media reflects a currently unsubstantiated belief that visitors will both use and learn from these media alongside more traditional and less costly displays. This paper proposes a model that…

  14. A Path Model of School Violence Perpetration: Introducing Online Game Addiction as a New Risk Factor.

    Science.gov (United States)

    Kim, Jae Yop; Lee, Jeen Suk; Oh, Sehun

    2015-08-10

    Drawing on the cognitive information-processing model of aggression and the general aggression model, we explored why adolescents become addicted to online games and how their immersion in online games affects school violence perpetration (SVP). For this purpose, we conducted statistical analyses on 1,775 elementary and middle school students who resided in northern districts of Seoul, South Korea. The results validated the proposed structural equation model and confirmed the statistical significance of the structural paths from the variables; that is, the paths from child abuse and self-esteem to SVP were significant. The levels of self-esteem and child abuse victimization affected SVP, and this effect was mediated by online game addiction (OGA). Furthermore, a multigroup path analysis showed significant gender differences in the path coefficients of the proposed model, indicating that gender exerted differential effects on adolescents' OGA and SVP. Based on these results, prevention and intervention methods to curb violence in schools have been proposed. © The Author(s) 2015.

  15. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    Science.gov (United States)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assessing the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The influence of the parameter ranges on meta-model accuracy has also been investigated. The results of the study reveal that the meta-models are effective in reproducing with accuracy the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve rockfall quantitative hazard assessment and optimise rockfall mitigation strategies.

  16. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  17. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    Several contact models are available in the commercial software PFC3D, including the slip model, the linear stiffness contact model, and the contact bond model. A macro-scale Burger's model was first established, and the input parameters of Burger's contact model were calibrated by adjusting them so that the model fitted the experimental data for the complex modulus. Three different approaches have been used and compared for calibrating Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulations under dynamic strain-controlled loading. The excellent agreement between the predicted

  18. A generalized longitudinal mixture IRT model for measuring differential growth in learning environments.

    Science.gov (United States)

    Kadengye, Damazo T; Ceulemans, Eva; Van den Noortgate, Wim

    2014-09-01

    This article describes a generalized longitudinal mixture item response theory (IRT) model that allows for detecting latent group differences in item response data obtained from electronic learning (e-learning) environments or other learning environments that result in large numbers of items. The described model can be viewed as a combination of a longitudinal Rasch model, a mixture Rasch model, and a random-item IRT model, and it includes some features of the explanatory IRT modeling framework. The model assumes the possible presence of latent classes in item response patterns, due to initial person-level differences before learning takes place, to latent class-specific learning trajectories, or to a combination of both. Moreover, it allows for differential item functioning over the classes. A Bayesian model estimation procedure is described, and the results of a simulation study are presented that indicate that the parameters are recovered well, particularly for conditions with large item sample sizes. The model is also illustrated with an empirical sample data set from a Web-based e-learning environment.
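    The response probability combining the three ingredients (longitudinal Rasch, mixture Rasch, class-specific growth) can be sketched as follows; the symbols are illustrative, not the paper's notation.

```python
import numpy as np

def p_correct(theta_p, gamma_c, t, b_i):
    """Illustrative item-response probability: person ability theta_p grows
    at the latent class's learning rate gamma_c over learning opportunities
    t, and b_i is the item difficulty (a random-item IRT model would draw
    b_i from a distribution rather than treat it as fixed)."""
    return 1.0 / (1.0 + np.exp(-(theta_p + gamma_c * t - b_i)))
```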

  19. Loss Severity Distribution Estimation Of Operational Risk Using Gaussian Mixture Model For Loss Distribution Approach

    OpenAIRE

    Sholihat, Seli Siti; Murfi, Hendri

    2016-01-01

    Banks must be able to manage all banking risks, one of which is operational risk. Banks manage operational risk by estimating the capital needed to cover it, known as economic capital (EC). The Loss Distribution Approach (LDA) is a popular method for estimating economic capital. This paper proposes the Gaussian Mixture Model (GMM) for the severity distribution estimation of the Loss Distribution Approach. The result of this research is that the EC value of the LDA method using GMM is smaller by 2%-...

  20. Modeling the flow of activated H2 + CH4 mixture by deposition of diamond nanostructures

    Directory of Open Access Journals (Sweden)

    Plotnikov Mikhail

    2017-01-01

    Full Text Available An algorithm of the direct simulation Monte Carlo method for the flow of a hydrogen-methane mixture in a cylindrical channel is developed. Heterogeneous reactions on the tungsten channel surfaces are included in the model, and their effects on the flow are analyzed. A one-dimensional approach based on the solution of equilibrium chemical kinetics equations is used to analyze gas-phase methane decomposition. The obtained results may be useful for the optimization of gas-dynamic sources of activated gas for diamond synthesis.

  1. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U.; Benz, P.; Mantharas, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion and, in particular, on the means of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  2. Using incremental general regression neural network for learning mixture models from incomplete data

    OpenAIRE

    Abas, Ahmed R.

    2011-01-01

    Finite mixture models (FMM) are a well-known pattern recognition method, in which parameters are commonly determined from complete data using the Expectation Maximization (EM) algorithm. In this paper, a new algorithm is proposed to determine FMM parameters from incomplete data. Compared with a modified EM algorithm proposed earlier, the new algorithm performs better when the dimensions containing missing values are at least moderately correlated...

  3. The Precise Measurement of Vapor-Liquid Equilibrium Properties of the CO2/Isopentane Binary Mixture, and Fitted Parameters for a Helmholtz Energy Mixture Model

    Science.gov (United States)

    Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.

    2017-10-01

    Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which were published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.

  4. Fermi problems as tasks to introduce modelling: what we know and what else we should know

    Directory of Open Access Journals (Sweden)

    Lluís Albarracín

    2017-08-01

    Full Text Available Fermi problems have been widely used in physics teaching at university level in the United States. Multiple recommendations for their use in other educational areas can be found in the literature, as in the case of introducing mathematical modelling, but their presence in mathematics classrooms is not yet widespread. We present these problems and discuss their definition and the characteristics that make them particularly interesting for the use of mathematics in real contexts. We also review the aspects that have been investigated from the perspective of mathematics education, especially the way in which students generate mathematical models to solve them, and we point out some directions that should be addressed in future research.

  5. Teaching Trauma: A Model for Introducing Traumatic Materials in the Classroom

    Directory of Open Access Journals (Sweden)

    Jessica D. Cless

    2017-09-01

    Full Text Available University courses in disciplines such as social work, family studies, humanities, and other areas often use classroom materials that contain traumatic material (Barlow & Becker-Blease, 2012). While many recommendations based on trauma theory exist for instructors at the university level, these are often made in the context of clinical training programs, rather than at the undergraduate level across disciplines. Furthermore, no organized model exists to aid instructors in developing a trauma-informed pedagogy for teaching courses on traumatic stress, violence, and other topics that may pose a risk for secondary traumatic stress in the classroom (Kostouros, 2008). This paper seeks to bridge the gap between trauma theory and the implementation of sensitive content in higher-education classrooms, and presents a model of trauma-informed teaching that was developed in the context of an undergraduate trauma studies program. Implications and future directions for research in the area of trauma-informed university classrooms are discussed.

  6. Unfreezing the Flexnerian Model: introducing longitudinal integrated clerkships in rural communities.

    Science.gov (United States)

    Bing-You, Robert G; Trowbridge, Robert L; Kruithoff, Catherine; Daggett, John L

    2014-01-01

    Physician shortages in rural areas remain severe but may be ameliorated by recent expansions in medical school class sizes. Expanding student exposure to rural medicine through prolonged clinical experiences in rural areas may increase the likelihood of students pursuing a career in rural medicine. This research sought to investigate the perspective of rural physicians on the introduction of a rurally based nine-month Longitudinal Integrated Clerkship (LIC). In this mixed-methods study, nine physician leaders from five rural hospitals in Maine, USA, participating in an LIC were interviewed. Semi-structured interviews were audiotaped and transcribed. Qualitative analysis techniques were used to code the transcripts and develop themes. Forty-seven participating rural LIC preceptors were also surveyed through an online survey. Four major themes related to implementing the LIC model emerged: (1) melting old ways, (2) overcoming fears, (3) synergy of energy, and (4) benefits all-around. The faculty were very positive about the LIC, with increased job satisfaction, practice morale, and ongoing learning, but were concerned about the financial impact on productivity. The importance of these themes and perceptions is discussed within Lewin's three-stage model of change. These results describe how the innovative LIC model can conceptually unfreeze the traditional Flexnerian construct for rural physicians. Highlighting the many stakeholder benefits and addressing the anxieties and fears of rural faculty may facilitate the implementation of a rural LIC. Given the net favorable perception of the LIC among rural faculty, this educational model has the potential to play a major role in increasing the rural workforce.

  7. Introducing a rainfall compound distribution model based on weather patterns sub-sampling

    Directory of Open Access Journals (Sweden)

    F. Garavaglia

    2010-06-01

    Full Text Available This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasted synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology and allow seasonal rainfall records to be split into more homogeneous sub-samples in terms of meteorological genesis.

    First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears to be a reasonable hypothesis. This first part is illustrated with two daily rainfall records from the SE of France.

    The distribution of the multi-exponential weather patterns (MEWP) is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to the Exponential and Generalized Pareto distributions, showing good features in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall chronicles spread over the southern half of France. All these data cover the 1953–2005 period.
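    The MEWP construction is a plain finite mixture over weather patterns; a sketch assuming exponential sub-sample marginals, in line with the asymptotic-exponential hypothesis above (names illustrative):

```python
import numpy as np

def mewp_cdf(x, freqs, scales):
    """Seasonal MEWP distribution: mixture of per-weather-pattern marginals,
    weighted by each pattern's relative frequency of occurrence.
    freqs[k]: occurrence frequency of pattern k; scales[k]: exponential
    scale of its rainfall sub-sample."""
    freqs = np.asarray(freqs, dtype=float) / np.sum(freqs)
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(freqs * (1.0 - np.exp(-x / np.asarray(scales))), axis=-1)
```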

  8. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Full Text Available Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng's modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and predictions using the ZSR model or Peng's modified UNIFAC model are higher than those for the single-component cases. Measurements of the overall water vapor pressure for 50 : 50 molar mixtures of malonic/glutaric acids closely followed that for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.
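    For reference, the ZSR relation used as a mixture baseline here has a one-line form: at a fixed water activity, the water mass bound by the mixture is the sum of the water masses each solute would bind alone at that activity. A sketch with illustrative names:

```python
def zsr_water_mass(moles, single_solute_molality):
    """ZSR estimate of the water mass (kg) associated with a solute mixture
    at a given water activity a_w: each solute i, present as moles[i] mol,
    binds moles[i] / m_i0(a_w) kg of water, where m_i0 is the molality of
    the corresponding single-solute solution at that water activity."""
    return sum(n / m0 for n, m0 in zip(moles, single_solute_molality))
```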

  9. SAR Images Statistical Modeling and Classification Based on the Mixture of Alpha-Stable Distributions

    Directory of Open Access Journals (Sweden)

    Fangling Pu

    2013-05-01

    Full Text Available This paper proposes the mixture of Alpha-stable (MAS) distributions for modeling the statistical properties of Synthetic Aperture Radar (SAR) images in a supervised Markovian classification algorithm. Our work is motivated by the fact that natural scenes consist of various reflectors of different types that are typically concentrated within a small area, and SAR images generally exhibit sharp peaks, heavy tails, and even multimodal statistical properties, especially at high resolution. Unimodal distributions do not fit such statistical properties well, and thus a multimodal approach is necessary. Driven by the multimodality and impulsiveness of high-resolution SAR image histograms, we utilize the mixture of Alpha-stable distributions to describe such characteristics. A pseudo-simulated annealing (PSA) estimator based on Markov chain Monte Carlo (MCMC) is presented to efficiently estimate the parameters of the mixture of Alpha-stable distributions. To validate the proposed PSA estimator, we apply it to simulated data and compare its performance to that of a state-of-the-art estimator. Finally, we exploit the MAS distributions and a Markovian context for SAR image classification. The effectiveness of the proposed classifier is demonstrated by experiments on TerraSAR-X images, which verifies the validity of the MAS distributions for the modeling and classification of SAR images.

  10. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    Directory of Open Access Journals (Sweden)

    H. Portner

    2010-11-01

    Full Text Available Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS.

    We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter confidence ranges of the formulations by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the confidence ranges of the parameters and ran simulations for each pair of temperature response function and calibration site. We analyzed both the long-term and the short-term heterotrophic soil carbon dynamics over a virtual elevation gradient in southern Switzerland.

    The temperature relationship of Lloyd-Taylor fitted the overall dataset best, as the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable to all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the lack of confidence in the parameter estimates of the temperature response, which increased with increasing temperature, and (2) the size of the simulated soil carbon pools, which increased with elevation, as slower turnover times lead to higher carbon stocks and higher associated uncertainties. Our results therefore indicate that such projections are more uncertain for higher elevations and hence also higher latitudes, which are of key importance for the global terrestrial carbon budget.
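
    For reference, the Lloyd-Taylor response that fitted best has a standard closed form, and its parameters can be estimated by nonlinear regression much as described. The sketch below fits it to synthetic respiration data with scipy; the observations are invented for illustration and do not come from the study's eight datasets.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T0 = 227.13  # K, often held fixed (Lloyd & Taylor, 1994)

    def lloyd_taylor(T, R10, E0):
        """Respiration rate at temperature T (K); R10 is the rate at 283.15 K."""
        return R10 * np.exp(E0 * (1.0 / (283.15 - T0) - 1.0 / (T - T0)))

    # Synthetic observations (assumed): rates between 0 and 25 degC with 5% noise
    rng = np.random.default_rng(1)
    T_obs = np.linspace(273.15, 298.15, 30)
    R_obs = lloyd_taylor(T_obs, 2.5, 308.56) * rng.normal(1.0, 0.05, T_obs.size)

    popt, pcov = curve_fit(lloyd_taylor, T_obs, R_obs, p0=(1.0, 300.0))
    perr = np.sqrt(np.diag(pcov))  # parameter standard errors -> confidence ranges
    print(f"R10 = {popt[0]:.2f} +/- {perr[0]:.2f}, E0 = {popt[1]:.1f} +/- {perr[1]:.1f}")
    ```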

  11. Introducing the bio-psycho-social-physical model of dementia through a collective case study design.

    Science.gov (United States)

    Keady, John; Jones, Lesley; Ward, Richard; Koch, Susan; Swarbrick, Caroline; Hellström, Ingrid; Davies-Quarrell, Vivienne; Williams, Sion

    2013-10-01

    To provide evidence for the development of a physical domain attached to the well-known bio-psycho-social model of dementia. The objectives were to develop a set of international case studies that followed a trajectory approach, from prevention to end-of-life care. In the UK the bio-psycho-social model has informed the shape of the National Institute for Health and Clinical Excellence and the Social Care Institute for Excellence 'dementia' guideline. However, limited attention has been paid to outlining and describing a physical domain of dementia, a discrepancy that informed the rationale for this study. A collective case study design was used to address the research aim and objectives. Case studies from along the trajectory of dementia were provided by an international team of contributors from an inter-disciplinary background comprising nursing (general and mental health), social work and social science. The team's synthesis and analysis of the six case studies generated five repeating themes, each becoming a component of a 'physical' domain of dementia. The five identified physical components were: (1) physical well-being, (2) physical health and examination, (3) physical care, (4) physical treatment and (5) physical environment. The development of a bio-psycho-social-physical model of dementia presents a holistic and culturally sensitive approach to understanding the experience of living with dementia, and to providing care and support in a variety of situations and contexts. The physical domain of dementia has particular relevance to nursing and nursing practice, such as providing physical care at the end-of-life. The interplay between the biological-psychological-social-physical domains of dementia and the trajectory of dementia could form the basis of clinical decision-making and practice. © 2012 Blackwell Publishing Ltd.

  12. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates the implications for trophic dynamics of including three contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N (mol P)-1 (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different impacts on ecosystem functioning.

  13. Introducing the Evaluation Tools for HSE Management System Performance Using Balanced Score Card Model

    Directory of Open Access Journals (Sweden)

    Ali Mohammadi

    2016-12-01

    Full Text Available Background: The performance of HSE units has various dimensions, leading to different outcomes; thus, any industry should be capable of evaluating these systems. The aim of this study was to design a standard questionnaire for evaluating the performance of HSE management systems, employing the Balanced Score Card model. Methods: In this study, we first determined the criteria to be evaluated within the framework of the Balanced Score Card model, based on the objectives and strategies of the HSE management system and existing standards, and then designed questions on every criterion. We used content validity measures (CVR and CVI) and Cronbach's alpha to determine the validity and reliability of the questionnaire. Results: The primary questionnaire comprised 126 questions, some of which were omitted based on the CVR and CVI values obtained. For the environmental dimension, the average CVR was 0.75 and the average CVI was 0.71. Conclusion: Given the reliability and validity results and its standardized design, we suggest using this questionnaire for evaluating HSE management system performance in organizations and industries operating such a system.

  14. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
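
    To make the mixture-model idea concrete, here is a minimal EM sketch for a logistic outcome model with a binary exposure whose class membership is known only through a predicted probability; the data generation, variable names, and simple setup are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x_true = rng.binomial(1, 0.4, n)  # unobserved true exposure class
    # Predicted probability of exposure from some external prediction model (assumed)
    p = np.clip(0.7 * x_true + 0.15 + rng.normal(0, 0.05, n), 0.01, 0.99)
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * x_true))))  # binary outcome

    w = p.copy()  # initial posterior P(X=1 | data)
    for _ in range(50):  # EM iterations
        # M-step: weighted logistic regression on data augmented with both exposure values
        X_aug = np.r_[np.ones(n), np.zeros(n)].reshape(-1, 1)
        y_aug = np.r_[y, y]
        weights = np.r_[w, 1 - w]
        model = LogisticRegression().fit(X_aug, y_aug, sample_weight=weights)
        # E-step: update posterior exposure probabilities from the outcome likelihoods
        f1 = model.predict_proba(np.ones((n, 1)))[:, 1]
        f0 = model.predict_proba(np.zeros((n, 1)))[:, 1]
        like1 = np.where(y == 1, f1, 1 - f1)
        like0 = np.where(y == 1, f0, 1 - f0)
        w = p * like1 / (p * like1 + (1 - p) * like0)

    # Should approximately recover the true exposure log-odds ratio (1.2 here)
    print("log-odds ratio for exposure:", model.coef_[0][0])
    ```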

  15. The agony of ambivalence and ways to resolve it: introducing the MAID model.

    Science.gov (United States)

    van Harreveld, Frenk; van der Pligt, Joop; de Liver, Yael N

    2009-02-01

    People are generally averse toward conflict between beliefs and/or feelings underlying their attitudes, that is, attitudinal ambivalence. This review integrates literature on attitudinal ambivalence with theories on decision making and coping strategies to gain a better understanding of when and how people deal with feelings of ambivalence. First, it shows that ambivalence is experienced as being particularly unpleasant when the ambivalent attitude holder is confronted with the necessity to make a choice concerning the ambivalent attitude object; then, incongruent evaluative components of the attitude become accessible, and feelings of uncertainty about the potential outcomes arise, which may involve the anticipation of aversive emotions. Several coping strategies are employed when ambivalence is experienced as unpleasant. Emotion- and problem-focused coping strategies are discussed. The article concludes with a discussion of the MAID (model of ambivalence-induced discomfort), which aims to describe the consequences of ambivalence.

  16. Introducing a Model for Suspicious Behaviors Detection in Electronic Banking by Using Decision Tree Algorithms

    Directory of Open Access Journals (Sweden)

    Rohulla Kosari Langari

    2014-02-01

    Full Text Available The transformation of the world through information technology and the development of the Internet have created competitive knowledge in the field of electronic commerce, increasing the competitive potential among organizations. In this setting, the growth of fast, high-quality commercial transactions depends on dynamic electronic banking systems that use modern technology to facilitate electronic business processes. Internet banking is a fundamental pillar of e-banking, yet in cyberspace it faces various obstacles and threats. One such challenge is the lack of guaranteed security for financial transactions, together with suspicious and unusual behavior such as mail fraud aimed at financial abuse. Various systems based on machine intelligence and data mining techniques have been designed to detect fraud in user behavior and have been applied in industries such as insurance, medicine and banking. The main aim of this article is to recognize unusual user behavior in e-banking systems: detecting user behavior and categorizing emerging patterns in order to predict unauthorized penetration and to detect suspicious activity. Since user behavior in Internet systems is uncertain, and since transaction records can help in understanding these movements, decision trees, a common machine learning tool for classification and prediction, were adopted. This research first determines the relevant banking variables and weights each of them for its role in Internet behavior, and then combines the various behavior patterns to derive inductive rules capable of recognizing different behaviors. Finally, four algorithms (Chaid, ex_Chaid, C4.5 and C5.0) are compared and evaluated for classification and detection of suspicious behaviors.
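
    A minimal modern counterpart to the classification step might use CART via scikit-learn (rather than CHAID or C5.0, which have no standard Python implementation); the transaction features, labels, and the rule generating them below are fabricated purely for illustration.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(42)
    n = 5000
    # Assumed behavioral features: amount, hour of day, recent transfer count, new payee flag
    X = np.column_stack([
        rng.lognormal(4, 1, n),   # transaction amount
        rng.integers(0, 24, n),   # hour of day
        rng.poisson(2, n),        # transfers in the last 24 h
        rng.binomial(1, 0.1, n),  # new payee
    ])
    # Fabricated rule for "suspicious": large night-time transfer to a new payee
    y = ((X[:, 0] > 200) & ((X[:, 1] < 6) | (X[:, 1] > 22)) & (X[:, 3] == 1)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, class_weight="balanced").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    # Inductive rules recovered by the tree, analogous to the behavior rules in the paper
    print(export_text(clf, feature_names=["amount", "hour", "n_recent", "new_payee"]))
    ```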

  17. Modeling the long-term effects of introduced herbivores on the spread of an invasive tree

    Science.gov (United States)

    Zhang, Bo; DeAngelis, Don; Rayamajhi, Min B.; Botkin, Daniel B.

    2017-01-01

    Context: Melaleuca quinquenervia (Cav.) Blake (hereafter melaleuca) is an invasive tree from Australia that has spread over the freshwater ecosystems of southern Florida, displacing native vegetation and thus threatening native biodiversity. Suppression of melaleuca appears to be progressing through the introduction of insect species, the weevil Oxiops vitiosa and the psyllid Boreioglycaspis melaleucae. Objective: To improve understanding of the possible effects of herbivory on the landscape dynamics of melaleuca in native southern Florida plant communities. Methods: We projected likely future changes in plant communities using the individual-based modeling platform JABOWA-II, by simulating successional processes occurring in two types of southern Florida habitat, cypress swamp and bay swamp, occupied by native species and melaleuca, with the impact of insect herbivores. Results: Computer simulations show melaleuca invasion leads to decreases in density and basal area of native species, but herbivory would effectively control melaleuca to low levels, resulting in a recovery of native species. When herbivory was modeled on pure melaleuca stands, it was more effective in stands with initially larger-sized melaleuca. Although the simulated herbivory did not eliminate melaleuca, it decreased its presence dramatically in all cases, supporting the long-term effectiveness of herbivory in controlling melaleuca invasion. Conclusions: The results provide three conclusions relevant to management: (1) the introduction of insect herbivory that has been applied to melaleuca appears sufficient to suppress melaleuca over the long term, (2) dominant native species may recover in about 50 years, and (3) regrowth of native species will further suppress melaleuca through competition.

  18. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
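
    The Burger's model underlying the contact law (a Maxwell element in series with a Kelvin-Voigt element) has a standard creep compliance, sketched below with assumed parameter values; the paper's own calibration was done in YADE against the DSR data, not with this snippet.

    ```python
    import numpy as np

    def burgers_creep_compliance(t, E_m, eta_m, E_k, eta_k):
        """Creep compliance J(t) of the Burger's model:
        a Maxwell element (E_m, eta_m) in series with a Kelvin-Voigt element (E_k, eta_k).
        """
        return 1.0 / E_m + t / eta_m + (1.0 - np.exp(-E_k * t / eta_k)) / E_k

    # Assumed parameters (Pa and Pa*s), purely illustrative of a soft binder
    t = np.linspace(0.0, 10.0, 6)
    J = burgers_creep_compliance(t, E_m=1e4, eta_m=5e4, E_k=2e4, eta_k=1e4)
    print(J)  # compliance grows with time: instantaneous, delayed-elastic, viscous terms
    ```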

  19. Microstructural Analysis and Rheological Modeling of Asphalt Mixtures Containing Recycled Asphalt Materials

    Directory of Open Access Journals (Sweden)

    Augusto Cannone Falchetto

    2014-09-01

    Full Text Available The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP) and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits were proposed on the amount of recycled materials which can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low-temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired based on digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of the aggregate, asphalt mastic and air void phases, while an opposite trend is observed when RAS is included. Analogical and semi-empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.

  20. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
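
    Once a mixture fit yields posterior probabilities of consistency, FDR-controlled selection reduces to thresholding them; the generic Bayesian FDR rule below is a common way to do this and is shown with toy posteriors, not with the authors' semi-nonparametric fit.

    ```python
    import numpy as np

    def select_by_bayesian_fdr(post_consistent, alpha=0.05):
        """Select units whose cumulative posterior error rate stays below alpha.

        post_consistent[i] = posterior probability (from some mixture fit) that
        protein i is functionally consistent. The estimated FDR of a selected set
        is the average of (1 - posterior) over that set.
        """
        order = np.argsort(post_consistent)[::-1]  # most confident first
        running_fdr = (np.cumsum(1 - post_consistent[order])
                       / np.arange(1, len(order) + 1))
        ok = np.nonzero(running_fdr <= alpha)[0]
        n_keep = int(ok.max()) + 1 if ok.size else 0
        selected = np.zeros(len(post_consistent), dtype=bool)
        selected[order[:n_keep]] = True
        return selected

    # Toy posteriors standing in for a fitted mixture model's output
    post = np.array([0.99, 0.97, 0.90, 0.60, 0.30, 0.05])
    print(select_by_bayesian_fdr(post, alpha=0.05))  # keeps only high-confidence proteins
    ```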

  1. An ecological type nonlinear model for the removal of carbon dioxide from the atmosphere by introducing liquid species

    Directory of Open Access Journals (Sweden)

    Shyam Sundar

    2013-06-01

    Full Text Available The average temperature of our planet has been increasing over the past several decades due to the emission of global warming gases such as CO2 and CH4 into the atmosphere, leading to undesirable environmental consequences. It is therefore necessary to find a mechanism by which a global warming gas can be removed from the regional atmosphere. In this paper, we propose an ecological type nonlinear mathematical model for the removal of the global warming gas CO2 from the regional atmosphere by an externally introduced liquid species, which may react with this gas and remove it by gravity. The model consists of three dependent variables, namely the concentration of carbon dioxide, the concentration of the externally introduced liquid species, and the concentration of particulate matter (secondary species formed due to the interaction of carbon dioxide with the liquid species). The local and global stability conditions are discussed using the Routh-Hurwitz criteria and a suitable Lyapunov function, respectively. It is shown, analytically and numerically, that the concentration of carbon dioxide decreases as the rate of introduction of the external liquid species increases.
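
    A numerical companion to such a three-variable interaction model could be integrated as follows; the equations merely paraphrase the structure described (emission, introduced species, particulate formation with gravitational removal), with entirely assumed rate constants, and are not the paper's exact formulation.

    ```python
    from scipy.integrate import solve_ivp

    # Hypothetical parameters: Q = CO2 emission rate, q = liquid-species introduction
    # rate, k = interaction rate, and first-order losses d1, d2, d3 (units arbitrary).
    Q, q, k, d1, d2, d3 = 1.0, 0.8, 0.05, 0.02, 0.10, 0.30

    def rhs(t, y):
        C, L, P = y  # CO2, introduced liquid species, particulate matter
        dC = Q - d1 * C - k * C * L  # CO2 removed by reaction with the liquid species
        dL = q - d2 * L - k * C * L
        dP = k * C * L - d3 * P      # particulates formed, then removed by gravity
        return [dC, dL, dP]

    sol = solve_ivp(rhs, (0, 500), [10.0, 0.0, 0.0], dense_output=True)
    # The CO2 level settles lower as the introduction rate q is increased
    print("near-equilibrium state:", sol.y[:, -1])
    ```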

  2. Introducing a Semi-Coated Model to Investigate Antibacterial Effects of Biocompatible Polymers on Titanium Surfaces

    Directory of Open Access Journals (Sweden)

    Andreas Winkel

    2015-02-01

    Full Text Available Peri-implant infections from bacterial biofilms on artificial surfaces are a common threat to all medical implants. They are a handicap for the patient and can lead to implant failure or even life-threatening complications. New implant surfaces have to be developed to reduce biofilm formation and to improve the long-term prognosis of medical implants. The aim of this study was (1) to develop a new method to test the antibacterial efficacy of implant surfaces by direct surface contact and (2) to elucidate whether an innovative antimicrobial copolymer coating of 4-vinyl-N-hexylpyridinium bromide and dimethyl(2-methacryloyloxyethyl) phosphonate (VP:DMMEP 30:70) on titanium is able to reduce the attachment of bacteria prevalent in peri-implant infections. With a new in vitro model with semi-coated titanium discs, we were able to show a dramatic reduction in the adhesion of various pathogenic bacteria (Streptococcus sanguinis, Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis), completely independently of effects caused by soluble materials. In contrast, soft tissue cells (human gingival or dermis fibroblasts) were less affected by the same coating, despite a moderate reduction in initial adhesion of gingival fibroblasts. These data confirm the hypothesis that VP:DMMEP 30:70 is a promising antibacterial copolymer that may be of use in several clinical applications.

  3. Introducing a Semi-Coated Model to Investigate Antibacterial Effects of Biocompatible Polymers on Titanium Surfaces

    Science.gov (United States)

    Winkel, Andreas; Dempwolf, Wibke; Gellermann, Eva; Sluszniak, Magdalena; Grade, Sebastian; Heuer, Wieland; Eisenburger, Michael; Menzel, Henning; Stiesch, Meike

    2015-01-01

    Peri-implant infections from bacterial biofilms on artificial surfaces are a common threat to all medical implants. They are a handicap for the patient and can lead to implant failure or even life-threatening complications. New implant surfaces have to be developed to reduce biofilm formation and to improve the long-term prognosis of medical implants. The aim of this study was (1) to develop a new method to test the antibacterial efficacy of implant surfaces by direct surface contact and (2) to elucidate whether an innovative antimicrobial copolymer coating of 4-vinyl-N-hexylpyridinium bromide and dimethyl(2-methacryloyloxyethyl) phosphonate (VP:DMMEP 30:70) on titanium is able to reduce the attachment of bacteria prevalent in peri-implant infections. With a new in vitro model with semi-coated titanium discs, we were able to show a dramatic reduction in the adhesion of various pathogenic bacteria (Streptococcus sanguinis, Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis), completely independently of effects caused by soluble materials. In contrast, soft tissue cells (human gingival or dermis fibroblasts) were less affected by the same coating, despite a moderate reduction in initial adhesion of gingival fibroblasts. These data confirm the hypothesis that VP:DMMEP 30:70 is a promising antibacterial copolymer that may be of use in several clinical applications. PMID:25690041

  4. Introducing a Virtual Lesion Model of Dysphagia Resulting from Pharyngeal Sensory Impairment

    Directory of Open Access Journals (Sweden)

    Paul Muhle

    2018-01-01

    Full Text Available Background/Aims: Performing neurophysiological and functional imaging studies in severely affected patients to investigate novel neurostimulation techniques for the treatment of neurogenic dysphagia is difficult. Therefore, basic research needs to be conducted in healthy subjects. Swallowing is a motor function highly dependent on sensory afferent input. Here we propose a virtual peripheral sensory lesion model to mimic pharyngeal sensory impairment, which is known as a major contributor to dysphagia in neurological disease. Methods: In this randomized crossover study of 11 healthy volunteers, cortical activation during pneumatic pharyngeal stimulation was measured applying magnetoencephalography in two separate sessions, with and without pharyngeal surface anesthesia. Results: Stimulation evoked bilateral event-related desynchronization (ERD) mainly in the caudolateral pericentral cortex. In comparison to the no-anesthesia condition, topical anesthesia led to a reduction of ERD in the beta (13–30 Hz) and low gamma (30–60 Hz) frequency ranges (p<0.05) in sensory but also motor cortical areas. Conclusions: Withdrawal of sensory afferent information by topical anesthesia leads to a reduced response to pneumatic pharyngeal stimulation in a distributed cortical sensorimotor network in healthy subjects. The proposed paradigm may serve to investigate the effect of neuromodulatory treatments specifically on pharyngeal sensory impairment as a relevant cause of neurogenic dysphagia.

  5. A framework to prevent and control tobacco among adolescents and children: introducing the IMPACT model.

    Science.gov (United States)

    Arora, Monika; Mathur, Manu Raj; Singh, Neha

    2013-03-01

    The objective of this paper is to provide a comprehensive evidence-based model aimed at addressing the multi-level risk factors influencing tobacco use among children and adolescents with multi-level policy and programmatic approaches in India. Evidence on the effectiveness of policy and program interventions from developed and developing countries was reviewed using the Pubmed, Scopus, Google Scholar and Ovid databases. This evidence was then categorized under three broad approaches: policy level approaches (increased taxation on tobacco products, smoke-free laws in public places and workplaces, effective health warnings, prohibiting tobacco advertising, promotions and sponsorships, and restricting access for minors); community level approaches (school health programs, mass media campaigns, community based interventions, promoting tobacco-free norms); and individual level approaches (promoting cessation in various settings). This review of literature around determinants and interventions was organized into the IMPACT framework. The paper further presents a comparative analysis of tobacco control interventions in India vis-à-vis the proposed approaches. Mixed results were found for prevention and control efforts targeting youth. However, this article suggests a number of intervention strategies that have been shown to be effective. Implementing these interventions in a coordinated way will provide potential synergies across interventions. Pediatricians have a prominent role in advocating and implementing the IMPACT framework in countries aiming to prevent and control tobacco use among adolescents and children.

  6. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    International Nuclear Information System (INIS)

    Turinsky, Paul J.; Abdel-Khalik, Hany S.; Stover, Tracy E.

    2011-01-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment

  7. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment

  8. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  9. Modeling phase transitions in mixtures of β–γ lens crystallins

    Science.gov (United States)

    Kastelic, Miha; Kalyuzhnyi, Yurij V.; Vlachy, Vojko

    2016-01-01

    We analyze the experimentally determined phase diagram of the γD–βB1 crystallin mixture. Proteins are described as dumbbells decorated with attractive sites to allow inter-particle interaction. We use thermodynamic perturbation theory to calculate the free energy of such mixtures and, by applying equilibrium conditions, also the compositions and concentrations of the co-existing phases. Initially we fit the Tcloud versus packing fraction η measurements for a pure (x2 = 0) γD solution in 0.1 M phosphate buffer at pH = 7.0. Another piece of experimental data, used to fix the model parameters, is the isotherm x2 vs η at T = 268.5 K, at the same pH and salt content. We use the conventional Lorentz–Berthelot mixing rules to describe cross interactions. This enables us to (i) determine model parameters for the pure βB1 crystallin protein, (ii) calculate the complete equilibrium surface (Tcloud – x2 – η) for the crystallin mixtures, and (iii) present the results for several isotherms, including the tie-lines, as well as the temperature–packing fraction curves. Good agreement with the available experimental data is obtained. An interesting result of these calculations is evidence of the coexistence of three phases. This domain appears in a region of temperatures just outside the experimental range studied so far. The input parameters leading to a good description of the experimental data revealed a large difference between the numbers of attractive sites for the γD and βB1 proteins. This interesting result may be related to the fact that γD has a more than nine times smaller quadrupole moment than its partner in the mixture. PMID:27526288
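
    The Lorentz–Berthelot mixing rules invoked for the cross interactions are simple enough to state directly; the sketch below applies them to assumed site well depths and ranges, not to the paper's fitted values.

    ```python
    import math

    def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
        """Cross-interaction parameters: arithmetic mean of size parameters,
        geometric mean of interaction strengths."""
        sigma_ij = 0.5 * (sigma_i + sigma_j)
        eps_ij = math.sqrt(eps_i * eps_j)
        return sigma_ij, eps_ij

    # Assumed site parameters for the two proteins (arbitrary units)
    sigma_gd, eps_gd = 1.0, 2.0  # gammaD-like attractive site
    sigma_bb, eps_bb = 1.2, 3.0  # betaB1-like attractive site
    print(lorentz_berthelot(sigma_gd, eps_gd, sigma_bb, eps_bb))  # (1.1, ~2.449)
    ```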

  10. Thermal conductivity of molten salt mixtures: Theoretical model supported by equilibrium molecular dynamics simulations

    Science.gov (United States)

    Gheribi, Aïmen E.; Chartrand, Patrice

    2016-02-01

    A theoretical model for the description of thermal conductivity of molten salt mixtures as a function of composition and temperature is presented. The model is derived by considering the classical kinetic theory and requires, for its parametrization, only information on thermal conductivity of pure compounds. In this sense, the model is predictive. For most molten salt mixtures, no experimental data on thermal conductivity are available in the literature. This is a hindrance for many industrial applications (in particular for thermal energy storage technologies) as well as an obvious barrier for the validation of the theoretical model. To alleviate this lack of data, a series of equilibrium molecular dynamics (EMD) simulations has been performed on several molten chloride systems in order to determine their thermal conductivity in the entire range of composition at two different temperatures: 1200 K and 1300 K. The EMD simulations are first principles type, as the potentials used to describe the interactions have been parametrized on the basis of first principle electronic structure calculations. In addition to the molten chlorides system, the model predictions are also compared to a recent similar EMD study on molten fluorides and with the few reliable experimental data available in the literature. The accuracy of the proposed model is within the reported numerical and/or experimental errors.

  11. Thermal conductivity of molten salt mixtures: Theoretical model supported by equilibrium molecular dynamics simulations.

    Science.gov (United States)

    Gheribi, Aïmen E; Chartrand, Patrice

    2016-02-28

    A theoretical model for the description of thermal conductivity of molten salt mixtures as a function of composition and temperature is presented. The model is derived by considering the classical kinetic theory and requires, for its parametrization, only information on thermal conductivity of pure compounds. In this sense, the model is predictive. For most molten salt mixtures, no experimental data on thermal conductivity are available in the literature. This is a hindrance for many industrial applications (in particular for thermal energy storage technologies) as well as an obvious barrier for the validation of the theoretical model. To alleviate this lack of data, a series of equilibrium molecular dynamics (EMD) simulations has been performed on several molten chloride systems in order to determine their thermal conductivity in the entire range of composition at two different temperatures: 1200 K and 1300 K. The EMD simulations are first principles type, as the potentials used to describe the interactions have been parametrized on the basis of first principle electronic structure calculations. In addition to the molten chlorides system, the model predictions are also compared to a recent similar EMD study on molten fluorides and with the few reliable experimental data available in the literature. The accuracy of the proposed model is within the reported numerical and/or experimental errors.

  12. An Odor Interaction Model of Binary Odorant Mixtures by a Partial Differential Equation Method

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2014-07-01

    Full Text Available A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (the tangent-intercept method) of partial molar volume, the original parameters of the corresponding formulas were reasonably replaced by perceptual measures. By these substitutions, it was possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The obtained results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. Moreover, both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, and many disadvantages (e.g., the expense of a fixed panel of odor assessors) can be avoided. Thus, the PDE model is expected to be helpful in the monitoring and management of odor pollution.
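
    The odor activity value that the model takes as input is just a concentration scaled by the odorant's detection threshold; a minimal computation, with invented concentrations and thresholds, is:

    ```python
    # Odor activity value (OAV): concentration divided by odor detection threshold.
    # Concentrations and thresholds below are invented for illustration (mg/m^3).
    mixture = {
        "benzene":      {"conc": 1.50, "threshold": 8.65},
        "ethylbenzene": {"conc": 0.40, "threshold": 0.37},
    }

    oav = {name: c["conc"] / c["threshold"] for name, c in mixture.items()}
    total = sum(oav.values())
    relative_oav = {name: v / total for name, v in oav.items()}  # model inputs
    print(oav, relative_oav)
    ```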

  13. Improving Transferability of Introduced Species’ Distribution Models: New Tools to Forecast the Spread of a Highly Invasive Seaweed

    Science.gov (United States)

    Verbruggen, Heroen; Tyberghein, Lennert; Belton, Gareth S.; Mineur, Frederic; Jueterbock, Alexander; Hoarau, Galice; Gurgel, C. Frederico D.; De Clerck, Olivier

    2013-01-01

    The utility of species distribution models for applications in invasion and global change biology is critically dependent on their transferability between regions or points in time, respectively. We introduce two methods that aim to improve the transferability of presence-only models: density-based occurrence thinning and performance-based predictor selection. We evaluate the effect of these methods along with the impact of the choice of model complexity and geographic background on the transferability of a species distribution model between geographic regions. Our multifactorial experiment focuses on the notorious invasive seaweed Caulerpa cylindracea (previously Caulerpa racemosa var. cylindracea) and uses Maxent, a commonly used presence-only modeling technique. We show that model transferability is markedly improved by appropriate predictor selection, with occurrence thinning, model complexity and background choice having relatively minor effects. The data show that, if available, occurrence records from the native and invaded regions should be combined, as this leads to models with high predictive power while reducing the sensitivity to choices made in the modeling process. The inferred distribution model of Caulerpa cylindracea shows the potential for this species to further spread along the coasts of Western Europe, western Africa and the south coast of Australia. PMID:23950789
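
    Density-based occurrence thinning can be approximated with a simple grid pass: retain at most one record per cell, so that densely sampled regions do not dominate the model fit. The sketch below is a generic version of the idea with fabricated coordinates, not the authors' exact procedure.

    ```python
    import numpy as np

    def grid_thin(lon, lat, cell_deg=0.5, seed=0):
        """Keep one occurrence record per grid cell of size cell_deg degrees."""
        rng = np.random.default_rng(seed)
        cells = np.floor(np.c_[lon, lat] / cell_deg).astype(int)
        keep = []
        for cell in np.unique(cells, axis=0):
            idx = np.nonzero((cells == cell).all(axis=1))[0]
            keep.append(rng.choice(idx))  # random representative per cell
        return np.sort(np.asarray(keep))

    # Fabricated occurrence coordinates, heavily clustered in one area
    rng = np.random.default_rng(1)
    lon = np.r_[rng.normal(15.0, 0.2, 200), rng.uniform(-10, 35, 20)]
    lat = np.r_[rng.normal(38.0, 0.2, 200), rng.uniform(30, 45, 20)]
    kept = grid_thin(lon, lat)
    print(f"{len(lon)} records thinned to {len(kept)}")
    ```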

  14. Introducing Toxics

    Directory of Open Access Journals (Sweden)

    David C. Bellinger

    2013-04-01

    Full Text Available With this inaugural issue, Toxics begins its life as a peer-reviewed, open access journal focusing on all aspects of toxic chemicals. We are interested in publishing papers that present a wide range of perspectives on toxicants and naturally occurring toxins, including exposure, biomarkers, kinetics, biological effects, fate and transport, treatment, and remediation. Toxics differs from many other journals in the absence of a page or word limit on contributions, permitting authors to present their work in as much detail as they wish. Toxics will publish original research papers, conventional reviews, meta-analyses, short communications, theoretical papers, case reports, commentaries and policy perspectives, and book reviews (book reviews will be solicited and should not be submitted without invitation). Toxins and toxicants concern individuals from a wide range of disciplines, and Toxics is interested in receiving papers that represent the full range of approaches applied to their study, including in vitro studies, studies that use experimental animal or non-animal models, studies of humans or other biological populations, and mathematical modeling. We are excited to get underway and look forward to working with authors in the scientific and medical communities and providing them with a novel venue for sharing their work.

  15. Mixture model with multiple allocations for clustering spatially correlated observations in the analysis of ChIP-Seq data.

    Science.gov (United States)

    Ranciati, Saverio; Viroli, Cinzia; Wit, Ernst C

    2017-11-01

    Model-based clustering is a technique widely used to group a collection of units into mutually exclusive groups. There are, however, situations in which an observation could in principle belong to more than one cluster. In the context of next-generation sequencing (NGS) experiments, for example, the signal observed in the data might be produced by two (or more) different biological processes operating together and a gene could participate in both (or all) of them. We propose a novel approach to cluster NGS discrete data, coming from a ChIP-Seq experiment, with a mixture model, allowing each unit to belong potentially to more than one group: these multiple allocation clusters can be flexibly defined via a function combining the features of the original groups without introducing new parameters. The formulation naturally gives rise to a 'zero-inflation group' in which values close to zero can be allocated, acting as a correction for the abundance of zeros that manifest in this type of data. We take into account the spatial dependency between observations, which is described through a latent conditional autoregressive process that can reflect different dependency patterns. We assess the performance of our model within a simulation environment and then we apply it to real ChIP-Seq data. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    Science.gov (United States)

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of
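
    The core of such a mixture density network is its negative log-likelihood under a Gaussian mixture whose parameters the network outputs; a self-contained numpy version of that loss, with assumed array shapes and independent of any particular network library, is:

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def mdn_negative_log_likelihood(weights, means, sigmas, targets):
        """NLL of targets under a diagonal Gaussian mixture emitted by a network.

        weights: (N, K) mixing coefficients per sample (rows sum to 1)
        means:   (N, K, D) component means; sigmas: (N, K, D) component std devs
        targets: (N, D) observed frames (e.g., reduced motion coordinates)
        """
        diff = targets[:, None, :] - means  # (N, K, D)
        log_norm = -0.5 * np.sum((diff / sigmas) ** 2
                                 + np.log(2 * np.pi * sigmas**2), axis=2)
        log_mix = logsumexp(np.log(weights) + log_norm, axis=1)  # (N,)
        return -np.mean(log_mix)

    # Toy check with assumed shapes: 4 samples, 3 components, 2 output dimensions
    rng = np.random.default_rng(0)
    w = np.full((4, 3), 1 / 3)
    mu = rng.normal(size=(4, 3, 2))
    sg = np.full((4, 3, 2), 0.5)
    y = rng.normal(size=(4, 2))
    print(mdn_negative_log_likelihood(w, mu, sg, y))
    ```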

  17. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    Science.gov (United States)

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach

  18. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    Energy Technology Data Exchange (ETDEWEB)

    Finne, E.F. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway) and University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway)]. E-mail: eivind.finne@niva.no; Cooper, G.A. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Koop, B.F. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Hylland, K. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway); University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway); Tollefsen, K.E. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway)

    2007-03-10

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as

  19. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    International Nuclear Information System (INIS)

    Finne, E.F.; Cooper, G.A.; Koop, B.F.; Hylland, K.; Tollefsen, K.E.

    2007-01-01

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as well as

  20. An analytical solubility model for nitrogen-methane-ethane ternary mixtures

    Science.gov (United States)

    Hartwig, Jason; Meyerhofer, Peter; Lorenz, Ralph; Lemmon, Eric

    2018-01-01

    Saturn's moon Titan has surface bodies of liquid hydrocarbons and a thick, cold, nitrogen atmosphere, and is a target for future exploration. Critical to the design and operation of vehicles for this environment is knowledge of the amount of dissolved nitrogen gas within the cryogenic liquid methane and ethane seas. This paper rigorously reviews experimental data on the vapor-liquid equilibrium of nitrogen/methane/ethane mixtures, noting the possibility for split liquid phases, and presents simple analytical models for conveniently predicting the solubility of nitrogen in pure liquid ethane, pure liquid methane, and a mixture of liquid ethane and methane. Model coefficients are fit to three temperature ranges near the critical point, in the intermediate range, and near the freezing point to permit accurate predictions across the full range of thermodynamic conditions. The models are validated against the consolidated database of 2356 experimental data points, with mean absolute error between data and model less than 8% for both the binary nitrogen/methane and nitrogen/ethane systems, and less than 17% for the ternary nitrogen/methane/ethane system. The model can be used to predict the mole fractions of ethane, methane, and nitrogen as a function of location within the Titan seas.

  1. Pervaporation separation of n-heptane/thiophene mixtures by polyethylene glycol membranes: Modeling and experimental.

    Science.gov (United States)

    Lin, Ligang; Zhang, Yuzhong; Kong, Ying

    2009-11-01

    Gasoline desulfurization by membrane processes is a newly emerged technology which provides an efficient new approach to sulfur removal and has gained increasing attention in the membrane and petrochemical fields. A deep understanding of the solution/diffusion behavior of gasoline molecules on/in the membrane can provide helpful information for improving or optimizing membrane performance. In this study, the desulfurization mechanism of polyethylene glycol (PEG) membranes has been investigated through the sorption and diffusion behavior of typical sulfur and hydrocarbon species in PEG membranes. A solution-diffusion model based on UNIFAC and free volume theory has been established. Pervaporation (PV) and sorption experiments were conducted to compare with the model calculations and to analyze the mass transport behavior. The dynamic sorption curves for pure components and the sorption experiments for binary mixtures showed that thiophene, which had a higher solubility coefficient than n-heptane, was the preferentially sorbed component, which is key to the separation of thiophene/hydrocarbon mixtures. In all cases, the model calculations fit the experimental data well. The UNIFAC model was a sound way to predict the solubility of solvents in membranes. The established model can effectively predict the removal of thiophene species from hydrocarbon compounds by PEG membranes.

  2. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    Science.gov (United States)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared against collected literature data for their performance in calculating phase equilibria of CO2-impurity binary mixtures. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.

  3. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CRs. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
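
    As a rough illustration of the pipeline described above (nonparametric bootstrap for confidence intervals, K-means to choose the mixture order, a GMM over the characteristic ratios), here is a minimal Python sketch on synthetic data; the ratio values, the crude elbow heuristic, and the derived quantities are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a 1-D array `ratios` of characteristic ratios
# measured from oscillometric envelopes (synthetic here).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ratios = np.concatenate([rng.normal(0.55, 0.03, 80), rng.normal(0.62, 0.02, 40)])

# Choose the mixture order with K-means inertia (crude elbow-style heuristic).
X = ratios.reshape(-1, 1)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in (1, 2, 3)]
order = int(np.argmin(np.diff(inertias)) + 2)   # k after the largest drop

# Bootstrap the GMM-based mean characteristic ratio to get a CI.
boot_means = []
for _ in range(500):
    sample = rng.choice(ratios, size=ratios.size, replace=True).reshape(-1, 1)
    gmm = GaussianMixture(n_components=order, random_state=0).fit(sample)
    boot_means.append(float(gmm.weights_ @ gmm.means_.ravel()))
ci = np.percentile(boot_means, [2.5, 97.5])
print(f"order={order}, mean CR={np.mean(boot_means):.3f}, 95% CI={ci.round(3)}")
```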

  4. Introducing Geoscience Students to Numerical Modeling of Volcanic Hazards: The example of Tephra2 on VHub.org

    Directory of Open Access Journals (Sweden)

    Leah M. Courtland

    2012-07-01

    Full Text Available The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.

  5. Modeling management scenarios and the effects of an introduced apex predator on a coastal riverine fish community

    Science.gov (United States)

    Pine, William E.; Kwak, T.J.; Rice, J.A.

    2007-01-01

    The flathead catfish Pylodictis olivaris, a carnivorous fish species native to most of the central interior basin of North America, has been introduced into at least 13 U.S. states and 1 Canadian province. Concurrent declines in abundance of native fishes have been reported in aquatic systems where flathead catfish have been introduced. To evaluate the potential impact of this invasive species on the native fish community, we developed an ecosystem simulation model (including flathead catfish) based on empirical data collected from a North Carolina coastal river. The model results suggest that flathead catfish suppress native fish community biomass by 5-50% through both predatory and competitive interactions. However, our model suggests these reductions could be mitigated through sustained exploitation of flathead catfish by recreational or commercial fishers at rates equivalent to those for native flathead catfish populations (annual exploitation = 6-25%). These findings demonstrate the potential for using directed harvest of an invasive species to assist in restoring native communities. © Copyright by the American Fisheries Society 2007.

  6. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectories, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences in math IRT scores. When holding socio-economic…

  7. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
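
    A stripped-down sketch of the inlier/outlier mixture idea is given below, omitting the paper's RKHS transformation prior: residuals of putative matches are modeled as a Gaussian (inliers) plus a uniform density (outliers), and EM estimates each match's inlier posterior. The data are synthetic and the model is a simplification, not the authors' algorithm.

```python
# EM for a Gaussian-inlier / uniform-outlier mixture over match residuals.
import numpy as np

rng = np.random.default_rng(1)
residuals = np.concatenate([rng.normal(0, 0.05, 90),    # inlier residuals
                            rng.uniform(-2, 2, 30)])    # outlier residuals

gamma, sigma2, a = 0.5, 1.0, 1.0 / 4.0  # inlier weight, variance, uniform density on [-2, 2]
for _ in range(50):                     # EM iterations
    # E-step: posterior probability that each match is an inlier
    g = gamma * np.exp(-residuals**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p = g / (g + (1 - gamma) * a)
    # M-step: update the inlier weight and the residual variance
    gamma = p.mean()
    sigma2 = (p * residuals**2).sum() / p.sum()

print(f"estimated inlier fraction: {gamma:.2f}, sigma: {np.sqrt(sigma2):.3f}")
```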

  8. Assessment of Differential Rater Functioning in Latent Classes with New Mixture Facets Models.

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2017-01-01

    Multifaceted data are very common in the human sciences. For example, test takers' responses to essay items are marked by raters. If multifaceted data are analyzed with standard facets models, it is assumed there is no interaction between facets. In reality, an interaction between facets can occur, referred to as differential facet functioning. A special case of differential facet functioning is the interaction between ratees and raters, referred to as differential rater functioning (DRF). In existing DRF studies, the group membership of ratees is known, such as gender or ethnicity. However, DRF may occur when the group membership is unknown (latent) and thus has to be estimated from data. To solve this problem, in this study, we developed a new mixture facets model to assess DRF when the group membership is latent and we provided two empirical examples to demonstrate its applications. A series of simulations were also conducted to evaluate the performance of the new model in the DRF assessment in the Bayesian framework. Results supported the use of the mixture facets model because all parameters were recovered fairly well, and the more data there were, the better the parameter recovery.

  9. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    Science.gov (United States)

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, the estimated effect of water depth on resource selection is weaker than that reported by previous studies that did not account for detection probability. N-mixture models show great promise for applications to riverine resource selection.
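
    The core of an N-mixture model can be written down compactly: latent site abundance is Poisson, repeat-visit counts are binomial given that abundance, and the latent abundance is summed out. The sketch below is a minimal illustration on a tiny synthetic dataset, not the authors' Trinity River model; the truncation bound and starting values are assumptions.

```python
# Minimal N-mixture likelihood: N_i ~ Poisson(lambda), y_ij ~ Binomial(N_i, p).
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

counts = np.array([[3, 2, 4], [0, 1, 0], [5, 4, 5]])  # sites x repeat visits

def neg_log_lik(params, y, n_max=50):
    lam = np.exp(params[0])                    # keep lambda positive
    p = 1 / (1 + np.exp(-params[1]))           # keep p in (0, 1)
    ll = 0.0
    for site in y:
        n_vals = np.arange(site.max(), n_max + 1)
        like_n = poisson.pmf(n_vals, lam)      # prior over latent abundance N
        for obs in site:                       # multiply in each visit's count
            like_n = like_n * binom.pmf(obs, n_vals, p)
        ll += np.log(like_n.sum())             # marginalize over N
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(counts,))
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda = {lam_hat:.2f}, detection p = {p_hat:.2f}")
```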

  10. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensities are assumed to follow a Gamma mixture model with parameters tied to the cluster to which each pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between a pixel and a cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results for simulated and real SAR images.
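
    The regional dissimilarity measure described above is simple to state in code: the negative log-likelihood of a Gamma mixture, summed over the pixels of one sub-region. The sketch below uses illustrative mixture parameters and synthetic amplitudes, not the paper's estimated values.

```python
# Regional dissimilarity under a Gamma mixture (one cluster's model).
import numpy as np
from scipy.stats import gamma as gamma_dist

def region_dissimilarity(pixels, weights, shapes, scales):
    """Sum of -log p(pixel) over a sub-region, under a Gamma mixture."""
    dens = sum(w * gamma_dist.pdf(pixels, a=k, scale=s)
               for w, k, s in zip(weights, shapes, scales))
    return float(-np.log(np.clip(dens, 1e-300, None)).sum())

rng = np.random.default_rng(2)
region = rng.gamma(shape=4.0, scale=10.0, size=200)   # amplitudes of one sub-region

# Compare two candidate clusters; assign the region to the closer one.
d_cluster1 = region_dissimilarity(region, [1.0], [4.0], [10.0])
d_cluster2 = region_dissimilarity(region, [1.0], [9.0], [5.0])
print(f"assign region to cluster {'1' if d_cluster1 < d_cluster2 else '2'}")
```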

  11. Multigrid Nonlocal Gaussian Mixture Model for Segmentation of Brain Tissues in Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Yunjie Chen

    2016-01-01

    Full Text Available We propose a novel segmentation method based on regional and nonlocal information to overcome the impact of image intensity inhomogeneities and noise in human brain magnetic resonance images. With the consideration of the spatial distribution of different tissues in brain images, our method does not need preestimation or precorrection procedures for intensity inhomogeneities and noise. A nonlocal information based Gaussian mixture model (NGMM) is proposed to reduce the effect of noise. To reduce the effect of intensity inhomogeneity, the multigrid nonlocal Gaussian mixture model (MNGMM) is proposed to segment brain MR images in each nonoverlapping multigrid generated by using a new multigrid generation method. Therefore, the proposed model can simultaneously overcome the impact of noise and intensity inhomogeneity and automatically classify 2D and 3D MR data into tissues of white matter, gray matter, and cerebral spinal fluid. To maintain the statistical reliability and spatial continuity of the segmentation, a fusion strategy is adopted to integrate the clustering results from different grids. The experiments on synthetic and clinical brain MR images demonstrate the superior performance of the proposed model compared with several state-of-the-art algorithms.

  12. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the

  13. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang

    2014-02-01

    Full Text Available X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high-resolution continuous cross-section images and the meso-structure. According to the theory of three-dimensional (3D) reconstruction, the 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of the voxel positions and the relationship between the pixel element and node. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distributions of the asphalt mixture and verify the rationality of the 3D model.

  14. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available Due to the large number of buses, urban arterials in China carry a mixed traffic flow. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow view. To better match field observations, a truncated Gaussian mixture distribution was adopted as the speed density distribution for mixed platoons. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
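
    A minimal sketch of the speed model described above: platoon speeds drawn from a two-component truncated Gaussian mixture (say, cars versus buses) and propagated over a link to give an arrival-time spread. All numbers below are illustrative, not the paper's calibrated parameters.

```python
# Two-component truncated Gaussian mixture of speeds, propagated over a link.
import numpy as np
from scipy.stats import truncnorm

def trunc_component(mean, sd, lo, hi):
    a, b = (lo - mean) / sd, (hi - mean) / sd   # standardized truncation bounds
    return truncnorm(a, b, loc=mean, scale=sd)

cars = trunc_component(12.0, 2.0, 5.0, 20.0)    # m/s, illustrative
buses = trunc_component(8.0, 1.5, 5.0, 20.0)
w_car = 0.7                                     # mixture weight for cars

rng = np.random.default_rng(3)
n = 10_000
is_car = rng.random(n) < w_car
speeds = np.where(is_car, cars.rvs(n, random_state=rng), buses.rvs(n, random_state=rng))

link_m = 500.0                                  # distance between intersections
arrivals = link_m / speeds                      # free-flow travel times (s)
print(f"mean arrival {arrivals.mean():.1f}s, spread (std) {arrivals.std():.1f}s")
```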

  15. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    Science.gov (United States)

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

    Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits better the experimental data. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Generation of a mixture model ground-motion prediction equation for Northern Chile

    Science.gov (United States)

    Haendel, A.; Kuehn, N. M.; Scherbaum, F.

    2012-12-01

    In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source, path and site related predictor variables. Because GMPEs are derived from limited datasets they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree-of-belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) was subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g. in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. The combination of different models into a mixture model (which is learned from

  17. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    OpenAIRE

    Jae Young Lee; Lynn M. Hildemann

    2013-01-01

    Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng’s modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/g...

  18. Note: Gaussian mixture model for event recognition in optical time-domain reflectometry based sensing systems.

    Science.gov (United States)

    Fedorov, A K; Anufriev, M N; Zhirnov, A A; Stepanov, K V; Nesterov, E T; Namiot, D E; Karasik, V E; Pnev, A B

    2016-03-01

    We propose a novel approach to the recognition of particular classes of non-conventional events in signals from phase-sensitive optical time-domain-reflectometry-based sensors. Our algorithmic solution has two main features: filtering aimed at the de-noising of signals and a Gaussian mixture model to cluster them. We test the proposed algorithm using experimentally measured signals. The results show that two classes of events can be distinguished with the best-case recognition probability close to 0.9, given a sufficient number of training samples.
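
    A minimal sketch in the spirit of the note: de-noise windowed signal features with a moving average, then cluster them with a two-component Gaussian mixture to separate event classes. The feature definitions and signals below are synthetic assumptions, not the authors' measured data.

```python
# Moving-average de-noising followed by GMM clustering of window features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Synthetic per-window features: (energy, peak amplitude) for two event types
class_a = rng.normal([1.0, 0.5], 0.1, size=(200, 2))
class_b = rng.normal([2.0, 1.5], 0.15, size=(200, 2))
features = np.vstack([class_a, class_b])

# Simple moving-average filtering along the window axis (de-noising step)
kernel = np.ones(5) / 5
smoothed = np.column_stack([np.convolve(f, kernel, mode="same")
                            for f in features.T])

gmm = GaussianMixture(n_components=2, random_state=0).fit(smoothed)
labels = gmm.predict(smoothed)
print(f"cluster sizes: {np.bincount(labels)}")
```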

  19. Linear Mixture Models and Partial Unmixing in Multi- and Hyperspectral Image Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    1998-01-01

    As a supplement or an alternative to classification of hyperspectral image data the linear mixture model is considered in order to obtain estimates of abundance of each class or end-member in pixels with mixed membership. Full unmixing and the partial unmixing methods orthogonal subspace projection...... partial unmixing when we know the desired end-member spectra only and not the full set of end-member spectra. This is an advantage over full unmixing and OSP. An example with a simple simulated 2-band image shows the ability of the CEM method to isolate the desired signal. A case study with a 30 bands...
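
    The constrained energy minimisation (CEM) filter mentioned above has a compact closed form: given only the desired end-member spectrum d and the data correlation matrix R, the filter w = R⁻¹d / (dᵀR⁻¹d) responds with unity to d and minimizes output energy otherwise. The two-band toy data below mirror the simple simulated example; the spectra and noise levels are illustrative.

```python
# CEM partial unmixing on a synthetic 2-band image.
import numpy as np

rng = np.random.default_rng(5)
d = np.array([0.8, 0.3])                        # desired end-member spectrum
background = rng.normal(0.0, 0.2, size=(1000, 2)) + np.array([0.2, 0.7])
abund = rng.uniform(0, 1, size=(1000, 1))
pixels = abund * d + (1 - abund) * background   # linearly mixed pixels

R = (pixels.T @ pixels) / len(pixels)           # sample correlation matrix
Rinv = np.linalg.inv(R)
w = Rinv @ d / (d @ Rinv @ d)                   # CEM filter

estimate = pixels @ w                           # per-pixel abundance estimate
print(f"corr(abundance, estimate) = {np.corrcoef(abund.ravel(), estimate)[0, 1]:.2f}")
```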

  20. MATHEMATICAL MODEL OF FORECASTING FOR OUTCOMES IN VICTIMS OF METHANE-COAL MIXTURE EXPLOSION

    OpenAIRE

    E. Y. Fistal; V. G. Guryanov; V. V. Soloshenko

    2016-01-01

    BACKGROUND. The severity of the victims' state in the early period after the combined trauma (with the prevalence of a thermal injury) is associated with the development of numerous changes in all organs and systems, which make proper diagnosis of complications and estimation of lethal outcome probability extremely difficult. MATERIAL AND METHODS. The article presents a mathematical model for predicting lethal outcomes in victims of methane-coal mixture explosion,...

  1. Mixtures of beta distributions in models of the duration of a project affected by risk

    Science.gov (United States)

    Gładysz, Barbara; Kuchta, Dorota

    2017-07-01

    This article presents a method for timetabling a project affected by risk. The times required to carry out tasks are modelled using mixtures of beta distributions. The parameters of these beta distributions are given by experts: one corresponding to the duration of a task in stable conditions, with no risks materializing, and the other corresponding to the duration of a task in the case when risks do occur. Finally, a case study will be presented and analysed: the project of constructing a shopping mall in Poland.
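
    A minimal Monte Carlo sketch of this duration model is given below, with illustrative expert inputs: each task's duration is a two-component mixture of scaled beta distributions (stable conditions versus risk materializing), and the completion time of a serial chain of tasks is estimated by sampling. The parameters and risk probabilities are assumptions, not the case-study values.

```python
# Project duration as a sum of beta-mixture task durations (Monte Carlo).
import numpy as np

rng = np.random.default_rng(6)

def task_duration(n, p_risk, stable, risky):
    """stable/risky = (alpha, beta, low, high) of a scaled beta distribution."""
    def scaled_beta(a, b, lo, hi, size):
        return lo + (hi - lo) * rng.beta(a, b, size)
    risk = rng.random(n) < p_risk               # which samples hit the risk case
    return np.where(risk,
                    scaled_beta(*risky, n),
                    scaled_beta(*stable, n))

n = 100_000
total = (task_duration(n, 0.2, (2, 5, 10, 14), (4, 2, 14, 25)) +
         task_duration(n, 0.1, (3, 3, 5, 8), (2, 2, 8, 15)))
print(f"P(project > 30 days) = {(total > 30).mean():.3f}")
```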

  2. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    Science.gov (United States)

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
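
    A toy sketch of the industry-cost-curve mechanism described above: producers are sorted by unit cost, demand is filled from the cheapest up, and introducing a new low-cost technology pushes the highest-cost producer(s) out; the product's impact profile is then the production-weighted impact of whoever remains. The producers, costs, and impacts below are invented for illustration.

```python
# Cost-ordered dispatch on an industry cost curve, before and after entry.
from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    cost: float       # unit production cost
    impact: float     # unit environmental impact (e.g., kg CO2-eq)
    capacity: float   # annual capacity

def market_impact(producers, demand):
    """Average impact per unit of product after cost-ordered dispatch."""
    served, total_impact = 0.0, 0.0
    for p in sorted(producers, key=lambda p: p.cost):
        q = min(p.capacity, demand - served)
        served, total_impact = served + q, total_impact + q * p.impact
        if served >= demand:
            break
    return total_impact / served

fleet = [Producer("A", 10, 1.2, 40), Producer("B", 12, 1.5, 30),
         Producer("C", 15, 2.0, 30)]
before = market_impact(fleet, demand=90)
after = market_impact(fleet + [Producer("new", 9, 0.8, 25)], demand=90)
print(f"impact before {before:.2f} -> after {after:.2f} per unit")
```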

  3. Permeability of EVOH Barrier Material Used in Automotive Applications: Metrology Development for Model Fuel Mixtures

    Directory of Open Access Journals (Sweden)

    Zhao Jing

    2015-02-01

    Full Text Available EVOH (Ethylene-Vinyl Alcohol) materials are widely used in automotive applications in multi-layer fuel lines and tanks owing to their excellent barrier properties to aromatic and aliphatic hydrocarbons. These barrier materials are essential to limit environmental fuel emissions and comply with the challenging requirements of fast-changing international regulations. Nevertheless, the measurement of EVOH permeability to model fuel mixtures or to their individual components is particularly difficult due to the complexity of these systems and their very low permeability, which can vary by several orders of magnitude depending on the permeating species and their relative concentrations. This paper describes the development of a new automated permeameter capable of taking up the challenge of measuring minute quantities as low as 1 mg/(m2·day) for partial fluxes for model fuel mixtures containing ethanol, i-octane and toluene at 50°C. The permeability results are discussed as a function of the model fuel composition and the importance of EVOH preconditioning is emphasized for accurate permeability measurements. The last part focuses on the influence of EVOH conditioning on its mechanical properties and its microstructure, and further illustrates the specific behavior of EVOH in the presence of ethanol-oxygenated fuels. The new metrology developed in this work offers a new insight into the permeability properties of a leading barrier material and will help prevent the consequences of (bio)ethanol addition in fuels on environmental emissions through fuel lines and tanks.

  4. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling, to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. (c) 2015 APA, all rights reserved.

  5. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo

    2016-02-03

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes' rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) a standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.

  6. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  7. New Alcohol and Onyx Mixture for Embolization: Feasibility and Proof of Concept in Both In Vitro and In Vivo Models

    Energy Technology Data Exchange (ETDEWEB)

    Saeed Kilani, Mohammad, E-mail: msaeedkilani@gmail.com, E-mail: mohammadalikilani@yahoo.com [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Zehtabi, Fatemeh, E-mail: fatemeh.zehtabi@gmail.com; Lerouge, Sophie, E-mail: Sophie.Lerouge@etsmtl.ca [école de technologie supérieure (ETS) & CHUM Research center (CRCHUM), Department of Mechanical Engineering (Canada); Soulez, Gilles, E-mail: gilles.soulez.chum@ssss.gouv.qc.ca [Centre Hospitalier de l’Université de Montréal, Department of Radiology (Canada); Bartoli, Jean Michel, E-mail: jean-michel.bartoli@ap-hm.fr [University Hospital Timone, Department of Medical Imaging (France); Vidal, Vincent, E-mail: Vincent.VIDAL@ap-hm.fr [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Badran, Mohammad F., E-mail: mfbadran@hotmail.com [King Faisal Specialist Hospital and Research Center, Radiology Department (Saudi Arabia)

    2017-05-15

    Introduction: Onyx and ethanol are well-known embolic and sclerotic agents that are frequently used in embolization. These agents present advantages and disadvantages regarding visibility, injection control and penetration depth. Mixing both products might yield a new product with different characteristics. The aim of this study is to evaluate the injectability, radiopacity, and mechanical and occlusive properties of different mixtures of Onyx 18 and ethanol in vitro and in vivo (in a swine model). Materials and Methods: Various Onyx 18 and ethanol formulations were prepared and tested in vitro for their injectability, solidification rate and shrinkage, cohesion and occlusive properties. In vivo tests were performed using 3 swine. Ease of injection, radiopacity, cohesiveness and penetration were analyzed using fluoroscopy and high-resolution CT. Results: All mixtures were easy to inject through a microcatheter with no resistance or blockage in vitro and in vivo. The 50%-ethanol mixture showed delayed copolymerization with fragmentation and proximal occlusion. The 75%-ethanol mixture showed poor radiopacity in vivo and was not tested in vitro. The 25%-ethanol mixture showed good occlusive properties and acceptable penetration and radiopacity. Conclusion: Mixing Onyx and ethanol is feasible. The mixture of 25% ethanol and 75% Onyx 18 could be a new sclero-embolic agent. Further research is needed to study the chemical changes of the mixture, to confirm the significance of the added sclerotic effect and to find out the ideal mixture percentages.

  8. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  9. Nonideal equilibrium dissolution of trichloroethene from a decane-based nonaqueous phase liquid mixture: Experimental and modeling investigation

    Science.gov (United States)

    McCray, John E.; Dugan, Pamela J.

    2002-07-01

    Batch equilibrium solubility studies were conducted to examine the solubilization behavior of a chlorinated solvent, trichloroethene (TCE), from a fuel-based nonaqueous phase liquid (NAPL) mixture. An alkane (n-decane) was used as a model compound because it is often a primary compound in jet fuel. The NAPL phase mole fractions of the chlorinated solvent in the mixture (XTCEN) that were investigated are typical of in situ values found at industrial and military waste sites (XTCEN >= 0.0001). The UNIFAC method greatly underpredicts the γTCEN in this surrogate fuel. A NAPL-mixture equilibrium-dissolution model was developed that incorporates the observed nonideal dissolution. This model indicates that nonideal NAPL dissolution is 4 times faster than ideal dissolution for a hypothetical NAPL mixture with an initial XTCEN = 0.001. The magnitude of this effect becomes more important as the initial value of the XTCEN is decreased.

  10. Bayesian Non-Parametric Mixtures of GARCH(1,1) Models

    Directory of Open Access Journals (Sweden)

    John W. Lau

    2012-01-01

    Full Text Available Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. The process is a generalisation of the Dirichlet process typically used in nonparametric models; it provides a richer clustering structure for time-dependent data, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.
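
    Before fitting such a mixture, it helps to see the phenomenon it targets. The sketch below simulates returns from two GARCH(1,1) regimes with an abrupt switch halfway through, which a single-regime fit would smooth over. The recursion is the standard GARCH(1,1) variance update, with illustrative parameters; it is not the article's Poisson-Kingman machinery.

```python
# Returns from two GARCH(1,1) regimes with an abrupt regime change.
import numpy as np

def simulate_garch(n, omega, alpha, beta, rng):
    eps = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)         # start at the stationary variance
    for t in range(n):
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2
    return eps

rng = np.random.default_rng(7)
calm = simulate_garch(500, omega=0.05, alpha=0.05, beta=0.90, rng=rng)
turbulent = simulate_garch(500, omega=0.50, alpha=0.15, beta=0.80, rng=rng)
returns = np.concatenate([calm, turbulent])
print(f"std calm half: {returns[:500].std():.2f}, "
      f"turbulent half: {returns[500:].std():.2f}")
```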

  11. Link between hopping models and percolation scaling laws for charge transport in mixtures of small molecules

    Directory of Open Access Journals (Sweden)

    Dong-Gwang Ha

    2016-04-01

    Full Text Available Mixed host compositions that combine charge transport materials with luminescent dyes offer superior control over exciton formation and charge transport in organic light emitting devices (OLEDs). Two approaches are typically used to optimize the fraction of charge transport materials in a mixed host composition: either an empirical percolative model, or a hopping transport model. We show that these two commonly-employed models are linked by an analytic expression which relates the localization length to the percolation threshold and critical exponent. The relation is confirmed both numerically and experimentally through measurements of the relative conductivity of tris(4-carbazoyl-9-ylphenyl)amine (TCTA) : 1,3-bis(3,5-dipyrid-3-yl-phenyl)benzene (BmPyPb) mixtures with different concentrations, where TCTA acts as the hole conductor and BmPyPb as the hole insulator. The analytic relation may allow the rational design of mixed layers of small molecules for high-performance OLEDs.

  12. Hemodynamic response based mixture model to estimate micro- and macro-vasculature contributions in functional MRI

    CERN Document Server

    Singh, Manbir; Sungkarat, Witaya; Zhou, Yongxia

    2003-01-01

    A multi-component model reflecting the temporal characteristics of micro- and macro-vasculature hemodynamic responses was used to fit the time-course of voxels in functional MRI (fMRI). The number of relevant components, the latency of the first component, the time-separation among the components, their relative amplitudes and possible interpretation in terms of partial-volume contributions of micro- and macro-components to the time-course data were investigated. Analysis of a reversing checkerboard experiment revealed that there was no improvement in the fitting beyond two components. Using a two-component model, the fractional abundances of the micro- and macro-vasculature were estimated in individual voxels. These results suggest the potential of a mixture-model approach to mitigate partial-volume effects and separate contributions of vascular components within a voxel in fMRI.

  13. Mapping behavioral landscapes for animal movement: a finite mixture modeling approach

    Science.gov (United States)

    Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.

    2013-01-01

    Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. This approach can provide guidance for conservation planning based on analysis of animal movement data using

  14. The acute toxicity of major ion salts to Ceriodaphnia dubia. III. Mathematical models for mixture toxicity.

    Science.gov (United States)

    Erickson, Russell J; Mount, David R; Highland, Terry L; Hockett, J Russell; Hoff, Dale J; Jenson, Correne T; Norberg-King, Teresa J; Peterson, Kira N

    2018-01-01

    Based on previous research on the acute toxicity of major ions (Na+, K+, Ca2+, Mg2+, Cl-, SO42-, and HCO3-/CO32-) to Ceriodaphnia dubia, a mathematical model was developed for predicting the median lethal concentration (LC50) for any ion mixture, excepting those dominated by K-specific toxicity. One component of the model describes a mechanism of general ion toxicity to which all ions contribute and predicts LC50s as a function of osmolarity and Ca activity. The other component describes Mg/Ca-specific toxicity to apply when such toxicity exceeds the general ion toxicity and predicts LC50s as a function of Mg and Ca activities. This model not only tracks well the observed LC50s from past research used for model development but also successfully predicts LC50s from new toxicity tests on synthetic mixtures of ions emulating chemistries of various ion-enriched effluents and receiving waters. It also performs better than a previously published model for major ion toxicity. Because of the complexities of estimating chemical activities and osmolarity, a simplified model based directly on ion concentrations was also developed and found to provide useful predictions. Environ Toxicol Chem 2018;37:247-259. Published 2017 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.

  15. Interaction patterns and toxicities of binary and ternary pesticide mixtures to Daphnia magna estimated by an accelerated failure time model.

    Science.gov (United States)

    Qiu, Xuchun; Tanoue, Wataru; Kawaguchi, Atsushi; Yanagawa, Takashi; Seki, Masanori; Shimasaki, Yohei; Honjo, Tsuneo; Oshima, Yuji

    2017-12-31

    Organisms in natural environments are often exposed to a broad variety of chemicals, and exposure to multi-chemical mixtures may produce significant toxic effects, even though the individual chemicals are present at concentrations below their no-observed-effect concentrations. This study represents the first attempt to use the accelerated failure time (AFT) model to quantify the interaction and toxicity of multi-chemical mixtures in environmental toxicology. We first conducted acute immobilization tests with Daphnia magna exposed to mixtures of diazinon (DZN), fenitrothion (MEP), and thiobencarb (TB) in single, binary, and ternary formulations, and then fitted the results to the AFT model. The 48-h EC50 (concentration required to immobilize 50% of the daphnids at 48 h) values for each pesticide obtained from the AFT model are within a factor of 2 of the corresponding values calculated from the single-pesticide exposure tests, indicating the methodology is able to provide credible toxicity values. The AFT model revealed either significant synergistic (DZN and MEP; DZN and TB) or antagonistic (MEP and TB) interactions in binary mixtures, while the interaction pattern of the ternary mixture depended on both the concentration levels and concentration ratios of the pesticides. Within a factor of 2, the AFT model accurately estimated the toxicities for 78% of binary mixture formulations that exhibited significant synergistic effects, and the toxicities for all the ternary formulations. Our results showed that the AFT model can provide a simple and efficient way to quantify the interactions between pesticides and to assess the toxicity of their mixtures. This ability may greatly facilitate the ecotoxicological risk assessment of exposure to multi-chemical mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
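
    A hedged sketch of an AFT analysis for a binary mixture is shown below, using the third-party lifelines package's WeibullAFTFitter: time-to-immobilization is regressed on two pesticide concentrations plus their interaction term, whose sign hints at synergy or antagonism. This is a generic Weibull AFT on synthetic data, not necessarily the parametrization the authors used.

```python
# Weibull AFT with a concentration interaction term (synthetic data).
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(8)
n = 300
c1, c2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
# Higher doses accelerate failure; a negative interaction mimics synergy.
scale = np.exp(4.0 - 1.2 * c1 - 0.9 * c2 - 0.8 * c1 * c2)
time = scale * rng.weibull(2.0, n)
observed = time < 48.0                              # right-censor at 48 h
df = pd.DataFrame({"T": np.minimum(time, 48.0), "E": observed.astype(int),
                   "c1": c1, "c2": c2, "c1c2": c1 * c2})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="T", event_col="E")
print(aft.summary[["coef", "p"]])                   # sign of c1c2 suggests interaction
```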

  16. Generalized corresponding states model for bulk and interfacial properties in pure fluids and fluid mixtures

    Science.gov (United States)

    Kiselev, S. B.; Ely, J. F.

    2003-10-01

    We have formulated a general approach for transforming an analytical equation of state (EOS) into the crossover form and developed a generalized cubic (GC) EOS for pure fluids, which incorporates nonanalytic scaling laws in the critical region and in the limit ρ→0 reduces to the ideal-gas EOS. Using the GC EOS as a reference equation, we have developed a generalized version of the corresponding states (GCS) model, which contains the critical point parameters and acentric factor as input as well as the Ginzburg number Gi. For nonionic fluids we propose a simple correlation between the Ginzburg number Gi and Zc, ω, and molecular weight Mw. In the second step, we develop on the basis of the GCS model and the density functional theory a GCS-density functional theory (DFT) crossover model for the vapor-liquid interface and surface tension. We use the GCS-DFT model for the prediction of the PVT, vapor-liquid equilibrium (VLE) and surface properties of more than 30 pure fluids. In a wide range of thermodynamic states, including the nearest vicinity of the critical point, the GCS reproduces the PVT and VLE surface and the surface tension of one-component fluids (polar and nonpolar) with high accuracy. In the critical region, the GCS-DFT predictions for the surface tension are in excellent agreement with experimental data and the theoretical renormalization-group model developed earlier. Using the principle of critical-point universality we extended the GCS-DFT model to fluid mixtures and developed a field-variable based GCS-FV model. We provide extensive comparisons of the GCS-FV model with experimental data and with the GCS-XV model formulated in terms of the conventional density variable (composition). Far from the critical point both models, GCS-FV and GCS-XV, give practically similar results, but in the critical region, the GCS-FV model yields a better representation of the VLE surface of binary mixtures than the GCS-XV model. We also show that

  17. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selecting genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than parametric prior distributions, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational statistics and data analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Firing rate estimation using infinite mixture models and its application to neural decoding.

    Science.gov (United States)

    Shibue, Ryohei; Komaki, Fumiyasu

    2017-11-01

    Neural decoding is a framework for reconstructing external stimuli from spike trains recorded by various neural recordings. Kloosterman et al. proposed a new decoding method using marked point processes (Kloosterman F, Layton SP, Chen Z, Wilson MA. J Neurophysiol 111: 217-227, 2014). This method does not require spike sorting and thereby improves decoding accuracy dramatically. In this method, they used kernel density estimation to estimate intensity functions of marked point processes. However, the use of kernel density estimation causes problems such as low decoding accuracy and high computational costs. To overcome these problems, we propose a new decoding method using infinite mixture models to estimate intensity. The proposed method improves decoding performance in terms of accuracy and computational speed. We apply the proposed method to simulation and experimental data to verify its performance. NEW & NOTEWORTHY We propose a new neural decoding method using infinite mixture models and nonparametric Bayesian statistics. The proposed method improves decoding performance in terms of accuracy and computation speed. We have successfully applied the proposed method to position decoding from spike trains recorded in a rat hippocampus. Copyright © 2017 the American Physiological Society.
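
    The flavor of intensity estimation with an infinite mixture can be conveyed with a (truncated) Dirichlet process mixture, here approximated by sklearn's BayesianGaussianMixture rather than the authors' estimator: the firing intensity is the total spike count times the fitted mixture density. Spike times and the truncation level below are illustrative assumptions.

```python
# DPM-style intensity estimate from spike times via a truncated DP mixture.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(10)
spikes = np.concatenate([rng.normal(2.0, 0.3, 150), rng.normal(6.0, 0.8, 100)])

dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(spikes.reshape(-1, 1))

t = np.linspace(0, 8, 5)
density = np.exp(dpm.score_samples(t.reshape(-1, 1)))  # mixture pdf at t
intensity = spikes.size * density                      # spikes per unit time
print(np.round(intensity, 1))
```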

  19. Automatic segmentation of corpus callosum using Gaussian mixture modeling and Fuzzy C means methods.

    Science.gov (United States)

    İçer, Semra

    2013-10-01

    This paper presents a comparative study of the success and performance of the Gaussian mixture modeling and Fuzzy C means methods to determine the volume and cross-sectional areas of the corpus callosum (CC) using simulated and real MR brain images. The Gaussian mixture model (GMM) utilizes a weighted sum of Gaussian distributions, applying statistical decision procedures to define image classes. In the Fuzzy C means (FCM), the image classes are represented by certain membership functions according to fuzziness information expressing the distance from the cluster centers. In this study, automatic segmentation of the midsagittal section of the CC was achieved from simulated and real brain images. The volume of the CC was obtained using the sagittal section areas. To compare the success of the methods, segmentation accuracy, Jaccard similarity and the time consumed for segmentation were calculated. The results show that the GMM method produced slightly more accurate segmentation (midsagittal section segmentation accuracy of 98.3% versus 97.01% for GMM and FCM, respectively); however, the FCM method was faster than GMM. With this study, an accurate and automatic segmentation system was developed that gives doctors an opportunity for quantitative comparison in treatment planning and in diagnosing diseases that affect the size of the CC. This study can be adapted to perform segmentation on other regions of the brain and can thus be put to practical use in the clinic. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
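
    A minimal sketch comparing the two approaches on 1-D intensities is given below: sklearn's GaussianMixture versus a small hand-rolled fuzzy C-means (fuzzifier m = 2). Real use would operate on voxel intensities of the MR volume; the intensities, initial centers, and agreement measure here are assumptions for illustration.

```python
# GMM vs. a basic fuzzy C-means on synthetic 1-D intensities.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
intensities = np.concatenate([rng.normal(60, 8, 400), rng.normal(130, 10, 400)])

# GMM labels
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
    intensities.reshape(-1, 1))

# Basic fuzzy C-means (fuzzifier m = 2)
centers = np.array([50.0, 150.0])
for _ in range(100):
    d = np.abs(intensities[:, None] - centers[None, :]) + 1e-12
    u = (1.0 / d**2) / (1.0 / d**2).sum(axis=1, keepdims=True)   # memberships
    centers = (u**2 * intensities[:, None]).sum(0) / (u**2).sum(0)
fcm_labels = u.argmax(axis=1)

# Agreement up to label permutation
agree = max((gmm_labels == fcm_labels).mean(), (gmm_labels != fcm_labels).mean())
print(f"centers {centers.round(1)}, label agreement {agree:.2%}")
```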

  20. Modeling and analysis of time-dependent processes in a chemically reactive mixture

    Science.gov (United States)

    Ramos, M. P.; Ribeiro, C.; Soares, A. J.

    2018-01-01

    In this paper, we study the propagation of sound waves and the dynamics of local wave disturbances induced by spontaneous internal fluctuations in a reactive mixture. We consider a non-diffusive, non-heat conducting and non-viscous mixture described by an Eulerian set of evolution equations. The model is derived from the kinetic theory in a hydrodynamic regime of a fast chemical reaction. The reactive source terms are explicitly computed from the kinetic theory and are built in the model in a proper way. For both time-dependent problems, we first derive the appropriate dispersion relation, which retains the main effects of the chemical process, and then investigate the influence of the chemical reaction on the properties of interest in the problems studied here. We complete our study by developing a rather detailed analysis using the Hydrogen-Chlorine system as reference. Several numerical computations are included illustrating the behavior of the phase velocity and attenuation coefficient in a low-frequency regime and describing the spectrum of the eigenmodes in the small wavenumber limit.

  1. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    Science.gov (United States)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-species, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual species property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form, due to the similarity of the calculated properties, the low-temperature behavior and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite species coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-species solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
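
    A sketch of the composite-coefficient idea, assuming the NASA-style seven-coefficient polynomial for cp and Sutherland's law; the coefficients and species below are placeholders, not the program's database values.

```python
# Hedged sketch: species cp from a NASA-7-style polynomial, then a frozen-
# mixture cp as a mass-fraction-weighted sum. All coefficients are placeholders.
import numpy as np

R_UNIV = 8.31446  # J/(mol K)

def cp_species(T, a, molar_mass):
    """cp [J/(kg K)] from coefficients a[0..4] (a[5], a[6] would fix h and s)."""
    cp_over_R = a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4
    return cp_over_R * R_UNIV / molar_mass

def cp_mixture(T, mass_fractions, coeffs, molar_masses):
    """Frozen mixture: composition is fixed, so the weights do not vary with T."""
    return sum(Y * cp_species(T, a, M)
               for Y, a, M in zip(mass_fractions, coeffs, molar_masses))

def mu_sutherland(T, mu_ref, T_ref, S):
    """Sutherland-law laminar viscosity."""
    return mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S)

# illustrative two-species mixture (placeholder coefficients and molar masses)
coeffs = [np.array([3.5, 0.0, 0.0, 0.0, 0.0]),   # diatomic-like species
          np.array([2.5, 0.0, 0.0, 0.0, 0.0])]   # monatomic-like species
print(cp_mixture(1000.0, [0.7, 0.3], coeffs, [0.028, 0.040]))
```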

  2. Introducing an Evolving Local Neuro-Fuzzy Model--Application to modeling of car-following behavior.

    Science.gov (United States)

    Kazemi, Reza; Abdollahzade, Majid

    2015-11-01

    This paper proposes an Evolving Local Linear Neuro-Fuzzy Model for the modeling and identification of nonlinear time-variant systems that change their nature and character over time. The proposed approach evolves through time to follow the structural changes in time-variant dynamic systems. The evolution process is managed by a distance-based extended hierarchical binary tree algorithm, which decides whether the existing model should simply adapt to the system variations or whether structural evolution is necessary. As an interesting but challenging example of a system with changing dynamics, the proposed evolving model is applied to model the car-following process in a traffic flow, as an online identification problem. Simulation results demonstrate the effectiveness of the proposed approach in modeling time-variant systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Solubility Determination and Modeling and Dissolution Thermodynamic Properties of Raspberry Ketone in Binary Solvent Mixtures of Ethanol and Water

    Science.gov (United States)

    Shu, Min; Zhu, Liang; Wang, Yan-fei; Yang, Jing; Wang, Liyu; Yang, Libin; Zhao, Xiaoyu; Du, Wei

    2018-01-01

    The solubility and dissolution thermodynamic properties of raspberry ketone in a set of binary solvent mixtures (ethanol + water) with different compositions were determined experimentally by a static gravimetric method over the temperature range 283.15-313.15 K at 0.10 MPa. The solubility of raspberry ketone in this series of ethanol/water binary solvent mixtures was found to increase with rising temperature and with the rising mole fraction of ethanol in the mixtures. The van't Hoff, modified Apelblat and 3D Jouyban-Acree-van't Hoff equations were applied to correlate the solubility in the ethanol/water binary solvent mixtures. The former two models fitted the solubility data better, while the 3D model can be used to estimate the solubility at any ethanol/water ratio and any temperature. Furthermore, the changes of the dissolution thermodynamic properties of raspberry ketone in the studied ethanol/water solvent mixtures were obtained from the van't Hoff equation. In all cases, the dissolution of raspberry ketone in the studied ethanol/water binary solvent mixtures was estimated to be endothermic and enthalpy-driven.
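
    For reference, the standard forms of the three correlating equations are sketched below; the exact parameterization used in the paper may differ slightly.

```latex
% x: mole-fraction solubility; T: absolute temperature; A, B, C, A_i, B_i, J_i:
% fitted parameters; w_1, w_2: solvent fractions in the binary mixture.
\ln x = A + \frac{B}{T}                                % van't Hoff
\ln x = A + \frac{B}{T} + C \ln T                      % modified Apelblat
\ln x_{w,T} = w_1\!\left(A_1 + \frac{B_1}{T}\right)
            + w_2\!\left(A_2 + \frac{B_2}{T}\right)
            + \frac{w_1 w_2}{T} \sum_{i=0}^{2} J_i \,(w_1 - w_2)^i
                                                       % Jouyban--Acree--van't Hoff
```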

  4. Nonlinear simulations of solid tumor growth using a mixture model: invasion and branching

    Science.gov (United States)

    Cristini, Vittorio; Li, Xiangrong; Lowengrub, John S.; Wise, Steven M.

    2011-01-01

    We develop a thermodynamically consistent mixture model for avascular solid tumor growth which takes into account the effects of cell-to-cell adhesion and taxis-inducing chemical and molecular species. The mixture model is well-posed and the governing equations are of Cahn–Hilliard type. When there are only two phases, our asymptotic analysis shows that earlier single-phase models may be recovered as limiting cases of a two-phase model. To solve the governing equations, we develop a numerical algorithm based on an adaptive Cartesian block-structured mesh refinement scheme. A centered-difference approximation is used for the space discretization so that the scheme is second order accurate in space. An implicit discretization in time is used which results in nonlinear equations at implicit time levels. We further employ a gradient stable discretization scheme so that the nonlinear equations are solvable for very large time steps. To solve those equations we use a nonlinear multilevel/multigrid method which is of an optimal order O(N) where N is the number of grid points. Spherically symmetric and fully two-dimensional nonlinear numerical simulations are performed. We investigate tumor evolution in nutrient-rich and nutrient-poor tissues. A number of important results have been uncovered. For example, we demonstrate that the tumor may suffer from taxis-driven fingering instabilities which are most dramatic when cell proliferation is low, as predicted by linear stability theory. This is also observed in experiments. This work shows that taxis may play a role in tumor invasion and that when nutrient plays the role of a chemoattractant, the diffusional instability is exacerbated by nutrient gradients. Accordingly, we believe this model is capable of describing complex invasive patterns observed in experiments. PMID:18787827
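
    Schematically, a Cahn–Hilliard-type phase equation of the kind referred to above takes the following prototypical form; this is a generic sketch, not the paper's full multi-phase system.

```latex
% phi: tumor (cell) volume fraction; mu: chemical potential; M: mobility;
% f: double-well bulk free energy; epsilon: interface-thickness parameter;
% S: net source term from proliferation/death, depending on the nutrient n.
\frac{\partial \phi}{\partial t} = \nabla \cdot \left( M \nabla \mu \right) + S(\phi, n),
\qquad
\mu = f'(\phi) - \epsilon^{2} \nabla^{2} \phi
```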

  5. A Proposal of VnR-Based Dynamic Modelling Activities to Introduce Students to Model-Centred Learning

    Science.gov (United States)

    Corni, Federico; Giliberti, Enrico

    2009-01-01

    We propose a laboratory learning pathway, suitable for secondary school up to introductory undergraduate level, employing the VnR dynamic modelling software. It is composed of three increasingly complex activities dealing with experimental work, model design and discussion. (Contains 4 footnotes, 1 table and 5 figures.)

  6. On a model of mixtures with internal variables: Extended Liu procedure for the exploitation of the entropy principle

    Directory of Open Access Journals (Sweden)

    Francesco Oliveri

    2016-01-01

    Full Text Available The exploitation of the second law of thermodynamics for a mixture of two fluids with a scalar internal variable and a first-order nonlocal state space is achieved by using the extended Liu approach. This method requires inserting into the entropy inequality as constraints either the field equations or their gradient extensions. Consequently, the thermodynamic restrictions imposed by the entropy principle are derived without introducing extra terms in either the energy balance equation or the entropy inequality.
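
    Schematically, the Liu procedure relaxes the entropy inequality with Lagrange multipliers times the field equations (and, in the extended version, their gradient extensions); a generic sketch:

```latex
% rho s: entropy density; Phi: entropy flux; F_alpha = 0: the field equations
% (or their gradient extensions); Lambda_alpha: Liu's Lagrange multipliers.
\rho \dot{s} + \nabla \cdot \boldsymbol{\Phi}
  \;-\; \sum_{\alpha} \Lambda_{\alpha} \, F_{\alpha} \;\geq\; 0
```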

  7. Does Dynamical Downscaling Introduce Novel Information in Climate Model Simulations of Precipitation Change over a Complex Topography Region?

    Science.gov (United States)

    Tselioudis, George; Douvis, Costas; Zerefos, Christos

    2012-01-01

    Current climate and future climate-warming runs with the RegCM Regional Climate Model (RCM) at 50 and 11 km resolutions, forced by the ECHAM GCM, are used to examine whether the increased resolution of the RCM introduces novel information in the precipitation field when the models are run for the mountainous region of the Hellenic peninsula. The model results are intercompared with the resolution of the RCM output degraded to match that of the GCM, and it is found that in both the present and future climate runs the regional models produce more precipitation than the forcing GCM. At the same time, the RCM runs produce increases in precipitation with climate warming even though they are forced with a GCM that shows no precipitation change in the region. The additional precipitation is mostly concentrated over the mountain ranges, where orographic precipitation formation is expected to be a dominant mechanism. It is found that, when examined at the same resolution, the elevation heights of the GCM are lower than those of the averaged RCM in the areas of the main mountain ranges. It is also found that the majority of the difference in precipitation between the RCM and the GCM can be explained by their difference in topographic height. The study results indicate that, in complex topography regions, GCM predictions of precipitation change with climate warming may be dry biased due to the GCM smoothing of the regional topography.

  8. Predicting Complex Organic Mixture Atmospheric Chemistry Using Computer-Generated Reaction Models

    Science.gov (United States)

    Klein, M. T.; Broadbelt, L. J.; Mazurek, M. A.

    2001-12-01

    New measurement and chemical characterization technologies now offer unprecedented capabilities for detecting and describing atmospheric organic matter at the molecular level. As a result, very detailed and extensive chemical inventories are produced routinely in atmospheric field measurements of organic compounds found in the vapor and condensed phases (particles, cloud and fog droplets). Hundreds of organic compounds can constitute the complex chemical mixtures observed for these types of samples, exhibiting a wide spectrum of physical properties such as molecular weight, polarity, pH, and chemical reactivity. The central challenge is describing chemically the complex organic aerosol mixture in a usable fashion that can be linked to predictive models. However, the great compositional complexity of organic aerosols engenders a need for the modeling of the reaction chemistry of these compounds in atmospheric chemical models. On a mechanistic level, atmospheric reactions of organic compounds can involve a network of a very large number of chemical species and reactions. Deriving such large molecular kinetic models by hand is a tedious and time-consuming process. However, such models are usually built upon a few basic chemical principles tempered with the model builder's observations, experience, and intuition that can be summarized as a set of rules. This suggests that given an algorithmic framework, computers (information technology) may be used to apply these chemical principles and rules, thereby building a kinetic model. The framework for this model building process has been developed by means of graph theory. A molecule, which is a set of atoms connected by bonds, may be conceptualized as a set of vertices connected by edges, or to be more precise, a graph. The bond breaking and forming for a reaction can be represented compactly in the form of a matrix operator formally called the "reaction matrix". The addition of the reaction matrix operator to the reduced bond-electron matrix of the reactants then yields the corresponding matrix of the products.
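
    A toy illustration of the reaction-matrix idea at the bond-order level (a simplification of the full bond-electron matrix bookkeeping; atoms and bonds are invented for illustration):

```python
# Minimal sketch of the graph-theoretic "reaction matrix" operator:
# product bond orders = reactant bond orders + reaction matrix.
import numpy as np

# toy atoms: 0=H, 1=Cl, 2=H (an H-H + Cl -> H + H-Cl style rearrangement)
B = np.array([[0, 0, 1],   # reactant bond-order matrix: H0-H2 bond
              [0, 0, 0],
              [1, 0, 0]])

R = np.array([[ 0, 0, -1],  # break H0-H2
              [ 0, 0,  1],  # form Cl1-H2
              [-1, 1,  0]])

E = B + R                   # product bond-order matrix
# sanity checks: bond orders stay non-negative and the matrix stays symmetric
assert (E >= 0).all() and (E == E.T).all()
print(E)                    # H0 now free; Cl1-H2 bonded
```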

  9. Introducing technology learning for energy technologies in a national CGE model through soft links to global and national energy models

    International Nuclear Information System (INIS)

    Martinsen, Thomas

    2011-01-01

    This paper describes a method to model the influence of global policy scenarios, particularly spillover of technology learning, on the energy service demand of the non-energy sectors of the national economy. It is exemplified by Norway. Spillover is obtained from the technology-rich global Energy Technology Perspectives model operated by the International Energy Agency. It is provided to a national hybrid model, where a national bottom-up Markal model carries spillover forward into a national top-down CGE model at a disaggregated demand category level. Spillover of technology learning from the global energy technology market will reduce national generation costs of energy carriers. This may in turn increase demand in the non-energy sectors of the economy because of the rebound effect. The influence of spillover on the Norwegian economy is most pronounced for the production level of industrial chemicals and for the demand for electricity for residential energy services. The influence is modest, however, because all existing electricity generating capacity is hydroelectric and thus compatible with the low emission policy scenario. In countries where most of the existing generating capacity must be replaced by nascent energy technologies or carbon capture and storage, the influence on demand is expected to be more significant. - Highlights: → Spillover of global technology learning may be forwarded into a macroeconomic model. → The national electricity price differs significantly between the different global scenarios. → Soft-linking global and national models facilitates transparency in the technology learning effect chain.

  10. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  11. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by a Higher-order Statistics (HOS) fourth-order cumulant matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.

  12. Exploring the quantitative nature of empathy, systemising and autistic traits using factor mixture modelling.

    Science.gov (United States)

    Grove, Rachel; Baillie, Andrew; Allison, Carrie; Baron-Cohen, Simon; Hoekstra, Rosa A

    2015-11-01

    Autism research has previously focused on either identifying a latent dimension or searching for subgroups; research assessing the concurrently categorical and dimensional nature of autism is needed. This study aimed to investigate the latent structure of autism and identify meaningful subgroups in a sample spanning the full spectrum of genetic vulnerability. Factor mixture models were applied to data on empathy, systemising and autistic traits from individuals on the autism spectrum, parents and general population controls. A two-factor three-class model was identified, with the two factors measuring empathy and systemising. Class one had high systemising and low empathy scores and primarily consisted of individuals with autism. Mainly comprising controls and parents, class three displayed high empathy scores and lower systemising scores, and class two showed balanced scores on both measures of systemising and empathy. Autism is best understood as a dimensional construct, but meaningful subgroups can be identified based on empathy, systemising and autistic traits. © The Royal College of Psychiatrists 2015.

  13. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Full Text Available Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. The speaker's voice is a robust biometric modality with a strong impact in several application areas. In this study, a new combination learning scheme is proposed, based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning Vector Quantization (LVQ), for automatic text-independent speaker identification. Feature vectors, constituted by Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used for training on the New England subset of the TIMIT database. The best results obtained were 90% for gender-independent speaker identification, and 97% for male speakers and 93% for female speakers on test data using 36 MFCC features.
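
    A minimal sketch of GMM-based closed-set speaker identification, simplified to per-speaker GMMs scored by log-likelihood (omitting the UBM adaptation and LVQ stages of the study); the feature arrays are random stand-ins for MFCCs.

```python
# Hedged sketch: train one GMM per enrolled speaker, then identify a test
# utterance by the model with the highest average frame log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
train = {  # speaker -> MFCC-like frames (n_frames x 13), toy data
    "spk1": rng.normal(0.0, 1.0, (400, 13)),
    "spk2": rng.normal(0.8, 1.2, (400, 13)),
}
models = {spk: GaussianMixture(n_components=8, covariance_type="diag",
                               random_state=0).fit(X)
          for spk, X in train.items()}

test_utt = rng.normal(0.8, 1.2, (120, 13))  # unseen frames resembling spk2
scores = {spk: m.score(test_utt) for spk, m in models.items()}  # mean log-lik
print(max(scores, key=scores.get))          # -> "spk2"
```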

  14. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    Directory of Open Access Journals (Sweden)

    Hiroki Yoshioka

    2011-07-01

    Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depends on the target spectra, endmember spectra, and the choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and the usefulness of the technique in terms of robustness against measurement noise.
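
    Under the two-endmember assumption, the linear mixture model reduces to a one-parameter least-squares problem; a sketch with invented endmember spectra:

```python
# Hedged sketch of two-endmember linear unmixing: a pixel spectrum is modeled
# as FVC*veg + (1-FVC)*soil and FVC is recovered by least squares. Spectra are
# illustrative numbers, not real endmembers.
import numpy as np

veg  = np.array([0.05, 0.08, 0.45, 0.50])   # endmember: vegetation reflectance
soil = np.array([0.20, 0.25, 0.30, 0.35])   # endmember: bare soil reflectance

def fvc_two_endmember(pixel, veg, soil):
    """Least-squares FVC under rho = f*veg + (1-f)*soil, clipped to [0, 1]."""
    d = veg - soil
    f = float(d @ (pixel - soil)) / float(d @ d)
    return min(max(f, 0.0), 1.0)

true_f = 0.6
pixel = true_f * veg + (1 - true_f) * soil \
        + np.random.default_rng(2).normal(0, 0.01, 4)  # add measurement noise
print(round(fvc_two_endmember(pixel, veg, soil), 3))   # ~0.6 despite noise
```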

  15. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict the children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teacher ratings, with higher socioeconomic status, and of above-average poverty status had a higher probability of transitioning into the higher ability group.

  16. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A. E.; Caco, Ana I.

    2003-01-01

    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane...

  18. The Spatiotemporal Oscillations of Order Parameter for Isothermal Model of the Surface-Directed Spinodal Decomposition in Bounded Binary Mixtures

    Directory of Open Access Journals (Sweden)

    Igor B. Krasnyuk

    2009-01-01

    Full Text Available The asymptotic behavior of the order parameter in a confined binary mixture is considered in one-dimensional geometry. The interaction between bulk and surface forces in the mixture is investigated. Conditions are established under which the bulk spinodal decomposition may be ignored and the formation of the oscillating asymptotic periodic spatiotemporal structures is instead governed by the surface-directed spinodal decomposition, which is modelled by nonlinear dynamical boundary conditions.

  19. Modelling the effects of toxic metal mixtures on the reproduction of Eisenia veneta in different types of soil

    OpenAIRE

    Sdepanian, Stephanie

    2011-01-01

    Previously, toxicity studies have mainly focused on the responses of organisms to single toxicants; however the importance of studying mixtures of toxicants is now being recognised, along with the importance of speciation as a modifier of toxic effect. The aim of this study was to improve our understanding of the toxic response of the compost worm Eisenia veneta to cadmium, copper and zinc by integrating understanding of speciation effects into existing mixture models. Adult earthworm tests w...

  20. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
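
    A minimal sketch of the kind of contact-based clustering described, assuming toy contact sets and a Jaccard distance; the actual CONSRANK criteria and thresholds differ.

```python
# Hedged sketch: agglomerative clustering of docking models from a similarity
# based on shared inter-residue contacts. Contact sets are invented toy data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

models = [  # each model = set of inter-residue contacts (toy)
    {("A10", "B33"), ("A12", "B35")},
    {("A10", "B33"), ("A12", "B35"), ("A14", "B40")},
    {("A50", "B70"), ("A52", "B72")},
]

n = len(models)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        shared = len(models[i] & models[j])
        union = len(models[i] | models[j])
        dist[i, j] = dist[j, i] = 1.0 - shared / union  # Jaccard distance

Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")  # threshold is illustrative
print(labels)  # models 0 and 1 cluster together; model 2 stands alone
```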

  1. Determining the source locations of martian meteorites: Hapke mixture models applied to CRISM simulated data of igneous mineral mixtures and martian meteorites

    Science.gov (United States)

    Harris, Jennifer; Grindrod, Peter

    2017-04-01

    At present, martian meteorites represent the only samples of Mars available for study in terrestrial laboratories. However, these samples have never been definitively tied to source locations on Mars, meaning that the fundamental geological context is missing. The goal of this work is to link the bulk mineralogical analyses of martian meteorites to the surface geology of Mars through spectral mixture analysis of hyperspectral imagery. Hapke radiative transfer modelling has been shown to provide accurate (within 5-10% absolute error) mineral abundance values from laboratory-derived hyperspectral measurements of binary [1] and ternary [2] mixtures of plagioclase, pyroxene and olivine. These three minerals form the vast bulk of the SNC meteorites [3] and the bedrock of the Amazonian provinces on Mars that are inferred to be the source regions for these meteorites based on isotopic ages. Spectral unmixing through the Hapke model could be used to quantitatively analyse the martian surface and pinpoint the exact craters from which the SNC meteorites originated. However, the Hapke model is complex, with numerous variables, many of which are determinable in laboratory conditions but not from remote measurements of a planetary surface. Using binary and ternary spectral mixtures and martian meteorite spectra from the RELAB spectral library, the accuracy of Hapke abundance estimation is investigated in the face of increasing constraints and simplifications to simulate CRISM data. Constraints and simplifications include reduced spectral resolution, additional noise, unknown endmembers and unknown particle physical characteristics. CRISM operates at two spectral resolutions: the Full Resolution Targeted (FRT) mode, with which it has imaged approximately 2% of the martian surface, and the lower-spectral-resolution MultiSpectral Survey mode (MSP), with which it has covered the vast majority of the surface. On resampling the RELAB spectral mixtures to these two wavelength ranges it was

  2. Mixture model-based atmospheric air mass classification: a probabilistic view of thermodynamic profiles

    Science.gov (United States)

    Pernin, Jérôme; Vrac, Mathieu; Crevoisier, Cyril; Chédin, Alain

    2017-04-01

    Air mass classification has become an important area in synoptic climatology, simplifying the complexity of the atmosphere by dividing the atmosphere into discrete similar thermodynamic patterns. However, the constant growth of atmospheric databases in both size and complexity implies the need to develop new adaptive classifications. Here, we propose a robust unsupervised and supervised classification methodology of a large thermodynamic dataset, on a global scale and over several years, into discrete air mass groups homogeneous in both temperature and humidity that also provides underlying probability laws. Temperature and humidity at different pressure levels are aggregated into a set of cumulative distribution function (CDF) values instead of classical ones. The method is based on a Gaussian mixture model and uses the expectation-maximization (EM) algorithm to estimate the parameters of the mixture. Spatially gridded thermodynamic profiles come from ECMWF reanalyses spanning the period 2000-2009. Different aspects are investigated, such as the sensitivity of the classification process to both temporal and spatial samplings of the training dataset. Comparisons of the classifications made either by the EM algorithm or by the widely used k-means algorithm show that the former can be viewed as a generalization of the latter. Moreover, the EM algorithm delivers, for each observation, the probabilities of belonging to each class, as well as the associated uncertainty. Finally, a decision tree is proposed as a tool for interpreting the different classes, highlighting the relative importance of temperature and humidity in the classification process.
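
    The sense in which an EM-fitted mixture generalizes k-means can be seen directly: the mixture returns posterior membership probabilities where k-means returns only hard labels. A sketch with stand-in data:

```python
# Hedged sketch: soft (GMM/EM) vs hard (k-means) assignments. The arrays are
# random stand-ins for CDF-aggregated temperature/humidity profiles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
profiles = np.vstack([rng.normal(0, 1, (300, 5)),
                      rng.normal(3, 1, (300, 5))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(profiles)
posteriors = gmm.predict_proba(profiles)  # P(class | profile): soft output
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)

print(posteriors[:2].round(3))  # class probabilities carry uncertainty
print(km.labels_[:2])           # hard assignments only
```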

  3. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne eSchoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  4. Introducing the model of cognitive-communication competence: A model to guide evidence-based communication interventions after brain injury.

    Science.gov (United States)

    MacDonald, Sheila

    2017-01-01

    Communication impairments associated with acquired brain injury (ABI) are devastating in their impact on family, community, social, academic, and vocational participation. Despite international evidence-based guidelines for communication interventions, evidence practice gaps include under identification of communication deficits, infrequent referrals, and inadequate treatment to realize functional communication outcomes. Evidence-informed communication intervention requires synthesis of abundant interdisciplinary research. This study describes the development of the model of cognitive-communication competence, a new model that summarizes a complex array of influences on communication to provide a holistic view of communication competence after ABI. A knowledge synthesis approach was employed to integrate interdisciplinary evidence relevant to communication competence. Development of the model included review of the incidence of communication impairments, practice guidelines, and factors relevant to communication competence guided by three key questions. This was followed by expert consultation with researchers, clinicians, and individuals with ABI. The resulting model comprises 7 domains, 7 competencies, and 47 factors related to communication functioning and intervention. This model could bridge evidence to practice by promoting a comprehensive and consistent view of communication competence for evidence synthesis, clinical decision-making, outcome measurement, and interprofessional collaboration.

  5. Solvable Model of a Generic Trapped Mixture of Interacting Bosons: Many-Body and Mean-Field Properties

    Science.gov (United States)

    Klaiman, S.; Streltsov, A. I.; Alon, O. E.

    2018-04-01

    A solvable model of a generic trapped bosonic mixture, N_1 bosons of mass m_1 and N_2 bosons of mass m_2 trapped in a harmonic potential of frequency ω and interacting by harmonic inter-particle interactions of strengths λ_1, λ_2, and λ_12, is discussed. It has recently been shown for the ground state [J. Phys. A 50, 295002 (2017)] that in the infinite-particle limit, when the interaction parameters λ_1(N_1 − 1), λ_2(N_2 − 1), λ_12 N_1, and λ_12 N_2 are held fixed, each of the species is 100% condensed and its density per particle, as well as the total energy per particle, is given by the solution of the coupled Gross-Pitaevskii equations of the mixture. In the present work we investigate properties of the trapped generic mixture in the infinite-particle limit, and find differences between the many-body and mean-field descriptions of the mixture, despite each species being 100% condensed. We compute analytically and analyze, both for the mixture and for each species, the center-of-mass position and momentum variances, their uncertainty product, the angular-momentum variance, as well as the overlap of the exact and Gross-Pitaevskii wavefunctions of the mixture. The results obtained in this work can be considered a step forward in characterizing how important many-body effects are in a fully condensed trapped bosonic mixture in the infinite-particle limit.

  6. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly across a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, leaving the collected experimental data incomplete. As a result, some work that depends on IoT data cannot be carried out normally, and the accuracy and reliability of the data analysis results are reduced. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on a Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.

  7. A Dirichlet Process Mixture Based Name Origin Clustering and Alignment Model for Transliteration

    Directory of Open Access Journals (Sweden)

    Chunyue Zhang

    2015-01-01

    Full Text Available In machine transliteration, it is common that the transliterated names in the target language come from multiple language origins. A conventional maximum-likelihood-based single model cannot deal with this issue very well and often suffers from overfitting. In this paper, we exploit a coupled Dirichlet process mixture model (cDPMM) to address overfitting and the multi-origin clustering of names simultaneously in the transliteration sequence alignment step over the name pairs. After the alignment step, the cDPMM automatically clusters name pairs into many groups according to their origin information. In the decoding step, in order to make full use of the learned origin information, we use a cluster combination method (CCM) to build cluster-specific transliteration models by combining small clusters into large ones, based on the perplexities of the name language and transliteration model, which ensures each origin cluster has enough data for training a transliteration model. On three different Western-Chinese multi-origin name corpora, the cDPMM outperforms two state-of-the-art baseline models in terms of both top-1 accuracy and mean F-score, and the CCM further significantly improves the cDPMM.

  8. Gasification under CO2–Steam Mixture: Kinetic Model Study Based on Shared Active Sites

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2017-11-01

    Full Text Available In this work, char gasification of two coals (i.e., Shenfu bituminous coal and Zunyi anthracite) and a petroleum coke under a steam and CO2 mixture (steam/CO2 partial pressures, 0.025–0.075 MPa; total pressure, 0.100 MPa) and CO2/steam chemisorption on the char samples were studied in a Thermogravimetric Analyzer (TGA). Two conventional kinetic models exhibited difficulties in exactly fitting the experimental data of char–steam–CO2 gasification. Hence, a modified model, based on the Langmuir–Hinshelwood model and assuming that the char–CO2 and char–steam reactions partially share active sites, was proposed and showed high accuracy in estimating the interactions in the char–steam–CO2 reaction. Moreover, it was found that the two new model parameters (characterized respectively as the ratio of shared active sites to total active sites in the char–CO2 and char–steam reactions) hardly varied with gasification conditions, and the chemisorption results indicate that these two parameters mainly depend on the carbon active sites in the char samples.
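
    For orientation, the two conventional Langmuir–Hinshelwood limits for mixed-gas gasification are sketched below (separate versus fully shared active sites); the paper's modified model interpolates between them via the two sharing-fraction parameters. The rate and adsorption constants k_i are generic.

```latex
% Separate active sites for the H2O and CO2 reactions:
r \;=\; \frac{k_1\, p_{\mathrm{H_2O}}}{1 + k_2\, p_{\mathrm{H_2O}}}
      \;+\; \frac{k_3\, p_{\mathrm{CO_2}}}{1 + k_4\, p_{\mathrm{CO_2}}}
% Fully shared active sites (competitive adsorption):
r \;=\; \frac{k_1\, p_{\mathrm{H_2O}} + k_3\, p_{\mathrm{CO_2}}}
             {1 + k_2\, p_{\mathrm{H_2O}} + k_4\, p_{\mathrm{CO_2}}}
```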

  9. Isobaric VLE of the mixture {1,8-cineole + ethanol}. EOS analysis and COSMO-RS modeling

    International Nuclear Information System (INIS)

    Torcal, Marcos; Langa, Elisa; Pardo, Juan I.; Mainar, Ana M.; Urieta, José S.

    2016-01-01

    Highlights: • Isobaric VLE for the mixture {1,8-cineole + ethanol} at (33.33, 66.66 and 101.33) kPa. • VLE data were correlated by means of the Wilson, NRTL and UNIQUAC models. • The mixture shows positive deviations from Raoult's law. • Peng–Robinson and SAFT EOS and the COSMO-RS model predict dew points quite well. • Predictions of the COSMO-RS model are better than those of the Peng–Robinson and SAFT EOS. - Abstract: The isobaric (vapor + liquid) equilibrium (VLE) at pressures of (33.33, 66.66 and 101.33) kPa is reported for the mixture {1,8-cineole (1,3,3-trimethyl-2-oxabicyclo[2.2.2]octane) + ethanol} in the entire composition range. VLE data were correlated by means of three activity coefficient models (Wilson, NRTL and UNIQUAC) and they were found to be thermodynamically consistent. The mixture exhibits positive deviations from Raoult's law. Peng–Robinson and SAFT equations of state (EOS) were used first to predict (interaction parameter, k_ij, equal to zero), then to correlate (by adjusting the interaction parameter) the VLE. The COSMO-RS model has also been used to predict the VLE of the mixture. Both the EOS and the COSMO-RS model show deviations in the prediction of bubble points but reproduce the dew points quite well. COSMO-RS provides the best predictions.
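
    The activity-coefficient correlations all enter through the low-pressure (modified Raoult's law) VLE relation; positive deviations correspond to γ_i > 1:

```latex
% y_i, x_i: vapor/liquid mole fractions; gamma_i: activity coefficient from
% Wilson, NRTL or UNIQUAC; P_i^sat: pure-component vapor pressure.
y_i \, P \;=\; x_i \, \gamma_i(x, T) \, P_i^{\mathrm{sat}}(T), \qquad i = 1, 2
```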

  10. ADAPTIVE BACKGROUND WITH THE GAUSSIAN MIXTURE MODELS METHOD FOR REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

    Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. A method to separate the background from the tracked object is required for motion tracking. It is not hard to develop such an application if the tracking is performed against a static background, but it is difficult if the tracked object is at a place with a non-static background, because the changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be made that separates the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data based on pixel color values. After the clusters are formed, the dominant distributions are chosen as the background distributions. The application was built using Microsoft Visual C 6.0. The results of this research show that the GMM algorithm can produce a satisfactory adaptive background, as proven by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background generation process.

  11. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes were chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project

  12. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  14. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    Science.gov (United States)

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  15. Introducing ZBrush 4

    CERN Document Server

    Keller, Eric

    2011-01-01

    Introducing ZBrush 4 launches readers head-on into fulfilling their artistic potential for sculpting realistic creature, cartoon, and hard surface models in ZBrush. ZBrush's innovative technology and interface can be intimidating to both digital-art beginners as well as veterans who are used to a more conventional modeling environment. This book dispels myths about the difficulty of ZBrush with a thorough tour and exploration of the program's interface. Engaging projects also allow the reader to become comfortable with digital sculpting in a relaxed and fun atmosphere. Introducing ZB

  16. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
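
    A sketch of the two approaches being compared, assuming simulated dbh data: a two-component Weibull mixture fitted by direct likelihood maximization versus a kernel density estimate.

```python
# Hedged sketch: two-component Weibull mixture by maximum likelihood vs a KDE.
# The dbh sample is simulated; a real analysis would use plot measurements.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

dbh = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=15, size=300, random_state=1),  # cohort 1
    stats.weibull_min.rvs(3.5, scale=40, size=200, random_state=2),  # cohort 2
])

def nll(params):
    """Negative log-likelihood; parameters transformed to stay in range."""
    w = 1.0 / (1.0 + np.exp(-params[0]))        # mixing weight in (0, 1)
    c1, s1, c2, s2 = np.exp(params[1:])         # shapes and scales > 0
    pdf = (w * stats.weibull_min.pdf(dbh, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(dbh, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(nll, x0=[0.0, np.log(2), np.log(10), np.log(3), np.log(30)],
               method="Nelder-Mead", options={"maxiter": 5000})
kde = stats.gaussian_kde(dbh)                   # nonparametric alternative
print(res.fun, kde(np.array([20.0, 40.0])))     # fit quality and KDE densities
```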

  17. An initiative towards an energy and environment scheme for Iran: Introducing RAISE (Richest Alternatives for Implementation to Supply Energy) model

    International Nuclear Information System (INIS)

    Eshraghi, Hadi; Ahadi, Mohammad Sadegh

    2016-01-01

    Decision making in Iran's energy and environment-related issues has always been tied to complexities. Discussing these complexities and the necessity of dealing with them, this paper strives to help the country with a tool by introducing Richest Alternatives for Implementation to Supply Energy (RAISE), a mixed integer linear programming model developed by means of the GNU MathProg mathematical programming language. The paper fully elaborates the authors' modeling approach and the formulations on which RAISE is programmed to work, and verifies its structure by running a widely known sample case named "UTOPIA" and comparing the results with other works including OSeMOSYS and Temoa. The paper then applies the RAISE model to the Iranian energy sector to elicit the optimal policy with and without a CO2 emission cap. The results suggest promotion of energy efficiency through investment in combined cycle power plants as the key to optimal policy in the power generation sector. Regarding the oil refining sector, investment in condensate refineries and advanced refineries equipped with Residual Fluid Catalytic Cracking (RFCC) units is suggested. Results also undermine the prevailing supposition that climate change mitigation deteriorates the economic efficiency of the energy system and suggest that there is a strong synergy between them. In the case of imposing a CO2 cap that aims at maintaining CO2 emissions from electricity production activities at 2012 levels, a shift to renewable energies occurs. - Highlights: • Combined cycle power plants are the best option to meet base load requirements. • There is synergy between climate change mitigation and economic affordability. • The power sector reacts to an emission cap by moving towards renewable energies. • Instead of being exported, condensates should be refined by condensate refineries. • Iran's refining sector should be advanced by shifting to RFCC-equipped refineries.

  18. Genomic outlier profile analysis: mixture models, null hypotheses, and nonparametric estimation.

    Science.gov (United States)

    Ghosh, Debashis; Chinnaiyan, Arul M

    2009-01-01

    In most analyses of large-scale genomic data sets, differential expression analysis is typically assessed by testing for differences in the mean of the distributions between 2 groups. A recent finding by Tomlins and others (2005) is of a different type of pattern of differential expression in which a fraction of samples in one group have overexpression relative to samples in the other group. In this work, we describe a general mixture model framework for the assessment of this type of expression, called outlier profile analysis. We start by considering the single-gene situation and establishing results on identifiability. We propose 2 nonparametric estimation procedures that have natural links to familiar multiple testing procedures. We then develop multivariate extensions of this methodology to handle genome-wide measurements. The proposed methodologies are compared using simulation studies as well as data from a prostate cancer gene expression study.

  19. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  20. Improved Emotion Recognition Using Gaussian Mixture Model and Extreme Learning Machine in Speech and Glottal Signals

    Directory of Open Access Journals (Sweden)

    Hariharan Muthusamy

    2015-01-01

    Full Text Available Recently, researchers have paid escalating attention to studying the emotional state of an individual from his/her speech signals, as the speech signal is the fastest and most natural method of communication between individuals. In this work, a new feature enhancement using the Gaussian mixture model (GMM) was proposed to enhance the discriminatory power of the features extracted from speech and glottal signals. Three different emotional speech databases were utilized to gauge the proposed methods. An Extreme Learning Machine (ELM) and a k-nearest neighbor (kNN) classifier were employed to classify the different types of emotions. Several experiments were conducted and the results show that the proposed methods significantly improved speech emotion recognition performance compared to research works published in the literature.

  1. OPTICAL-TO-SAR IMAGE REGISTRATION BASED ON GAUSSIAN MIXTURE MODEL

    Directory of Open Access Journals (Sweden)

    H. Wang

    2012-07-01

    Full Text Available Image registration is a fundamental task in remote sensing applications such as inter-calibration and image fusion. Compared to other multi-sensor image registration problems such as optical-to-IR, the registration of SAR and optical images has its own particular challenges. Firstly, the radiometric and geometric characteristics differ between SAR and optical images. Secondly, feature extraction methods suffer heavily from the speckle in SAR images. Thirdly, structural information is more useful than point features such as corners. In this work, we propose a novel Gaussian Mixture Model (GMM)-based optical-to-SAR image registration algorithm. The feature of line support regions (LSR) is used to describe the structural information, and orientation attributes are added into the GMM to prevent the Expectation Maximization (EM) algorithm from falling into a local extremum in the feature set matching phase. The experiments prove that our algorithm is very robust for the optical-to-SAR image registration problem.

  2. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    Science.gov (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to overlapping hybridization spots and discordant spot counts. To address counting errors arising from overlapping spots, this study proposes a Gaussian Mixture Model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a Random Forest classifier, the results show that the proposed method can detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods, yielding a significant improvement in spot counting accuracy.
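
    A minimal sketch of the AIC/BIC feature idea, assuming synthetic pixel coordinates for a blob formed by two closely overlapping spots:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Pixel coordinates of one blob that is actually two overlapping spots
    blob = np.vstack([rng.normal([0.0, 0.0], 0.6, (150, 2)),
                      rng.normal([1.2, 0.0], 0.6, (150, 2))])

    # AIC/BIC of 1- vs 2-component fits serve as global features: a large drop
    # when moving to two components suggests two merged spots.
    feats = []
    for k in (1, 2):
        g = GaussianMixture(n_components=k, random_state=0).fit(blob)
        feats += [g.aic(blob), g.bic(blob)]
    print(feats)  # per-blob features for a downstream Random Forest classifier
    ```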

  3. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
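
    For intuition about the role of the precision parameter: under a DP(α, G0) prior, the expected number of clusters among n observations is sum_{i=0}^{n-1} α/(α+i), so candidate priors for α can be checked against the level of clustering they imply. A small sketch:

    ```python
    import numpy as np

    def expected_clusters(alpha: float, n: int) -> float:
        """Prior expected number of clusters among n observations in DP(alpha, G0)."""
        i = np.arange(n)
        return float(np.sum(alpha / (alpha + i)))

    for alpha in (0.1, 1.0, 5.0):
        print(alpha, round(expected_clusters(alpha, 200), 2))
    ```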

  4. Does Introducing more User-friendly Software Produce better Integrated Learning in an Undergraduate Course on Climate Modelling?

    Science.gov (United States)

    Fletcher, C. G.

    2015-12-01

    Appropriate use of technology in the classroom can help to integrate content and process learning; however, there are multiple barriers to introducing technological aids in the classroom that can reduce their effectiveness. In this pilot study we evaluate the impact on student learning from switching to more user-friendly software for lab assignments in an undergraduate course on climate modelling. Computer programming-based labs are replaced by a menu-driven modelling application called EdGCM, developed by Columbia University and the NASA Goddard Institute for Space Studies. The EdGCM tool allows students to explore and visualize a wider array of aspects of climate variability and change—and in greater depth—than was possible previously. The goals of this study are (i) to make the course material more accessible to students regardless of their computing background; and (ii) to improve the level of students' knowledge retention in core climate science concepts. The impact of the change in software is measured using a combination of class surveys and students' test scores. Differences in the level of integration between content and process learning, compared to previous iterations of the course without EdGCM, are discussed.

  5. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry belonging to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry belonging to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.
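
    BClass itself is implemented in Lisp-Stat; as a language-agnostic sketch of the core computation, the posterior membership probability of a mixed-type entry follows from multiplying per-variable likelihoods under each group (all parameter values below are illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm, bernoulli

    # Two groups, each with a Gaussian parameter for a continuous variable and
    # a Bernoulli parameter for a categorical variable; mixing weights pi.
    pi = np.array([0.6, 0.4])
    mu, sd = np.array([0.0, 3.0]), np.array([1.0, 1.0])
    p = np.array([0.2, 0.8])

    x_cont, x_cat = 2.5, 1  # one "gene" with heterogeneous measurements

    # Posterior membership: prior weight times the product of per-variable
    # likelihoods, renormalized (conditional independence within a group).
    lik = pi * norm.pdf(x_cont, mu, sd) * bernoulli.pmf(x_cat, p)
    resp = lik / lik.sum()
    print(resp)  # the homogeneous representation of a heterogeneous entry
    ```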

  6. MATHEMATICAL MODEL OF FORECASTING FOR OUTCOMES IN VICTIMS OF METHANE-COAL MIXTURE EXPLOSION

    Directory of Open Access Journals (Sweden)

    E. Y. Fistal

    2016-01-01

    Full Text Available BACKGROUND. The severity of the victims' state in the early period after combined trauma (with a prevalent thermal injury) is associated with the development of numerous changes in all organs and systems, which makes proper diagnosis of complications and estimation of the probability of a lethal outcome extremely difficult. MATERIAL AND METHODS. The article presents a mathematical model for predicting lethal outcomes in victims of methane-coal mixture explosion, based on the case histories of 220 miners who were treated at the Donetsk Burn Center in 1994–2012. RESULTS. It was revealed that the probability of a lethal outcome in victims of methane-coal mixture explosion was statistically significantly affected by the area of deep burns (p<0.001) and severe traumatic brain injury (p<0.001). The tactics of surgical treatment of burn wounds in the early hours after the injury was also statistically significant (p=0.003); it involves primary debridement of burn wounds during the period of burn shock with simultaneous closure of the affected surfaces with a temporary biological covering. CONCLUSION. Such neural network models are easy to use in practice and may be created for the most common pathological conditions encountered in clinical practice.

  7. Introducing health gains in location-allocation models: A stochastic model for planning the delivery of long-term care

    International Nuclear Information System (INIS)

    Cardoso, T; Oliveira, M D; Barbosa-Póvoa, A; Nickel, S

    2015-01-01

    Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment. (paper)
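
    A toy sketch of the basic ε-constraint idea (the augmented variant used in the paper adds slack terms to rule out weakly efficient solutions): maximize health gains while cost, the second objective, is capped at a budget ε that is swept. The sites, gains, and costs are hypothetical, and the paper's full stochastic multi-service model is far richer:

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    # Hypothetical candidate LTC sites with expected health gains and costs
    gains = {"A": 120, "B": 95, "C": 60}
    costs = {"A": 800, "B": 500, "C": 300}

    for eps in (600, 900, 1300):  # budget levels swept by the epsilon-constraint
        prob = LpProblem("ltc_planning", LpMaximize)
        open_ = {s: LpVariable(f"open_{s}", cat="Binary") for s in gains}
        prob += lpSum(gains[s] * open_[s] for s in gains)         # maximize gains
        prob += lpSum(costs[s] * open_[s] for s in gains) <= eps  # cost as constraint
        prob.solve()
        chosen = [s for s in gains if value(open_[s]) == 1]
        print(eps, chosen, value(prob.objective))
    ```

    Sweeping ε traces out the trade-off curve between expected health gains and expected costs, which is exactly how the conflicting objectives are explored.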

  8. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.
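
    The copula idea can be sketched compactly: given marginal survival functions for the successive gap times, an Archimedean copula such as Clayton's yields the joint survival and encodes the dependence through a single parameter (the exponential margins and θ below are illustrative, not the paper's estimates):

    ```python
    import numpy as np

    theta = 2.0  # Clayton dependence parameter; Kendall's tau = theta/(theta+2)

    def clayton(u, v, theta=theta):
        """Clayton copula evaluated at marginal survival probabilities."""
        return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

    t1, t2 = 1.0, 2.0
    S1, S2 = np.exp(-0.5 * t1), np.exp(-0.3 * t2)  # illustrative margins
    print(clayton(S1, S2))  # joint survival P(T1 > t1, T2 > t2)
    ```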

  9. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of a skewed outcome and time-varying covariates with measurement errors. There has been relatively little work published that deals simultaneously with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models that can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
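
    The left-censoring mechanism at the heart of the Tobit component is easy to picture with a small simulation; the latent distribution and limit of detection below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    lod = 1.0                                  # limit of detection (log scale)
    latent = rng.normal(2.0, 1.5, 1000)        # latent outcome, e.g. log viral load
    observed = np.where(latent > lod, latent, lod)  # values below LOD are censored
    print("censored fraction:", np.mean(latent <= lod))
    ```

    When the censored fraction exceeds what a log-normal model predicts, mixture extensions like the one proposed here, with a point mass of true non-detectables plus a skewed continuous part, become attractive.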

  10. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    Science.gov (United States)

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
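
    A compact sketch of the bisecting procedure on synthetic multivariate data; the paper estimates its mixtures with Hamiltonian Monte Carlo and vets the posterior modes against geologic knowledge, whereas scikit-learn's EM fit stands in here:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def bisect(X, depth=0, max_depth=2, label="root"):
        """Recursively partition samples with a two-component Gaussian mixture."""
        print(f"cluster {label}: {len(X)} samples")
        if depth >= max_depth or len(X) < 20:
            return
        gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
        z = gm.predict(X)
        for k in (0, 1):
            bisect(X[z == k], depth + 1, max_depth, f"{label}.{k}")

    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(m, 0.5, (100, 3)) for m in (0, 2, 5, 8)])
    bisect(X)  # prints the hierarchy of clusters and subclusters
    ```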

  11. IMPLICATIONS FOR ASYMMETRY, NONPROPORTIONALITY, AND HETEROGENEITY IN BRAND SWITCHING FROM PIECE-WISE EXPONENTIAL MIXTURE HAZARD MODELS

    NARCIS (Netherlands)

    WEDEL, M; KAMAKURA, WA; DESARBO, WS; TERHOFSTEDE, F

    1995-01-01

    The authors develop a class of mixtures of piece-wise exponential hazard models for the analysis of brand switching behavior. The models enable the effects of marketing variables to change nonproportionally over time and can, simultaneously, be used to identify segments among which switching and

  12. Introducing litter quality to the ecosystem model LPJ-GUESS: Effects on short- and long-term soil carbon dynamics

    Science.gov (United States)

    Portner, Hanspeter; Wolf, Annett; Rühr, Nadine; Bugmann, Harald

    2010-05-01

    Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually received more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors such as soil temperature, soil moisture and litter quality. We have introduced a dependence on litter substrate quality into the heterotrophic soil respiration of the ecosystem model LPJ-GUESS [Smith et al. (2001)]. We were interested in differences in model projections before and after the inclusion of this dependency, with respect to both short- and long-term soil carbon dynamics. The standard implementation of heterotrophic soil respiration in LPJ-GUESS is a simple three-pool carbon model whose decay rates depend on soil temperature and soil moisture. We added the dependence on litter quality by coupling LPJ-GUESS to the soil carbon model Yasso07 [Tuomi et al. (2008)]. The Yasso07 model is based on an extensive number of measurements of litter decomposition in forest soils. Apart from the dependence on soil temperature and soil moisture, the Yasso07 model uses soil carbon pools representing different substrate qualities: acid-hydrolyzable, water-soluble, ethanol-soluble, lignin compounds and humus. Additionally, Yasso07 differentiates between woody and non-woody litter. In contrast to the reference implementation of LPJ-GUESS, in the new implementation the litter is divided according to its specific quality and added to the corresponding soil carbon pool. The litter quality thereby differs between litter source (leaves, roots, stems) and plant functional type (broadleaved, needleleaved, grass). The two contrasting model implementations were compared and validated at one specific CarboEuropeIP site (Lägern, Switzerland) and on a broader scale all over Switzerland. Our focus lay on the soil respiration for the years 2006
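
    A minimal sketch of quality-dependent decomposition with Yasso07-like pools; the rate constants, Q10 temperature response, and forcing are illustrative placeholders rather than calibrated Yasso07 parameters:

    ```python
    import numpy as np

    pools = ["acid-hydrolyzable", "water-soluble", "ethanol-soluble", "lignin", "humus"]
    k = np.array([0.7, 1.5, 1.0, 0.2, 0.01])      # decay rates (1/yr), illustrative
    C = np.array([40.0, 10.0, 5.0, 30.0, 100.0])  # initial stocks (g C m-2)

    def temp_modifier(T, q10=2.0, T_ref=10.0):
        """Simple Q10 scaling of decay rates with soil temperature."""
        return q10 ** ((T - T_ref) / 10.0)

    dt = 1.0 / 12.0                # monthly Euler steps
    for month in range(120):       # ten years
        T = 10.0 + 8.0 * np.sin(2 * np.pi * month / 12.0)  # idealized seasonality
        C -= k * temp_modifier(T) * C * dt
    print(dict(zip(pools, np.round(C, 1))))  # labile pools decay fastest
    ```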

  13. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland, Washington, USA]; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland, Washington, USA]; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland, Washington, USA]

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where the diagnostic measures indicate multiple dominant processes. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.
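
    A bare-bones sketch of the expert/gate decomposition on synthetic data; a full HME treats the regime as latent and trains gate and experts jointly (typically via EM), whereas here the regime is known, purely for clarity:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, (400, 1))     # indicator variable (e.g. soil wetness)
    regime = (x[:, 0] > 5).astype(int)   # two dominant hydrologic processes
    y = np.where(regime == 1, 3.0 * x[:, 0] - 10, 0.5 * x[:, 0]) \
        + rng.normal(0, 0.3, 400)

    # Each expert specializes on its regime; a soft gate mixes their predictions.
    experts = [LinearRegression().fit(x[regime == r], y[regime == r]) for r in (0, 1)]
    gate = LogisticRegression().fit(x, regime)

    g = gate.predict_proba(x)[:, 1]
    y_hat = (1 - g) * experts[0].predict(x) + g * experts[1].predict(x)
    print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
    ```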

  14. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    Science.gov (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often
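
    A self-contained EM sketch for the simplest case discussed above, a two-component mixture of linear regressions; data, starting values, and iteration count are synthetic choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 300
    age = rng.uniform(1, 10, n)
    z = rng.random(n) < 0.5                       # latent life-history tactic
    y = np.where(z, 2.0 + 1.5 * age, 8.0 - 0.5 * age) + rng.normal(0, 0.5, n)
    X = np.column_stack([np.ones(n), age])        # intercept + age

    beta = rng.normal(size=(2, 2))                # one coefficient row per class
    sigma2, pi = np.ones(2), np.array([0.5, 0.5])
    for _ in range(100):
        # E-step: responsibilities from component-wise Gaussian densities
        dens = np.stack([pi[k] / np.sqrt(2 * np.pi * sigma2[k])
                         * np.exp(-(y - X @ beta[k]) ** 2 / (2 * sigma2[k]))
                         for k in (0, 1)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares and variance update per component
        for k in (0, 1):
            w = r[:, k]
            beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            sigma2[k] = np.sum(w * (y - X @ beta[k]) ** 2) / w.sum()
        pi = r.mean(axis=0)
    print(np.round(beta, 2), np.round(pi, 2))     # recovers the two trajectories
    ```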

  15. Assessing the effect, on animal model, of mixture of food additives, on the water balance.

    Science.gov (United States)

    Friedrich, Mariola; Kuchlewska, Magdalena

    2013-01-01

    The purpose of this study was to determine, in an animal model, the effect of modifying diet composition and administering selected food additives on the body's water balance. The study was conducted on 48 male and 48 female Wistar rats (analyzed separately for each sex), divided into four groups. For drinking, the animals from groups I and III received water, whereas the animals from groups II and IV were administered 5 ml of a solution of selected food additives (potassium nitrate - E 252, sodium nitrite - E 250, benzoic acid - E 210, sorbic acid - E 200, and monosodium glutamate - E 621). Doses of the administered food additives were computed taking into account the average human intake, expressed per unit of body mass. After drinking the solution, the animals were provided water. The mixture of selected food additives applied in the experiment was found to facilitate water retention in the body of both male and female rats, and the differences observed between the volume of ingested fluids and the volume of excreted urine were statistically significant in the animals fed the basal diet. The type of feed mixture provided to the animals affected the site of water retention: in animals receiving the basal diet, analyses demonstrated a significant increase in water content in the liver tissue, whereas in animals fed the modified diet, water was observed to accumulate in the vascular bed. Given the retention of water in the vascular bed, the effects of food additive intake may be more adverse in females.

  16. Constellations of dyadic relationship quality in stepfamilies: A factor mixture model.

    Science.gov (United States)

    Jensen, Todd M

    2017-12-01

    Stepfamilies are an increasingly common family form, marked by distinct challenges, opportunities, and complex networks of dyadic relationships that can transcend single households. Few typological analyses holistically examine constellations of dyadic processes in stepfamilies. Factor mixture modeling is used to identify population heterogeneity with respect to features of mother-child, stepfather-child, nonresident father-child, and stepcouple relationships, using a representative sample of 1,182 adolescents in mother-stepfather families with living nonresident fathers from Wave I of the National Longitudinal Study of Adolescent to Adult Health. Results favor a 4-class factor-mixture solution with class-specific factor covariance matrices. Class 1 (n = 302, 25.5%), the residence-centered pattern, was marked by high-quality residential relationships. Class 2 (n = 307, 26%), the inclusive pattern, was marked by high-quality relationships across all four dyads, with an especially involved nonresident father-child relationship. Class 3 (n = 350, 29.6%), the unhappy couple pattern, was marked by very low stepcouple relationship quality. Class 4 (n = 223, 18.9%), the parent-child disconnection pattern, was marked by distant relationships between youth and all three parental figures. The residence-centered and inclusive patterns encompassed some positive correlations between dyadic relationships, whereas the unhappy couple and parent-child disconnection patterns encompassed some negative correlations. The patterns differ across sociodemographic and substantive covariates and highlight important opportunities for developing new and innovative interventions, particularly to meet the needs of stepfamilies that reflect the parent-child disconnection pattern. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In survival analyses of some medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the noncured probability for the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and the hazard rate is of equal importance. Existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm that maximizes a penalized likelihood. For inference, we apply the Louis formula to obtain point-wise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method with extensive simulations. We analyze survival data from a melanoma study and find interesting patterns. © 2011, The International Biometric Society.
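
    The defining feature of a mixture cure model is a population survival curve that plateaus at the cured fraction; a tiny numerical sketch with illustrative values:

    ```python
    import numpy as np

    t = np.linspace(0, 10, 6)
    pi_cure = 0.3                           # cured (long-term survivor) fraction
    S_u = np.exp(-0.4 * t)                  # survival of the susceptible group
    S_pop = pi_cure + (1 - pi_cure) * S_u   # population survival under the mixture
    print(np.round(S_pop, 3))               # plateaus at the cure fraction
    ```

    The paper's contribution is to estimate both the cure probability (as a function of covariates) and the susceptible hazard nonparametrically with penalized splines, rather than assuming a simple form like the exponential used here.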

  18. A Dedicated Mixture Model for Clustering Smart Meter Data: Identification and Analysis of Electricity Consumption Behaviors

    Directory of Open Access Journals (Sweden)

    Fateh Nassim Melzi

    2017-09-01

    Full Text Available The large amount of data collected by smart meters is a valuable resource that can be used to better understand consumer behavior and optimize electricity consumption in cities. This paper presents an unsupervised classification approach for extracting typical consumption patterns from data generated by smart electric meters. The proposed approach is based on a constrained Gaussian mixture model whose parameters vary according to the day type (weekday, Saturday or Sunday). The methodology is applied to a real dataset of Irish households collected by smart meters over one year. For each cluster, the model provides three consumption profiles that depend on the day type. In a first step, the model is applied to the electricity consumption of users during one month to extract groups of consumers who exhibit similar consumption behaviors. The clustering results are then crossed with contextual variables available for the households, showing close links between electricity consumption and household socio-economic characteristics. In a second step, the evolution of consumer behavior from one month to another is assessed through variations in cluster sizes over time. The results show that consumer behavior evolves over time depending on contextual variables such as temperature fluctuations and calendar events.
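
    A simplified stand-in for the day-type-dependent mixture, assuming synthetic daily load curves: cluster the curves with an ordinary GMM, then summarize each cluster with one mean profile per day type (the paper instead builds the day-type dependence into the mixture's parameters):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    n_days = 900
    day_type = rng.integers(0, 3, n_days)      # 0=weekday, 1=Saturday, 2=Sunday
    base = np.sin(np.linspace(0, 2 * np.pi, 24))
    profiles = (1.0 + 0.3 * day_type[:, None]) * base \
        + rng.normal(0, 0.2, (n_days, 24))     # 24 hourly consumption values

    gm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
    labels = gm.fit_predict(profiles)
    for c in range(3):
        for d, name in enumerate(["weekday", "saturday", "sunday"]):
            sel = (labels == c) & (day_type == d)
            if sel.any():  # first hours of the mean profile, per cluster/day type
                print(c, name, np.round(profiles[sel].mean(axis=0)[:4], 2))
    ```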

  19. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data on quantitative traits and to Wellcome Trust Case Control Consortium (WTCCC) data on disease, and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.
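
    The prior at the core of such models can be pictured directly: each SNP effect is drawn from a mixture of a point mass at zero and a few normals of increasing variance, which is what lets the model separate polygenic background from large-effect loci (the mixture probabilities and variances below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    m = 10_000                                 # number of SNPs
    probs = np.array([0.95, 0.045, 0.005])     # zero / small / large components
    sds = np.array([0.0, 0.01, 0.1])
    comp = rng.choice(3, size=m, p=probs)
    beta = rng.normal(0.0, sds[comp])          # SNP effect sizes

    print((beta != 0).sum(), "non-zero effects")
    var = beta ** 2
    print("variance share of large-effect SNPs:",
          round(var[comp == 2].sum() / var.sum(), 2))
    ```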

  20. Experimental and modeling study on effects of N2 and CO2 on ignition characteristics of methane/air mixture

    Directory of Open Access Journals (Sweden)

    Wen Zeng

    2015-03-01

    Full Text Available The ignition delay times of methane/air mixtures diluted with N2 and CO2 were measured in a chemical shock tube. The experiments were performed over the temperature range of 1300–2100 K, the pressure range of 0.1–1.0 MPa, the equivalence ratio range of 0.5–2.0, and dilution coefficients of 0%, 20% and 50%. The results suggest a linear relationship between the reciprocal of temperature and the logarithm of the ignition delay time, and the measured ignition delay times decrease with increasing ignition temperature and pressure. Furthermore, an increase in the dilution coefficient of N2 or CO2 lengthens the ignition delay, and the inhibiting effect of CO2 on methane/air ignition is stronger than that of N2. Simulated ignition delays of the methane/air mixture using three kinetic models were compared to the experimental data. The GRI_3.0 mechanism gives the best prediction of the ignition delays and was therefore selected to identify the effects of N2 and CO2 on ignition delays and on the key elementary reactions in the ignition chemistry of the methane/air mixture. The calculated ignition delays agree closely with the experimental data for mixtures diluted with N2 and CO2, and the sensitivity coefficients of the chain-branching reactions that promote ignition decrease with an increasing dilution coefficient of N2 or CO2.
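
    The reported linear relationship between 1/T and the logarithm of the ignition delay is Arrhenius-type behavior, so a global activation energy can be read off a straight-line fit; the data points below are illustrative, not the paper's measurements:

    ```python
    import numpy as np

    T = np.array([1300, 1450, 1600, 1800, 2000, 2100])   # K
    tau = np.array([1800, 620, 250, 90, 40, 28])         # ignition delay, us

    slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
    Ea = slope * 8.314  # slope = Ea/R for ln(tau) = ln(A) + Ea/(R*T)
    print(f"ln(tau) = {intercept:.2f} + {slope:.0f}/T  ->  Ea ~ {Ea/1e3:.0f} kJ/mol")
    ```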